MCP Integration
Use Jungle Grid From AI Agents and Apps
The Jungle Grid MCP server exposes GPU workload execution as tools inside any MCP-aware host — Claude Desktop, Cursor, Windsurf, and others. Connect once, then submit, monitor, and cancel jobs from inside your AI workflow.
What you get
- Eight tools: submit_job, get_job, list_jobs, cancel_job, get_job_logs, stream_job_logs, estimate_job, and list_nodes.
- Authentication via your existing Jungle Grid API key — no extra accounts or dashboards.
- Agents handle the full loop: submit a workload, poll until complete, read logs, and branch on the result.
- Works with Claude Desktop, Cursor, Windsurf, and any host that implements the Model Context Protocol.
01
How it works
The MCP server runs as a local process on your machine and communicates with your AI host over stdio. It forwards tool calls to the Jungle Grid REST API using your API key, so the host never sees your credentials directly. Jobs run on the platform exactly as they would from the CLI or API — the MCP layer is just a new entry point.
02
Add to your AI host
Paste the configuration snippet into your host's MCP settings file. The exact file path depends on the host — common locations are listed below.
- Claude Desktop: ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or %APPDATA%\Claude\claude_desktop_config.json (Windows).
- Cursor: .cursor/mcp.json in your project root, or the global MCP settings in Cursor preferences.
- JUNGLE_GRID_API_URL defaults to https://api.junglegrid.dev and only needs to be set when using a self-hosted orchestrator.
- Restart your AI host after editing the config file.
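A typical host entry looks like the sketch below. The package name matches the pinning example in the troubleshooting section; the server label "jungle-grid" and the placeholder key value are illustrative, and the JUNGLE_GRID_API_URL line can be dropped entirely when you are using the default hosted API.

```json
{
  "mcpServers": {
    "jungle-grid": {
      "command": "npx",
      "args": ["-y", "@jungle-grid/mcp"],
      "env": {
        "JUNGLE_GRID_API_KEY": "jg_your_api_key_here",
        "JUNGLE_GRID_API_URL": "https://api.junglegrid.dev"
      }
    }
  }
}
```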
03
Available tools
Each tool maps directly to a Jungle Grid REST endpoint. The descriptions below are what the AI host sees when deciding which tool to call — they are written for LLM consumption, not just humans.
- submit_job — Submit a GPU workload. Requires workload_type, image, and command. Returns a job_id immediately. The job runs asynchronously.
- get_job — Get the current status and full detail of a job by its ID. Poll this after submit_job until status is completed or failed.
- list_jobs — List your recent jobs, newest first. Optional limit and status filter.
- cancel_job — Cancel a pending, queued, or running job. Has no effect on already-terminal jobs.
- get_job_logs — Fetch stdout and stderr for a completed or running job.
- stream_job_logs — Stream live stdout and stderr until the job completes or the stream timeout is reached.
- estimate_job — Estimate the credit cost and GPU tier before committing. Accepts the same workload parameters as submit_job.
- list_nodes — List currently available GPU nodes with tier, region, VRAM, price per hour, and queue depth.
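Under the hood, the host invokes each of these tools with a standard MCP tools/call request over stdio. The sketch below shows what a submit_job call might look like on the wire; the argument values (workload type, image, command) are purely illustrative.

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "submit_job",
    "arguments": {
      "workload_type": "training",
      "image": "pytorch/pytorch:2.3.0-cuda12.1-cudnn8-runtime",
      "command": "python train.py --epochs 10"
    }
  }
}
```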
04
Long-running job pattern
GPU jobs can run for seconds to hours. MCP tool calls are synchronous, so submit_job returns immediately with a job_id and the agent polls get_job until the job reaches a terminal state. The tool descriptions tell the model exactly how to do this — you do not need to prompt it manually.
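The submit-then-poll loop the agent follows can be sketched in a few lines of Python. Here `get_job` stands in for whatever fetches job status (the get_job MCP tool, the REST API, or a fake in tests), and the terminal status names follow the tool descriptions above — "cancelled" is an assumed third terminal state implied by cancel_job.

```python
import time

# "completed" and "failed" come from the get_job description; "cancelled" is assumed.
TERMINAL_STATES = {"completed", "failed", "cancelled"}

def wait_for_job(get_job, job_id, poll_interval=5.0, timeout=3600.0):
    """Poll get_job(job_id) until the job reaches a terminal state.

    get_job is any callable returning a dict with at least a "status" key,
    e.g. a thin wrapper around the get_job tool or REST endpoint.
    """
    deadline = time.monotonic() + timeout
    while True:
        job = get_job(job_id)
        if job["status"] in TERMINAL_STATES:
            return job
        if time.monotonic() >= deadline:
            raise TimeoutError(f"job {job_id} still {job['status']} after {timeout}s")
        time.sleep(poll_interval)
```

In practice the host's model runs this loop itself by issuing repeated get_job tool calls; the function above is only a sketch of the control flow it follows.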
05
Troubleshooting
Most issues come down to a missing API key, a stale npx cache, or a host that was not restarted after config changes.
- Error: JUNGLE_GRID_API_KEY is required — the env block in your host config is missing or the key name has a typo. Check the exact field name matches JUNGLE_GRID_API_KEY.
- Tools not appearing in the host — restart the AI host after editing the config file. Some hosts require a full quit-and-reopen, not just a reload.
- Stale package version — clear the npx cache with npx clear-npx-cache or pin a version in the args array: ["-y", "@jungle-grid/mcp@0.1.0"].
- Verify the server starts correctly by running it directly in a terminal before adding it to the host config.
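To sanity-check the server outside any host, you can launch it directly; a healthy MCP server starts up and blocks waiting for JSON-RPC messages on stdin rather than exiting. This one-liner assumes the npx-based setup shown in the configuration section, with a placeholder key value.

```shell
# Should start and wait on stdio; an immediate exit with an error means a bad key or install.
JUNGLE_GRID_API_KEY="jg_your_api_key_here" npx -y @jungle-grid/mcp
```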