Jungle Grid

MCP Integration

Use Jungle Grid from AI Agents and Apps

The Jungle Grid MCP server exposes GPU workload execution as tools inside any MCP-aware host — Claude Desktop, Cursor, Windsurf, and others. Connect once, then submit, monitor, and cancel jobs from inside your AI workflow.

What you get

  • Eight tools: submit_job, get_job, list_jobs, cancel_job, get_job_logs, stream_job_logs, estimate_job, and list_nodes.
  • Authentication via your existing Jungle Grid API key — no extra accounts or dashboards.
  • Agents handle the full loop: submit a workload, poll until complete, read logs, and branch on the result.
  • Works with Claude Desktop, Cursor, Windsurf, and any host that implements the Model Context Protocol.

01

How it works

The MCP server runs as a local process on your machine and communicates with your AI host over stdio. It forwards tool calls to the Jungle Grid REST API using your API key, so the host never sees your credentials directly. Jobs run on the platform exactly as they would from the CLI or API — the MCP layer is just a new entry point.

One-line test
JUNGLE_GRID_API_KEY=jg_... npx @jungle-grid/mcp
The server exits immediately if JUNGLE_GRID_API_KEY is not set. Generate a key in the portal under Settings → API Keys before proceeding. This environment variable is read by the MCP server, not by the jungle CLI.

02

Add to your AI host

Paste the configuration snippet into your host's MCP settings file. The exact file path depends on the host — see the note below for common locations.

  • Claude Desktop: ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or %APPDATA%\Claude\claude_desktop_config.json (Windows).
  • Cursor: .cursor/mcp.json in your project root, or the global MCP settings in Cursor preferences.
  • JUNGLE_GRID_API_URL defaults to https://api.junglegrid.dev and only needs to be set when using a self-hosted orchestrator.
  • Restart your AI host after editing the config file.
Claude Desktop (claude_desktop_config.json)
{
  "mcpServers": {
    "junglegrid": {
      "command": "npx",
      "args": ["-y", "@jungle-grid/mcp"],
      "env": {
        "JUNGLE_GRID_API_KEY": "jg_..."
      }
    }
  }
}
Custom API URL (self-hosted orchestrator)
{
  "mcpServers": {
    "junglegrid": {
      "command": "npx",
      "args": ["-y", "@jungle-grid/mcp"],
      "env": {
        "JUNGLE_GRID_API_KEY": "jg_...",
        "JUNGLE_GRID_API_URL": "https://your-orchestrator.example.com"
      }
    }
  }
}
The server requires Node.js 18 or later and is fetched on demand via npx. If you prefer to avoid the npx round-trip, install globally with npm install -g @jungle-grid/mcp and replace the command with junglegrid-mcp.
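
With a global install, the host config no longer needs npx or the args array. A minimal sketch of the Claude Desktop entry in that case, assuming the junglegrid-mcp binary is on the PATH your host uses to spawn processes:

Global install (claude_desktop_config.json)
{
  "mcpServers": {
    "junglegrid": {
      "command": "junglegrid-mcp",
      "env": {
        "JUNGLE_GRID_API_KEY": "jg_..."
      }
    }
  }
}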

03

Available tools

Each tool maps directly to a Jungle Grid REST endpoint. The descriptions below are what the AI host sees when deciding which tool to call — they are written for LLM consumption, not just humans.

  • submit_job — Submit a GPU workload. Requires workload_type, image, and command. Returns a job_id immediately. The job runs asynchronously.
  • get_job — Get the current status and full detail of a job by its ID. Poll this after submit_job until status is completed or failed.
  • list_jobs — List your recent jobs, newest first. Optional limit and status filter.
  • cancel_job — Cancel a pending, queued, or running job. Has no effect on already-terminal jobs.
  • get_job_logs — Fetch stdout and stderr for a completed or running job.
  • stream_job_logs — Stream live stdout and stderr until the job completes or the stream timeout is reached.
  • estimate_job — Estimate the credit cost and GPU tier before committing. Accepts the same workload parameters as submit_job.
  • list_nodes — List currently available GPU nodes with tier, region, VRAM, price per hour, and queue depth.
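
To make the parameter shapes concrete, here is an illustrative submit_job tool-call payload using the required fields listed above. The specific image and command values are examples, not requirements:

Example submit_job arguments
{
  "workload_type": "training",
  "image": "pytorch/pytorch:2.4.0-cuda12.1-cudnn9-runtime",
  "command": ["python", "train.py"]
}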

04

Long-running job pattern

GPU jobs can run for seconds to hours. MCP tool calls are synchronous, so submit_job returns immediately with a job_id and the agent polls get_job until the job reaches a terminal state. The tool descriptions tell the model exactly how to do this — you do not need to prompt it manually.

Agent workflow (pseudocode)
# 1. Submit
result = submit_job(
    workload_type="training",
    image="pytorch/pytorch:2.4.0-cuda12.1-cudnn9-runtime",
    command=["python", "train.py"],
)
job_id = result.job_id

# 2. Poll until done
while True:
    job = get_job(job_id=job_id)
    if job.status in ("completed", "failed", "cancelled"):
        break
    sleep(10)

# 3. Read output
logs = get_job_logs(job_id=job_id)
print(logs.stdout)
Use estimate_job before submit_job when cost matters — it returns the expected tier, region, and credit cost without starting a real job.
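
A sketch of that estimate-first pattern, in the same pseudocode style as above. The result field names (credit_cost, tier, region) are assumptions for illustration; check the actual response shape returned by estimate_job:

Estimate before submit (pseudocode)
params = dict(
    workload_type="training",
    image="pytorch/pytorch:2.4.0-cuda12.1-cudnn9-runtime",
    command=["python", "train.py"],
)

# Dry run: same parameters, no job started
est = estimate_job(**params)

# Only submit if the estimated cost fits the budget (hypothetical field names)
if est.credit_cost <= BUDGET:
    result = submit_job(**params)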

05

Troubleshooting

Most issues come down to a missing API key, a stale npx cache, or a host that was not restarted after config changes.

  • Error: JUNGLE_GRID_API_KEY is required — the env block in your host config is missing or the key name has a typo. Check the exact field name matches JUNGLE_GRID_API_KEY.
  • Tools not appearing in the host — restart the AI host after editing the config file. Some hosts require a full quit-and-reopen, not just a reload.
  • Stale package version — clear the npx cache with npx clear-npx-cache or pin a version in the args array: ["-y", "@jungle-grid/mcp@0.1.0"].
  • Verify the server starts correctly by running it directly in a terminal before adding it to the host config.
Verify the server starts
JUNGLE_GRID_API_KEY=jg_... npx @jungle-grid/mcp
Inspect tools with MCP Inspector
npx @modelcontextprotocol/inspector npx @jungle-grid/mcp