Overview
Features
**LLM-initiated task orchestration**
Start and manage deep research tasks or task groups directly from an LLM client via MCP.
**Batch task/group support**
Coordinate and execute grouped tasks as a batch through the MCP server.
**Proxy to hosted MCP endpoint**
Proxies requests to the hosted MCP server at `task-mcp.parallel.ai/mcp`, so clients can reach it through a single local or deployed URL.
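A minimal sketch of what this pass-through might look like, assuming a standard Cloudflare Workers `fetch` handler (a Worker runtime is implied by the `wrangler dev` workflow below; the actual implementation may differ):

```ts
// Illustrative sketch only: forward /mcp traffic to the hosted endpoint.
export default {
  async fetch(request: Request): Promise<Response> {
    const incoming = new URL(request.url);
    if (incoming.pathname === "/mcp") {
      // Preserve method, headers, body, and query string when forwarding.
      const upstream = new URL("https://task-mcp.parallel.ai/mcp");
      upstream.search = incoming.search;
      return fetch(new Request(upstream.toString(), request));
    }
    return new Response("Not found", { status: 404 });
  },
};
```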
**Local development workflow**
Supports local testing with `wrangler dev` and the Model Context Protocol Inspector; connect to `http://localhost:8787/mcp`.
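Assuming a standard Wrangler project, the local loop looks roughly like this, run from the repo root:

```sh
# Terminal 1: start the Worker locally on http://localhost:8787
npx wrangler dev

# Terminal 2: launch the MCP Inspector, then point it at
# http://localhost:8787/mcp to list and invoke the server's tools
npx @modelcontextprotocol/inspector
```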
**Configurable MCP server entry**
Demonstrates MCP server configuration via JSON (`mcpServers`) for easy setup.
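The exact shape of an `mcpServers` entry varies by client; the server name below (`parallel-tasks`) is illustrative, and this sketch assumes a client that accepts URL-based (HTTP) server entries. Swap in `https://task-mcp.parallel.ai/mcp` to target the hosted endpoint instead of a local `wrangler dev` instance:

```json
{
  "mcpServers": {
    "parallel-tasks": {
      "url": "http://localhost:8787/mcp"
    }
  }
}
```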
Who Is This For?
- **Developers:** Trigger deep research tasks and task groups from LLM clients and explore the Parallel APIs.
- **LLM engineers:** Integrate with LLM clients to orchestrate parallel tasks and experiments.
- **Prototype engineers:** Prototype production workflows using the Parallel APIs and MCP in a test environment.