- Create Deep Research Task - Initiates a deep research task and returns details for viewing progress.
- Create Task Group - Initiates a task group to enrich multiple items in parallel.
- Get Result - Retrieves the results of both deep research tasks and task groups in an LLM-friendly format.
- Choose a data source to start with - See Enrichment Data Sources and Destinations
- Initiate your tasks - After you have your initial data, the MCP can initiate the deep research or task group(s). See the Use Cases below for inspiration.
- Analyze the results - The LLM will provide a link to view the progress of the spawned work as results come in. After everything is completed, prompt the LLM to analyze the results to review the work done and answer your questions.
Enrichment Data Sources and Destinations
The task group tool can be used directly from LLM memory, but it is often used in combination with a data source. We’ve identified the following data sources that work well with the Task Group tool:
- Upload tabular files - You can use the Task MCP with Excel sheets or CSVs. Some LLM clients (such as ChatGPT) allow uploading Excel or CSV files and working with them; availability differs per client.
- Connect with databases - There are several MCPs available that allow your LLM to retrieve data from your database. For example, Supabase MCP and Neon MCP.
- Connect with documents - Documents may contain vital initial information to start a task group; for example, via the Notion MCP or Linear MCP.
- Connect with web search data - Parallel Search MCP or other Web Tools MCPs or features can be used to get an initial list of items, which is often a great starting point for a Task Group.
Use Cases
We see two main use cases for the Task MCP. On the one hand, it makes Parallel APIs accessible to anyone who needs more reliable and deeper research or enrichment without any coding skills, significantly lowering the barrier to using our product. On the other hand, it is a great tool for developers to get to know our product by experimenting with different use cases and seeing output quality for different configurations before writing a single line of code. Below are some examples of using the Task MCP (sometimes in combination with the Web Tools MCP and/or other MCPs) for both of these use cases.
A) Day-to-day data enrichment and research:
- Sentiment analysis for ecommerce products
- Improving product listings for a web store
- Fact checking
- Deep researching every major MCP client and creating a matrix of the results
- Reddit sentiment analysis
- Comparing the output quality between two processors
- Testing and iterating on entity resolution for social media profiles
- Performing 100 deep research tasks and analyzing result quality
Installation
The Task MCP can be installed in any MCP client. The server URL is: https://task-mcp.parallel.ai/mcp
The Task MCP can also be used programmatically by providing your Parallel API key in the Authorization header as a Bearer token.
Cursor
Add to ~/.cursor/mcp.json or .cursor/mcp.json (project-specific):
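The configuration block was lost in extraction; a minimal sketch, assuming the standard Cursor mcpServers format (the server name parallel-task-mcp is an arbitrary label you can change):

```json
{
  "mcpServers": {
    "parallel-task-mcp": {
      "url": "https://task-mcp.parallel.ai/mcp"
    }
  }
}
```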
VS Code
Add to settings.json in VS Code:
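A sketch of the VS Code configuration, assuming the mcp.servers shape with the type: "http" field described in the Troubleshooting section below (the server name parallel-task-mcp is an arbitrary label):

```json
{
  "mcp": {
    "servers": {
      "parallel-task-mcp": {
        "type": "http",
        "url": "https://task-mcp.parallel.ai/mcp"
      }
    }
  }
}
```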
Claude Desktop / Claude.ai
Go to Settings → Connectors → Add Custom Connector, and fill in the connector name and the server URL https://task-mcp.parallel.ai/mcp.
Claude Code
Run this command in your terminal:
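The command itself was lost in extraction; a sketch for Claude Code, assuming its claude mcp add command with HTTP transport (the server name parallel-task-mcp is an arbitrary label):

```shell
claude mcp add --transport http parallel-task-mcp https://task-mcp.parallel.ai/mcp
```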
Windsurf
Add to your Windsurf MCP configuration:
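A sketch of the Windsurf configuration, assuming the serverUrl property noted in the Troubleshooting section below (the server name parallel-task-mcp is an arbitrary label):

```json
{
  "mcpServers": {
    "parallel-task-mcp": {
      "serverUrl": "https://task-mcp.parallel.ai/mcp"
    }
  }
}
```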
Cline
Go to the MCP Servers section → Remote Servers → Edit Configuration:
Gemini CLI
Add to ~/.gemini/settings.json:
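A sketch of the Gemini CLI configuration, assuming its mcpServers entry with an httpUrl property for HTTP-transport servers (the server name parallel-task-mcp is an arbitrary label):

```json
{
  "mcpServers": {
    "parallel-task-mcp": {
      "httpUrl": "https://task-mcp.parallel.ai/mcp"
    }
  }
}
```

If OAuth is not working in Gemini CLI, see the Troubleshooting section for an API-key alternative.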
ChatGPT
WARNING: Developer Mode must be enabled, and this feature may not be available to everyone. Additionally, MCPs in ChatGPT are experimental and may not work reliably. First, go to Settings → Connectors → Advanced Settings, and turn on Developer Mode. Then, in connector settings, click Create and fill in the server URL.
Best Practices
Choose enabled MCPs carefully
Be careful about which tools and features you have enabled in your MCP client. When using Parallel in combination with many other tools, the increased context size may degrade output quality. Additionally, the LLM may prefer standard web search or deep research over Parallel if both are enabled, so it is recommended to turn off other web or deep-research tools, or to explicitly mention that you want to use the Parallel MCPs.
Limit data source context size
The Task MCP can be a powerful tool for batch deep research, but it is still constrained by the context window size and max output tokens of the LLM in use. Design your prompts and tool calls so that they do not overflow either of these limits, or you may experience failures, degraded performance, or lower output quality. If you want to use Parallel with large datasets, it is recommended to use the API or other no-code integrations; the Task MCP is designed for smaller parallel tasks and experimentation, and only works with smaller datasets.
Follow up
Currently, the Task MCP only initiates Deep Research tasks and Task Groups; it does not wait for them to complete. The status and/or results can be fetched with a follow-up tool call after the research is complete. The asynchronous nature of the Task MCP allows initiating several deep research tasks and task groups without overflowing the context window. To perform multiple tasks or batches in a workflow, you need to reply each time to verify that a task is complete and to initiate the next step. We are working on improving this.
Use with larger models only
While our Web Tools MCP is designed to also work well with smaller models (such as GPT OSS 20B), the Task MCP requires strong reasoning capability to use its tools, so it is recommended for use with larger models only (such as GPT-5 or Claude Sonnet 4.5). Smaller models may produce degraded output quality.
Troubleshooting
Common Installation Issues
Cline: 'Authorization Error redirect_uri must be https'
Gemini CLI: Where to provide API key
Gemini CLI uses HTTP MCPs and authenticates via OAuth. If OAuth isn’t working, you can provide your API key directly.
Solution: Use the mcp-remote proxy with your API key. Add this to ~/.gemini/settings.json and replace YOUR-PARALLEL-API-KEY with your key from platform.parallel.ai.
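The configuration block was lost in extraction; a sketch assuming mcp-remote is invoked via npx and passes the API key with its --header flag (the server name parallel-task-mcp is an arbitrary label):

```json
{
  "mcpServers": {
    "parallel-task-mcp": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-remote",
        "https://task-mcp.parallel.ai/mcp",
        "--header",
        "Authorization: Bearer YOUR-PARALLEL-API-KEY"
      ]
    }
  }
}
```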
VS Code: Incorrect configuration format
VS Code requires a specific configuration format; a common mistake is using the Cursor-style property names. Note: VS Code uses mcp.servers (not mcpServers) and requires the type: "http" field.
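The incorrect/correct example blocks were lost in extraction. The incorrect form is the Cursor-style top-level mcpServers object with only a url property; a sketch of the correct VS Code form, per the property names stated here (the server name parallel-task-mcp is an arbitrary label):

```json
{
  "mcp": {
    "servers": {
      "parallel-task-mcp": {
        "type": "http",
        "url": "https://task-mcp.parallel.ai/mcp"
      }
    }
  }
}
```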
Windsurf: Configuration location and format
Windsurf uses a different configuration format than Cursor. Note: Windsurf uses serverUrl instead of url. Add this to your Windsurf MCP configuration file.
Connection timeout or 'server unavailable' errors
Tools not appearing in the IDE
If the MCP installs but tools don’t show up:
- Restart your IDE completely (not just reload)
- Check configuration syntax: Ensure valid JSON with no trailing commas
- Verify the server URL: Must be exactly https://task-mcp.parallel.ai/mcp
- Check IDE logs: Look for MCP-related errors in your IDE’s output/debug panel