When to use Parallel MCPs?

Our MCP servers are the best way to explore what's possible with our APIs: they let you use complex APIs without prior knowledge and compare results. The Parallel MCP servers expose Parallel APIs to AI assistants and large language model (LLM) workflows, delivering high-quality, relevant results from the web while optimizing for the price-performance balance your AI applications need at scale. As the following table shows, our MCPs are useful for quick experimentation with deep research and task groups, as well as for daily use.
| Use Case | What |
| --- | --- |
| Agentic applications where low-latency search is a tool call | Search MCP |
| Daily use for everyday deep-research tasks in chat-based clients | Task MCP |
| Enriching a dataset (e.g. a CSV) with web data via chat-based clients | Task MCP |
| Running benchmarks on Parallel processors across a series of queries | Task MCP |
| Building high-scale production apps that integrate with Parallel APIs | Search MCP and Tasks |

Available MCP Servers

Parallel offers two MCP servers that can be installed in any MCP client. They can also be used programmatically by providing your Parallel API key in the Authorization header as a Bearer token.
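For programmatic use, every request to a server must carry your API key as a Bearer token. A minimal sketch of building such a request, assuming the standard MCP Streamable HTTP transport (JSON-RPC 2.0 over POST); the `tools/list` method comes from the MCP specification, and `PARALLEL_API_KEY` is a placeholder for your real key:

```python
import json

SEARCH_MCP_URL = "https://search-mcp.parallel.ai/mcp"

def build_mcp_request(api_key: str, method: str, params: dict, request_id: int = 1):
    """Build headers and a JSON-RPC 2.0 body for a call to an MCP server."""
    headers = {
        "Authorization": f"Bearer {api_key}",  # Parallel API key as Bearer token
        "Content-Type": "application/json",
        # MCP servers may answer with JSON or an SSE stream
        "Accept": "application/json, text/event-stream",
    }
    body = json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,
        "params": params,
    })
    return headers, body

# Example: ask the Search MCP which tools it exposes.
headers, body = build_mcp_request("PARALLEL_API_KEY", "tools/list", {})
```

Send `body` with `headers` to the server URL using any HTTP client; the same pattern works for both the Search MCP and the Task MCP.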

Search MCP

The Search MCP provides drop-in web search capabilities for any MCP-aware model. It invokes the Search API endpoint with an agentic mode optimized for agent workflows.

Server URL: https://search-mcp.parallel.ai/mcp

View Search MCP Documentation →

Task MCP

The Task MCP enables deep research tasks and data enrichment workflows. It provides access to the Task API for generating comprehensive reports and transforming datasets with web intelligence.

Server URL: https://task-mcp.parallel.ai/mcp

View Task MCP Documentation →

Quick Installation

Both MCPs can be installed in popular AI assistants and IDEs; see the Search MCP and Task MCP documentation for detailed installation instructions for your specific platform. Cursor and VS Code users can use these deep links for one-click installation:

Cursor:
VS Code:
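In clients that read a JSON MCP config (for example, Cursor's `.cursor/mcp.json`), a remote-server entry might look like the sketch below; the server names `parallel-search` and `parallel-task` are illustrative, and the exact file location and auth handling depend on your client:

```json
{
  "mcpServers": {
    "parallel-search": { "url": "https://search-mcp.parallel.ai/mcp" },
    "parallel-task": { "url": "https://task-mcp.parallel.ai/mcp" }
  }
}
```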