## When to use Parallel MCPs?
Our MCP servers are the best way to explore what’s possible with our APIs: they let you use complex APIs without prior knowledge and compare results. The Parallel MCP servers expose Parallel APIs to AI assistants and large language model (LLM) workflows, delivering high-quality, relevant results from the web while optimizing for the price-performance balance your AI applications need at scale. As the table below shows, the MCPs are useful both for quick experimentation with deep research and task groups and for daily use.

| Use Case | MCP Server |
|---|---|
| Agentic applications where low-latency search is a tool call | Search MCP |
| Daily use for everyday deep-research tasks in chat-based clients | Task MCP |
| Enriching a dataset (e.g., a CSV) with web data via chat-based clients | Task MCP |
| Running benchmarks on Parallel processors across a series of queries | Task MCP |
| Building high-scale production apps that integrate with Parallel APIs | Search MCP and Task MCP |
## Available MCP Servers
Parallel offers two MCP servers that can be installed in any MCP client. They can also be used programmatically by providing your Parallel API key in the Authorization header as a Bearer token.
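As an illustration, here is a minimal sketch of programmatic access using the MCP Python SDK (`pip install mcp`). It assumes the servers speak the streamable HTTP transport; the Bearer-token Authorization header is the only Parallel-specific detail:

```python
import asyncio
import os

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

SEARCH_MCP_URL = "https://search-mcp.parallel.ai/mcp"


async def main() -> None:
    # Pass your Parallel API key as a Bearer token in the Authorization header.
    headers = {"Authorization": f"Bearer {os.environ['PARALLEL_API_KEY']}"}

    async with streamablehttp_client(SEARCH_MCP_URL, headers=headers) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Discover the tools the server exposes before calling any of them.
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", tool.description)


asyncio.run(main())
```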
### Search MCP

The Search MCP provides drop-in web search capabilities for any MCP-aware model. It invokes the Search API endpoint with an agentic mode optimized for agent workflows.
Server URL: https://search-mcp.parallel.ai/mcp
View Search MCP Documentation →
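Continuing from the session opened in the sketch above, invoking the server's search tool looks roughly like the following. The tool name (`web_search`) and argument shape are assumptions for illustration only; use the output of `list_tools()` for the actual schema:

```python
# Hypothetical call: replace "web_search" and the argument names with the
# schema reported by list_tools() on the live Search MCP server.
result = await session.call_tool(
    "web_search",
    arguments={"objective": "Summarize recent coverage of the Parallel Search API"},
)
for block in result.content:
    if block.type == "text":
        print(block.text)
```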
### Task MCP
The Task MCP enables deep research tasks and data enrichment workflows. It provides access to the Task API for generating comprehensive reports and transforming datasets with web intelligence.

Server URL: https://task-mcp.parallel.ai/mcp
View Task MCP Documentation →
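The same client pattern applies to the Task MCP. Below is a hedged sketch in which the tool name (`create_deep_research_task`) and its arguments are placeholders for illustration, not the server's actual schema:

```python
import asyncio
import os

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

TASK_MCP_URL = "https://task-mcp.parallel.ai/mcp"


async def run_research(prompt: str) -> None:
    headers = {"Authorization": f"Bearer {os.environ['PARALLEL_API_KEY']}"}
    async with streamablehttp_client(TASK_MCP_URL, headers=headers) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Placeholder tool name and arguments: inspect list_tools() for the
            # real deep-research tool exposed by the Task MCP.
            result = await session.call_tool(
                "create_deep_research_task",
                arguments={"input": prompt},
            )
            for block in result.content:
                if block.type == "text":
                    print(block.text)


asyncio.run(run_research("Compile a report on recent MCP adoption by AI assistants"))
```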