Documentation Index
Fetch the complete documentation index at: https://docs.parallel.ai/llms.txt
Use this file to discover all available pages before exploring further.
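Fetching the index (and any docs page) can be scripted with nothing but the standard library. A minimal sketch that only builds the requests, so nothing is sent until you choose to; the URLs come from this page, and the helper names are illustrative:

```python
from urllib.request import Request

DOCS_BASE = "https://docs.parallel.ai"

def index_request() -> Request:
    """Request for the machine-readable documentation index (llms.txt)."""
    return Request(f"{DOCS_BASE}/llms.txt")

def markdown_url(page_url: str) -> str:
    """Markdown variant of a docs page URL, made by appending .md."""
    return page_url + ".md"

def markdown_request(page_url: str) -> Request:
    """Ask for a docs page as Markdown via the Accept header instead."""
    return Request(page_url, headers={"Accept": "text/markdown"})

# Pass any of these Request objects to urllib.request.urlopen to fetch.
```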
For AI agents: a documentation index is available at https://docs.parallel.ai/llms.txt. The full text of all docs is at https://docs.parallel.ai/llms-full.txt. You may also fetch any page as Markdown by appending .md to its URL or by sending an Accept: text/markdown header.

When to use Parallel MCPs?
Our MCP servers are the fastest way to explore what's possible with the Parallel APIs: you can use them without writing any code and compare results across use cases before integrating at scale. The Parallel MCP servers expose Parallel APIs to AI assistants and large language model (LLM) workflows, delivering high-quality, relevant results from the web while optimizing for the price-performance balance your AI applications need at scale. As the following table shows, our MCPs are useful for quick experimentation with deep research and task groups, as well as for daily use.

| Use Case | Recommended Option |
|---|---|
| Agentic applications where low-latency search is a tool call | Search MCP |
| Daily use for everyday deep-research tasks in chat-based clients | Task MCP |
| Enriching a dataset (eg. a CSV) with web data via chat-based clients | Task MCP |
| Running benchmarks on Parallel processors across a series of queries | Task MCP |
| Building high-scale production apps that integrate with Parallel APIs | Search and Task APIs |
Available MCP Servers
Parallel offers two MCP servers that can be installed in any MCP client. They can also be used programmatically by providing your Parallel API key as a Bearer token in the Authorization header.

Looking for help with Parallel documentation? Try our Docs MCP to get AI-assisted answers about Parallel's documentation. It is for navigating our docs only; it does not provide access to the Parallel APIs.
Search MCP
The Search MCP provides drop-in web search capabilities for any MCP-aware model. It invokes the Search API in basic mode, which is tuned for low-latency responses inside agent loops.
Server URLs:
- https://search.parallel.ai/mcp (default): free for anonymous use; an optional Bearer API key raises rate limits. OAuth is not advertised here, so clients that support OAuth sign-in will not prompt for it on this endpoint.
- https://search.parallel.ai/mcp-oauth: the OAuth-capable endpoint. Use this if you want OAuth instead of a Bearer key, or if you need enforced authentication (organization-wide deployments, Zero Data Retention, and so on). Anonymous requests return 401.
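In an MCP client, choosing between the two endpoints is just a matter of which URL you configure. A sketch of a Cursor-style mcpServers entry with an optional Bearer key; the exact config schema varies by client, and the headers block can be dropped for anonymous use of the default endpoint:

```json
{
  "mcpServers": {
    "parallel-search": {
      "url": "https://search.parallel.ai/mcp",
      "headers": { "Authorization": "Bearer YOUR_PARALLEL_API_KEY" }
    }
  }
}
```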
Task MCP
The Task MCP enables deep research tasks and data enrichment workflows. It provides access to the Task API for generating comprehensive reports and transforming datasets with web intelligence.

Server URL: https://task-mcp.parallel.ai/mcp
View Task MCP Documentation →
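The Task MCP is configured the same way, pointed at the server URL above. A hypothetical mcpServers entry; as before, the exact schema varies by client:

```json
{
  "mcpServers": {
    "parallel-task": {
      "url": "https://task-mcp.parallel.ai/mcp",
      "headers": { "Authorization": "Bearer YOUR_PARALLEL_API_KEY" }
    }
  }
}
```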
Quick Installation
Both MCPs can be installed in popular AI assistants and IDEs. For detailed installation instructions for your specific platform, visit the Search MCP and Task MCP pages.

Let your agent install it for you
If you already have an agent like Claude Code, Cursor, or Codex in front of you, paste the prompt below and the agent will find the right config for your client and wire it up.

Search MCP (no API key required): Search MCP install prompt
Task MCP install prompt
One-Click Install
Install in any of the following clients with a single click.

Search MCP (no API key required):

Install in Cursor
One-click install for Cursor.
Install in VS Code
One-click install for VS Code.
Install in LM Studio
One-click install for LM Studio.
Install in Goose
One-click install for Goose.
Task MCP:
Install in Cursor
One-click install for Cursor.
Install in VS Code
One-click install for VS Code.
Install in LM Studio
One-click install for LM Studio.
Install in Goose
One-click install for Goose.
Other clients: Claude Code, Codex CLI, Claude Desktop, Windsurf, Zed, Gemini CLI, Warp, and Kiro don't expose deep-link install URLs; they use CLI commands or a JSON config file instead. See the per-platform instructions on the Search MCP and Task MCP pages.