When building an agent or chat experience that requires search, deep research, or batch task processing, integrating with our MCPs is a good choice. If you need more control over the reasoning and tool descriptions for niche use cases (when the system prompt isn't sufficient), or want to limit or simplify the available tools, it may be better to use the APIs directly and build your own tools, for example with the AI SDK; the MCP-to-AI-SDK is an excellent starting point in that case. To use the Parallel MCP servers programmatically, either perform the OAuth flow to obtain an API key, or pass your Parallel API key directly as a Bearer token in the Authorization header.
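As a minimal sketch of the direct Bearer-token option, here is how a request to the Search MCP endpoint can be authorized using only the Python standard library. The header shape is the point; the request is constructed but not sent, and a real MCP session would first send an `initialize` request per the protocol:

```python
import json
import urllib.request

PARALLEL_API_KEY = "YOUR_PARALLEL_API_KEY"  # placeholder, not a real key

# Build (but do not send) a Streamable HTTP request to the Search MCP server,
# authenticating with the Parallel API key as a Bearer token.
request = urllib.request.Request(
    "https://search-mcp.parallel.ai/mcp",
    data=json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        # Standard MCP method; a real session sends "initialize" first.
        "method": "tools/list",
    }).encode(),
    headers={
        "Content-Type": "application/json",
        "Accept": "application/json, text/event-stream",
        "Authorization": f"Bearer {PARALLEL_API_KEY}",
    },
    method="POST",
)

print(request.get_header("Authorization"))
```

The same header works against `task-mcp.parallel.ai/mcp`; only the URL changes.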

OpenAI Integration

Search MCP with OpenAI

curl https://api.openai.com/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-5",
    "tools": [
      {
        "type": "mcp",
        "server_label": "parallel_web_search",
        "server_url": "https://search-mcp.parallel.ai/mcp",
        "headers": {
          "Authorization": "Bearer YOUR_PARALLEL_API_KEY"
        },
        "require_approval": "never"
      }
    ],
    "input": "Who is the CEO of Apple?"
  }'

Task MCP with OpenAI

curl https://api.openai.com/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-5",
    "tools": [
      {
        "type": "mcp",
        "server_label": "parallel_task",
        "server_url": "https://task-mcp.parallel.ai/mcp",
        "headers": {
          "Authorization": "Bearer YOUR_PARALLEL_API_KEY"
        },
        "require_approval": "never"
      }
    ],
    "input": "Create a deep research task about the latest developments in AI safety research"
  }'

Anthropic Integration

Search MCP with Anthropic

curl https://api.anthropic.com/v1/messages \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "anthropic-beta: mcp-client-2025-04-04" \
  -d '{
    "model": "claude-sonnet-4-5",
    "max_tokens": 8000,
    "messages": [
      {
        "role": "user",
        "content": "What is the latest in AI research?"
      }
    ],
    "mcp_servers": [
      {
        "type": "url",
        "url": "https://search-mcp.parallel.ai/mcp",
        "name": "parallel-web-search",
        "authorization_token": "YOUR_PARALLEL_API_KEY"
      }
    ]
  }'

Task MCP with Anthropic

curl https://api.anthropic.com/v1/messages \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "anthropic-beta: mcp-client-2025-04-04" \
  -d '{
    "model": "claude-sonnet-4-5",
    "max_tokens": 8000,
    "messages": [
      {
        "role": "user",
        "content": "Create a deep research task about the latest developments in AI safety research"
      }
    ],
    "mcp_servers": [
      {
        "type": "url",
        "url": "https://task-mcp.parallel.ai/mcp",
        "name": "parallel-task",
        "authorization_token": "YOUR_PARALLEL_API_KEY"
      }
    ]
  }'
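The two providers expect the same server information in slightly different shapes: OpenAI takes an `mcp` entry in the `tools` array with the key inside a `headers` map, while Anthropic takes an `mcp_servers` entry with a bare `authorization_token`. A small helper (hypothetical, for illustration; field names follow the curl examples above) can generate both from one set of values:

```python
def mcp_config(provider: str, label: str, url: str, api_key: str) -> dict:
    """Build the provider-specific config for a Parallel MCP server."""
    if provider == "openai":
        # Goes inside the top-level "tools" array of a Responses API request.
        return {
            "type": "mcp",
            "server_label": label,
            "server_url": url,
            "headers": {"Authorization": f"Bearer {api_key}"},
            "require_approval": "never",
        }
    if provider == "anthropic":
        # Goes inside the top-level "mcp_servers" array of a Messages API
        # request (requires the "mcp-client-2025-04-04" beta header).
        return {
            "type": "url",
            "url": url,
            "name": label,
            "authorization_token": api_key,
        }
    raise ValueError(f"unknown provider: {provider!r}")


openai_tool = mcp_config(
    "openai", "parallel_task", "https://task-mcp.parallel.ai/mcp", "KEY")
anthropic_server = mcp_config(
    "anthropic", "parallel-task", "https://task-mcp.parallel.ai/mcp", "KEY")
print(openai_tool["headers"])
print(anthropic_server["authorization_token"])
```

Note the auth difference: OpenAI forwards arbitrary headers (so the `Bearer ` prefix is yours to add), while Anthropic takes the raw token and adds the prefix itself.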

Limitations

Context Window Constraints

The Task MCP is designed for smaller parallel tasks and experimentation, constrained by:
  • Context window size - Large datasets may overflow the available context
  • Max output tokens - Results must fit within model output limitations
  • Data source size - Initial data should be appropriately sized for the model
For large-scale operations, consider using the Parallel APIs directly or other integration methods.
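For instance, a large research run can go straight to the Task API over HTTP rather than through the MCP tool, which sidesteps the model's context and output limits. A minimal sketch, assuming a v1 task-run endpoint and an `x-api-key` header; verify the endpoint, header, and field names against the current Parallel API reference before use (the request is constructed but not sent):

```python
import json
import urllib.request

PARALLEL_API_KEY = "YOUR_PARALLEL_API_KEY"  # placeholder

# Assumed endpoint and field names -- check the Parallel API reference.
payload = {
    "input": "Latest developments in AI safety research",
    "processor": "base",  # assumed processor name
}
request = urllib.request.Request(
    "https://api.parallel.ai/v1/tasks/runs",
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "x-api-key": PARALLEL_API_KEY,
    },
    method="POST",
)
print(request.full_url)
```

Because the result arrives out-of-band, a direct integration would poll or await the run's result endpoint instead of relying on a follow-up chat turn.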

Asynchronous Nature

Due to current MCP/LLM client limitations:
  • Tasks run asynchronously but don’t automatically wait for completion
  • Users must explicitly request results in follow-up turns
  • Multiple workflow steps require manual progression through conversation turns

Model Requirements

  • Search MCP - Works well with smaller models (GPT OSS 20B+)
  • Task MCP - Requires larger models with strong reasoning capabilities (e.g. GPT-5, Claude Sonnet 4.5)
  • Smaller models may result in degraded output quality for complex tasks
We are actively working to remove these limitations; reach out to be among the first to try improvements as the platform evolves.