The Parallel Search MCP Server is now generally available, making our Search API instantly accessible to any MCP-aware model as a drop-in tool. This hosted endpoint takes flexible natural-language objectives as input and returns AI-native search results with extended webpage excerpts. Built on Parallel’s proprietary web infrastructure, it offers plug-and-play compatibility with OpenAI, Anthropic, and other MCP clients at production scale. Learn more.
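As a rough sketch of what drop-in use looks like, the hosted server can be registered as a remote MCP tool in any MCP-aware client. The example below uses OpenAI’s Responses API; the server URL shown is an assumption, so take the exact endpoint from the Parallel docs.

from openai import OpenAI

client = OpenAI()  # uses OPENAI_API_KEY from the environment

response = client.responses.create(
    model="gpt-4.1",
    tools=[
        {
            "type": "mcp",
            "server_label": "parallel_search",
            "server_url": "https://search-mcp.parallel.ai/mcp",  # assumed placeholder; see docs
            "headers": {"x-api-key": "YOUR_PARALLEL_API_KEY"},
            "require_approval": "never",
        }
    ],
    input="Find recent coverage of Parallel Web Systems and summarize it.",
)
print(response.output_text)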
Source Policy is now available for both the Parallel Task API and Search API - giving you granular control over which sources your AI agents access and how results are prioritized. Source Policy lets you define exactly which domains your research should include or exclude. Learn more in our latest blog.
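As a hedged illustration of the request shape on the Search API (the source_policy field and its include_domains/exclude_domains keys are assumptions here; confirm the exact schema in the docs):

import os
import requests

# Sketch: constrain a Search API request to specific domains.
# NOTE: the "source_policy" shape below is an assumption, not the documented schema.
resp = requests.post(
    "https://api.parallel.ai/alpha/search",
    headers={
        "Content-Type": "application/json",
        "x-api-key": os.environ["PARALLEL_API_KEY"],
    },
    json={
        "objective": "When was the United Nations established?",
        "processor": "base",
        "max_results": 5,
        "source_policy": {
            "include_domains": ["un.org"],        # assumed key
            "exclude_domains": ["wikipedia.org"]  # assumed key
        },
    },
)
print(resp.json())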
Today we’re launching the Task Group API in public beta for large-scale web research workloads. When your pipeline needs hundreds or thousands of independent Parallel Tasks, the new Group API wraps operations into a single batch with unified monitoring, intelligent failure handling, and real-time results streaming. These batch operations are ideal for bulk CRM enrichment, due diligence, or competitive intelligence workflows. Learn more in our latest blog.
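Purely as an illustrative sketch of the batching pattern (the endpoint paths and payload fields below are assumptions, not the documented Group API surface; see the docs for the real interface):

import os
import requests

API = "https://api.parallel.ai"
HEADERS = {"x-api-key": os.environ["PARALLEL_API_KEY"], "Content-Type": "application/json"}

companies = ["Acme Corp", "Globex", "Initech"]

# 1. Create a group to hold the batch (assumed endpoint).
group = requests.post(f"{API}/v1beta/tasks/groups", headers=HEADERS, json={}).json()

# 2. Add one independent task run per company (assumed endpoint and payload keys).
for name in companies:
    requests.post(
        f"{API}/v1beta/tasks/groups/{group['group_id']}/runs",
        headers=HEADERS,
        json={"input": f"Enrich CRM record for {name}", "processor": "core"},
    )

# 3. Monitor the batch as a single unit instead of polling each run (assumed endpoint).
status = requests.get(f"{API}/v1beta/tasks/groups/{group['group_id']}", headers=HEADERS).json()
print(status)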
Parallel Task API processors achieve state-of-the-art performance on BrowseComp, a challenging benchmark built by OpenAI to test web search agents’ deep research capabilities. Our best processor (Ultra) reaches 27% accuracy, outperforming human experts and all commercially available web search and deep research APIs - while being significantly cheaper. Learn more in our latest blog.
The Parallel Search API is now available in alpha - providing a tool for AI agents to search, rank, and extract information from the public web. Built on Parallel’s custom web crawler and index, the Search API takes flexible inputs (search objective and/or search queries) and returns LLM-ready ranked URLs with extended webpage excerpts. Learn more in our latest blog.
curl https://api.parallel.ai/alpha/search \
  -H "Content-Type: application/json" \
  -H "x-api-key: $PARALLEL_API_KEY" \
  -d '{
    "objective": "When was the United Nations established? Prefer UN'\''s websites.",
    "search_queries": [
      "Founding year UN",
      "Year of founding United Nations"
    ],
    "processor": "base",
    "max_results": 5,
    "max_chars_per_result": 1500
  }'
Bugs (1)
[Platform] Fixed an issue where copy-paste URL actions were incorrectly identified as copy-paste CSV actions.
The Parallel Chat API is now available in beta. The Chat API uses our rapidly growing web index to bring real-time, low-latency web research to interactive AI applications. It returns OpenAI ChatCompletions-compatible streaming text and JSON outputs, and drops easily into new and existing web research workflows. Learn more in our latest blog.
from openai import OpenAI

client = OpenAI(
    api_key="PARALLEL_API_KEY",          # Your Parallel API key
    base_url="https://beta.parallel.ai"  # Parallel's API beta endpoint
)

response = client.chat.completions.create(
    model="speed",  # Parallel model name
    messages=[
        {"role": "user", "content": "What does Parallel Web Systems do?"}
    ],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "reasoning_schema",
            "schema": {
                "type": "object",
                "properties": {
                    "reasoning": {
                        "type": "string",
                        "description": "Think step by step to arrive at the answer",
                    },
                    "answer": {
                        "type": "string",
                        "description": "The direct answer to the question",
                    },
                    "citations": {
                        "type": "array",
                        "items": {"type": "string"},
                        "description": "Sources cited to support the answer",
                    },
                },
            },
        },
    },
)

print(response.choices[0].message.content)
Bugs (1)
[Task API] Fixed an issue where the Task API was returning malformed schema formats.
Basis is a comprehensive suite of verification tools for understanding and validating Task API outputs, built on four core components:
Citations: Web URLs linking directly to source materials.
Reasoning: Detailed explanations justifying each output field.
Excerpts: Relevant text snippets from citation URLs.
Confidences: A calibrated measure of confidence classified into low, medium, or high categories.
Use Basis with Calibrated Confidences to power hybrid AI/human review workflows focused on low-confidence outputs - significantly increasing leverage, accuracy, and time efficiency. Read more in our latest blog post.
{ "field": "revenue", "citations": [ { "url": "https://www.microsoft.com/en-us/Investor/earnings/FY-2023-Q4/press-release-webcast", "excerpts": ["Microsoft reported fiscal year 2023 revenue of $211.9 billion, an increase of 7% compared to the previous fiscal year."] }, { "url": "https://www.sec.gov/Archives/edgar/data/789019/000095017023014837/msft-20230630.htm", "excerpts": ["Revenue was $211.9 billion for fiscal year 2023, up 7% compared to $198.3 billion for fiscal year 2022."] } ], "reasoning": "The revenue figure is consistent across both the company's investor relations page and their official SEC filing. Both sources explicitly state the fiscal year 2023 revenue as $211.9 billion, representing a 7% increase over the previous year.", "confidence": "high"}
The Parallel Task API enables state-of-the-art web research at scale, with the highest quality at every price point. State your research task in natural language and Parallel will do the rest of the heavy lifting - generating input/output schemas, finding relevant URLs, and extracting data in a structured format.
from parallel import Parallel
from pydantic import BaseModel, Field

class ProductInfo(BaseModel):
    use_cases: str = Field(
        description="A few use cases for the product."
    )
    differentiators: str = Field(
        description="3 unique differentiators for the product as a bullet list."
    )
    benchmarks: str = Field(
        description="Detailed benchmarks of the product reported by the company."
    )

client = Parallel()

result = client.task_run.execute(
    input="Parallel Web Systems Task API",
    output=ProductInfo,
    processor="core"
)

print(f"Product info: {result.output.parsed.model_dump_json(indent=2)}\n")

# Joining outside the f-string avoids the backslash-in-f-string restriction
# on Python versions before 3.12.
basis_json = "\n".join(b.model_dump_json(indent=2) for b in result.output.basis)
print(f"Basis: {basis_json}")
Our SDK is now available for Python, making it easy to integrate Parallel into your applications. The Python SDK is at parity with our Task API endpoints and simplifies request construction and response parsing.
When running Tasks with Parallel, choose from five processors - Lite, Base, Core, Pro, and Ultra. We’ve built distinct processor options so that you can optimize price, latency, and quality per task.
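For example, here is a quick sketch reusing the SDK call shown above: the same task run on two different processors, trading cost and latency for quality (the one-field schema is just for illustration).

from parallel import Parallel
from pydantic import BaseModel, Field

class Summary(BaseModel):
    summary: str = Field(description="A two-sentence summary of the company.")

client = Parallel()

# Run the same task on a fast, inexpensive processor and on the highest-quality one.
for processor in ("lite", "ultra"):
    result = client.task_run.execute(
        input="Parallel Web Systems",
        output=Summary,
        processor=processor,
    )
    print(f"{processor}: {result.output.parsed.summary}")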