Processors are the engines that execute Task Runs. The choice of Processor determines the run's performance profile and reasoning behavior. Any Task Run can be executed on any Processor.
Choose a processor based on task complexity and latency requirements. Use lite or base for simple enrichments, core for reliable accuracy on up to 10 output fields, and pro or ultra when reasoning depth is critical. For latency-sensitive workflows, append -fast to any processor name.
Each processor varies in performance characteristics and supported features. Use the tables below to compare standard and fast processors.
About “Max Fields” (the ~ symbol): The max fields column shows approximate limits because actual capacity depends on field complexity. Simple fields like dates or booleans use less capacity than complex fields requiring extensive research. A task with 5 complex analytical fields may require more processing than one with 15 simple lookup fields. Use these numbers as guidelines. If you’re near the limit and seeing quality issues, try a higher-tier processor.
| Processor | Latency | Strengths | Max Fields |
|---|---|---|---|
| lite | 10s - 60s | Basic metadata, fallback, low latency | ~2 fields |
| base | 15s - 100s | Reliable standard enrichments | ~5 fields |
| core | 60s - 5min | Cross-referenced, moderately complex outputs | ~10 fields |
| core2x | 60s - 10min | High-complexity, cross-referenced outputs | ~10 fields |
| pro | 2min - 10min | Exploratory web research | ~20 fields |
| ultra | 5min - 25min | Advanced multi-source deep research | ~20 fields |
| ultra2x | 5min - 50min | Difficult deep research | ~25 fields |
| ultra4x | 5min - 90min | Very difficult deep research | ~25 fields |
| ultra8x | 5min - 2hr | The most difficult deep research | ~25 fields |
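As a concrete illustration of how fields count toward these guidelines, the sketch below requests five simple fields on base. The structured-output form of task_spec used here ({"type": "json", "json_schema": ...}) is an assumption about the JSON output option; check the Task API reference for the exact shape. The same call works with any processor in the table above.

# A minimal sketch: five simple output fields, sized for the base processor (~5 fields).
# Assumption: output_schema accepts a JSON Schema wrapper of the form shown below;
# verify the exact task_spec shape against the Task API reference.
company_schema = {
    "type": "json",
    "json_schema": {
        "type": "object",
        "properties": {
            # Each top-level property counts toward the processor's field guideline.
            "founding_date": {"type": "string", "description": "Founding date, MM-YYYY"},
            "headquarters": {"type": "string", "description": "Headquarters city and country"},
            "industry": {"type": "string", "description": "Primary industry"},
            "employee_count": {"type": "string", "description": "Approximate headcount"},
            "website": {"type": "string", "description": "Company website URL"},
        },
        "required": ["founding_date", "headquarters", "industry", "employee_count", "website"],
        "additionalProperties": False,
    },
}

task_run = client.task_run.create(
    input="Parallel Web Systems (parallel.ai)",
    task_spec={"output_schema": company_schema},
    processor="base",  # five simple fields fits the ~5-field base guideline
)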
See Pricing for processor costs and all API rates.

Standard vs Fast Processors

Each processor is available in two variants: Standard and Fast. They differ in how they balance speed versus data freshness. To use a fast processor, append -fast to the processor name:
task_run = client.task_run.create(
    input="Parallel Web Systems (parallel.ai)",
    task_spec={"output_schema": "The founding date of the company"},
    processor="core-fast"  # Fast processor
)

What’s the Trade-off?

| Aspect | Standard Processors | Fast Processors |
|---|---|---|
| Latency | Higher | 2-5x faster |
| Data Freshness | Highest freshness (prioritizes live data) | Very fresh (optimized for speed) |
| Best For | Background jobs, accuracy-critical tasks | Interactive apps, agent workflows |
The trade-off is simple: fast processors optimize for speed, standard processors optimize for freshness. Both maintain high accuracy—the difference is in how they prioritize when retrieving data.

Why are Fast Processors Faster?

Fast processors are optimized for speed—they return results as quickly as possible while maintaining high accuracy. Standard processors prioritize data freshness and will wait longer to ensure the most up-to-date information when needed. In practice, you can expect 2-5x faster response times with fast processors compared to standard processors for the same tier. This makes fast processors ideal for interactive applications where users are waiting for results.

How Fresh is the Data?

Both processor types access very fresh data sufficient for most use cases. Our data is continuously updated, so for the vast majority of queries—company information, product details, professional backgrounds, market research—both will return accurate, current results.

When to prefer standard processors for freshness:
  • Real-time financial data (stock prices, exchange rates)
  • Breaking news or events from the last few hours
  • Rapidly changing information (live scores, election results)
  • Any use case where absolute data freshness is more important than speed

When to Use Each

Choose fast processors for interactive or agent-driven workflows where a user is waiting on the result. Prefer standard processors when any of the following apply (a minimal selection helper is sketched after this list):
  • Accuracy is paramount - When correctness matters much more than speed
  • Real-time data required - Stock prices, live scores, breaking news
  • Background/async jobs - Tasks running without user waiting
  • Research-heavy tasks - Deep research benefiting from live sources
  • High-volume async enrichments - Processing large datasets in the background
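In code, this decision often reduces to a single switch on whether a caller is waiting for the result. The helper below is not part of the Parallel SDK; it is a minimal, illustrative sketch.

def choose_processor(tier: str, interactive: bool) -> str:
    """Pick a processor name for a given tier (e.g. "lite", "base", "core", "pro", "ultra").

    Appends -fast for interactive, latency-sensitive work; keeps the standard,
    freshness-optimized variant for background and accuracy-critical jobs.
    Illustrative helper only; not part of the Parallel SDK.
    """
    return f"{tier}-fast" if interactive else tier

# A user-facing agent step: prioritize latency
print(choose_processor("core", interactive=True))   # core-fast
# An overnight enrichment job: prioritize freshness
print(choose_processor("pro", interactive=False))   # pro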

Examples

Processors can be used flexibly depending on the scope and structure of your task. The examples below show how to:
  • Use a single processor (like lite, base, core, pro, or ultra) matched to the input type and the reasoning depth the task requires.
  • Chain processors together to combine fast lookups with deeper synthesis.
This structure enables flexibility across a variety of tasks—whether you’re extracting metadata, enriching structured records, or generating analytical reports.

Sample Task for each Processor

task_run = client.task_run.create(
    input="Parallel Web Systems (parallel.ai)",
    task_spec={"output_schema":"The founding date of the company in the format MM-YYYY"},
    processor="lite"
)
print(f"Run ID: {task_run.run_id}")

run_result = client.task_run.result(task_run.run_id, api_timeout=3600)
print(run_result.output)

Sample Task for each Fast Processor

task_run = client.task_run.create(
    input="Parallel Web Systems (parallel.ai)",
    task_spec={"output_schema":"The founding date of the company in the format MM-YYYY"},
    processor="lite-fast"
)
print(f"Run ID: {task_run.run_id}")

run_result = client.task_run.result(task_run.run_id, api_timeout=3600)
print(run_result.output)
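To see the latency difference on your own workload, you can time the same task on both variants. The snippet below reuses only the calls shown above; actual speedups vary by task and tier, with 2-5x being the typical range.

import time

def timed_run(processor: str) -> float:
    """Run the founding-date task on the given processor and return elapsed seconds."""
    start = time.perf_counter()
    task_run = client.task_run.create(
        input="Parallel Web Systems (parallel.ai)",
        task_spec={"output_schema": "The founding date of the company in the format MM-YYYY"},
        processor=processor,
    )
    client.task_run.result(task_run.run_id, api_timeout=3600)  # block until the run completes
    return time.perf_counter() - start

standard_seconds = timed_run("core")
fast_seconds = timed_run("core-fast")
print(f"core: {standard_seconds:.0f}s  core-fast: {fast_seconds:.0f}s")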

Multi-Processor Workflows

You can combine processors in sequence to support more advanced workflows. Start by retrieving basic information with base:
task_run_base = client.task_run.create(
    input="Pfizer",
    task_spec={"output_schema":"Who are the current executive leaders at Pfizer? Include their full name and title. Ensure that you retrieve this information from a reliable source, such as major news outlets or the company website."},
    processor="base"
)
print(f"Run ID: {task_run_base.run_id}")

base_result = client.task_run.result(task_run_base.run_id, api_timeout=3600)
print(base_result.output)
Then use the result as input to pro to generate detailed background information:
import json

task_run = client.task_run.create(
    input=json.dumps(base_result.output.content),
    task_spec={"output_schema":"For the executive provided, find their professional background tenure at their current company, and notable strategic responsibilities."},
    processor="pro"
)
print(f"Run ID: {task_run.run_id}")

run_result = client.task_run.result(task_run.run_id, api_timeout=3600)
print(run_result.output)
This lets you use a lower compute processor for initial retrieval, then switch to a more capable one for analysis and context-building.
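As a rough sketch of how this pattern might be packaged for reuse, the function below wraps the two steps above into one call; the prompts and processor choices are illustrative rather than a prescribed configuration.

import json

def enrich_company_leadership(company: str):
    """Two-step chain: base retrieves the leadership team, pro builds detailed background."""
    # Step 1: low-compute retrieval of the raw facts
    base_run = client.task_run.create(
        input=company,
        task_spec={"output_schema": "Current executive leaders with full name and title"},
        processor="base",
    )
    base_result = client.task_run.result(base_run.run_id, api_timeout=3600)

    # Step 2: higher-compute analysis over the retrieved names
    pro_run = client.task_run.create(
        input=json.dumps(base_result.output.content),
        task_spec={"output_schema": "For each executive provided, find their professional background, tenure at their current company, and notable strategic responsibilities."},
        processor="pro",
    )
    return client.task_run.result(pro_run.run_id, api_timeout=3600).output

print(enrich_company_leadership("Pfizer"))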