Run Tasks in batches for scale & efficiency
The Batch API lets you efficiently run and manage multiple task executions in a single request, and export the results in bulk, for example as a CSV.
This is useful for processing lists of companies or transactions, running multiple variations of research queries, analyzing datasets in parallel, and performing periodic bulk data updates.
A batch run is scoped to a task. Runners can be specified globally for the whole batch or individually for each task run within it. The endpoint for batch runs looks like this:
Here is an example request:
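The original request body is not reproduced here. As a minimal sketch of what a batch-run payload with a global runner and a per-run override could look like (the URL path, field names, and runner names below are illustrative assumptions, not confirmed API details):

```python
import json

# Hypothetical endpoint pattern -- the real path may differ.
TASK_ID = "task_123"
BATCH_RUN_URL = f"https://api.example.com/v1/tasks/{TASK_ID}/batch_runs"

# A batch payload with a batch-level runner plus a per-run override.
# All field names here are assumptions for illustration.
payload = {
    "runner": "default-runner",          # applies to every run in the batch
    "runs": [
        {"arguments": {"company": "Acme Corp"}},
        {"arguments": {"company": "Globex"},
         "runner": "research-runner"},   # overrides the batch-level runner
    ],
}

# The request itself would be an HTTP POST, e.g. with the `requests` library:
#   requests.post(BATCH_RUN_URL, json=payload, headers={"x-api-key": "..."})
print(json.dumps(payload, indent=2))
```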
This creates the following response:
Track the progress of your batch execution:
Returns a status map grouping run IDs by their current status:
queued: Waiting to start
running: In progress
awaiting_tool_calls: Waiting for tool responses
complete: Finished successfully
failed: Error encountered
This creates the following response:
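The actual response body is not reproduced here. As a sketch of how such a status map groups run IDs by their current status (the run IDs and the exact response shape are assumptions):

```python
from collections import defaultdict

# Example run records as the status endpoint might report them.
# IDs and field names are illustrative assumptions.
runs = [
    {"run_id": "run_a", "status": "complete"},
    {"run_id": "run_b", "status": "complete"},
    {"run_id": "run_c", "status": "running"},
    {"run_id": "run_d", "status": "failed"},
]

# Group run IDs by status, mirroring the documented status map.
status_map = defaultdict(list)
for run in runs:
    status_map[run["status"]].append(run["run_id"])

print(dict(status_map))
# {'complete': ['run_a', 'run_b'], 'running': ['run_c'], 'failed': ['run_d']}
```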
Retrieve results for all completed runs in your preferred format, either JSON or CSV.
Supported query parameter: format, either json (default) or csv.
These are the CSV headers created for each run:
task_id
batch_run_id
run_id
status
runner
input.arguments.keys()
output.keys()
This returns a streaming response containing all completed run results.
To retrieve a single run result, use the following endpoint:
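The single-run endpoint path is not shown in this excerpt; the URL pattern below is an assumption for illustration only:

```python
# Hypothetical single-run result URL -- the real path may differ.
BASE = "https://api.example.com/v1"
task_id, batch_run_id, run_id = "task_123", "batch_456", "run_a"
single_run_url = f"{BASE}/tasks/{task_id}/batch_runs/{batch_run_id}/runs/{run_id}"
print(single_run_url)
```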
Each run within a batch request can have its own callback configuration. When a run completes, its associated callback is triggered automatically.
For each run in your batch request, you must provide:
Example request data:
The callback system processes runs independently, sending notifications to each run’s specified webhook as they complete. This allows you to track the progress of individual runs within your batch.
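A sketch of how per-run callbacks might be attached in the batch payload. Every field name here is an illustrative assumption; see the async documentation for the real schema:

```python
# Each run carries its own callback config, so completions are
# notified independently as each run finishes.
batch_payload = {
    "runs": [
        {
            "arguments": {"company": "Acme Corp"},
            "callback": {"url": "https://example.com/webhooks/run-a"},
        },
        {
            "arguments": {"company": "Globex"},
            "callback": {"url": "https://example.com/webhooks/run-b"},
        },
    ],
}

callback_urls = [run["callback"]["url"] for run in batch_payload["runs"]]
print(callback_urls)
```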
For the specific callback payload format and additional configuration options, see our async documentation.