This API is in beta and is accessible via the /v1beta/tasks/groups endpoint.
The Parallel Task Group API enables you to batch process hundreds or thousands of Tasks efficiently. Instead of running Tasks one by one, you can organize them into groups, monitor their progress collectively, and retrieve results in bulk. The API consists of the following endpoints:
Creation: To run a batch of Tasks as a group, first create a task group, then add runs to it; the runs are queued and processed.
  • POST /v1beta/tasks/groups (Create task-group)
  • POST /v1beta/tasks/groups/{taskgroup_id}/runs (Add runs)
Progress Snapshot: At any point while the group is running, you can get an instant snapshot of its state using GET /{taskgroup_id} and GET /{taskgroup_id}/runs. Note that the runs endpoint streams the requested runs back immediately over SSE, which supports large payloads without pagination; it does not wait for runs to complete. Runs in a task group are stored indefinitely, so unless you have high-performance requirements you may not need to keep your own copy of intermediate results, though it is still recommended to persist results once the task group has completed.
  • GET /v1beta/tasks/groups/{taskgroup_id} (Get task-group summary)
  • GET /v1beta/tasks/groups/{taskgroup_id}/runs (Fetch task group runs)
Realtime Updates: You may want to provide efficient real-time updates in your app. For a high-level summary and run-completion events, use GET /{taskgroup_id}/events. To also retrieve a task run's result upon completion, use the task run result endpoint:
  • GET /v1beta/tasks/groups/{taskgroup_id}/events (Stream task-group events)
  • GET /v1/tasks/runs/{run_id}/result (Get task-run result)
To determine whether a task group is fully completed, you can either use realtime update events, or you can poll the task-group summary endpoint. You can also keep adding runs to your task group indefinitely.

Key Concepts

Task Groups

A Task Group is a container that organizes multiple task runs. Each group has:
  • A unique taskgroup_id for identification
  • A status indicating overall progress
  • The ability to add new Tasks dynamically

Group Status

Track progress with real-time status updates:
  • Total number of task runs
  • Count of runs by status (queued, running, completed, failed)
  • Whether the group is still active
  • Human-readable status messages
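
For example, a group summary (as returned by the create and summary endpoints) might look like the following; the values are illustrative:

{
  "taskgroup_id": "tgrp_abc123",
  "status": {
    "num_task_runs": 120,
    "task_run_status_counts": {"queued": 10, "running": 30, "completed": 78, "failed": 2},
    "is_active": true,
    "status_message": null
  }
}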

Quick Start

1. Define the Task Specification

# Define task specification as a variable
TASK_SPEC='{
  "input_schema": {
    "json_schema": {
      "type": "object",
      "properties": {
        "company_name": {
          "type": "string",
          "description": "Name of the company"
        },
        "company_website": {
          "type": "string",
          "description": "Company website URL"
        }
      },
      "required": ["company_name", "company_website"]
    }
  },
  "output_schema": {
    "json_schema": {
      "type": "object",
      "properties": {
        "key_insights": {
          "type": "array",
          "items": {"type": "string"},
          "description": "Key business insights"
        },
        "market_position": {
          "type": "string",
          "description": "Market positioning analysis"
        }
      },
      "required": ["key_insights", "market_position"]
    }
  }
}'

2. Create a Task Group

# Create task group and capture the ID
response=$(curl --request POST \
  --url https://api.parallel.ai/v1beta/tasks/groups \
  --header 'Content-Type: application/json' \
  --header "x-api-key: ${PARALLEL_API_KEY}" \
  --data '{}')

# Extract taskgroup_id from response
TASKGROUP_ID=$(echo "$response" | jq -r '.taskgroup_id')
echo "Created task group: $TASKGROUP_ID"

3. Add Tasks to the Group

curl --request POST \
  --url https://api.parallel.ai/v1beta/tasks/groups/${TASKGROUP_ID}/runs \
  --header 'Content-Type: application/json' \
  --header "x-api-key: ${PARALLEL_API_KEY}" \
  --data '{
  "default_task_spec": '$TASK_SPEC',
  "inputs": [
    {
      "input": {
        "company_name": "Acme Corp",
        "company_website": "https://acme.com"
      },
      "processor": "pro"
    },
    {
      "input": {
        "company_name": "TechStart",
        "company_website": "https://techstart.io"
      },
      "processor": "pro"
    }
  ]
}'
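
The add-runs response reports the updated group status along with the IDs of the newly created runs (see TaskGroupRunResponse in the Complete Example below). If you capture the response as in step 2, you can extract the IDs with jq, e.g. for fetching individual results later:

# Sketch: extract the new run IDs, assuming the curl above was captured
# with response=$(curl ...) as in step 2
echo "$response" | jq -r '.run_ids[]'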

4. Monitor Progress

# Get status of the group
curl --request GET \
  --url https://api.parallel.ai/v1beta/tasks/groups/${TASKGROUP_ID} \
  --header "x-api-key: ${PARALLEL_API_KEY}"

# Get status of all runs in the group
curl --request GET \
  --no-buffer \
  --url https://api.parallel.ai/v1beta/tasks/groups/${TASKGROUP_ID}/runs \
  --header "x-api-key: ${PARALLEL_API_KEY}"
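
To block until the group finishes, you can poll the summary endpoint until is_active becomes false. A minimal sketch using jq (the ten-second interval is arbitrary):

# Poll the group summary until no runs are active
while true; do
  active=$(curl --silent \
    --url https://api.parallel.ai/v1beta/tasks/groups/${TASKGROUP_ID} \
    --header "x-api-key: ${PARALLEL_API_KEY}" | jq -r '.status.is_active')
  if [ "$active" != "true" ]; then
    echo "All runs completed"
    break
  fi
  sleep 10
done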

5. Retrieve Results

curl --request GET \
  --no-buffer \
  --url https://api.parallel.ai/v1beta/tasks/groups/${TASKGROUP_ID}/events \
  --header "x-api-key: ${PARALLEL_API_KEY}"

Batch Processing Pattern

For large datasets, add runs to the group in batches to keep individual requests small. The snippet below uses the models and task_spec defined in the Complete Example at the end of this page:
async def process_companies_in_batches(
    client: parallel.AsyncParallel,
    taskgroup_id: str,
    companies: list[dict[str, str]],
    batch_size: int = 500,
) -> None:
    total_created = 0

    for i in range(0, len(companies), batch_size):
        batch = companies[i : i + batch_size]

        # Create run inputs for this batch
        run_inputs = []
        for company in batch:
            input_data = CompanyInput(
                company_name=company["company_name"],
                company_website=company["company_website"],
            )
            run_inputs.append(
                TaskRunInputParam(input=input_data.model_dump(), processor="pro")
            )

        # Add batch to group
        run_request = TaskGroupRunRequest(
            default_task_spec=task_spec, inputs=run_inputs
        )

        response = await client.post(
            path=f"/v1beta/tasks/groups/{taskgroup_id}/runs",
            cast_to=TaskGroupRunResponse,
            body=run_request.model_dump(),
        )
        total_created += len(response.run_ids)

        print(f"Processed {i + len(batch)} companies. Created {total_created} Tasks.")

Error Handling

The runs stream interleaves task run events with error responses, so you can separate successful and failed runs as you consume it:
async def process_with_error_handling(
    client: parallel.AsyncParallel, taskgroup_id: str
) -> tuple[list[TaskRunEvent], list[ErrorResponse]]:
    successful_results = []
    failed_results = []

    path = f"/v1beta/tasks/groups/{taskgroup_id}/runs"
    path += "?include_input=true&include_output=true"

    result_stream = await client.get(
        path=path,
        cast_to=TaskRunEvent | ErrorResponse | None,
        stream=True,
        stream_cls=parallel.AsyncStream[TaskRunEvent | ErrorResponse],
    )

    async for event in result_stream:
        if isinstance(event, ErrorResponse):
            failed_results.append(event)
            continue

        try:
            # Validate that the run's input and output parse into our models
            CompanyInput.model_validate(event.input.input)
            CompanyOutput.model_validate(event.output.content)
            successful_results.append(event)
        except Exception as e:
            print(f"Validation error: {e}")
            failed_results.append(event)

    print(f"Success: {len(successful_results)}, Failed: {len(failed_results)}")
    return successful_results, failed_results

API Reference

Create Task Group

POST /v1beta/tasks/groups
Response:
{
  "taskgroup_id": "tgrp_abc123",
  "status": {
    "num_task_runs": 0,
    "task_run_status_counts": {},
    "is_active": false
  }
}

Add Runs to Group

POST /v1beta/tasks/groups/{taskgroup_id}/runs
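Response (illustrative; the shape follows TaskGroupRunResponse in the Complete Example below, and the run ID format is assumed):
{
  "status": {
    "num_task_runs": 2,
    "task_run_status_counts": {"queued": 2},
    "is_active": true,
    "status_message": null
  },
  "run_ids": ["trun_abc123", "trun_def456"]
}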

Get Group Status

GET /v1beta/tasks/groups/{taskgroup_id}

Stream Group Results

Task runs are returned in the order they were added to the group. Completed tasks include output, while incomplete tasks include run status and null output.
GET /v1beta/tasks/groups/{taskgroup_id}/runs?include_input=true&include_output=true
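Each streamed item follows the TaskRunEvent shape defined in the Complete Example below. An abridged, illustrative event (fields elided where the exact contents are not pinned down here):
{
  "type": "task_run",
  "event_id": "...",
  "run": {"run_id": "trun_abc123", "status": "completed", ...},
  "input": {"input": {"company_name": "Acme Corp", "company_website": "https://acme.com"}, "processor": "pro"},
  "output": {"content": {"key_insights": ["..."], "market_position": "..."}}
}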

Stream Group Events

Group events include status updates and task results as they complete (not included in the sample code above).
GET /v1beta/tasks/groups/{taskgroup_id}/events
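
The Complete Example below polls the summary endpoint rather than consuming this stream. As a sketch, you could watch events with the same streaming pattern used in get_all_results; this assumes the stream yields TaskRunEvent items for completed runs (the full set of event types is not enumerated here) and that TaskRun exposes run_id and status:

async def watch_group_events(client: parallel.AsyncParallel, taskgroup_id: str) -> None:
    # Sketch only: other event types (e.g. group status updates) are skipped
    event_stream = await client.get(
        path=f"/v1beta/tasks/groups/{taskgroup_id}/events",
        cast_to=TaskRunEvent | ErrorResponse | None,
        stream=True,
        stream_cls=parallel.AsyncStream[TaskRunEvent | ErrorResponse],
    )
    async for event in event_stream:
        if isinstance(event, TaskRunEvent):
            # Fetch the full result via GET /v1/tasks/runs/{run_id}/result if needed
            print(f"Run {event.run.run_id} is now {event.run.status}")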

Complete Example

Here’s a complete Python script that demonstrates the full workflow, including all of the setup code above.
import asyncio
import os
import typing

import parallel
import pydantic
from parallel.types import JsonSchemaParam, TaskRun, TaskSpecParam
from parallel.types.task_run_result import OutputTaskRunJsonOutput

# Define your input and output models
class CompanyInput(pydantic.BaseModel):
    company_name: str = pydantic.Field(description="Name of the company")
    company_website: str = pydantic.Field(description="Company website URL")

class CompanyOutput(pydantic.BaseModel):
    key_insights: list[str] = pydantic.Field(description="Key business insights")
    market_position: str = pydantic.Field(description="Market positioning analysis")

# Define Group API types (these will be added to the Parallel SDK in a future release)
class TaskRunInputParam(parallel.BaseModel):
    task_spec: TaskSpecParam | None = pydantic.Field(default=None)
    input: str | dict[str, str] = pydantic.Field(description="Input to the task")
    metadata: dict[str, str] | None = pydantic.Field(default=None)
    processor: str = pydantic.Field(description="Processor to use for the task")

class TaskGroupStatus(parallel.BaseModel):
    num_task_runs: int = pydantic.Field(description="Number of task runs in the group")
    task_run_status_counts: dict[str, int] = pydantic.Field(
        description="Number of task runs with each status"
    )
    is_active: bool = pydantic.Field(
        description="True if at least one run in the group is currently active"
    )
    status_message: str | None = pydantic.Field(
        description="Human-readable status message for the group"
    )

class TaskGroupRunRequest(parallel.BaseModel):
    default_task_spec: TaskSpecParam | None = pydantic.Field(default=None)
    inputs: list[TaskRunInputParam] = pydantic.Field(description="List of task runs to execute")

class TaskGroupResponse(parallel.BaseModel):
    taskgroup_id: str = pydantic.Field(description="ID of the group")
    status: TaskGroupStatus = pydantic.Field(description="Status of the group")

class TaskGroupRunResponse(parallel.BaseModel):
    status: TaskGroupStatus = pydantic.Field(description="Status of the group")
    run_ids: list[str] = pydantic.Field(description="IDs of the newly created runs")

class TaskRunEvent(parallel.BaseModel):
    type: typing.Literal["task_run"] = pydantic.Field(default="task_run")
    event_id: str = pydantic.Field(description="Cursor to resume the event stream")
    run: TaskRun = pydantic.Field(description="Task run object")
    input: TaskRunInputParam | None = pydantic.Field(default=None)
    output: OutputTaskRunJsonOutput | None = pydantic.Field(default=None)

class Error(parallel.BaseModel):
    ref_id: str = pydantic.Field(description="Reference ID for the error")
    message: str = pydantic.Field(description="Human-readable message")
    detail: dict[str, typing.Any] | None = pydantic.Field(default=None)

class ErrorResponse(parallel.BaseModel):
    type: typing.Literal["error"] = pydantic.Field(default="error")
    error: Error = pydantic.Field(description="Error")

# Create reusable task specification
task_spec = TaskSpecParam(
    input_schema=JsonSchemaParam(json_schema=CompanyInput.model_json_schema()),
    output_schema=JsonSchemaParam(json_schema=CompanyOutput.model_json_schema()),
)


async def wait_for_completion(client: parallel.AsyncParallel, taskgroup_id: str) -> None:
    while True:
        response = await client.get(
            path=f"/v1beta/tasks/groups/{taskgroup_id}", cast_to=TaskGroupResponse
        )

        status = response.status
        print(f"Status: {status.task_run_status_counts}")

        if not status.is_active:
            print("All tasks completed!")
            break

        await asyncio.sleep(10)


async def get_all_results(
    client: parallel.AsyncParallel, taskgroup_id: str
) -> list[dict[str, typing.Any]]:
    results = []

    path = f"/v1beta/tasks/groups/{taskgroup_id}/runs"
    path += "?include_input=true&include_output=true"

    result_stream = await client.get(
        path=path,
        cast_to=TaskRunEvent | ErrorResponse | None,
        stream=True,
        stream_cls=parallel.AsyncStream[TaskRunEvent | ErrorResponse],
    )

    async for event in result_stream:
        if isinstance(event, TaskRunEvent) and event.output:
            company_input = CompanyInput.model_validate(event.input.input)
            company_output = CompanyOutput.model_validate(event.output.content)

            results.append(
                {
                    "company": company_input.company_name,
                    "insights": company_output.key_insights,
                    "market_position": company_output.market_position,
                }
            )

    return results


async def batch_company_research():
    client = parallel.AsyncParallel(
        base_url="https://api.parallel.ai",
        api_key=os.environ["PARALLEL_API_KEY"],
    )

    # Create task group
    group_response = await client.post(
        path="/v1beta/tasks/groups", cast_to=TaskGroupResponse, body={}
    )
    taskgroup_id = group_response.taskgroup_id
    print(f"Created taskgroup id {taskgroup_id}")

    # Define companies to research
    companies = [
        {"company_name": "Stripe", "company_website": "https://stripe.com"},
        {"company_name": "Shopify", "company_website": "https://shopify.com"},
        {"company_name": "Salesforce", "company_website": "https://salesforce.com"},
    ]

    # Add Tasks to group
    run_inputs = []
    for company in companies:
        input_data = CompanyInput(
            company_name=company["company_name"],
            company_website=company["company_website"],
        )
        run_inputs.append(
            TaskRunInputParam(input=input_data.model_dump(), processor="pro")
        )

    response = await client.post(
        path=f"/v1beta/tasks/groups/{taskgroup_id}/runs",
        cast_to=TaskGroupRunResponse,
        body=TaskGroupRunRequest(
            default_task_spec=task_spec, inputs=run_inputs
        ).model_dump(),
    )
    print(f"Added {len(response.run_ids)} runs to taskgroup {taskgroup_id}")

    # Wait for completion and get results
    await wait_for_completion(client, taskgroup_id)
    results = await get_all_results(client, taskgroup_id)
    print(f"Successfully processed {len(results)} companies")
    return results


# Run the batch job
results = asyncio.run(batch_company_research())