Chat completions.
This endpoint can be used to get real-time chat completions. It can also be used with the Task API processors to get structured research outputs via a chat interface.
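A minimal sketch of a non-streaming request, assuming the endpoint is OpenAI-compatible and can be reached through the OpenAI Python SDK; the base URL, API key handling, and model name below are placeholders, not values defined by this API.

```python
# Minimal non-streaming request sketch. The base_url, api_key, and model
# name are placeholder assumptions; substitute the values for this API.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://example.com/v1",  # assumed base URL for this endpoint
)

completion = client.chat.completions.create(
    model="your-model-name",  # assumed model identifier
    messages=[
        {"role": "user", "content": "What are the latest developments in battery chemistry?"}
    ],
)

print(completion.choices[0].message.content)
```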
Request for the chat completions endpoint.
Note that all parameters except for model, stream, and response_format are ignored.
The model to use for chat completions.
The messages to use for chat completions.
Whether to stream the chat completions.
The response format to use for chat completions. OpenAI compatible; see the structured-output sketch after this parameter list.
The maximum number of tokens to generate. Unsupported.
The temperature to use for chat completions. Unsupported.
The top-p value to use for chat completions. Unsupported.
The number of chat completions to generate. Unsupported.
The presence penalty to use for chat completions. Unsupported.
The frequency penalty to use for chat completions. Unsupported.
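A sketch of requesting structured output through response_format, following the OpenAI-compatible json_schema convention; the schema, model name, and base URL here are illustrative assumptions.

```python
# Structured-output sketch using an OpenAI-compatible json_schema
# response_format. The schema, model, and base_url are assumptions.
from openai import OpenAI

client = OpenAI(api_key="YOUR_API_KEY", base_url="https://example.com/v1")

completion = client.chat.completions.create(
    model="your-model-name",
    messages=[
        {"role": "user", "content": "Summarize recent funding rounds for fusion startups."}
    ],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "funding_summary",
            "schema": {
                "type": "object",
                "properties": {
                    "company": {"type": "string"},
                    "amount_usd": {"type": "number"},
                },
                "required": ["company", "amount_usd"],
            },
        },
    },
)

# The message content is a JSON string conforming to the schema above.
print(completion.choices[0].message.content)
```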
Returns a ChatCompletion object (application/json) for non-streaming requests, or a stream of ChatCompletionResponseChunk objects (text/event-stream) when stream=true is set in the request.
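A sketch of consuming the streaming variant: with stream=true the response is a text/event-stream, which the OpenAI SDK surfaces as chunk objects. The chunk field names follow the OpenAI-compatible shape, and the base URL and model name remain assumptions.

```python
# Streaming sketch: stream=True switches the response to text/event-stream
# and the SDK yields chunk objects. base_url and model are assumptions.
from openai import OpenAI

client = OpenAI(api_key="YOUR_API_KEY", base_url="https://example.com/v1")

stream = client.chat.completions.create(
    model="your-model-name",
    messages=[
        {"role": "user", "content": "Give a brief overview of the solid-state battery market."}
    ],
    stream=True,
)

for chunk in stream:
    # Some chunks may carry no choices or no delta content; skip those.
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```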
Chat completion response.
The id of the chat completion.
"chat.completion"auto, default, flex, scale, priority Basis for the chat completion, including citations and reasoning supporting the output.