Field | Type | Required | Description | Example |
---|---|---|---|---|
messages | List[models.AgentsCompletionRequestMessages] | ✔️ | The prompt(s) to generate completions for, encoded as a list of dicts with `role` and `content`. | { "role": "user", "content": "Who is the best French painter? Answer in one short sentence." } |
agent_id | str | ✔️ | The ID of the agent to use for this completion. | |
max_tokens | OptionalNullable[int] | ➖ | The maximum number of tokens to generate in the completion. The token count of your prompt plus `max_tokens` cannot exceed the model's context length. | |
min_tokens | OptionalNullable[int] | ➖ | The minimum number of tokens to generate in the completion. | |
stream | Optional[bool] | ➖ | Whether to stream back partial progress. If set, tokens are sent as data-only server-sent events as they become available, with the stream terminated by a `data: [DONE]` message. Otherwise, the server holds the request open until the timeout or until completion, and the response contains the full result as JSON. | |
stop | Optional[models.AgentsCompletionRequestStop] | ➖ | Stop generation if this token is detected, or if one of these tokens is detected when an array is provided. | |
random_seed | OptionalNullable[int] | ➖ | The seed to use for random sampling. If set, different calls will generate deterministic results. | |
response_format | Optional[models.ResponseFormat] | ➖ | N/A | |
tools | List[models.Tool] | ➖ | N/A | |
tool_choice | Optional[models.AgentsCompletionRequestToolChoice] | ➖ | N/A | |