Add missing fields #51

Merged 3 commits on Mar 5, 2023
Changes from all commits
16 changes: 10 additions & 6 deletions async-openai/src/types/types.rs
@@ -724,12 +724,6 @@ pub struct CreateChatCompletionRequest {
/// The messages to generate chat completions for, in the [chat format](https://platform.openai.com/docs/guides/chat/introduction).
pub messages: Vec<ChatCompletionRequestMessage>, // min: 1

/// The maximum number of [tokens](/tokenizer) to generate in the completion.
///
/// The token count of your prompt plus `max_tokens` cannot exceed the model's context length. Most models have a context length of 2048 tokens (except for the newest models, which support 4096).
#[serde(skip_serializing_if = "Option::is_none")]
pub max_tokens: Option<u16>,

/// What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
///
/// We generally recommend altering this or `top_p` but not both.
@@ -754,6 +748,10 @@ pub struct CreateChatCompletionRequest {
#[serde(skip_serializing_if = "Option::is_none")]
pub stop: Option<Stop>,

/// The maximum number of tokens allowed for the generated answer. By default, the number of tokens the model can return will be (4096 - prompt tokens).
#[serde(skip_serializing_if = "Option::is_none")]
pub max_tokens: Option<u16>, // default: inf

/// Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
///
/// [See more information about frequency and presence penalties.](https://platform.openai.com/docs/api-reference/parameter-details)
@@ -771,6 +769,9 @@ pub struct CreateChatCompletionRequest {
/// Accepts a json object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
#[serde(skip_serializing_if = "Option::is_none")]
pub logit_bias: Option<HashMap<String, serde_json::Value>>, // default: null

/// A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](https://platform.openai.com/docs/guides/safety-best-practices/end-user-ids).
pub user: Option<String>,
}
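The `max_tokens` and `user` fields added above are optional and, like the struct's other `skip_serializing_if` fields, must be dropped from the request body when unset. A minimal std-only sketch of that omit-when-`None` contract (illustrative type and method names, not the crate's actual serde-based serialization):

```rust
// Std-only sketch of the contract behind
// `#[serde(skip_serializing_if = "Option::is_none")]`: an unset
// optional field must be absent from the JSON body, not sent as null.
struct ChatRequestSketch {
    model: String,
    max_tokens: Option<u16>, // added in this PR; server-side default: inf
    user: Option<String>,    // added in this PR; end-user identifier
}

impl ChatRequestSketch {
    fn to_json(&self) -> String {
        let mut fields = vec![format!("\"model\":\"{}\"", self.model)];
        if let Some(mt) = self.max_tokens {
            fields.push(format!("\"max_tokens\":{}", mt));
        }
        if let Some(u) = &self.user {
            fields.push(format!("\"user\":\"{}\"", u));
        }
        format!("{{{}}}", fields.join(","))
    }
}

fn main() {
    let req = ChatRequestSketch {
        model: "gpt-3.5-turbo".into(),
        max_tokens: Some(256),
        user: None, // unset, so "user" never appears in the payload
    };
    println!("{}", req.to_json());
}
```

The real crate delegates this to serde; the sketch only shows why the fields are `Option` with a skip attribute rather than plain values.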

#[derive(Debug, Deserialize)]
@@ -856,6 +857,9 @@ pub struct CreateTranscriptionRequest {

/// The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use [log probability](https://en.wikipedia.org/wiki/Log_probability) to automatically increase the temperature until certain thresholds are hit.
pub temperature: Option<f32>, // default: 0

/// The language of the input audio. Supplying the input language in [ISO-639-1](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes) format will improve accuracy and latency.
pub language: Option<String>,
}
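The transcription endpoint takes multipart form data, and the new optional `language` hint is simply left off the form when unset. A std-only sketch of that behavior (illustrative names, not the crate's actual multipart code):

```rust
// Sketch of how the transcription request's optional fields map to
// multipart form fields: None means the field is not sent at all.
struct TranscriptionSketch {
    file: String,             // path to the audio file
    model: String,            // only "whisper-1" is currently available
    temperature: Option<f32>, // default: 0 on the server
    language: Option<String>, // added in this PR; ISO-639-1 code, e.g. "en"
}

fn form_fields(req: &TranscriptionSketch) -> Vec<(String, String)> {
    let mut fields = vec![
        ("file".to_string(), req.file.clone()),
        ("model".to_string(), req.model.clone()),
    ];
    if let Some(t) = req.temperature {
        fields.push(("temperature".to_string(), t.to_string()));
    }
    if let Some(l) = &req.language {
        fields.push(("language".to_string(), l.clone()));
    }
    fields
}

fn main() {
    let req = TranscriptionSketch {
        file: "audio.mp3".into(),
        model: "whisper-1".into(),
        temperature: None, // omitted from the form
        language: Some("en".into()),
    };
    for (k, v) in form_fields(&req) {
        println!("{k}={v}");
    }
}
```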

#[derive(Debug, Deserialize)]
75 changes: 42 additions & 33 deletions openapi.yaml
@@ -2223,7 +2223,7 @@ components:
A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](/docs/guides/safety-best-practices/end-user-ids).
required:
- model

CreateCompletionResponse:
type: object
properties:
@@ -2275,11 +2275,11 @@ components:
type: integer
total_tokens:
type: integer
required:
- prompt_tokens
- completion_tokens
- total_tokens
required:
- id
- object
- created
@@ -2299,7 +2299,7 @@
name:
type: string
description: The name of the user in a multi-user chat
required:
- role
- content

@@ -2313,7 +2313,7 @@
content:
type: string
description: The contents of the message
required:
- role
- content

@@ -2372,6 +2372,11 @@ components:
maxItems: 4
items:
type: string
max_tokens:
description: |
The maximum number of tokens allowed for the generated answer. By default, the number of tokens the model can return will be (4096 - prompt tokens).
default: inf
type: integer
presence_penalty:
type: number
default: 0
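Per the `max_tokens` schema above, when the field is omitted the completion budget defaults to the context window minus the prompt length. A quick sketch of that arithmetic (4096 is the context length the description assumes):

```rust
// When `max_tokens` is unset, the model may generate up to
// (context length - prompt tokens) tokens, per the schema above.
fn default_completion_budget(context_len: u32, prompt_tokens: u32) -> u32 {
    // saturating_sub guards against prompts longer than the window
    context_len.saturating_sub(prompt_tokens)
}

fn main() {
    println!("{}", default_completion_budget(4096, 1500)); // 2596
}
```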
@@ -2431,11 +2436,11 @@ components:
type: integer
total_tokens:
type: integer
required:
- prompt_tokens
- completion_tokens
- total_tokens
required:
- id
- object
- created
@@ -2536,11 +2541,11 @@ components:
type: integer
total_tokens:
type: integer
required:
- prompt_tokens
- completion_tokens
- total_tokens
required:
- object
- created
- choices
@@ -2690,7 +2695,7 @@ components:
type: boolean
violence/graphic:
type: boolean
required:
- hate
- hate/threatening
- self-harm
@@ -2715,19 +2720,19 @@
type: number
violence/graphic:
type: number
required:
- hate
- hate/threatening
- self-harm
- sexual
- sexual/minors
- violence
- violence/graphic
required:
- flagged
- categories
- category_scores
required:
- id
- model
- results
@@ -2810,7 +2815,7 @@ components:
type: array
items:
$ref: '#/components/schemas/OpenAIFile'
required:
- object
- data

@@ -2845,7 +2850,7 @@ components:
type: string
deleted:
type: boolean
required:
- id
- object
- deleted
@@ -3249,7 +3254,7 @@ components:
type: array
items:
$ref: '#/components/schemas/FineTune'
required:
- object
- data

@@ -3262,7 +3267,7 @@
type: array
items:
$ref: '#/components/schemas/FineTuneEvent'
required:
- object
- data

@@ -3322,7 +3327,7 @@ components:
type: array
items:
type: number
required:
- index
- object
- embedding
@@ -3333,10 +3338,10 @@
type: integer
total_tokens:
type: integer
required:
- prompt_tokens
- total_tokens
required:
- object
- model
- data
@@ -3346,12 +3351,12 @@
type: object
additionalProperties: false
properties:
file:
description: |
The audio file to transcribe, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
type: string
format: binary
model:
description: |
ID of the model to use. Only `whisper-1` is currently available.
type: string
@@ -3369,29 +3374,33 @@
The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use [log probability](https://en.wikipedia.org/wiki/Log_probability) to automatically increase the temperature until certain thresholds are hit.
type: number
default: 0
language:
description: |
The language of the input audio. Supplying the input language in [ISO-639-1](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes) format will improve accuracy and latency.
type: string
required:
- file
- model

# Note: This does not currently support the non-default response format types.
CreateTranscriptionResponse:
type: object
properties:
text:
type: string
required:
- text

CreateTranslationRequest:
type: object
additionalProperties: false
properties:
file:
description: |
The audio file to translate, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.
type: string
format: binary
model:
description: |
ID of the model to use. Only `whisper-1` is currently available.
type: string
@@ -3413,13 +3422,13 @@
- file
- model

# Note: This does not currently support the non-default response format types.
CreateTranslationResponse:
type: object
properties:
text:
type: string
required:
- text

Engine:
Expand All @@ -3434,7 +3443,7 @@ components:
nullable: true
ready:
type: boolean
required:
- id
- object
- created
@@ -3451,7 +3460,7 @@
type: integer
owned_by:
type: string
required:
- id
- object
- created
@@ -3477,7 +3486,7 @@
status_details:
type: object
nullable: true
required:
- id
- object
- bytes
@@ -3523,7 +3532,7 @@ components:
type: array
items:
$ref: '#/components/schemas/FineTuneEvent'
required:
- id
- object
- created_at
@@ -3548,7 +3557,7 @@
type: string
message:
type: string
required:
- object
- created_at
- level