Add better structured outputs handling for ChatCompletions #95
Conversation
…eceive LOW SCORE.
@@ -5,8 +5,10 @@
It works for any OpenAI LLM model, as well as the many other non-OpenAI LLMs that are also usable via Chat Completions API (Gemini, DeepSeek, Llama, etc).
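As a hedged illustration of the claim in the quoted line: the official OpenAI Python SDK can target any OpenAI-compatible endpoint by overriding `base_url`. The URL, model name, and key below are placeholders for illustration, not values from this repo.

```python
from openai import OpenAI

# The same Chat Completions call shape works against non-OpenAI providers
# that expose an OpenAI-compatible endpoint. base_url, model, and api_key
# here are illustrative placeholders.
client = OpenAI(
    base_url="https://api.deepseek.com/v1",  # e.g. DeepSeek's compatible API
    api_key="YOUR_PROVIDER_KEY",
)

resp = client.chat.completions.create(
    model="deepseek-chat",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
```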
In VPC, what will be the recommended way to score:
- tool calls?
- structured outputs?
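For context on the structured-outputs half of this question, here is a minimal sketch of what a structured-output request looks like through the Chat Completions API. This is standard OpenAI SDK usage; it does not show this project's scoring hook, which is for the maintainers to specify.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Chat Completions structured output: the model's reply is constrained
# to the given JSON schema, so the content is parseable JSON.
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Extract the city: 'I flew to Paris.'"}],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "city_extraction",
            "schema": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
                "additionalProperties": False,
            },
            "strict": True,
        },
    },
)
print(resp.choices[0].message.content)  # e.g. '{"city": "Paris"}'
```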
await handle_api_key_error_from_resp(res)
await handle_http_bad_request_error_from_resp(res)
handle_rate_limit_error_from_resp(res)
This check can be placed above the `await res.json()` call; in some scenarios that avoids an unnecessary attempt to parse the response body when the HTTP status already indicates a rate limit.
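A minimal sketch of the suggested reordering, assuming an aiohttp-style response object. Only the handler name comes from the diff above; `RateLimitError`, the function bodies, and `parse_api_response` are illustrative assumptions.

```python
import aiohttp

class RateLimitError(Exception):
    """Assumed error type for HTTP 429 responses."""

def handle_rate_limit_error_from_resp(res: aiohttp.ClientResponse) -> None:
    # Inspects only the status code, so it can safely run before the body is read.
    if res.status == 429:
        raise RateLimitError(res.headers.get("Retry-After", "unknown"))

async def parse_api_response(res: aiohttp.ClientResponse) -> dict:
    # Rate-limit check first: a 429 response never reaches the doomed JSON parse.
    handle_rate_limit_error_from_resp(res)
    return await res.json()
```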
Key Info
Branching off #93 for now to minimize changes and conflicts.
What changed?
What do you want the reviewer(s) to focus on?
Checklist