[Bug]: OpenAI LogProbs format for Chat-Completion is incorrect #5008
Comments
@DarkLight1337 you were quicker by one hour, but you still have failing tests, so I win 😜

It's not a competition xD We can combine our solutions in your PR if need be.

I know! I was being (not so) funny.

I have updated my PR with more test cases. From my understanding, the behaviour of disabling
Edit: Thanks @br3no for the correction!

My PR #5026 now passes all tests as well.
Your current environment
🐛 Describe the bug
The output format of the logprobs in the chat OpenAI server has been mostly copied from the legacy completion server, according to the description in #2918.
Unfortunately, this part of the response does not match the format used by the official OpenAI API.
While the completion logprobs look like this:
(cf. https://platform.openai.com/docs/api-reference/completions/object)
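For reference, the legacy completion `logprobs` object is a set of parallel arrays, per the linked docs (a sketch; the token and logprob values here are made up for illustration):

```json
"logprobs": {
  "text_offset": [0, 5],
  "token_logprobs": [-0.31, -1.25],
  "tokens": ["Hello", "!"],
  "top_logprobs": [
    { "Hello": -0.31, "Hi": -1.52 },
    { "!": -1.25, ".": -1.87 }
  ]
}
```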
The chat completion logprobs look like this:
(cf. https://platform.openai.com/docs/api-reference/chat/object)
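By contrast, the chat completion `logprobs` object is a `content` list of per-token objects, per the linked docs (again a sketch with made-up values):

```json
"logprobs": {
  "content": [
    {
      "token": "Hello",
      "logprob": -0.31,
      "bytes": [72, 101, 108, 108, 111],
      "top_logprobs": [
        { "token": "Hello", "logprob": -0.31, "bytes": [72, 101, 108, 108, 111] },
        { "token": "Hi", "logprob": -1.52, "bytes": [72, 105] }
      ]
    }
  ]
}
```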
As a result, OpenAI clients will fail to parse the response correctly.
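The parsing failure can be sketched in plain Python (the dict shapes mirror the two documented formats above; `tokens_from_chat_logprobs` is a hypothetical stand-in for what a chat-completion client does internally):

```python
# Legacy completion shape: parallel arrays keyed by "tokens"/"token_logprobs".
legacy_shape = {
    "text_offset": [0, 5],
    "token_logprobs": [-0.31, -1.25],
    "tokens": ["Hello", "!"],
    "top_logprobs": [{"Hello": -0.31}, {"!": -1.25}],
}

# Chat completion shape: a "content" list of per-token objects.
chat_shape = {
    "content": [
        {
            "token": "Hello",
            "logprob": -0.31,
            "bytes": [72, 101, 108, 108, 111],
            "top_logprobs": [
                {"token": "Hello", "logprob": -0.31, "bytes": [72, 101, 108, 108, 111]}
            ],
        }
    ]
}

def tokens_from_chat_logprobs(logprobs: dict) -> list[str]:
    # A chat-completion client expects per-token entries under "content".
    return [entry["token"] for entry in logprobs["content"]]

assert tokens_from_chat_logprobs(chat_shape) == ["Hello"]

# Fed the legacy shape instead, the same client hits a missing "content" key.
try:
    tokens_from_chat_logprobs(legacy_shape)
except KeyError:
    print("legacy shape rejected")
```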