server: remove default "gpt-3.5-turbo" model name #17668
base: master
Conversation
pwilkin left a comment
LGTM, should be a generally safer approach anyway.

You might want to also check the stats under the assistant response, I think that's where it bugged out for me (if you have 'show statistics' and 'show model name' checked in settings).

Hmm yeah, it reflects back the input model name from the request. I think it's also time to remove this behavior, as it doesn't make much sense.

@pwilkin could you confirm if the latest commit resolves the problem on your side? (maybe there are cases I haven't tested)

Okay, pulling and will check.

@ngxson can confirm, works good now.

Merging this once most workflows have passed on the mirrored PR: ngxson#46

Fix #17666
Important
This PR changes the way the model name is handled (see below).

In addition, the "model" field from the input request is no longer reflected back in the response. If you need to use a custom model name, set it via the --model-alias argument instead.

The default gpt-3.5-turbo model name is removed; it was useful in the past, but that is no longer the case.

The model name is now decided based on this priority (a sketch of the resolution logic follows at the end):
- If model_alias is set, use it
- Otherwise, use <user>/<model>:<tag> if available (for cached models)

After the change, the web UI displays the model name correctly.
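For reference, below is a minimal C++ sketch of that priority order. The struct, field, and function names are illustrative only and do not correspond to the actual llama.cpp server code; the "fallback" field simply stands in for whatever the server would otherwise report when neither an alias nor a cached name is available.

```cpp
#include <cstdio>
#include <string>

// Hypothetical inputs; the real server keeps these in its own structures.
struct model_name_inputs {
    std::string model_alias; // value of --model-alias, empty if not set
    std::string cached_name; // "<user>/<model>:<tag>" for cached models, empty otherwise
    std::string fallback;    // whatever the server reports when neither of the above exists
};

// Priority described in this PR: --model-alias wins, then the cached
// "<user>/<model>:<tag>" name, then the generic fallback.
// Note: the request's "model" field is intentionally not consulted,
// since it is no longer reflected back in the response.
static std::string resolve_model_name(const model_name_inputs & in) {
    if (!in.model_alias.empty()) {
        return in.model_alias;
    }
    if (!in.cached_name.empty()) {
        return in.cached_name;
    }
    return in.fallback;
}

int main() {
    // Illustrative values: no alias set, so the cached name is used.
    model_name_inputs in{"", "someuser/some-model:Q4_K_M", "/models/model.gguf"};
    std::printf("%s\n", resolve_model_name(in).c_str());
    return 0;
}
```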