Finetuned OpenAI models cost calculation #11715 #12190
Conversation
- Add new models and legacy models; add OpenAI pricing.
- Update `standardize_model_name` to support the new fine-tuned model name scheme; add logic to handle fine-tuned model completion cost (new models only).
- Test new and legacy fine-tuned models.
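The commit summary above can be sketched as follows. This is a hedged illustration, not langchain's actual implementation: the function name `standardize_model_name` comes from the PR, but the body and the sample model id are assumptions based on OpenAI's `ft:<base_model>:<org>:<suffix>:<job_id>` naming scheme.

```python
def standardize_model_name(model_name: str) -> str:
    """Map a raw model id onto the name used for pricing lookup (sketch)."""
    if model_name.startswith("ft:"):
        # New fine-tuned scheme, e.g. "ft:gpt-3.5-turbo-0613:my-org::abc123":
        # the base model is the second colon-separated field, and the
        # pricing table is assumed to key fine-tuned rates under
        # "<base_model>-finetuned".
        return model_name.split(":")[1] + "-finetuned"
    return model_name

print(standardize_model_name("ft:gpt-3.5-turbo-0613:my-org::abc123"))
```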
thanks @nirkopler!
```python
if "ft:" in model_name:
    return model_name.split(":")[1] + "-finetuned"
```
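To make the branch under review concrete, here is what it returns for a hypothetical OpenAI fine-tuned id (the model id is an illustrative example, not taken from the PR):

```python
# Replay the reviewed branch on a sample name.
name = "ft:gpt-3.5-turbo-0613:my-org::abc123"
if "ft:" in name:
    # Second colon-separated field is the base model; the "-finetuned"
    # suffix is assumed to be the pricing-table key convention.
    result = name.split(":")[1] + "-finetuned"
print(result)
```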
Does this work on Azure OpenAI? It looks like Azure OpenAI fine-tuned models have the following structure:
`<model_name>.ft-<job_id>` (reference), which would go into the previous if case but without doing the right thing, I fear :/
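The Azure case raised above could be handled by splitting on the `.ft-` separator instead. This is a hedged sketch of the idea, not the fix that eventually landed; the helper name and the sample job id are hypothetical.

```python
def azure_base_model(model_name: str) -> str:
    """Return the base model for an Azure-style fine-tuned id (sketch)."""
    if ".ft-" in model_name:
        # Azure fine-tuned ids look like "<model_name>.ft-<job_id>",
        # e.g. "gpt-35-turbo.ft-0123456789abcdef".
        return model_name.split(".ft-")[0]
    return model_name

print(azure_base_model("gpt-35-turbo.ft-0123456789abcdef"))
```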
You're right, I'll open a new PR with the Azure fix and I'll tag you 💪
**Description:** Add cost calculation for fine-tuned **Azure** models, with relevant unit tests. See https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/fine-tuning?tabs=turbo&pivots=programming-language-studio for more information. This PR is the result of PR #12190. Twitter handle: @nirkopler
The changes introduced in #12267 and #12190 broke the cost computation of the `completion` tokens for fine-tuned models because of an early return. This PR aims to fix that. @baskaryan
…n-ai#12190) **Description:** Add cost calculation for fine-tuned models (new and legacy); this is required after OpenAI added new models for fine-tuning and separated the I/O costs for fine-tuned models. Also updated the relevant unit tests. See https://platform.openai.com/docs/guides/fine-tuning for more information. - **Issue:** langchain-ai#11715 - **Twitter handle:** @nirkopler
Description:
Add cost calculation for fine-tuned models (new and legacy); this is required after OpenAI added new models for fine-tuning and separated the I/O costs for fine-tuned models.
Also updated the relevant unit tests.
See https://platform.openai.com/docs/guides/fine-tuning for more information.
issue: #11715
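The separated I/O pricing the description mentions can be sketched like this. Everything here is a hedged illustration: the price table, the `-completion` key convention, and the dollar rates are hypothetical stand-ins, not langchain's actual `MODEL_COST_PER_1K_TOKENS` values.

```python
# Hypothetical per-1k-token rates; fine-tuned models bill prompt
# (input) and completion (output) tokens at different prices.
PRICES_PER_1K = {
    "gpt-3.5-turbo-0613-finetuned": 0.012,             # prompt rate, illustrative
    "gpt-3.5-turbo-0613-finetuned-completion": 0.016,  # completion rate, illustrative
}

def token_cost(prompt_tokens: int, completion_tokens: int, model: str) -> float:
    """Total cost in USD for one call, with separate I/O rates (sketch)."""
    prompt_rate = PRICES_PER_1K[model]
    completion_rate = PRICES_PER_1K[model + "-completion"]
    return (prompt_tokens * prompt_rate + completion_tokens * completion_rate) / 1000

print(token_cost(1000, 500, "gpt-3.5-turbo-0613-finetuned"))
```

Keeping the completion rate as a distinct lookup (rather than returning early after the prompt rate is found) is exactly the behavior the follow-up fix above restores.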