Conversation

@nirkopler
Contributor

Description:
Add cost calculation for fine-tuned models (new and legacy). This is required after OpenAI added new models for fine-tuning and separated the input/output costs for fine-tuned models.
Also updated the relevant unit tests.
See https://platform.openai.com/docs/guides/fine-tuning for more information.
Issue: #11715

- Add new models and legacy models
- Add OpenAI pricing
- Update standardize_model_name to support the new fine-tuned model name scheme
- Add logic to handle fine-tuned model completion cost (new models only)
- Test new and legacy fine-tuned models
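The renaming step above can be sketched roughly as follows. The helper name matches langchain's `standardize_model_name`, but the body here is a simplified illustration of the `ft:` naming scheme, not the full implementation:

```python
# Simplified sketch of mapping OpenAI's fine-tuned model names onto
# pricing-table keys. Illustration only, not langchain's full code.
def standardize_model_name(model_name: str) -> str:
    model_name = model_name.lower()
    # OpenAI fine-tuned names look like "ft:<base>:<org>::<job_id>";
    # key the pricing table on "<base>-finetuned" instead.
    if "ft:" in model_name:
        return model_name.split(":")[1] + "-finetuned"
    return model_name

print(standardize_model_name("ft:gpt-3.5-turbo-0613:my-org::abc123"))
# -> gpt-3.5-turbo-0613-finetuned
```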
@vercel
vercel bot commented Oct 24, 2023

1 Ignored Deployment: langchain (Oct 24, 2023 8:43am)

@baskaryan
Collaborator

thanks @nirkopler!

baskaryan merged commit d374417 into langchain-ai:master Oct 24, 2023
Comment on lines +95 to +96
if "ft:" in model_name:
return model_name.split(":")[1] + "-finetuned"
Contributor
Does this work on Azure OpenAI? It looks like Azure OpenAI fine-tuned models follow the structure
<model_name>.ft-<job_id> (reference), which would fall into the previous if case without doing the right thing, I fear :/
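One way to cover both naming schemes is sketched below. This is a hedged illustration: the `.ft-` split and the `-azure-finetuned` suffix are assumptions for the example, and the actual Azure handling landed in the follow-up PR referenced later in this thread:

```python
# Hedged sketch covering both fine-tuned naming schemes. The ".ft-"
# split and "-azure-finetuned" suffix are illustrative assumptions.
def standardize_model_name(model_name: str) -> str:
    model_name = model_name.lower()
    if ".ft-" in model_name:
        # Azure OpenAI: "<model_name>.ft-<job_id>"
        return model_name.split(".ft-")[0] + "-azure-finetuned"
    if "ft:" in model_name:
        # OpenAI: "ft:<base>:<org>::<job_id>"
        return model_name.split(":")[1] + "-finetuned"
    return model_name
```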

Contributor Author

You're right. I'll open a new PR with the Azure fix and tag you 💪

baskaryan pushed a commit that referenced this pull request Oct 26, 2023
**Description:**
Add cost calculation for fine-tuned **Azure** models, with relevant unit tests.
see
https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/fine-tuning?tabs=turbo&pivots=programming-language-studio
for more information.
This PR is a follow-up to:
#12190

Twitter handle: @nirkopler
baskaryan pushed a commit that referenced this pull request Oct 27, 2023
The changes introduced in #12267 and #12190 broke the cost computation
of the `completion` tokens for fine-tuned models because of an early
return. This PR fixes that.
@baskaryan.
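The early-return bug described above can be illustrated with a minimal sketch. The prices, table, and function body below are made up for the example (the real function in langchain has the same name but more logic); the point is that returning as soon as the `-finetuned` name was built skipped the `-completion` suffix, so output tokens were priced at input rates:

```python
# Minimal illustration of the early-return bug and its fix; prices
# and model names are invented for the example.
MODEL_COST_PER_1K_TOKENS = {
    "gpt-3.5-turbo-0613-finetuned": 0.012,             # input (prompt)
    "gpt-3.5-turbo-0613-finetuned-completion": 0.016,  # output (completion)
}

def get_openai_token_cost_for_model(
    model_name: str, num_tokens: int, is_completion: bool = False
) -> float:
    # Fix: apply the "-completion" suffix *before* the table lookup,
    # instead of returning early with the bare fine-tuned name.
    if is_completion:
        model_name = model_name + "-completion"
    return MODEL_COST_PER_1K_TOKENS[model_name] * (num_tokens / 1000)
```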
xieqihui pushed a commit to xieqihui/langchain that referenced this pull request Nov 21, 2023
hoanq1811 pushed a commit to hoanq1811/langchain that referenced this pull request Feb 2, 2024
hoanq1811 pushed a commit to hoanq1811/langchain that referenced this pull request Feb 2, 2024
hoanq1811 pushed a commit to hoanq1811/langchain that referenced this pull request Feb 2, 2024