How to track input and output tokens with AzureChatOpenAI #30226
To track input and output tokens and costs for each call to an Azure-deployed OpenAI model, you can indeed use `get_openai_callback`:

```python
from langchain_community.callbacks import get_openai_callback
from langchain_openai import OpenAI

llm = OpenAI(model_name="gpt-3.5-turbo-instruct")

with get_openai_callback() as cb:
    result = llm.invoke("Tell me a joke")
    print(result)
    print("---")

print(f"Total Tokens: {cb.total_tokens}")
print(f"Prompt Tokens: {cb.prompt_tokens}")
print(f"Completion Tokens: {cb.completion_tokens}")
print(f"Total Cost (USD): ${cb.total_cost}")
```

This tracks the input and output tokens for every call made inside the `with` block and calculates the cost from that usage.
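Since the question is about an Azure deployment, the same callback pattern can be used with `AzureChatOpenAI` instead of `OpenAI`. A minimal sketch, assuming placeholder endpoint, deployment, and API-version values (substitute your own; this requires live Azure credentials to run):

```python
from langchain_community.callbacks import get_openai_callback
from langchain_openai import AzureChatOpenAI

# All Azure connection values below are hypothetical placeholders.
llm = AzureChatOpenAI(
    azure_endpoint="https://my-resource.openai.azure.com/",  # hypothetical
    azure_deployment="my-gpt-deployment",                    # hypothetical
    api_version="2024-02-01",                                # hypothetical
)

with get_openai_callback() as cb:
    result = llm.invoke("Tell me a joke")

# The callback aggregates usage across every call made inside the block.
print(f"Prompt Tokens: {cb.prompt_tokens}")
print(f"Completion Tokens: {cb.completion_tokens}")
print(f"Total Cost (USD): ${cb.total_cost}")
```

The callback fields are the same regardless of which OpenAI-family model class made the calls, so the rest of the tracking code does not change when you swap in the Azure client.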
I need to track costs across my LangGraph graph whenever I make a call to the Azure-deployed OpenAI model. I saw in one of the documentation guides that

`from langchain_community.callbacks import get_openai_callback`

is used to get input tokens, cost, and other similar metadata. Is this the correct module to use for tracking tokens/cost? I already use LangSmith, but I want a programmatic implementation to track the tokens in and out for each model call and return it in my API.
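For returning usage in an API response across many calls in a graph, one programmatic pattern (independent of LangChain, so everything here is illustrative) is to accumulate per-call usage into a small tracker whose fields mirror what `get_openai_callback` exposes. The per-1k-token rates below are placeholders, not real Azure OpenAI pricing:

```python
class TokenUsageTracker:
    """Accumulates token usage across model calls; field names mirror the
    get_openai_callback attributes (prompt/completion/total tokens, cost)."""

    def __init__(self, prompt_rate_per_1k=0.0005, completion_rate_per_1k=0.0015):
        # Placeholder USD rates per 1000 tokens -- substitute real pricing.
        self.prompt_tokens = 0
        self.completion_tokens = 0
        self.prompt_rate_per_1k = prompt_rate_per_1k
        self.completion_rate_per_1k = completion_rate_per_1k

    def record(self, prompt_tokens: int, completion_tokens: int) -> None:
        # Call once per model invocation, e.g. with cb.prompt_tokens
        # and cb.completion_tokens read after a with-block exits.
        self.prompt_tokens += prompt_tokens
        self.completion_tokens += completion_tokens

    @property
    def total_tokens(self) -> int:
        return self.prompt_tokens + self.completion_tokens

    @property
    def total_cost(self) -> float:
        return (self.prompt_tokens * self.prompt_rate_per_1k
                + self.completion_tokens * self.completion_rate_per_1k) / 1000

    def as_dict(self) -> dict:
        # A shape suitable for embedding directly in an API response.
        return {
            "prompt_tokens": self.prompt_tokens,
            "completion_tokens": self.completion_tokens,
            "total_tokens": self.total_tokens,
            "total_cost_usd": self.total_cost,
        }


tracker = TokenUsageTracker()
tracker.record(120, 30)  # usage from a first model call
tracker.record(80, 20)   # usage from a second model call
print(tracker.as_dict())
```

A tracker like this can live in your LangGraph state or request context, with each node recording its own call's usage, so the final response can report totals for the whole graph run.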