Closed
Checked other resources
- I added a very descriptive title to this issue.
- I searched the LangChain.js documentation with the integrated search.
- I used the GitHub search to find a similar question and didn't find it.
- I am sure that this is a bug in LangChain.js rather than my code.
- The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
Example Code
Python example code:
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model="gpt-4o")
print(llm.invoke("hello world").model_dump_json())

Javascript example code:
import { ChatOpenAI } from "@langchain/openai"
(async () => {
const model = new ChatOpenAI({ model: "gpt-4o", apiKey: process.env.OPENAI_API_KEY });
const response = await model.invoke("hello world");
console.log(response)
})();

Error Message and Stack Trace (if applicable)
No response
Description
Hey, I've noticed that the response from a simple OpenAI LLM call in Python LangChain differs from the one in JavaScript LangChain.
We are using a LiteLLM proxy, and the model_name field is helpful because it contains the real name of the model that was actually used.
Is there a reason why this field is not present in the JS response?
Python Langchain:
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model="azure.gpt-4o") # litellm proxy model name
print(llm.invoke("hello world").model_dump_json())

Response:
{
"content": "Hello! How can I assist you today?",
"additional_kwargs": {
"refusal": null
},
"response_metadata": {
"token_usage": {
"completion_tokens": 9,
"prompt_tokens": 9,
"total_tokens": 18,
"completion_tokens_details": null,
"prompt_tokens_details": null
},
"model_name": "gpt-4o-2024-08-06", <- actual model name
"system_fingerprint": "fp_",
"finish_reason": "stop",
"logprobs": null
},
"type": "ai",
"name": null,
"id": "run-",
"example": false,
"tool_calls": [],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 9,
"output_tokens": 9,
"total_tokens": 18,
"input_token_details": {},
"output_token_details": {}
}
}

Javascript Langchain:
import { ChatOpenAI } from "@langchain/openai"
const model = new ChatOpenAI({ model: "azure.gpt-4o", apiKey: process.env.OPENAI_API_KEY, configuration: { baseURL: process.env.OPENAI_API_BASE }}); // litellm proxy model name
const response = await model.invoke("hello world");
console.log(response)

Response:
{
"id": "chatcmpl-id",
"content": "Hello! How can I assist you today?",
"additional_kwargs": {
"function_call": null,
"tool_calls": null
},
"response_metadata": {
"tokenUsage": {
"promptTokens": 9,
"completionTokens": 9,
"totalTokens": 18
},
"finish_reason": "stop",
"usage": {
"completion_tokens": 9,
"prompt_tokens": 9,
"total_tokens": 18,
"completion_tokens_details": null,
"prompt_tokens_details": null
},
"system_fingerprint": "fp_"
},
"tool_calls": [],
"invalid_tool_calls": [],
"usage_metadata": {
"output_tokens": 9,
"input_tokens": 9,
"total_tokens": 18,
"input_token_details": {},
"output_token_details": {}
}
}

The same behaviour can be observed in the examples from the official docs:
- JS - no model name in the response - https://js.langchain.com/docs/integrations/chat/openai/#invocation
- Python - model name present in the response - https://python.langchain.com/docs/integrations/chat/openai/#invocation
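For anyone hitting this in the meantime: the raw Chat Completions payload does carry the served model name in its top-level model field, which is what Python LangChain surfaces as model_name. A minimal workaround sketch for recovering it from a raw proxy response (the sample payload below is illustrative, not a real API response):

```javascript
// Extract the served model name from a raw Chat Completions payload.
// LiteLLM (like OpenAI) returns the resolved model in the top-level
// "model" field of the response body.
function servedModelName(completion) {
  return completion.model ?? null;
}

// Illustrative payload mirroring the Python response above.
const sample = {
  id: "chatcmpl-id",
  model: "gpt-4o-2024-08-06",
  choices: [
    {
      message: { role: "assistant", content: "Hello! How can I assist you today?" },
      finish_reason: "stop",
    },
  ],
};

console.log(servedModelName(sample)); // "gpt-4o-2024-08-06"
```

This only helps when you have access to the raw response (e.g. by calling the proxy directly); it does not change what ChatOpenAI puts into response_metadata, which is the gap this issue is about.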
System Info
Node version: v20.18.0
Platform: macOS Sonoma 14.7.1
langchain: 0.3.6
@langchain/openai: 0.3.14