
"model_name" missing in "response_metadata" in JavaScript SDK but is present in Python SDK #7335

@rors41
Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain.js documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain.js rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

Example Code

Python example code:

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o")
print(llm.invoke("hello world").model_dump_json())

Javascript example code:

import { ChatOpenAI } from "@langchain/openai"

(async () => {
	const model = new ChatOpenAI({ model: "gpt-4o", apiKey: process.env.OPENAI_API_KEY });
	const response = await model.invoke("hello world");
	console.log(response)
})()

Error Message and Stack Trace (if applicable)

No response

Description

Hey, I've noticed that the response from a simple OpenAI LLM call in Python LangChain differs from the one in JavaScript LangChain.
We are using a LiteLLM proxy, and the model_name field is helpful because it contains the real name of the model that was actually used.

Is there a reason why this field is not present in the JS response?

Python Langchain:

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="azure.gpt-4o") # litellm proxy model name
print(llm.invoke("hello world").model_dump_json())

Response:

{
  "content": "Hello! How can I assist you today?",
  "additional_kwargs": {
    "refusal": null
  },
  "response_metadata": {
    "token_usage": {
      "completion_tokens": 9,
      "prompt_tokens": 9,
      "total_tokens": 18,
      "completion_tokens_details": null,
      "prompt_tokens_details": null
    },
    "model_name": "gpt-4o-2024-08-06", <- actual model name
    "system_fingerprint": "fp_",
    "finish_reason": "stop",
    "logprobs": null
  },
  "type": "ai",
  "name": null,
  "id": "run-",
  "example": false,
  "tool_calls": [],
  "invalid_tool_calls": [],
  "usage_metadata": {
    "input_tokens": 9,
    "output_tokens": 9,
    "total_tokens": 18,
    "input_token_details": {},
    "output_token_details": {}
  }
}

Javascript Langchain:

import { ChatOpenAI } from "@langchain/openai"

const model = new ChatOpenAI({ model: "azure.gpt-4o", apiKey: process.env.OPENAI_API_KEY, configuration: { baseURL: process.env.OPENAI_API_BASE }}); // litellm proxy model name
const response = await model.invoke("hello world");
console.log(response)

Response:

{
  "id": "chatcmpl-id",
  "content": "Hello! How can I assist you today?",
  "additional_kwargs": {
    "function_call": null,
    "tool_calls": null
  },
  "response_metadata": {
    "tokenUsage": {
      "promptTokens": 9,
      "completionTokens": 9,
      "totalTokens": 18
    },
    "finish_reason": "stop",
    "usage": {
      "completion_tokens": 9,
      "prompt_tokens": 9,
      "total_tokens": 18,
      "completion_tokens_details": null,
      "prompt_tokens_details": null
    },
    "system_fingerprint": "fp_"
  },
  "tool_calls": [],
  "invalid_tool_calls": [],
  "usage_metadata": {
    "output_tokens": 9,
    "input_tokens": 9,
    "total_tokens": 18,
    "input_token_details": {},
    "output_token_details": {}
  }
}
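
As a stopgap until the JS SDK surfaces the field, the model name can be copied out of the raw OpenAI chat-completion payload (which carries it under `model`, as the Python output above shows) into the metadata object yourself. The helper below is a minimal sketch, not part of the LangChain API; it assumes you can observe the raw payload somewhere (e.g. in proxy logs or a fetch wrapper), and the function name is illustrative.

```javascript
// Hypothetical helper: merge the resolved model name from a raw
// OpenAI chat-completion payload into LangChain-style
// response_metadata, mirroring the Python SDK's `model_name` key.
function withModelName(responseMetadata, rawCompletion) {
  return {
    ...responseMetadata,
    // Raw OpenAI completions report the resolved model as `model`,
    // e.g. "gpt-4o-2024-08-06" even when "gpt-4o" was requested.
    model_name: rawCompletion?.model ?? null,
  };
}

// Usage with a payload shaped like the OpenAI API response:
const rawPayload = { model: "gpt-4o-2024-08-06", usage: { total_tokens: 18 } };
const metadata = withModelName({ finish_reason: "stop" }, rawPayload);
console.log(metadata.model_name); // "gpt-4o-2024-08-06"
```

This only papers over the gap on the caller's side; the real fix would be for the JS integration to populate `response_metadata.model_name` from the same field, as the Python integration does.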

The same behaviour can be observed in the examples from the official docs.

System Info

Node version: v20.18.0
Platform: macOS Sonoma 14.7.1
langchain: 0.3.6
@langchain/openai: 0.3.14
