fix(langchain): langchain tracing errors #1049
Conversation
CLA Assistant Lite bot: All contributors have signed the CLA ✍️ ✅
I have read the CLA Document and I hereby sign the CLA
recheck
Hey @domsj-foodpairing, looks like CI is failing on formatting. Can you run our formatter? Definitely understand that it is a pain to produce a reproducible example, but it would help us understand the change!
@@ -376,6 +376,8 @@ def _input_messages(
             parsed_messages.append(dict(_parse_message_data(first_messages.to_json())))
         elif hasattr(first_messages, "get"):
             parsed_messages.append(dict(_parse_message_data(first_messages)))
+        elif isinstance(first_messages, tuple) and len(first_messages) == 2:
+            parsed_messages.append({MESSAGE_ROLE: first_messages[0], MESSAGE_CONTENT: first_messages[1]})
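The new branch can be sketched in isolation. This is a hedged illustration, not the instrumentation itself: `MESSAGE_ROLE` and `MESSAGE_CONTENT` are assumed here to be plain string attribute keys, and the values below are hypothetical.

```python
# Hedged sketch of the tuple-handling branch added by this PR.
# The key strings below are hypothetical stand-ins, not the real constants.
MESSAGE_ROLE = "message.role"
MESSAGE_CONTENT = "message.content"


def parse_first_message(first_message):
    """Turn a (role, content) tuple into a role/content dict."""
    if isinstance(first_message, tuple) and len(first_message) == 2:
        return {MESSAGE_ROLE: first_message[0], MESSAGE_CONTENT: first_message[1]}
    raise TypeError(f"unsupported message type: {type(first_message)}")


print(parse_first_message(("system", "Be a helpful agent.")))
```

The point of the branch is that LangChain accepts `(role, content)` tuples as message shorthand, and before this change such tuples fell through every existing case.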
I've been trying to reproduce this in a smaller example, but haven't managed to so far.
However, I was able to make the problem go away by replacing

input = {
    "start_time": time.time(),
    "messages": [
        (
            "system",
            get_system_prompt(),
        ),
        ...

with

input = {
    "start_time": time.time(),
    "messages": [
        SystemMessage(
            content=get_system_prompt(),
        ),
        ...

🤔
for i, obj in enumerate(content):
    assert hasattr(obj, "get"), f"expected Mapping, found {type(obj)}"
    if isinstance(obj, str):
        yield f"{MESSAGE_CONTENTS}.0", obj
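The failure mode this guards against can be sketched as follows. This is a hedged, self-contained illustration, not the instrumentation code: `MESSAGE_CONTENTS` is a hypothetical key prefix, and the mapping-handling branch is simplified.

```python
# Hedged sketch: LLM content may be a list mixing plain strings and
# mapping-like chunks. A bare str has no .get(), so an unconditional
# assert hasattr(obj, "get") raises for string chunks; checking
# isinstance(obj, str) first lets them be yielded directly.
# MESSAGE_CONTENTS is a hypothetical stand-in for the real constant.
MESSAGE_CONTENTS = "message.contents"


def flatten_content(content):
    """Yield (attribute_key, value) pairs, tolerating plain-string chunks."""
    for i, obj in enumerate(content):
        if isinstance(obj, str):
            yield f"{MESSAGE_CONTENTS}.{i}", obj
        elif hasattr(obj, "get"):
            yield f"{MESSAGE_CONTENTS}.{i}.type", obj.get("type")


print(list(flatten_content(["hello", {"type": "text"}])))
```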
This one is a value returned by the LLM. So sometimes the LLM returns a list[str], while usually it returns just a single str?
fwiw I'm using langchain_google_vertexai.VertexAI
Edit: note that this LLM output is input for another langgraph node; it's not immediately clear to me where the instrumentation picks it up.
For my own future reference, here's the reproduction I was attempting:

from langchain_core.messages import AIMessage
from langchain_core.tools import tool
from langchain_google_vertexai import ChatVertexAI
from langgraph.prebuilt import create_react_agent
from openinference.instrumentation.langchain import LangChainInstrumentor
from phoenix.otel import register
from vertexai.generative_models import HarmBlockThreshold, HarmCategory

_tracer_provider = register(endpoint="http://localhost:6006/v1/traces")
LangChainInstrumentor().instrument(tracer_provider=_tracer_provider)

model = ChatVertexAI(
    max_output_tokens=8192,
    model_name="gemini-1.5-flash",
    safety_settings={
        HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_ONLY_HIGH,
        HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
        HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
        HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
    },
)

@tool
def add(a: int, b: int) -> str:
    """Add two numbers together."""
    return f"res = {a + b}"

@tool
def multiply(a: int, b: int) -> int:
    """Multiply two numbers together."""
    return a * b

tools = [add, multiply]

agent = create_react_agent(
    model=model,
    tools=tools,  # type: ignore[arg-type]
)

import uuid
from langchain_core.messages import ToolMessage

tool_call_id = str(uuid.uuid4())

async for res in agent.astream(
    {
        "messages": [
            ("system", "Be a helpful agent, use tools to help create a response."),
            ("user", "what can you help me with?"),
            AIMessage(content="I can help with math problems"),
            ("user", "what is 2 + 2?"),
            AIMessage(content="", tool_calls=[{"name": "add", "args": {"a": 2, "b": 2}, "id": tool_call_id}]),
            ToolMessage(content="res=4", name="add", tool_call_id=tool_call_id),
            ("user", "what is 3 * 3?"),
        ]
    }
):
    print(res)
@domsj-foodpairing are you still able to reproduce the issue with a different code snippet?
Yes, for both issues/changes, though with code that I can't share. I can move the attempt at reproducing closer to the actual code, but I'm a bit busy right now; let me get back to this later.
I'm not seeing these errors anymore (at least for now), and I don't have time to dig further into this. FWIW, I did update langchain and langgraph since.
No worries, thank you for the contribution and sorry we couldn't reproduce.
Hi,
I'm using langchain 0.3 and langgraph 0.2.26 with a langgraph.prebuilt.chat_agent_executor.create_react_agent and a ChatVertexAI model. More details can be provided on request; I'd prefer not to put time into creating a reproducer if possible. The attached changes fix some errors I have been seeing in the logs.
Not sure if the changes make sense, but my pattern matching thinks it's a reasonable attempt at making things better.