
fix(langchain): langchain tracing errors #1049

Closed

Conversation

domsj-foodpairing

Hi,

I'm using langchain 0.3 and langgraph 0.2.26 with langgraph.prebuilt.chat_agent_executor.create_react_agent and a ChatVertexAI model. More details can be provided on request; I'd prefer not to spend time creating a reproducer if possible.

The attached changes fix some errors I have been seeing in the logs.

2024-09-25 08:38:40,201: ERROR - openinference.instrumentation.langchain._tracer - Failed to get attribute.
Traceback (most recent call last):
  File "/workspaces/product/python/.venv/lib/python3.11/site-packages/openinference/instrumentation/langchain/_tracer.py", line 280, in wrapper
    yield from wrapped(*args, **kwargs)
  File "/workspaces/product/python/.venv/lib/python3.11/site-packages/openinference/instrumentation/langchain/_tracer.py", line 452, in _parse_message_data
    assert hasattr(obj, "get"), f"expected Mapping, found {type(obj)}: {obj}"
           ^^^^^^^^^^^^^^^^^^^
AssertionError: expected Mapping, found <class 'str'>: 
2024-09-24 20:01:29.845 CEST
Traceback (most recent call last):
  File "/app/.venv/lib/python3.11/site-packages/openinference/instrumentation/langchain/_tracer.py", line 280, in wrapper
    yield from wrapped(*args, **kwargs)
  File "/app/.venv/lib/python3.11/site-packages/openinference/instrumentation/langchain/_tracer.py", line 380, in _input_messages
    raise ValueError(f"failed to parse messages of type {type(first_messages)}")
ValueError: failed to parse messages of type <class 'tuple'>
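For context, here is a minimal stdlib-only sketch of why a `(role, content)` tuple trips the second error. The names below are illustrative stand-ins that mimic the tracer's logic, not the actual openinference implementation: a tuple has no `.get`, so no parsing branch matches and the function raises.

```python
# Hypothetical mimic of the tracer's message parsing; names are illustrative.
MESSAGE_ROLE = "message.role"
MESSAGE_CONTENT = "message.content"


def parse_message(obj):
    if hasattr(obj, "get"):  # Mapping-like message dicts parse fine
        return {MESSAGE_ROLE: obj.get("role"), MESSAGE_CONTENT: obj.get("content")}
    # A ("system", "...") tuple falls through to here, as in the log above.
    raise ValueError(f"failed to parse messages of type {type(obj)}")


parse_message({"role": "system", "content": "Be helpful"})  # parses fine
try:
    parse_message(("system", "Be helpful"))
except ValueError as exc:
    print(exc)  # failed to parse messages of type <class 'tuple'>
```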

Not sure if the changes make sense, but my pattern matching thinks it's a reasonable attempt at making things better.

@domsj-foodpairing domsj-foodpairing requested a review from a team as a code owner September 25, 2024 08:52
@dosubot dosubot bot added the size:XS This PR changes 0-9 lines, ignoring generated files. label Sep 25, 2024
Contributor

github-actions bot commented Sep 25, 2024

CLA Assistant Lite bot All contributors have signed the CLA ✍️ ✅

@domsj-foodpairing
Author

I have read the CLA Document and I hereby sign the CLA

github-actions bot added a commit that referenced this pull request Sep 25, 2024
@domsj-foodpairing
Author

recheck

@axiomofjoy axiomofjoy changed the title fix langchain tracing errors fix: langchain tracing errors Sep 28, 2024
@axiomofjoy axiomofjoy changed the title fix: langchain tracing errors fix(langchain): langchain tracing errors Sep 28, 2024
@axiomofjoy
Contributor

Hey @domsj-foodpairing, looks like CI is failing on formatting. Can you run our formatter with ruff format and commit?

Definitely understand that it is a pain to produce a reproducible example, but it would help us understand the change!

@axiomofjoy axiomofjoy self-requested a review September 28, 2024 02:08
@@ -376,6 +376,8 @@ def _input_messages(
         parsed_messages.append(dict(_parse_message_data(first_messages.to_json())))
     elif hasattr(first_messages, "get"):
         parsed_messages.append(dict(_parse_message_data(first_messages)))
+    elif isinstance(first_messages, tuple) and len(first_messages) == 2:
+        parsed_messages.append({MESSAGE_ROLE: first_messages[0], MESSAGE_CONTENT: first_messages[1]})
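Sketched in isolation (with hypothetical constant names, not the actual tracer code), the added branch normalizes a two-element `(role, content)` tuple into the same attribute dict a Mapping message produces:

```python
# Illustrative mimic of the patched branch; not the real openinference code.
MESSAGE_ROLE = "message.role"
MESSAGE_CONTENT = "message.content"


def parse_message(obj):
    if hasattr(obj, "get"):  # Mapping-like messages, as before
        return {MESSAGE_ROLE: obj.get("role"), MESSAGE_CONTENT: obj.get("content")}
    if isinstance(obj, tuple) and len(obj) == 2:  # the new branch
        role, content = obj
        return {MESSAGE_ROLE: role, MESSAGE_CONTENT: content}
    raise ValueError(f"failed to parse messages of type {type(obj)}")


# Both input forms now yield the same attributes:
assert parse_message(("system", "hi")) == parse_message({"role": "system", "content": "hi"})
```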
Copy link
Author


I've been trying to reproduce in a smaller example but couldn't yet so far.
However I was able to make this problem go away by replacing

            input = {
                "start_time": time.time(),
                "messages": [
                    (
                        "system",
                        get_system_prompt(),
                    ),
...

with

            input = {
                "start_time": time.time(),
                "messages": [
                    SystemMessage(
                        content=get_system_prompt(),
                    ),
...

🤔

for i, obj in enumerate(content):
    assert hasattr(obj, "get"), f"expected Mapping, found {type(obj)}"
if isinstance(obj, str):
    yield f"{MESSAGE_CONTENTS}.0", obj
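A stdlib-only sketch (again with illustrative names mimicking, not reproducing, the tracer) of the guard this hunk is after: treat a bare str as a single content entry instead of asserting that every element is a Mapping.

```python
# Hypothetical mimic: LLMs sometimes return a plain str, sometimes a
# list of content blocks; handle both instead of asserting Mapping.
MESSAGE_CONTENTS = "message.contents"


def flatten_contents(content):
    if isinstance(content, str):
        yield f"{MESSAGE_CONTENTS}.0", content
        return
    for i, obj in enumerate(content):
        if isinstance(obj, str):  # a str inside the list, as in the first traceback
            yield f"{MESSAGE_CONTENTS}.{i}", obj
        else:
            assert hasattr(obj, "get"), f"expected Mapping, found {type(obj)}"
            yield f"{MESSAGE_CONTENTS}.{i}", obj.get("text")


print(dict(flatten_contents("hello")))
print(dict(flatten_contents(["part one", {"type": "text", "text": "part two"}])))
```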
Author

@domsj-foodpairing domsj-foodpairing Oct 1, 2024


This one is a value returned by the LLM. So sometimes the LLM returns a list[str], while usually it returns just a single str? FWIW I'm using langchain_google_vertexai.VertexAI.

edit: note that this LLM output is input for another langgraph node; it's not immediately clear to me where the instrumentation picks it up.

@domsj-foodpairing
Author

for my own future reference, here's the reproduction I was attempting
(however it didn't reproduce ...)

from langchain_core.messages import AIMessage
from langchain_core.tools import tool
from langchain_google_vertexai import ChatVertexAI
from langgraph.prebuilt import create_react_agent
from openinference.instrumentation.langchain import LangChainInstrumentor
from phoenix.otel import register
from vertexai.generative_models import HarmBlockThreshold, HarmCategory

_tracer_provider = register(endpoint="http://localhost:6006/v1/traces")
LangChainInstrumentor().instrument(tracer_provider=_tracer_provider)


model = ChatVertexAI(
    max_output_tokens=8192,
    model_name="gemini-1.5-flash",
    safety_settings={
        HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_ONLY_HIGH,
        HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
        HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
        HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
    },
)


@tool
def add(a: int, b: int) -> str:
    """Add two numbers together."""
    return f"res = {a + b}"


@tool
def multiply(a: int, b: int) -> int:
    """Multiply two numbers together."""
    return a * b


tools = [add, multiply]

agent = create_react_agent(
    model=model,
    tools=tools,  # type: ignore[arg-type]
)

import asyncio
import uuid

from langchain_core.messages import ToolMessage

tool_call_id = str(uuid.uuid4())


async def main() -> None:
    # async for is only valid inside an async function, so wrap the stream loop
    async for res in agent.astream(
        {
            "messages": [
                ("system", "Be a helpful agent, use tools to help create a response."),
                ("user", "what can you help me with?"),
                AIMessage(content="I can help with math problems"),
                ("user", "what is 2 + 2?"),
                AIMessage(content="", tool_calls=[{"name": "add", "args": {"a": 2, "b": 2}, "id": tool_call_id}]),
                ToolMessage(content="res=4", name="add", tool_call_id=tool_call_id),
                ("user", "what is 3 * 3?"),
            ]
        }
    ):
        print(res)


asyncio.run(main())

@axiomofjoy
Contributor

> for my own future reference, here's the reproduction I was attempting
> (however it didn't reproduce ...)

@domsj-foodpairing are you still able to reproduce the issue with a different code snippet?

@domsj-foodpairing
Author

> > for my own future reference, here's the reproduction I was attempting
> > (however it didn't reproduce ...)
>
> @domsj-foodpairing are you still able to reproduce the issue with a different code snippet?

Yes, for both issues/changes, though with code that I can't share.
Just to clarify: one issue I did 'fix' as mentioned in #1049 (comment), but that change shouldn't have been needed imo, because the code is equivalent from the langgraph/langchain point of view.

I can move the attempt at reproducing closer to the actual code, but I'm a bit busy right now. Let me get back to this later.

@domsj-foodpairing
Author

I'm not seeing these errors anymore (at least for now), and I don't have time to dig further into this. FWIW I did update langchain and langgraph since.
Apologies, and thanks for bearing with me...

@domsj-foodpairing domsj-foodpairing deleted the langchain-0.3-errors branch October 23, 2024 07:05
@axiomofjoy
Contributor

No worries, thank you for the contribution and sorry we couldn't reproduce.

Labels
cannot reproduce needs information size:XS This PR changes 0-9 lines, ignoring generated files.
Projects
Status: Done