
🐛 Bug Report: LLM token counts not updated in traces sent to dynatrace #2661

Open
@sristiraj

Description


Which component is this bug for?

Langchain Instrumentation

📜 Description

OpenLLMetry does not set completion and total token counts in traces sent to Dynatrace SaaS over OTLP/HTTP via OpenTelemetry for a RAG chain built with the LangChain framework and the Anthropic Claude 3.5 foundation model served on Bedrock and accessed via the Mosaic AI Gateway in Databricks.

Traces for prompt generation and the LLM request/response calls are captured correctly, but token counts are missing.
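For reference, here is a minimal sketch of the usage attributes I would expect on the chat-model span; the attribute keys come from my reading of the OpenLLMetry semantic conventions (opentelemetry-semantic-conventions-ai) and the numbers are placeholders, so treat both as assumptions rather than confirmed output:

# Usage attributes expected on the LLM span (keys assumed from the
# OpenLLMetry semantic conventions; values are illustrative placeholders)
expected_usage_attributes = {
    "gen_ai.usage.prompt_tokens": 42,        # prompt token count
    "gen_ai.usage.completion_tokens": 128,   # reported missing in Dynatrace
    "llm.usage.total_tokens": 170,           # reported missing in Dynatrace
}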

👟 Reproduction steps

  1. Create a Python project and install the traceloop-sdk dependency along with langchain-core, databricks-langchain, and streamlit.
  2. Create a sample application module named "sample.py" using the code sample below.
import uuid

import streamlit as st
from traceloop.sdk import Traceloop
from langchain_core.prompts import PromptTemplate
from databricks_langchain import ChatDatabricks  # ChatDatabricks ships in the databricks-langchain package installed in step 1
from langchain_core.output_parsers import StrOutputParser

# Initialize Traceloop before attaching association properties to the traces
Traceloop.init(app_name="sr sample", disable_batch=True)
user_id = "abc"
session_id = str(uuid.uuid4())  # association property values should be plain strings
Traceloop.set_association_properties({"user_id": user_id, "session_id": session_id})
st.title("Marketing Email Generator")
with st.form("my_form"):
    product_name = st.text_input("Product Name")
    product_features = st.text_input("Product Features")
    audience = st.text_input("Target Audience")
    submitted = st.form_submit_button("Submit")
    if submitted:
        prompt_subject = PromptTemplate(
            input_variables=["product_name", "product_features"],
            template="""You are an email marketing campaign manager.
            Generate a subject for the email for the product: {product_name}
            and having features that include: {product_features}
            Respond with only the subject line.
            """
        )
        prompt_email = PromptTemplate(
            input_variables=["subject", "audience"],
            template="""You are an email marketing manager.
            Generate an email body for the email subject: {subject}
            and for the audience: {audience}
            """
        )
        llm = ChatDatabricks(endpoint="claude-3-5-sonnet-20240620-v1-0", temperature=0)
        # Generate the subject, echo it to the Streamlit UI, then pass the raw text on as a string
        subject_chain = prompt_subject | llm | (lambda subject: (subject, st.write(f"Title: {subject.content}"))[0]) | StrOutputParser()
        email_chain = prompt_email | llm
        # Feed the generated subject plus the audience from the form into the email prompt
        final_chain = subject_chain | (lambda subject: {"subject": subject, "audience": audience}) | email_chain
        response = final_chain.invoke({"product_name": product_name,
                                       "product_features": product_features})
        st.write(response.content)
  3. Create a serving endpoint in the Databricks AI Gateway for the Claude 3.5 model with the name "claude-3-5-sonnet-20240620-v1-0".
  4. Create an access token in Databricks that the code snippet above can use to call the Claude model served on the AI Gateway.
  5. Create an access token in Dynatrace with permission to ingest OpenTelemetry traces, metrics, and logs.
  6. In the Python project, set the environment variable TRACELOOP_HEADERS using the Dynatrace token created in step 5, and set TRACELOOP_BASE_URL to the Dynatrace URL https://.live.dynatrace.com/api/v2/otlp (see the environment sketch after this list).
  7. Set the DATABRICKS_HOST environment variable to the Databricks workspace URL.
  8. Set the DATABRICKS_TOKEN environment variable to the Databricks access token created in step 4 above.
  9. Run the Python module created in step 2 with the command "streamlit run sample.py".
  10. Open the application started by Streamlit in the browser and provide traceloop, tracing, and GenAI engineer as the values for product name, product features, and audience respectively.
  11. In Dynatrace, navigate to Distributed Tracing and check whether the trace has token counts populated.
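For concreteness, here is a minimal sketch in Python of the environment configuration described in steps 6-8; the placeholder values, the OTLP key=value header syntax, and the Api-Token authorization scheme are assumptions on my part rather than details taken from this report:

import os

# Point the Traceloop OTLP exporter at Dynatrace (step 6; values are placeholders)
os.environ["TRACELOOP_BASE_URL"] = "https://<environment-id>.live.dynatrace.com/api/v2/otlp"
os.environ["TRACELOOP_HEADERS"] = "Authorization=Api-Token%20<dynatrace-token>"

# Databricks workspace and access token (steps 7 and 8; token created in step 4)
os.environ["DATABRICKS_HOST"] = "https://<workspace>.cloud.databricks.com"
os.environ["DATABRICKS_TOKEN"] = "<databricks-access-token>"

These variables need to be set before Traceloop.init() runs, e.g. in the shell that launches "streamlit run sample.py".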

👍 Expected behavior

Completion token count and total token count should be populated in the llm.usage section of the trace in Dynatrace.
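As a side check (not part of the original report), one way to narrow the problem down is to confirm that the model response itself carries token usage; usage_metadata is the langchain-core field for this, and whether ChatDatabricks populates it for this endpoint is an assumption here:

# Quick local check: does the raw model response carry token usage at all?
message = llm.invoke("Say hello in five words.")
print(message.usage_metadata)
# e.g. {'input_tokens': ..., 'output_tokens': ..., 'total_tokens': ...}

If this prints token counts while the exported span still lacks completion and total token attributes, the gap is in the LangChain instrumentation rather than in the model serving layer.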

👎 Actual Behavior with Screenshots

Token counts are not updated. Refer to the attached screenshot.


🤖 Python Version

3.10

📃 Provide any additional context for the Bug.

No response

👀 Have you spent some time to check if this bug has been raised before?

  • I checked and didn't find a similar issue

Are you willing to submit PR?

Yes I am willing to submit a PR!
