
Setup Langfuse instrumentation for Google ADK agent#8

Merged
MrOrz merged 4 commits into master from setup-langfuse-adk-6582592787464144360 on Feb 24, 2026
Conversation

@MrOrz
Member

@MrOrz MrOrz commented Feb 23, 2026

This PR sets up Langfuse observability for the Google ADK agent. It adds the necessary dependencies and initializes the instrumentation in the agent's entry point. The instrumentation is configured to use environment variables LANGFUSE_PUBLIC_KEY and LANGFUSE_SECRET_KEY.


PR created automatically by Jules for task 6582592787464144360 started by @MrOrz

- Added `langfuse`, `opentelemetry-sdk`, and `openinference-instrumentation-google-adk` to `adk/pyproject.toml`.
- Created `adk/cofacts-ai/instrumentation.py` to handle Langfuse initialization.
- Modified `adk/cofacts-ai/agent.py` to import and run instrumentation setup on startup.
- Updated `adk/uv.lock` via `pnpm install:agent`.
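The `setup_instrumentation` helper itself is not shown in this thread; a minimal sketch consistent with the description above (credentials from `LANGFUSE_PUBLIC_KEY`/`LANGFUSE_SECRET_KEY`, graceful no-op when packages or keys are missing) might look like:

```python
import os

def setup_instrumentation() -> bool:
    """Initialize Langfuse tracing for the ADK agent.

    Returns True when instrumentation is active, False when credentials
    or optional dependencies are missing (the agent keeps running either way).
    """
    if not (os.environ.get("LANGFUSE_PUBLIC_KEY") and os.environ.get("LANGFUSE_SECRET_KEY")):
        return False  # observability is optional: run uninstrumented
    try:
        from langfuse import get_client
        from openinference.instrumentation.google_adk import GoogleADKInstrumentor
    except ImportError:
        return False  # optional packages not installed
    if not get_client().auth_check():
        return False  # bad credentials; skip instrumentation
    GoogleADKInstrumentor().instrument()
    return True
```

`agent.py` would then call `setup_instrumentation()` once at startup, before any ADK runner is created. The exact structure of the merged module may differ.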

Co-authored-by: MrOrz <108608+MrOrz@users.noreply.github.com>
@google-labs-jules
Contributor

👋 Jules, reporting for duty! I'm here to lend a hand with this pull request.

When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down.

I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job!

For more direct control, you can switch me to Reactive Mode. When this mode is on, I will only act on comments where you specifically mention me with @jules. You can find this option in the Pull Request section of your global Jules UI settings. You can always switch back!

New to Jules? Learn more at jules.google/docs.


For security, I will only act on instructions from the user who triggered this task.

@gemini-code-assist

Summary of Changes

Hello @MrOrz, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces comprehensive observability capabilities to the Google ADK agent by integrating Langfuse. The changes enable the agent to trace and monitor its operations, providing valuable insights into its behavior and performance. This enhancement is crucial for debugging, performance optimization, and understanding the agent's interactions in production environments.

Highlights

  • Langfuse Observability Integration: Integrated Langfuse for observability into the Google ADK agent, enabling tracing and monitoring of agent interactions.
  • New Instrumentation Module: Introduced a dedicated instrumentation.py module to centralize the setup logic for Langfuse and OpenTelemetry.
  • Dependency Updates: Added langfuse, opentelemetry-sdk, and openinference-instrumentation-google-adk to pyproject.toml to support the new observability features.


Changelog
  • adk/cofacts-ai/agent.py
    • Imported setup_instrumentation from the new instrumentation module.
    • Called setup_instrumentation() at the agent's entry point to initialize Langfuse.
  • adk/cofacts-ai/instrumentation.py
    • Added a new file containing the setup_instrumentation function.
    • Implemented logic to initialize TracerProvider, LangfuseExporter, and BatchSpanProcessor.
    • Instrumented the Google ADK using GoogleADKInstrumentor.
    • Included checks for environment variables (LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY, LANGFUSE_HOST) and graceful handling if packages or credentials are missing.
  • adk/pyproject.toml
    • Added langfuse, opentelemetry-sdk, and openinference-instrumentation-google-adk to the project dependencies.
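The manual `TracerProvider`/`BatchSpanProcessor` path mentioned in the changelog typically ships spans to Langfuse over plain OTLP with HTTP Basic auth built from the key pair. A hedged sketch of that wiring (endpoint path and placeholder keys are assumptions, not code from this PR):

```python
import base64
import os

# Langfuse accepts OpenTelemetry spans over OTLP; exporters authenticate
# with HTTP Basic auth derived from the public/secret key pair.
# (Key values below are placeholders.)
public_key = os.environ.get("LANGFUSE_PUBLIC_KEY", "pk-lf-...")
secret_key = os.environ.get("LANGFUSE_SECRET_KEY", "sk-lf-...")

auth = base64.b64encode(f"{public_key}:{secret_key}".encode()).decode()

# Standard OTLP environment variables picked up by opentelemetry-sdk exporters:
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = (
    os.environ.get("LANGFUSE_HOST", "https://cloud.langfuse.com") + "/api/public/otel"
)
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = f"Authorization=Basic {auth}"
```

With these set, a stock OTLP span exporter needs no Langfuse-specific code at all, which is part of why the manual setup could later be simplified.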


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request correctly sets up Langfuse instrumentation for the Google ADK agent by adding the necessary dependencies and initialization logic. The implementation is robust, handling missing dependencies and credentials gracefully. I've suggested a small improvement to set the global OpenTelemetry tracer provider, which will make the instrumentation more robust for future extensions. Overall, this is a solid addition for observability.
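The reviewer's suggestion amounts to registering the configured provider globally so that any library calling `trace.get_tracer(...)` shares it. A sketch of the idea, assuming `opentelemetry-sdk` is available (the function name and the stand-in console exporter are illustrative, not the PR's actual code):

```python
def set_global_provider():
    """Build a tracer provider and register it globally.

    Returns the provider, or None when opentelemetry-sdk is not installed.
    """
    try:
        from opentelemetry import trace
        from opentelemetry.sdk.trace import TracerProvider
        from opentelemetry.sdk.trace.export import (
            BatchSpanProcessor,
            ConsoleSpanExporter,
        )
    except ImportError:
        return None

    provider = TracerProvider()
    # In the real setup the exporter would target Langfuse; ConsoleSpanExporter
    # stands in here so the sketch stays self-contained.
    provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))

    # Register globally so future trace.get_tracer(...) calls use this provider.
    trace.set_tracer_provider(provider)
    return provider
```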

@MrOrz
Member Author

MrOrz commented Feb 24, 2026

@jules I think the current implementation is more complicated than the Langfuse official doc https://langfuse.com/integrations/frameworks/google-adk

If some changes are actually not necessary, please consider removing them.

---
source: ⚠️ Jupyter Notebook
title: Observability for Google Agent Development Kit with Langfuse
sidebarTitle: Google ADK
logo: /images/integrations/google_adk_icon.png
description: Learn how to instrument Google ADK agents with Langfuse via OpenTelemetry
category: Integrations
---

# Integrate Langfuse with Google's Agent Development Kit

This notebook demonstrates how to capture detailed traces from a [Google Agent Development Kit](https://github.com/google/adk-python) (ADK) application with **[Langfuse](https://langfuse.com)** using the OpenTelemetry (OTel) protocol.

> **Why Agent Development Kit?**  
> [Google’s Agent Development Kit](https://developers.googleblog.com/en/agent-development-kit-easy-to-build-multi-agent-applications/) streamlines building, orchestrating, and tracing generative-AI agents out of the box, letting you move from prototype to production far faster than wiring everything yourself.

> **Why Langfuse?**  
> [Langfuse](https://langfuse.com) gives you a detailed dashboard and rich analytics for every prompt, model response, and function call in your agent, making it easy to debug, evaluate, and iterate on LLM apps.

<Steps>
## Step&nbsp;1: Install dependencies


```python
%pip install langfuse google-adk openinference-instrumentation-google-adk -q
```

## Step&nbsp;2: Set up environment variables

Fill in your Langfuse keys and your Gemini API key.

```python
import os

# Get keys for your project from the project settings page: https://cloud.langfuse.com
os.environ["LANGFUSE_PUBLIC_KEY"] = "pk-lf-..."
os.environ["LANGFUSE_SECRET_KEY"] = "sk-lf-..."
os.environ["LANGFUSE_BASE_URL"] = "https://cloud.langfuse.com"  # 🇪🇺 EU region
# os.environ["LANGFUSE_BASE_URL"] = "https://us.cloud.langfuse.com"  # 🇺🇸 US region

# Gemini API Key (Get from Google AI Studio: https://aistudio.google.com/app/apikey)
os.environ["GOOGLE_API_KEY"] = "..."
```

With the environment variables set, we can now initialize the Langfuse client. `get_client()` initializes the Langfuse client using the credentials provided in the environment variables.

```python
from langfuse import get_client

langfuse = get_client()

# Verify connection
if langfuse.auth_check():
    print("Langfuse client is authenticated and ready!")
else:
    print("Authentication failed. Please check your credentials and host.")
```

Output: `Langfuse client is authenticated and ready!`

## Step&nbsp;3: OpenTelemetry instrumentation

Use the `GoogleADKInstrumentor` library to wrap ADK calls and send OpenTelemetry spans to Langfuse.

```python
from openinference.instrumentation.google_adk import GoogleADKInstrumentor

GoogleADKInstrumentor().instrument()
```

## Step&nbsp;4: Build a hello world agent

Every tool call and model completion is captured as an OpenTelemetry span and forwarded to Langfuse.

```python
from google.adk.agents import Agent
from google.adk.runners import Runner
from google.adk.sessions import InMemorySessionService
from google.genai import types

def say_hello():
    return {"greeting": "Hello Langfuse 👋"}

agent = Agent(
    name="hello_agent",
    model="gemini-2.0-flash",
    instruction="Always greet using the say_hello tool.",
    tools=[say_hello],
)

APP_NAME = "hello_app"
USER_ID = "demo-user"
SESSION_ID = "demo-session"

session_service = InMemorySessionService()
# create_session is async → await it in notebooks
await session_service.create_session(app_name=APP_NAME, user_id=USER_ID, session_id=SESSION_ID)

runner = Runner(agent=agent, app_name=APP_NAME, session_service=session_service)

user_msg = types.Content(role="user", parts=[types.Part(text="hi")])
for event in runner.run(user_id=USER_ID, session_id=SESSION_ID, new_message=user_msg):
    if event.is_final_response():
        print(event.content.parts[0].text)
```

## Step&nbsp;5: View the trace in Langfuse

Head over to your Langfuse dashboard → Traces. You should see traces including all tool calls and model inputs/outputs.

[Image: Google ADK example trace in Langfuse]

[Link to trace in Langfuse]
</Steps>


## Interoperability with the Python SDK

You can use this integration together with the Langfuse SDKs to add additional attributes to the trace.

<Tabs items={["Decorator", "Context Manager"]}>
<Tabs.Tab>

The `@observe()` decorator provides a convenient way to automatically wrap your instrumented code and add additional attributes to the trace.

```python
from langfuse import observe, propagate_attributes, get_client

langfuse = get_client()

@observe()
def my_llm_pipeline(input):
    # Add additional attributes (user_id, session_id, metadata, version, tags)
    # to all spans created within this execution scope
    with propagate_attributes(
        user_id="user_123",
        session_id="session_abc",
        tags=["agent", "my-trace"],
        metadata={"email": "user@langfuse.com"},
        version="1.0.0",
    ):
        # YOUR APPLICATION CODE HERE
        result = call_llm(input)

        # Update the trace input and output
        langfuse.update_current_trace(
            input=input,
            output=result,
        )

        return result
```

Learn more about using the Decorator in the Langfuse SDK instrumentation docs.

</Tabs.Tab>
<Tabs.Tab>

The context manager variant lets you wrap your instrumented code in `with` statements and add additional attributes to the trace.

```python
from langfuse import get_client, propagate_attributes

langfuse = get_client()

with langfuse.start_as_current_observation(as_type="span", name="my-trace") as span:
    # Add additional attributes (user_id, session_id, metadata, version, tags)
    # to all spans created within this execution scope
    with propagate_attributes(
        user_id="user_123",
        session_id="session_abc",
        metadata={"experiment": "variant_a", "env": "prod"},
        version="1.0",
    ):
        # YOUR APPLICATION CODE HERE
        result = call_llm("some input")

        # Update the trace input and output
        langfuse.update_current_trace(
            input="some input",
            output=result,
        )

# Flush events in short-lived applications
langfuse.flush()
```

Learn more about using the Context Manager in the Langfuse SDK instrumentation docs.

</Tabs.Tab>
</Tabs>

## Troubleshooting

### No traces appearing

First, enable debug mode in the Python SDK:

```bash
export LANGFUSE_DEBUG="True"
```

Then run your application and check the debug logs:

  • OTel spans appear in the logs: Your application is instrumented correctly but traces are not reaching Langfuse. To resolve this:
    1. Call langfuse.flush() at the end of your application to ensure all traces are exported.
    2. Verify that you are using the correct API keys and base URL.
  • No OTel spans in the logs: Your application is not instrumented correctly. Make sure the instrumentation runs before your application code.
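The flush step in the first bullet can be wired up once per process with `atexit`; a small sketch, assuming the `langfuse` package is (optionally) installed:

```python
import atexit

def ensure_flush_on_exit() -> bool:
    """Register a process-exit hook that flushes buffered Langfuse spans.

    Returns True if the hook was registered, False if langfuse is not installed.
    """
    try:
        from langfuse import get_client
    except ImportError:
        return False
    # get_client() is resolved lazily at exit so the hook picks up the
    # client configured by the application.
    atexit.register(lambda: get_client().flush())
    return True
```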
### Unwanted observations in Langfuse

The Langfuse SDK is based on OpenTelemetry. Other libraries in your application may emit OTel spans that are not relevant to you. These still count toward your billable units, so you should filter them out. See Unwanted spans in Langfuse for details.

### Missing attributes

Some attributes may be stored in the metadata object of the observation rather than being mapped to the Langfuse data model. If a mapping or integration does not work as expected, please raise an issue on GitHub.

## Next Steps

Once you have instrumented your code, you can manage, evaluate, and debug your application in Langfuse.


@google-labs-jules
Contributor

> @jules I think the current implementation is more complicated than the Langfuse official doc https://langfuse.com/integrations/frameworks/google-adk
>
> If some changes are actually not necessary, please consider removing them.

Thank you for the feedback. I've simplified the adk/cofacts-ai/instrumentation.py implementation to align more closely with the Langfuse and OpenInference documentation, removing unnecessary complexity and relying on environment variables and standard OpenTelemetry configuration patterns.

google-labs-jules bot and others added 3 commits February 24, 2026 09:14
- Refactored `adk/cofacts-ai/instrumentation.py` to be simpler and more aligned with official documentation.
- Removed manual credential checks and exception handling, relying on library defaults and environment variables.
- Configured global tracer provider with Langfuse exporter and batch processor.
- Addressed PR feedback regarding implementation complexity.

Co-authored-by: MrOrz <108608+MrOrz@users.noreply.github.com>
Replace manual OpenTelemetry TracerProvider/BatchSpanProcessor/LangfuseExporter
setup with the simpler get_client() approach from official Langfuse docs.
Adds auth_check() to verify credentials before instrumenting.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Documents required environment variables including Google API key and
optional Langfuse credentials for observability instrumentation.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
@MrOrz MrOrz marked this pull request as ready for review February 24, 2026 09:43
@MrOrz
Member Author

MrOrz commented Feb 24, 2026

[screenshot]

it works

@MrOrz MrOrz merged commit 2adbf9f into master Feb 24, 2026
@MrOrz MrOrz deleted the setup-langfuse-adk-6582592787464144360 branch February 24, 2026 09:54