LLM Langchain: wrap call in chain to display it in Sentry AI tab #13905
base: master
@@ -36,6 +36,8 @@ An additional dependency, `tiktoken`, is required to be installed if you want to

```diff
 In addition to capturing errors, you can monitor interactions between multiple services or applications by [enabling tracing](/concepts/key-terms/tracing/). You can also collect and analyze performance profiles from real users with [profiling](/product/explore/profiling/).

+Tracing is required to see AI pipelines in Sentry's [AI tab](https://sentry.io/insights/ai/llm-monitoring/).
+
 Select which Sentry features you'd like to install in addition to Error Monitoring to get the corresponding installation and configuration instructions below.

 <OnboardingOptionButtons
```
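For context, enabling tracing comes down to setting a sample rate at init time. A minimal sketch, assuming a placeholder DSN and illustrative sample rates (not part of the PR diff):

```python
import sentry_sdk

sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",  # placeholder DSN
    traces_sample_rate=1.0,    # enable tracing; required to see AI pipelines
    profiles_sample_rate=1.0,  # optional: also collect performance profiles
)
```

`traces_sample_rate=1.0` records every transaction, which is convenient for verifying the setup; you would typically lower it in production.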
@@ -76,13 +78,15 @@ Verify that the integration works by inducing an error:

```diff
 from langchain_openai import ChatOpenAI
+from langchain_core.output_parsers import StrOutputParser
 import sentry_sdk

 sentry_sdk.init(...)  # same as above

 llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0, api_key="bad API key")
 with sentry_sdk.start_transaction(op="ai-inference", name="The result of the AI inference"):
-    response = llm.invoke([("system", "What is the capital of paris?")])
+    chain = (llm | StrOutputParser()).with_config({"run_name": "test-run"})
+    response = chain.invoke([("system", "What is the capital of France?")])
     print(response)
```
Review comment from **szokeasaurusrex** on the `chain = (llm | StrOutputParser()).with_config(...)` line:

> I'm admittedly not super familiar with Langchain, could you explain what this change does differently than what we had before? I get that the …

Reply from the PR author:

> @szokeasaurusrex Thank you for the review! The important difference here is that the syntax `(llm | StrOutputParser())` creates a chain rather than a plain LLM call. As this example had no chain, the call never showed up as an AI pipeline. I tried both variants locally, and in the resulting output only the chain run appears. I dug a bit deeper in the sentry-python SDK and found that an AI pipeline span is only created when a chain runs. So a plain LLM call will not appear as an AI pipeline. TL;DR: wrapping the call in a chain is what makes it visible in the AI tab.
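To make the distinction concrete, here is a minimal sketch (the model name, run name, DSN, and transaction name are illustrative placeholders, not from the PR) that issues both a plain LLM call and a chain call inside one transaction; per the explanation above, only the chain run should surface as an AI pipeline:

```python
import sentry_sdk
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser

# Tracing must be enabled for AI pipelines to show up in the AI tab.
sentry_sdk.init(dsn="https://examplePublicKey@o0.ingest.sentry.io/0", traces_sample_rate=1.0)

llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)

with sentry_sdk.start_transaction(op="ai-inference", name="plain call vs. chain"):
    # Plain LLM call: traced as a span, but no AI pipeline is created.
    llm.invoke([("system", "What is the capital of France?")])

    # The pipe operator builds a RunnableSequence; its chain callbacks are
    # what let the integration record an AI pipeline named "test-run".
    chain = (llm | StrOutputParser()).with_config({"run_name": "test-run"})
    chain.invoke([("system", "What is the capital of France?")])
```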
Follow-up review comment:

> Thanks for adding this, definitely an oversight that we forgot to add this before!