[MLOB-983] Add LLM Observability and Serverless Compatibility Documentation #24000

New issue

Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.

By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.

Already on GitHub? Sign in to your account

Merged · 10 commits · Jul 24, 2024
9 changes: 8 additions & 1 deletion content/en/llm_observability/sdk.md
@@ -104,6 +104,12 @@ LLMObs.enable(
: optional - _string_
<br />The name of the service used for your application. If not provided, this defaults to the value of `DD_SERVICE`.
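As an illustration, a minimal sketch of enabling LLM Observability in code with an explicit service name. `"my-ml-app"` and `"my-service"` are placeholder values, and the keyword arguments assume the `LLMObs.enable()` signature documented in this section:

```python
from ddtrace.llmobs import LLMObs

LLMObs.enable(
    ml_app="my-ml-app",    # placeholder; must follow the naming guidelines below
    service="my-service",  # overrides the DD_SERVICE default described above
)
```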

### AWS Lambda setup

Enable LLM Observability by specifying the required environment variables in your [command line setup](#command-line-setup) and following the setup instructions for the [Datadog-Python and Datadog-Extension][14] AWS Lambda layers.

**Note**: Using the `Datadog-Python` and `Datadog-Extension` layers automatically enables all LLM Observability integrations and force-flushes spans at the end of the Lambda function.
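For illustration, a hedged sketch of a handler that relies on that behavior. It assumes the two layers are attached and that `DD_SITE`, `DD_API_KEY`, `DD_LLMOBS_ENABLED=1`, and `DD_LLMOBS_ML_APP` are set on the function; the event shape and the `answer_question` helper are hypothetical:

```python
from ddtrace.llmobs.decorators import workflow

@workflow
def answer_question(question: str) -> str:
    # Calls to supported LLM provider libraries made here are captured
    # automatically, because the layers enable all integrations.
    return "..."

def handler(event, context):
    # No explicit flush call: the layers force-flush spans when the
    # invocation ends.
    return {"statusCode": 200, "body": answer_question(event["question"])}
```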

#### Application naming guidelines

Your application name (the value of `DD_LLMOBS_ML_APP`) must be a lowercase Unicode string. It may contain the characters listed below:
@@ -564,7 +570,7 @@ def separate_task(workflow_span):
return
{{< /code-block >}}

### Flushing in serverless environments
#### Force flushing in serverless environments

`LLMObs.flush()` is a blocking function that submits all buffered LLM Observability data to the Datadog backend. This can be useful in serverless environments to prevent an application from exiting until all LLM Observability traces are submitted.
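For example, a minimal sketch of a serverless handler that flushes before returning. The `process` step and the event shape are illustrative, and the explicit flush is only needed when you are not using Lambda layers that flush for you:

```python
from ddtrace.llmobs import LLMObs
from ddtrace.llmobs.decorators import workflow

@workflow
def process(event):
    return {"status": "ok"}  # illustrative work

def handler(event, context):
    try:
        return process(event)
    finally:
        # Blocks until all buffered LLM Observability data is submitted,
        # so the runtime cannot freeze or exit with spans still queued.
        LLMObs.flush()
```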

@@ -656,3 +662,4 @@ def server_process_request(request):
[11]: https://docs.datadoghq.com/tracing/trace_collection/compatibility/python/#integrations
[12]: https://docs.datadoghq.com/tracing/trace_collection/compatibility/python/#library-compatibility
[13]: /llm_observability/auto_instrumentation/
[14]: https://docs.datadoghq.com/serverless/aws_lambda/installation/python/?tab=custom#installation
4 changes: 2 additions & 2 deletions content/en/llm_observability/trace_an_llm_application.md
@@ -22,7 +22,7 @@ If you're new to LLM Observability traces, read the [Core Concepts][3] before proceeding.

## Instrument your LLM application

<div class="alert alert-info">This guide uses the LLM Observability SDK for Python. If your application is not written in Python, you can complete the steps below with API requests instead of SDK function calls.</a></div>
<div class="alert alert-info">This guide uses the LLM Observability SDK for Python. If your application is running in a serverless environment, follow the <a href="/llm_observability/sdk/#aws-lambda-setup">serverless setup instructions</a>. If your application is not written in Python, you can complete the steps below with API requests instead of SDK function calls.</div>

Datadog provides [auto-instrumentation][4] to capture LLM calls for specific LLM provider libraries. However, manually instrumenting your LLM application using the Python SDK can unlock even more of Datadog's LLM Observability features.
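As a sketch of what manual instrumentation can add on top of auto-instrumentation (the `retrieve_context` step and its return value are hypothetical):

```python
from ddtrace.llmobs.decorators import task, workflow

@task
def retrieve_context(query: str) -> str:
    # A custom step that auto-instrumentation alone would not capture,
    # traced here as its own task span.
    return "..."

@workflow
def answer(query: str) -> str:
    context = retrieve_context(query)
    # An auto-instrumented provider call made here would appear as a
    # child span of this workflow.
    return context
```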

@@ -185,4 +185,4 @@ Depending on the complexity of your LLM application, you can also:
[14]: /llm_observability/sdk/#tracking-user-sessions
[15]: /llm_observability/sdk/#tracing-multiple-applications
[16]: /llm_observability/submit_evaluations
[17]: /llm_observability/core_concepts/#spans
[17]: /llm_observability/core_concepts/#spans