
Conversation

Contributor

Copilot AI commented Dec 19, 2025

Reverted Changes

The previous implementation added a maxAttributeValueLength option that introduces truncation on the AI SDK side. This doesn't address the actual issue - the truncation users are experiencing happens at the OpenTelemetry SDK/exporter layer, not in the AI SDK.

The AI SDK passes attributes to the OpenTelemetry API without any truncation. The 1024-character truncation the user experiences is happening downstream in the OpenTelemetry SDK or their specific backend (Trigger.dev).

Analysis

After re-reading the issue:

  • The user wants to prevent truncation, not add a new way to truncate
  • The truncation happens at the OpenTelemetry SDK level, which is outside the AI SDK's control
  • Environment variables like OTEL_ATTRIBUTE_VALUE_LENGTH_LIMIT need to be configured in the OpenTelemetry SDK, not in the AI SDK
  • The AI SDK correctly passes full attribute values to the tracer API
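To make the last points concrete, raising the limit has to happen where the truncation is applied, i.e. when the OpenTelemetry tracer provider is constructed. A minimal sketch, assuming `@opentelemetry/sdk-trace-node` is installed (the option names are from the OpenTelemetry JS SDK):

```javascript
// Sketch: configure span attribute limits on the OpenTelemetry provider.
// The AI SDK only calls tracer.startSpan()/span.setAttribute(); any
// truncation happens here, inside the OpenTelemetry SDK, or further
// downstream in the exporter/backend.
import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";

const provider = new NodeTracerProvider({
  spanLimits: {
    // In the JS SDK the default is Infinity (no truncation). Note that
    // 0 means "truncate to 0 characters", not "unlimited".
    attributeValueLengthLimit: 1_048_576, // 1 MiB
  },
});
provider.register();
```

Equivalently, `OTEL_SPAN_ATTRIBUTE_VALUE_LENGTH_LIMIT` / `OTEL_ATTRIBUTE_VALUE_LENGTH_LIMIT` set the same limit from the environment, but only for an SDK instance that actually reads them; a wrapping runtime such as Trigger.dev may construct its own provider that ignores them.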

Conclusion

This issue requires investigation into how Trigger.dev's OpenTelemetry integration handles attribute limits, not changes to the AI SDK. The AI SDK is working correctly - it passes the full attribute values to OpenTelemetry.

Original prompt

This section describes the original issue you should resolve

<issue_title>OpenTelemetry span attributes still truncated despite environment variables</issue_title>
<issue_description>### Description

When using the AI SDK's experimental_telemetry feature with OpenTelemetry, span attributes containing LLM messages and responses are being truncated despite setting all recommended OpenTelemetry environment variables to very high values (1MB+).

The truncation appears to be happening somewhere in the AI SDK's telemetry recording or the underlying OpenTelemetry integration, ignoring the configured attribute value length limits.

The truncation happens at 1024 characters.
Our prompts are longer, so we lose a lot of information in the logs.

Environment Variables Set

OTEL_ATTRIBUTE_VALUE_LENGTH_LIMIT=1048576
TRIGGER_OTEL_SPAN_ATTRIBUTE_VALUE_LENGTH_LIMIT=1048576
TRIGGER_OTEL_LOG_ATTRIBUTE_VALUE_LENGTH_LIMIT=1048576

All three environment variables are confirmed to be set in the runtime environment (verified via process.env logging).

Code Example

import { generateText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";

const result = await generateText({
  model: anthropic("claude-sonnet-4-5"),
  prompt: "Very long prompt with lots of context...",
  experimental_telemetry: {
    isEnabled: true,
    functionId: "my-function",
    recordInputs: true,
    recordOutputs: true,
    metadata: {
      component: "my-component",
    },
  },
});

Expected Behavior

With OTEL_ATTRIBUTE_VALUE_LENGTH_LIMIT=1048576 (1MB), span attributes should contain the full:

  • Input prompt (up to 1MB)
  • Output text (up to 1MB)
  • Tool call arguments and results (up to 1MB)

Actual Behavior

Span attributes are still being truncated at what appears to be a much lower limit (possibly still at the default 1024 bytes or similar). This makes debugging LLM conversations extremely difficult as we can only see partial messages.

What we tried

We tried changing values for OpenTelemetry environment variables:

OTEL_ATTRIBUTE_VALUE_LENGTH_LIMIT=0

# and bigger:
OTEL_ATTRIBUTE_VALUE_LENGTH_LIMIT=1048576

But that didn't help; the attributes were still truncated.
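For what it's worth, the attribute-length rule in the OpenTelemetry spec is a plain prefix cut, so a limit of 0 empties string attributes rather than disabling the limit. A hypothetical mirror of that rule (illustration only, not the SDK's actual code):

```javascript
// Hypothetical mirror of the OpenTelemetry attribute length limit:
// strings longer than the limit keep only their first `limit`
// characters. 0 therefore yields "", and only Infinity (the JS SDK
// default) disables truncation entirely.
function truncateAttribute(value, limit) {
  if (typeof value !== "string" || !Number.isFinite(limit)) return value;
  return value.length > limit ? value.slice(0, limit) : value;
}

console.log(truncateAttribute("x".repeat(2000), 1024).length); // 1024
console.log(truncateAttribute("hello", 0)); // ""
console.log(truncateAttribute("hello", Infinity)); // "hello"
```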

We also tried a custom tracer, but that didn't help either: we created a custom NodeTracerProvider with programmatic span limits and passed it via the tracer option in experimental_telemetry:

import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";

const provider = new NodeTracerProvider({
  spanLimits: {
    attributeValueLengthLimit: 0, // unlimited
  },
});
provider.register();

const result = await generateText({
  model,
  prompt,
  experimental_telemetry: {
    isEnabled: true,
    tracer: provider.getTracer("my-tracer"),
  },
});
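A quick way to tell whether the truncation happens locally or downstream is to export to memory and measure. A diagnostic sketch, assuming `@opentelemetry/sdk-trace-base` is installed (the `spanProcessors` constructor option is from SDK 2.x; older SDKs used `provider.addSpanProcessor()`):

```javascript
// Diagnostic sketch: record a long attribute through a local provider
// and check its length before any exporter/backend touches it.
import {
  BasicTracerProvider,
  InMemorySpanExporter,
  SimpleSpanProcessor,
} from "@opentelemetry/sdk-trace-base";

const exporter = new InMemorySpanExporter();
const provider = new BasicTracerProvider({
  spanProcessors: [new SimpleSpanProcessor(exporter)],
});

const span = provider.getTracer("truncation-probe").startSpan("probe");
span.setAttribute("probe.value", "x".repeat(10_000));
span.end();

const recorded = exporter.getFinishedSpans()[0];
console.log(String(recorded.attributes["probe.value"]).length);
// 10000 here would mean the local OTel SDK is not truncating, and the
// 1024-character cut is applied downstream (e.g. by Trigger.dev's
// exporter or backend), consistent with the conclusion above.
```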

Are there any working examples of logging more than 1024 characters per attribute with the AI SDK? Thanks!

AI SDK Version

  • AI SDK Version: ai@6.0.0-beta.99
  • Runtime: Node.js (via Trigger.dev v4)
  • Telemetry Backend: Trigger.dev (OpenTelemetry)
  • Model Provider: Anthropic Claude (@ai-sdk/anthropic@3.0.0-beta.53)

Code of Conduct

  • I agree to follow this project's Code of Conduct</issue_description>

<agent_instructions>investigate</agent_instructions>

Comments on the Issue (you are @copilot in this section)



Co-authored-by: lgrammel <205036+lgrammel@users.noreply.github.com>
Copilot AI changed the title [WIP] Fix truncation of OpenTelemetry span attributes Add maxAttributeValueLength option to TelemetrySettings Dec 19, 2025
Copilot AI requested a review from lgrammel December 19, 2025 13:46
Collaborator

@lgrammel lgrammel left a comment


this introduces truncation that was not there before on the ai sdk side. how does it address the issue at all?

Co-authored-by: lgrammel <205036+lgrammel@users.noreply.github.com>
Contributor Author

Copilot AI commented Dec 19, 2025

this introduces truncation that was not there before on the ai sdk side. how does it address the issue at all?

You're right - I misunderstood the issue. The truncation users experience happens at the OpenTelemetry SDK/exporter layer, not in the AI SDK. The AI SDK correctly passes full attribute values to the tracer API without any truncation.

Reverted the changes in 1873083. This issue requires investigation into how Trigger.dev's OpenTelemetry integration handles attribute limits, not changes to the AI SDK.


Successfully merging this pull request may close these issues.

OpenTelemetry span attributes still truncated despite environment variables