feat: Add opt-in vercelAiIntegration to cloudflare & vercel-edge (#16732)
Merged
Changes from all commits (13 commits):
2c84d43  Expose vercel ai integration (andreiborza)
ecdd759  move stuff to core (mydea)
2f9bf95  move stuff around and cleanup (mydea)
15802bf  fix build (mydea)
1d13371  add missing export (mydea)
a2945ef  refactor to fix stuff (mydea)
f30419f  Apply suggestions from code review (mydea)
aac34e0  ref comment (mydea)
9b8f5f0  conditional node imports? (mydea)
6c68d0e  remove modules (mydea)
12e2246  export from sveltekit workers (mydea)
5e437a9  also export from vercel edge (mydea)
d898309  fix index.types.ts (mydea)
@@ -0,0 +1,51 @@
/**
 * This is a copy of the Vercel AI integration from the node SDK.
 *
 * The only difference is that it does not use `@opentelemetry/instrumentation`
 * because Cloudflare Workers do not support it.
 *
 * Therefore, we cannot automatically patch setting `experimental_telemetry: { isEnabled: true }`
 * and users have to manually set this to get spans.
 */

import type { IntegrationFn } from '@sentry/core';
import { addVercelAiProcessors, defineIntegration } from '@sentry/core';

const INTEGRATION_NAME = 'VercelAI';

const _vercelAIIntegration = (() => {
  return {
    name: INTEGRATION_NAME,
    setup(client) {
      addVercelAiProcessors(client);
    },
  };
}) satisfies IntegrationFn;

/**
 * Adds Sentry tracing instrumentation for the [ai](https://www.npmjs.com/package/ai) library.
 * This integration is not enabled by default; you need to add it manually.
 *
 * For more information, see the [`ai` documentation](https://sdk.vercel.ai/docs/ai-sdk-core/telemetry).
 *
 * You need to enable collecting spans for a specific call by setting
 * `experimental_telemetry.isEnabled` to `true` in the first argument of the function call.
 *
 * ```javascript
 * const result = await generateText({
 *   model: openai('gpt-4-turbo'),
 *   experimental_telemetry: { isEnabled: true },
 * });
 * ```
 *
 * If you want to collect inputs and outputs for a specific call, you must explicitly opt in to each
 * function call by setting `experimental_telemetry.recordInputs` and `experimental_telemetry.recordOutputs`
 * to `true`.
 *
 * ```javascript
 * const result = await generateText({
 *   model: openai('gpt-4-turbo'),
 *   experimental_telemetry: { isEnabled: true, recordInputs: true, recordOutputs: true },
 * });
 * ```
 */
export const vercelAIIntegration = defineIntegration(_vercelAIIntegration);
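Since this Workers copy cannot patch `experimental_telemetry` automatically, using it is a two-step opt-in: register the integration, then enable telemetry on each `ai` call. A minimal sketch, assuming the integration ends up exported from `@sentry/cloudflare` as `vercelAIIntegration` (as the commits above suggest) and using the usual `withSentry` wrapper:

```typescript
import * as Sentry from '@sentry/cloudflare';

interface Env {
  SENTRY_DSN: string;
}

export default Sentry.withSentry(
  (env: Env) => ({
    dsn: env.SENTRY_DSN,
    tracesSampleRate: 1.0,
    // Opt in explicitly; the integration is not part of the default set.
    integrations: [Sentry.vercelAIIntegration()],
  }),
  {
    async fetch(): Promise<Response> {
      // `ai` SDK calls made inside the handler only produce spans when each call
      // passes `experimental_telemetry: { isEnabled: true }` (see the JSDoc above).
      return new Response('ok');
    },
  },
);
```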
@@ -0,0 +1,8 @@
import type { Span } from '@opentelemetry/api';
import type { SpanOrigin } from '@sentry/core';
import { SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN } from '@sentry/core';

/** Adds an origin to an OTEL Span. */
export function addOriginToSpan(span: Span, origin: SpanOrigin): void {
  span.setAttribute(SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN, origin);
}
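For context, a small usage sketch; the import path and tracer name are assumptions, and the helper is simply attached wherever an OTEL span is created by hand:

```typescript
import { trace } from '@opentelemetry/api';
import { addOriginToSpan } from './addOriginToSpan'; // path assumed for illustration

// Tag a manually created OTEL span so Sentry processors can attribute it.
const span = trace.getTracer('example-tracer').startSpan('ai.toolCall');
addOriginToSpan(span, 'auto.vercelai.otel');
span.end();
```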
@@ -0,0 +1,8 @@
/** Detect CommonJS. */
export function isCjs(): boolean {
  try {
    return typeof module !== 'undefined' && typeof module.exports !== 'undefined';
  } catch {
    return false;
  }
}
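One of the commits above ("conditional node imports?") hints at why this helper exists. A hypothetical sketch of gating a Node-only `require` behind it; the wrapper function is made up for illustration:

```typescript
import { isCjs } from './isCjs'; // path assumed for illustration

// Only attempt a synchronous require when running as CommonJS; in ESM or
// edge bundles this returns undefined instead of crashing at load time.
export function tryLoadNodeModule(moduleName: string): unknown {
  if (!isCjs()) {
    return undefined;
  }
  // eslint-disable-next-line @typescript-eslint/no-var-requires
  return require(moduleName);
}
```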
File renamed without changes.
@@ -0,0 +1,221 @@
import type { Client } from '../client';
import { SEMANTIC_ATTRIBUTE_SENTRY_OP, SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN } from '../semanticAttributes';
import type { Event } from '../types-hoist/event';
import type { Span, SpanAttributes, SpanJSON, SpanOrigin } from '../types-hoist/span';
import { spanToJSON } from './spanUtils';
import {
  AI_MODEL_ID_ATTRIBUTE,
  AI_MODEL_PROVIDER_ATTRIBUTE,
  AI_PROMPT_ATTRIBUTE,
  AI_PROMPT_MESSAGES_ATTRIBUTE,
  AI_PROMPT_TOOLS_ATTRIBUTE,
  AI_RESPONSE_TEXT_ATTRIBUTE,
  AI_RESPONSE_TOOL_CALLS_ATTRIBUTE,
  AI_TELEMETRY_FUNCTION_ID_ATTRIBUTE,
  AI_TOOL_CALL_ID_ATTRIBUTE,
  AI_TOOL_CALL_NAME_ATTRIBUTE,
  AI_USAGE_COMPLETION_TOKENS_ATTRIBUTE,
  AI_USAGE_PROMPT_TOKENS_ATTRIBUTE,
  GEN_AI_RESPONSE_MODEL_ATTRIBUTE,
  GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE,
  GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE,
} from './vercel-ai-attributes';

function addOriginToSpan(span: Span, origin: SpanOrigin): void {
  span.setAttribute(SEMANTIC_ATTRIBUTE_SENTRY_ORIGIN, origin);
}

/**
 * Post-process spans emitted by the Vercel AI SDK.
 * This is supposed to be used in `client.on('spanStart', ...)`.
 */
function onVercelAiSpanStart(span: Span): void {
  const { data: attributes, description: name } = spanToJSON(span);

  if (!name) {
    return;
  }

  // Tool call spans
  // https://ai-sdk.dev/docs/ai-sdk-core/telemetry#tool-call-spans
  if (attributes[AI_TOOL_CALL_NAME_ATTRIBUTE] && attributes[AI_TOOL_CALL_ID_ATTRIBUTE] && name === 'ai.toolCall') {
    processToolCallSpan(span, attributes);
    return;
  }

  // The model id and provider must be defined for generate, stream, and embed spans.
  // The id of the model
  const aiModelId = attributes[AI_MODEL_ID_ATTRIBUTE];
  // The provider of the model
  const aiModelProvider = attributes[AI_MODEL_PROVIDER_ATTRIBUTE];
  if (typeof aiModelId !== 'string' || typeof aiModelProvider !== 'string' || !aiModelId || !aiModelProvider) {
    return;
  }

  processGenerateSpan(span, name, attributes);
}

const vercelAiEventProcessor = Object.assign(
  (event: Event): Event => {
    if (event.type === 'transaction' && event.spans) {
      for (const span of event.spans) {
        // this mutates spans in-place
        processEndedVercelAiSpan(span);
      }
    }
    return event;
  },
  { id: 'VercelAiEventProcessor' },
);

/**
 * Post-process spans emitted by the Vercel AI SDK.
 */
function processEndedVercelAiSpan(span: SpanJSON): void {
  const { data: attributes, origin } = span;

  if (origin !== 'auto.vercelai.otel') {
    return;
  }

  renameAttributeKey(attributes, AI_USAGE_COMPLETION_TOKENS_ATTRIBUTE, GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE);
  renameAttributeKey(attributes, AI_USAGE_PROMPT_TOKENS_ATTRIBUTE, GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE);

  if (
    typeof attributes[GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE] === 'number' &&
    typeof attributes[GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE] === 'number'
  ) {
    attributes['gen_ai.usage.total_tokens'] =
      attributes[GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE] + attributes[GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE];
  }

  // Rename AI SDK attributes to standardized gen_ai attributes
  renameAttributeKey(attributes, AI_PROMPT_MESSAGES_ATTRIBUTE, 'gen_ai.request.messages');
  renameAttributeKey(attributes, AI_RESPONSE_TEXT_ATTRIBUTE, 'gen_ai.response.text');
  renameAttributeKey(attributes, AI_RESPONSE_TOOL_CALLS_ATTRIBUTE, 'gen_ai.response.tool_calls');
  renameAttributeKey(attributes, AI_PROMPT_TOOLS_ATTRIBUTE, 'gen_ai.request.available_tools');
}

/**
 * Renames an attribute key in the provided attributes object if the old key exists.
 * This function safely handles null and undefined values.
 */
function renameAttributeKey(attributes: Record<string, unknown>, oldKey: string, newKey: string): void {
  if (attributes[oldKey] != null) {
    attributes[newKey] = attributes[oldKey];
    // eslint-disable-next-line @typescript-eslint/no-dynamic-delete
    delete attributes[oldKey];
  }
}

function processToolCallSpan(span: Span, attributes: SpanAttributes): void {
  addOriginToSpan(span, 'auto.vercelai.otel');
  span.setAttribute(SEMANTIC_ATTRIBUTE_SENTRY_OP, 'gen_ai.execute_tool');
  span.setAttribute('gen_ai.tool.call.id', attributes[AI_TOOL_CALL_ID_ATTRIBUTE]);
  span.setAttribute('gen_ai.tool.name', attributes[AI_TOOL_CALL_NAME_ATTRIBUTE]);
  span.updateName(`execute_tool ${attributes[AI_TOOL_CALL_NAME_ATTRIBUTE]}`);
}

function processGenerateSpan(span: Span, name: string, attributes: SpanAttributes): void {
  addOriginToSpan(span, 'auto.vercelai.otel');

  const nameWithoutAi = name.replace('ai.', '');
  span.setAttribute('ai.pipeline.name', nameWithoutAi);
  span.updateName(nameWithoutAi);

  // If a Telemetry name is set and it is a pipeline span, use that as the operation name
  const functionId = attributes[AI_TELEMETRY_FUNCTION_ID_ATTRIBUTE];
  if (functionId && typeof functionId === 'string' && name.split('.').length - 1 === 1) {
    span.updateName(`${nameWithoutAi} ${functionId}`);
    span.setAttribute('ai.pipeline.name', functionId);
  }

  if (attributes[AI_PROMPT_ATTRIBUTE]) {
    span.setAttribute('gen_ai.prompt', attributes[AI_PROMPT_ATTRIBUTE]);
  }
  if (attributes[AI_MODEL_ID_ATTRIBUTE] && !attributes[GEN_AI_RESPONSE_MODEL_ATTRIBUTE]) {
    span.setAttribute(GEN_AI_RESPONSE_MODEL_ATTRIBUTE, attributes[AI_MODEL_ID_ATTRIBUTE]);
  }
  span.setAttribute('ai.streaming', name.includes('stream'));

  // Generate Spans
  if (name === 'ai.generateText') {
    span.setAttribute(SEMANTIC_ATTRIBUTE_SENTRY_OP, 'gen_ai.invoke_agent');
    return;
  }

  if (name === 'ai.generateText.doGenerate') {
    span.setAttribute(SEMANTIC_ATTRIBUTE_SENTRY_OP, 'gen_ai.generate_text');
    span.updateName(`generate_text ${attributes[AI_MODEL_ID_ATTRIBUTE]}`);
    return;
  }

  if (name === 'ai.streamText') {
    span.setAttribute(SEMANTIC_ATTRIBUTE_SENTRY_OP, 'gen_ai.invoke_agent');
    return;
  }

  if (name === 'ai.streamText.doStream') {
    span.setAttribute(SEMANTIC_ATTRIBUTE_SENTRY_OP, 'gen_ai.stream_text');
    span.updateName(`stream_text ${attributes[AI_MODEL_ID_ATTRIBUTE]}`);
    return;
  }

  if (name === 'ai.generateObject') {
    span.setAttribute(SEMANTIC_ATTRIBUTE_SENTRY_OP, 'gen_ai.invoke_agent');
    return;
  }

  if (name === 'ai.generateObject.doGenerate') {
    span.setAttribute(SEMANTIC_ATTRIBUTE_SENTRY_OP, 'gen_ai.generate_object');
    span.updateName(`generate_object ${attributes[AI_MODEL_ID_ATTRIBUTE]}`);
    return;
  }

  if (name === 'ai.streamObject') {
    span.setAttribute(SEMANTIC_ATTRIBUTE_SENTRY_OP, 'gen_ai.invoke_agent');
    return;
  }

  if (name === 'ai.streamObject.doStream') {
    span.setAttribute(SEMANTIC_ATTRIBUTE_SENTRY_OP, 'gen_ai.stream_object');
    span.updateName(`stream_object ${attributes[AI_MODEL_ID_ATTRIBUTE]}`);
    return;
  }

  if (name === 'ai.embed') {
    span.setAttribute(SEMANTIC_ATTRIBUTE_SENTRY_OP, 'gen_ai.invoke_agent');
    return;
  }

  if (name === 'ai.embed.doEmbed') {
    span.setAttribute(SEMANTIC_ATTRIBUTE_SENTRY_OP, 'gen_ai.embed');
    span.updateName(`embed ${attributes[AI_MODEL_ID_ATTRIBUTE]}`);
    return;
  }

  if (name === 'ai.embedMany') {
    span.setAttribute(SEMANTIC_ATTRIBUTE_SENTRY_OP, 'gen_ai.invoke_agent');
    return;
  }

  if (name === 'ai.embedMany.doEmbed') {
    span.setAttribute(SEMANTIC_ATTRIBUTE_SENTRY_OP, 'gen_ai.embed_many');
    span.updateName(`embed_many ${attributes[AI_MODEL_ID_ATTRIBUTE]}`);
    return;
  }

  if (name.startsWith('ai.stream')) {
    span.setAttribute(SEMANTIC_ATTRIBUTE_SENTRY_OP, 'ai.run');
    return;
  }
}

/**
 * Add event processors to the given client to process Vercel AI spans.
 */
export function addVercelAiProcessors(client: Client): void {
  client.on('spanStart', onVercelAiSpanStart);
  // Note: We cannot do this on `spanEnd`, because the span cannot be mutated anymore at this point
  client.addEventProcessor(vercelAiEventProcessor);
}
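To make the renaming concrete, here is a standalone sketch of the token-usage mapping that the event processor applies to an ended span. The literal `ai.usage.*` keys are assumed values of the constants imported from `./vercel-ai-attributes`, which are not part of this diff:

```typescript
// Standalone demo of the rename + total-token logic in processEndedVercelAiSpan.
function renameKey(attrs: Record<string, unknown>, oldKey: string, newKey: string): void {
  if (attrs[oldKey] != null) {
    attrs[newKey] = attrs[oldKey];
    delete attrs[oldKey];
  }
}

const attrs: Record<string, unknown> = {
  'ai.usage.promptTokens': 12, // assumed value of AI_USAGE_PROMPT_TOKENS_ATTRIBUTE
  'ai.usage.completionTokens': 30, // assumed value of AI_USAGE_COMPLETION_TOKENS_ATTRIBUTE
};

renameKey(attrs, 'ai.usage.promptTokens', 'gen_ai.usage.input_tokens');
renameKey(attrs, 'ai.usage.completionTokens', 'gen_ai.usage.output_tokens');

const input = attrs['gen_ai.usage.input_tokens'];
const output = attrs['gen_ai.usage.output_tokens'];
if (typeof input === 'number' && typeof output === 'number') {
  attrs['gen_ai.usage.total_tokens'] = input + output;
}

// attrs is now:
// { 'gen_ai.usage.input_tokens': 12, 'gen_ai.usage.output_tokens': 30, 'gen_ai.usage.total_tokens': 42 }
```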