Cleanup ai attributes #57

Merged (20 commits) on May 13, 2025

316 changes: 174 additions & 142 deletions generated/attributes/ai.md

Large diffs are not rendered by default.

143 changes: 143 additions & 0 deletions generated/attributes/gen_ai.md

- [Stable Attributes](#stable-attributes)
- [gen_ai.prompt](#gen_aiprompt)
- [gen_ai.request.frequency_penalty](#gen_airequestfrequency_penalty)
- [gen_ai.request.presence_penalty](#gen_airequestpresence_penalty)
- [gen_ai.request.seed](#gen_airequestseed)
- [gen_ai.request.temperature](#gen_airequesttemperature)
- [gen_ai.request.top_k](#gen_airequesttop_k)
- [gen_ai.request.top_p](#gen_airequesttop_p)
- [gen_ai.response.finish_reasons](#gen_airesponsefinish_reasons)
- [gen_ai.response.id](#gen_airesponseid)
- [gen_ai.response.model](#gen_airesponsemodel)
- [gen_ai.system](#gen_aisystem)
- [gen_ai.tool.name](#gen_aitoolname)
- [gen_ai.usage.input_tokens](#gen_aiusageinput_tokens)
- [gen_ai.usage.output_tokens](#gen_aiusageoutput_tokens)
- [gen_ai.usage.total_tokens](#gen_aiusagetotal_tokens)
- [Deprecated Attributes](#deprecated-attributes)
- [gen_ai.usage.completion_tokens](#gen_aiusagecompletion_tokens)
- [gen_ai.usage.prompt_tokens](#gen_aiusageprompt_tokens)
## Stable Attributes

### gen_ai.prompt

The input messages sent to the model

| Property | Value |
| --- | --- |
| Exists in OpenTelemetry | Yes |
| Example | `[{"role": "user", "message": "hello"}]` |

### gen_ai.request.frequency_penalty

Used to reduce repetitiveness of generated tokens. The higher the value, the stronger the penalty applied to previously present tokens, proportional to how many times they have already appeared in the prompt or prior generation.

| Property | Value |
| --- | --- |
| Type | `double` |
| Has PII | false |
| Exists in OpenTelemetry | Yes |
| Example | `0.5` |
| Aliases | `ai.frequency_penalty` |

### gen_ai.request.presence_penalty

Used to reduce repetitiveness of generated tokens. Similar to frequency_penalty, except that this penalty is applied equally to all tokens that have already appeared, regardless of their exact frequencies.

| Property | Value |
| --- | --- |
| Type | `double` |
| Has PII | false |
| Exists in OpenTelemetry | Yes |
| Example | `0.5` |
| Aliases | `ai.presence_penalty` |
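
The two penalties differ only in how they scale with repetition. A toy sketch of the distinction (hypothetical helper, loosely following the OpenAI-style formulation; not any provider's actual implementation):

```python
def apply_penalties(logits, token_counts, frequency_penalty=0.0, presence_penalty=0.0):
    # frequency_penalty scales with how often a token has already appeared;
    # presence_penalty is a flat, one-time penalty for any token seen at least once.
    adjusted = dict(logits)
    for token, count in token_counts.items():
        if token in adjusted and count > 0:
            adjusted[token] -= frequency_penalty * count + presence_penalty
    return adjusted

logits = {"the": 3.0, "cat": 2.5, "sat": 2.0}
seen = {"the": 4, "cat": 1}
print(apply_penalties(logits, seen, frequency_penalty=0.5))
# "the" loses 0.5 * 4 = 2.0; "cat" loses 0.5 * 1 = 0.5; "sat" is untouched
```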

### gen_ai.request.seed

The seed; ideally, a model given the same seed and the same other parameters will produce exactly the same output.

| Property | Value |
| --- | --- |
| Type | `string` |
| Has PII | false |
| Exists in OpenTelemetry | Yes |
| Example | `1234567890` |
| Aliases | `ai.seed` |

### gen_ai.request.temperature

The temperature parameter for the model call. Temperature controls how random the output is: higher values yield more varied output, lower values more deterministic output.

| Property | Value |
| --- | --- |
| Type | `double` |
| Has PII | false |
| Exists in OpenTelemetry | Yes |
| Example | `0.1` |
| Aliases | `ai.temperature` |
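
Mechanically, temperature divides the logits before the softmax, sharpening or flattening the sampling distribution. A minimal sketch (the function name is illustrative):

```python
import math

def softmax_with_temperature(logits, temperature):
    # Dividing logits by the temperature before softmax sharpens (T < 1)
    # or flattens (T > 1) the distribution the next token is sampled from.
    scaled = [l / temperature for l in logits]
    peak = max(scaled)                      # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
print(softmax_with_temperature(logits, 1.0))  # probability mass fairly spread out
print(softmax_with_temperature(logits, 0.1))  # nearly all mass on the top token
```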

### gen_ai.request.top_k

Limits the model to only consider the K most likely next tokens, where K is an integer (e.g., top_k=20 means only the 20 highest probability tokens are considered).

| Property | Value |
| --- | --- |
| Type | `integer` |
| Has PII | false |
| Exists in OpenTelemetry | Yes |
| Example | `35` |
| Aliases | `ai.top_k` |
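
The filtering step described above can be sketched in a few lines (toy illustration over a token-probability dict; not a real sampler):

```python
def top_k_filter(probs, k):
    # Keep only the k highest-probability tokens, then renormalize
    # so the surviving probabilities sum to 1.
    top = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:k]
    total = sum(p for _, p in top)
    return {tok: p / total for tok, p in top}

probs = {"a": 0.5, "b": 0.3, "c": 0.15, "d": 0.05}
print(top_k_filter(probs, 2))  # only "a" and "b" remain, renormalized
```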

### gen_ai.request.top_p

Limits the model to only consider tokens whose cumulative probability mass adds up to p, where p is a float between 0 and 1 (e.g., top_p=0.7 means only tokens that sum up to 70% of the probability mass are considered).

| Property | Value |
| --- | --- |
| Type | `double` |
| Has PII | false |
| Exists in OpenTelemetry | Yes |
| Example | `0.7` |
| Aliases | `ai.top_p` |
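
Unlike top_k's fixed cutoff, top_p (nucleus sampling) keeps the smallest set of tokens whose cumulative probability reaches p. A toy sketch of that selection (not a real sampler):

```python
def top_p_filter(probs, p):
    # Walk tokens from most to least likely, keeping them until the
    # cumulative probability reaches p, then renormalize the survivors.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = [], 0.0
    for tok, prob in ranked:
        kept.append((tok, prob))
        cumulative += prob
        if cumulative >= p:
            break
    total = sum(prob for _, prob in kept)
    return {tok: prob / total for tok, prob in kept}

probs = {"a": 0.5, "b": 0.3, "c": 0.15, "d": 0.05}
print(top_p_filter(probs, 0.7))  # "a" (0.5) and "b" (0.3) reach 0.7; "c" and "d" are dropped
```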

### gen_ai.response.finish_reasons

The reason why the model stopped generating.

| Property | Value |
| --- | --- |
| Type | `string` |
| Has PII | false |
| Exists in OpenTelemetry | Yes |
| Example | `COMPLETE` |
| Aliases | `ai.finish_reason` |

### gen_ai.response.id

Unique identifier for the completion.

| Property | Value |
| --- | --- |
| Type | `string` |
| Has PII | false |
| Exists in OpenTelemetry | Yes |
| Example | `gen_123abc` |
| Aliases | `ai.generation_id` |

### gen_ai.response.model

The vendor-specific ID of the model used.
| Property | Value |
| --- | --- |
| Example | `gpt-4` |
| Aliases | `ai.model_id` |

### gen_ai.system

The provider of the model.

| Property | Value |
| --- | --- |
| Type | `string` |
| Has PII | false |
| Exists in OpenTelemetry | Yes |
| Example | `openai` |
| Aliases | `ai.model.provider` |

### gen_ai.tool.name

Name of the tool utilized by the agent.

| Property | Value |
| --- | --- |
| Type | `string` |
| Has PII | false |
| Exists in OpenTelemetry | Yes |
| Example | `Flights` |
| Aliases | `ai.function_call` |

### gen_ai.usage.input_tokens

The number of tokens used in the GenAI input (prompt).
### gen_ai.usage.output_tokens

The number of tokens used in the GenAI response (completion).

| Property | Value |
| --- | --- |
| Example | `10` |
| Aliases | `ai.completion_tokens.used`, `gen_ai.usage.completion_tokens` |

### gen_ai.usage.total_tokens

The total number of tokens used to process the prompt (input tokens plus output tokens).

| Property | Value |
| --- | --- |
| Type | `integer` |
| Has PII | false |
| Exists in OpenTelemetry | No |
| Example | `20` |
| Aliases | `ai.total_tokens.used` |
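
The relationship between the three usage attributes is simple arithmetic; a minimal sketch using the attribute keys from this document (the helper function is illustrative):

```python
def total_tokens(input_tokens, output_tokens):
    # gen_ai.usage.total_tokens is the sum of the input and output token counts.
    return input_tokens + output_tokens

attributes = {
    "gen_ai.usage.input_tokens": 10,
    "gen_ai.usage.output_tokens": 10,
}
attributes["gen_ai.usage.total_tokens"] = total_tokens(
    attributes["gen_ai.usage.input_tokens"],
    attributes["gen_ai.usage.output_tokens"],
)
print(attributes["gen_ai.usage.total_tokens"])  # 20, matching the example above
```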

## Deprecated Attributes

These attributes are deprecated and will be removed in a future version. Please use the recommended replacements.