Define initial semantic conventions that establish a foundation with essential attributes, including a vendor-specific example, and basic event definitions:
pick a namespace: gen_ai (genai?)
define basic request and response attributes.
gen_ai.system - Name of the LLM foundation model system or vendor
gen_ai.request.max_tokens - Maximum number of tokens for the LLM to generate per request
gen_ai.request.model - Name of the LLM model used for the request
gen_ai.request.temperature - Temperature setting for the LLM request
gen_ai.request.top_p - Top_p sampling setting for the LLM request
gen_ai.response.model - Name of the LLM model used for the response
gen_ai.response.finish_reason - Reason why the LLM stopped generating tokens
gen_ai.response.id - Unique identifier for the response
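As a rough illustration (not part of the proposal itself), these request and response attributes could be recorded on a client span with the OpenTelemetry Python API; the span name and all attribute values below are made up:

```python
from opentelemetry import trace

tracer = trace.get_tracer("example.genai.instrumentation")

# Hypothetical chat-completion call wrapped in a client span.
with tracer.start_as_current_span("chat gpt-4") as span:
    # Request attributes, set before the call is made.
    span.set_attribute("gen_ai.system", "openai")
    span.set_attribute("gen_ai.request.model", "gpt-4")
    span.set_attribute("gen_ai.request.max_tokens", 256)
    span.set_attribute("gen_ai.request.temperature", 0.7)
    span.set_attribute("gen_ai.request.top_p", 1.0)

    # ... call the model here ...

    # Response attributes, set after the response is received.
    span.set_attribute("gen_ai.response.model", "gpt-4-0613")
    span.set_attribute("gen_ai.response.id", "chatcmpl-123")
    span.set_attribute("gen_ai.response.finish_reason", "stop")
```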
define usage attributes
decide on requirement levels and whether they belong on spans or events
gen_ai.usage.completion_tokens - Number of tokens used in the LLM response
gen_ai.usage.prompt_tokens - Number of tokens used in the LLM prompt
gen_ai.usage.total_tokens - Total number of tokens used in both prompt and response
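Since one open question is whether usage belongs on spans or events, here is a minimal sketch of both options using the OpenTelemetry Python API; the event name gen_ai.usage and all token counts are hypothetical:

```python
from opentelemetry import trace

tracer = trace.get_tracer("example.genai.instrumentation")

with tracer.start_as_current_span("chat gpt-4") as span:
    # Option 1: usage reported as span attributes.
    span.set_attribute("gen_ai.usage.prompt_tokens", 52)
    span.set_attribute("gen_ai.usage.completion_tokens", 128)
    span.set_attribute("gen_ai.usage.total_tokens", 180)  # 52 + 128

    # Option 2: the same numbers reported as a span event instead.
    span.add_event(
        "gen_ai.usage",  # hypothetical event name
        attributes={
            "gen_ai.usage.prompt_tokens": 52,
            "gen_ai.usage.completion_tokens": 128,
            "gen_ai.usage.total_tokens": 180,
        },
    )
```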
define system-specific conventions (openai) - @drewby - Add system specific conventions for OpenAI #1385
gen_ai.openai.request.logit_bias - The logit_bias used in the request
gen_ai.openai.request.presence_penalty - The presence_penalty used in the request
gen_ai.openai.request.seed - Seed used in request to improve determinism
gen_ai.openai.request.response_format - Format of the LLM's response, e.g., text or JSON
gen_ai.openai.response.created - UNIX timestamp of when the response was created
define content attributes
gen_ai.content.prompt - Captures the full prompt string sent to an LLM
gen_ai.content.completion - Captures the full response string from an LLM
server.address, server.port and error.type #1297
server.address - Address of the server hosting the LLM
server.port - Port number used by the server
error.type
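A similar sketch for the vendor-specific, content, and server attributes above; all values, the span name, and the choice to put prompt/completion content directly on the span (rather than on events) are illustrative assumptions only:

```python
from opentelemetry import trace

tracer = trace.get_tracer("example.genai.instrumentation")

with tracer.start_as_current_span("chat gpt-4") as span:
    # Generic connection attributes shared with other client spans.
    span.set_attribute("server.address", "api.openai.com")
    span.set_attribute("server.port", 443)

    # OpenAI-specific request attributes.
    span.set_attribute("gen_ai.openai.request.seed", 42)
    span.set_attribute("gen_ai.openai.request.response_format", "json_object")
    span.set_attribute("gen_ai.openai.request.presence_penalty", 0.0)

    # Full prompt and completion content (likely opt-in because of size
    # and sensitivity).
    span.set_attribute("gen_ai.content.prompt", "What is OpenTelemetry?")
    span.set_attribute("gen_ai.content.completion", "OpenTelemetry is ...")

    # OpenAI-specific response attribute.
    span.set_attribute("gen_ai.openai.response.created", 1710201600)

    # error.type would be set only when the request fails, e.g.:
    # span.set_attribute("error.type", "RateLimitError")
```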