 """
 Telemetry handler for GenAI invocations.

-This module provides the `TelemetryHandler` class, which manages the lifecycle of
-GenAI (Generative AI) invocations and emits telemetry data as spans, metrics, and events.
-It supports starting, stopping, and failing LLM invocations,
-and provides module-level convenience functions for these operations.
+This module exposes the `TelemetryHandler` class, which manages the lifecycle of
+GenAI (Generative AI) invocations and emits telemetry data (spans and related attributes).
+It supports starting, stopping, and failing LLM invocations.

 Classes:
-    TelemetryHandler: Manages GenAI invocation lifecycles and emits telemetry.
+    - TelemetryHandler: Manages GenAI invocation lifecycles and emits telemetry.

 Functions:
-    get_telemetry_handler: Returns a singleton TelemetryHandler instance.
-    llm_start: Starts a new LLM invocation.
-    llm_stop: Stops an LLM invocation and emits telemetry.
-    llm_fail: Marks an LLM invocation as failed and emits error telemetry.
+    - get_telemetry_handler: Returns a singleton `TelemetryHandler` instance.

 Usage:
-    Use the module-level functions (`llm_start`, `llm_stop`, `llm_fail`) to
-    instrument GenAI invocations for telemetry collection.
+    handler = get_telemetry_handler()
+    handler.start_llm(prompts, run_id, system="provider-name", **attrs)
+    handler.stop_llm(run_id, chat_generations, **attrs)
+    handler.fail_llm(run_id, error, **attrs)
 """

 import time
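The lifecycle the new docstring describes (start an invocation, then close it with either a success or a failure) can be sketched roughly as below. This is a minimal illustrative sketch, not the module's real implementation: the method names (`start_llm`, `stop_llm`, `fail_llm`) follow the Usage block in the diff, but the `_Invocation` record and all internals are assumptions; a real handler would open and end spans rather than store plain records.

```python
import time
import uuid
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class _Invocation:
    # Hypothetical per-run record; the real handler tracks spans instead.
    run_id: str
    prompts: list
    start_time: float
    attributes: dict = field(default_factory=dict)
    end_time: Optional[float] = None
    error: Optional[Exception] = None

class TelemetryHandler:
    """Sketch of the start/stop/fail lifecycle from the docstring."""

    def __init__(self):
        self._invocations: dict = {}

    def start_llm(self, prompts, run_id, **attrs):
        # Record the start of an invocation, keyed by run_id.
        self._invocations[run_id] = _Invocation(
            run_id, prompts, time.monotonic(), dict(attrs)
        )

    def stop_llm(self, run_id, chat_generations, **attrs):
        # Close the invocation successfully and return the finished record.
        inv = self._invocations.pop(run_id)
        inv.end_time = time.monotonic()
        inv.attributes.update(attrs, generations=chat_generations)
        return inv

    def fail_llm(self, run_id, error, **attrs):
        # Close the invocation with an error attached.
        inv = self._invocations.pop(run_id)
        inv.end_time = time.monotonic()
        inv.error = error
        inv.attributes.update(attrs)
        return inv

# Usage mirroring the docstring's Usage block:
handler = TelemetryHandler()
rid = str(uuid.uuid4())
handler.start_llm(["Hello"], rid, system="provider-name")
inv = handler.stop_llm(rid, chat_generations=["Hi there"])
print(inv.error is None)  # True: the invocation completed without error
```

The singleton accessor `get_telemetry_handler()` mentioned in the diff would simply cache and return one `TelemetryHandler` instance; it is omitted here to keep the sketch short.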