Extended tracing to all tool calls #38
Conversation
Pull Request Overview
This PR extends tracing capabilities to all tool calls by introducing a centralized LLM tracing wrapper that ensures consistent observability across the AI chat system.
- Introduces `LLMTracingWrapper.ts` to wrap LLM calls with tracing metadata (see the sketch below)
- Updates all tool implementations to use the new `callLLMWithTracing` function
- Refactors direct `LLMClient.call()` usage to include proper tracing context
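For reference, a minimal sketch of what such a wrapper might look like. Only the names `callLLMWithTracing` and `LLMClient.call` come from this PR; the config shape, the `TracingContext` fields, and the `console.debug` trace sink below are assumptions for illustration.

```ts
// Hypothetical sketch -- every type and field here is assumed, not taken
// from the PR, except the names callLLMWithTracing and LLMClient.call.
interface LLMCallConfig {
  provider: string;
  model: string;
  messages: Array<{ role: string; content: string }>;
  systemPrompt?: string;
  temperature?: number;
  options?: { retryConfig?: { maxRetries: number } };
}

interface TracingContext {
  toolName: string;    // which tool issued the call
  requestId?: string;  // correlates the call with a user turn
}

// Ambient declaration standing in for the real client module.
declare const LLMClient: {
  call(config: LLMCallConfig): Promise<string>;
};

async function callLLMWithTracing(
  config: LLMCallConfig,
  tracing: TracingContext,
): Promise<string> {
  const start = performance.now();
  try {
    // Delegate to the pre-existing entry point this PR wraps.
    return await LLMClient.call(config);
  } finally {
    // One trace record per call, emitted on success and failure alike.
    console.debug('[llm-trace]', {
      ...tracing,
      provider: config.provider,
      model: config.model,
      durationMs: performance.now() - start,
    });
  }
}
```

Centralizing the call site this way gives every tool consistent timing and metadata without duplicating tracing logic.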
Reviewed Changes
Copilot reviewed 10 out of 10 changed files in this pull request and generated 3 comments.
File | Description
---|---
`LLMTracingWrapper.ts` | New wrapper module providing centralized LLM call tracing functionality
Multiple tool files | Updated to use `callLLMWithTracing` instead of direct `LLMClient.call()`
`AgentRunner.ts` | Added tracing to agent progress summarization
`BUILD.gn` | Added new tracing wrapper module to build configuration
```ts
messages,
systemPrompt: prompt.systemPrompt,
temperature: 0.3,
options: { retryConfig: { maxRetries: 3 } }
```
Copilot (AI) commented on Aug 17, 2025:
The `retryConfig` parameter has been moved inside an `options` object, but this changes the API structure from the original flat configuration. Verify this is compatible with the expected API structure.
Suggested change:

```diff
- options: { retryConfig: { maxRetries: 3 } }
+ retryConfig: { maxRetries: 3 }
```
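One way to verify the compatibility question the comment raises, assuming the client's config type is exported (the `FlatCallConfig` name below is hypothetical): let the compiler check the shape, since excess-property checking rejects unknown keys on object literals.

```ts
// Hypothetical flat shape matching the original API per the review comment.
interface FlatCallConfig {
  messages: unknown[];
  systemPrompt?: string;
  temperature?: number;
  retryConfig?: { maxRetries: number }; // flat: lives at the top level
}

// If the client expects the flat shape, the nested form fails to compile:
const config: FlatCallConfig = {
  messages: [],
  temperature: 0.3,
  // @ts-expect-error -- 'options' is not a known property of FlatCallConfig
  options: { retryConfig: { maxRetries: 3 } },
};
```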
```ts
provider: llmCallConfig.provider,
model: llmCallConfig.model,
messages: llmCallConfig.messages,
systemPrompt: llmCallConfig.systemPrompt || '',
```
Copilot (AI) commented on Aug 17, 2025:
Defaulting to an empty string when `systemPrompt` is undefined may not be appropriate for all LLM providers. Consider preserving the undefined value so the underlying LLM client can handle it according to provider requirements.
Suggested change:

```diff
- systemPrompt: llmCallConfig.systemPrompt || '',
+ systemPrompt: llmCallConfig.systemPrompt,
```
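If the wrapper must keep a defaulting step, one alternative (the `buildPayload` helper below is hypothetical, not from the PR) is to omit the key entirely rather than coerce it to `''`, since providers may treat an empty system prompt differently from an absent one:

```ts
// Hypothetical helper: include systemPrompt only when the caller set it,
// so the underlying client sees an absent key rather than ''.
function buildPayload(cfg: {
  messages: unknown[];
  temperature?: number;
  systemPrompt?: string;
}) {
  return {
    messages: cfg.messages,
    temperature: cfg.temperature,
    // Conditional spread: the key is omitted entirely when unset.
    ...(cfg.systemPrompt !== undefined
      ? { systemPrompt: cfg.systemPrompt }
      : {}),
  };
}
```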
```ts
messages: llmCallConfig.messages,
systemPrompt: llmCallConfig.systemPrompt || '',
temperature: llmCallConfig.temperature,
...llmCallConfig.options
```
Copilot (AI) commented on Aug 17, 2025:
Spreading `llmCallConfig.options` at the root level conflicts with the expected API structure, where these options should be nested. This could overwrite other parameters or cause unexpected behavior.
Suggested change:

```diff
- ...llmCallConfig.options
+ options: llmCallConfig.options
```
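A small illustration of the hazard (the values are invented for the example): with a root-level spread, any key inside `options` silently overwrites a same-named sibling parameter because later spread properties win, whereas nesting keeps the two namespaces separate.

```ts
const llmCallConfig = {
  messages: [] as unknown[],
  temperature: 0.2,
  options: { temperature: 0.9, retryConfig: { maxRetries: 3 } },
};

// Root-level spread: options.temperature wins because it comes last.
const spreadAtRoot = {
  messages: llmCallConfig.messages,
  temperature: llmCallConfig.temperature, // 0.2 here...
  ...llmCallConfig.options,               // ...silently replaced by 0.9
};

// Nested: sibling keys stay intact; the client reads options explicitly.
const nested = {
  messages: llmCallConfig.messages,
  temperature: llmCallConfig.temperature,
  options: llmCallConfig.options,
};

console.log(spreadAtRoot.temperature, nested.temperature); // 0.9 0.2
```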