@olesho olesho commented Aug 17, 2025

No description provided.

@olesho olesho requested a review from tysonthomas9 August 17, 2025 19:02
@tysonthomas9 tysonthomas9 requested a review from Copilot August 17, 2025 19:04
@Copilot Copilot AI left a comment


Pull Request Overview

This PR extends tracing capabilities to all tool calls by introducing a centralized LLM tracing wrapper that ensures consistent observability across the AI chat system.

  • Introduces LLMTracingWrapper.ts to wrap LLM calls with tracing metadata
  • Updates all tool implementations to use the new callLLMWithTracing function
  • Refactors direct LLMClient.call() usage to include proper tracing context
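The wrapper pattern described above can be sketched as follows. This is an illustrative shape only, assuming a config/context pair and an in-memory trace sink; the actual signature in `LLMTracingWrapper.ts`, the `recordTrace` helper, and the `llmClientCall` stub are assumptions, not the real API.

```typescript
// Hypothetical sketch of the tracing wrapper's shape; the real
// LLMTracingWrapper.ts API may differ.
interface LLMCallConfig {
  provider: string;
  model: string;
  messages: Array<{ role: string; content: string }>;
  systemPrompt?: string;
  temperature?: number;
  options?: { retryConfig?: { maxRetries: number } };
}

interface TracingContext {
  toolName: string;
  operation: string;
}

// Simple in-memory sink standing in for the real tracing backend.
const traces: Array<TracingContext & { durationMs: number; ok: boolean }> = [];

function recordTrace(t: TracingContext & { durationMs: number; ok: boolean }): void {
  traces.push(t);
}

// Stub standing in for the underlying LLMClient.call().
async function llmClientCall(config: LLMCallConfig): Promise<string> {
  return `response from ${config.model}`;
}

async function callLLMWithTracing(
  config: LLMCallConfig,
  context: TracingContext,
): Promise<string> {
  const start = Date.now();
  try {
    const response = await llmClientCall(config);
    recordTrace({ ...context, durationMs: Date.now() - start, ok: true });
    return response;
  } catch (err) {
    // Failures are traced too, then re-thrown to the caller.
    recordTrace({ ...context, durationMs: Date.now() - start, ok: false });
    throw err;
  }
}
```

The point of centralizing the try/catch is that every tool gets success and failure traces for free instead of each call site re-implementing them.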

Reviewed Changes

Copilot reviewed 10 out of 10 changed files in this pull request and generated 3 comments.

| File | Description |
| --- | --- |
| LLMTracingWrapper.ts | New wrapper module providing centralized LLM call tracing functionality |
| Multiple tool files | Updated to use callLLMWithTracing instead of direct LLMClient.call() |
| AgentRunner.ts | Added tracing to agent progress summarization |
| BUILD.gn | Added new tracing wrapper module to build configuration |


```typescript
messages,
systemPrompt: prompt.systemPrompt,
temperature: 0.3,
options: { retryConfig: { maxRetries: 3 } }
```

Copilot AI Aug 17, 2025


The retryConfig parameter has been moved inside an options object, which changes the call from the original flat configuration. Verify that the underlying client actually reads retry settings from this nested location.

Suggested change

```diff
- options: { retryConfig: { maxRetries: 3 } }
+ retryConfig: { maxRetries: 3 }
```
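The hazard the reviewer is pointing at can be shown in a few lines. This is an illustration only, assuming a client that reads a flat `retryConfig`; the `effectiveMaxRetries` helper is hypothetical.

```typescript
// Illustration: if the client reads a flat `retryConfig`, nesting it
// under `options` silently drops the retry settings.
interface FlatCallArgs {
  retryConfig?: { maxRetries: number };
  [key: string]: unknown;
}

// Hypothetical helper modeling how a flat-config client resolves retries.
function effectiveMaxRetries(args: FlatCallArgs, fallback = 1): number {
  return args.retryConfig?.maxRetries ?? fallback;
}

const flat = { retryConfig: { maxRetries: 3 } };
const nested = { options: { retryConfig: { maxRetries: 3 } } };

effectiveMaxRetries(flat);   // 3
effectiveMaxRetries(nested); // 1 -- retry settings were silently ignored
```

Because the mismatch fails silently (calls still succeed, just without retries), it is the kind of regression a type check or a unit test on the config shape catches much earlier than production behavior does.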


```typescript
provider: llmCallConfig.provider,
model: llmCallConfig.model,
messages: llmCallConfig.messages,
systemPrompt: llmCallConfig.systemPrompt || '',
```

Copilot AI Aug 17, 2025


Defaulting to empty string when systemPrompt is undefined may not be appropriate for all LLM providers. Consider preserving the undefined value to let the underlying LLM client handle it according to provider requirements.

Suggested change

```diff
- systemPrompt: llmCallConfig.systemPrompt || '',
+ systemPrompt: llmCallConfig.systemPrompt,
```
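Why the empty-string default matters can be seen with a small sketch. This assumes a client that branches on whether a system prompt was provided at all; `buildMessages` is a hypothetical stand-in, not code from this PR.

```typescript
// Illustration: coercing undefined to '' defeats a "was a system prompt
// provided?" check by sending an empty system turn to the provider.
function buildMessages(
  userText: string,
  systemPrompt?: string,
): Array<{ role: string; content: string }> {
  const messages: Array<{ role: string; content: string }> = [];
  // '' !== undefined, so an empty-string default still adds a system turn.
  if (systemPrompt !== undefined) {
    messages.push({ role: 'system', content: systemPrompt });
  }
  messages.push({ role: 'user', content: userText });
  return messages;
}

buildMessages('hi').length;     // 1 -- no system turn
buildMessages('hi', '').length; // 2 -- empty system turn is sent anyway
```

Preserving `undefined` keeps the decision with the underlying client, which knows each provider's rules for omitted system prompts.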


```typescript
messages: llmCallConfig.messages,
systemPrompt: llmCallConfig.systemPrompt || '',
temperature: llmCallConfig.temperature,
...llmCallConfig.options
```

Copilot AI Aug 17, 2025


Spreading llmCallConfig.options at the root level conflicts with the expected API structure where these options should be nested. This could overwrite other parameters or cause unexpected behavior.

Suggested change

```diff
- ...llmCallConfig.options
+ options: llmCallConfig.options
```
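The overwrite risk is a direct consequence of object-spread semantics: in an object literal, later keys win. The snippet below is an illustration with made-up values, not code from this PR.

```typescript
// Illustration: keys from a spread placed later in the literal overwrite
// siblings listed earlier. Values here are invented for the example.
const llmCallConfig = {
  temperature: 0.3,
  options: { temperature: 0.9, retryConfig: { maxRetries: 3 } },
};

// Spread at the root: options.temperature clobbers the explicit 0.3.
const spreadAtRoot = {
  temperature: llmCallConfig.temperature,
  ...llmCallConfig.options,
};

// Nested as the API expects: sibling parameters are untouched.
const nestedOptions = {
  temperature: llmCallConfig.temperature,
  options: llmCallConfig.options,
};

spreadAtRoot.temperature;  // 0.9 -- silently overwritten
nestedOptions.temperature; // 0.3 -- preserved
```

Keeping `options` nested also means new option keys added later can never collide with top-level call parameters.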


@tysonthomas9 tysonthomas9 merged commit 92c1eb5 into BrowserOperator:main Aug 17, 2025
2 of 3 checks passed
tysonthomas9 pushed a commit that referenced this pull request Sep 28, 2025