feat(prompts): add tool function support #3153
Conversation
""" WalkthroughThree new Pydantic models— Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant PromptRegistryClient
    participant ModelConfig
    User->>PromptRegistryClient: Call render_prompt()
    PromptRegistryClient->>ModelConfig: Check llm_config.tools and tool_choice
    alt tools empty or None
        PromptRegistryClient->>ModelConfig: Set tool_choice to None
    end
    PromptRegistryClient-->>User: Return rendered prompt
```
📜 Recent review details
Configuration used: CodeRabbit UI
📒 Files selected for processing (1)
🚧 Files skipped from review as they are similar to previous changes (1)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
Important
Looks good to me! 👍
Reviewed everything up to 025b928 in 46 seconds.
- Reviewed 30 lines of code in 1 file
- Skipped 0 files when reviewing
- Skipped posting 3 draft comments; view those below
- Modify your settings and rules to customize what types of comments Ellipsis leaves, and don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/traceloop/sdk/prompts/model.py:41
   - Draft comment: Consider specifying a more explicit type for the 'parameters' field in ToolFunction (e.g. Dict[str, Any]) and adding a brief docstring to clarify its expected structure.
   - Reason this comment was not posted: confidence changes required (33%) <= threshold (50%).
2. packages/traceloop-sdk/traceloop/sdk/prompts/model.py:46
   - Draft comment: Using Literal['function'] for the Tool type is fine if only one tool type is expected, but consider a more flexible design if additional tool types may be needed in the future.
   - Reason this comment was not posted: confidence changes required (33%) <= threshold (50%).
3. packages/traceloop-sdk/traceloop/sdk/prompts/model.py:59
   - Draft comment: Adding 'tool_choice' and 'tools' to ModelConfig is a good start. Consider including field descriptions or validators to clearly document their intended use and expected values.
   - Reason this comment was not posted: confidence changes required (33%) <= threshold (50%).
Workflow ID: wflow_MpJrhouXmlj5vdDo
You can customize by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.
Actionable comments posted: 0
🧹 Nitpick comments (3)
packages/traceloop-sdk/traceloop/sdk/prompts/model.py (3)
41-44: Consider using a more specific type for the parameters field.

The implementation is correct, but the `parameters` field could benefit from more specific typing. According to OpenAI's function calling specification, parameters should follow JSON Schema format. Consider this improvement:

```diff
- parameters: dict
+ parameters: Dict[str, Any]  # or consider using a JSONSchema type if available
```

Alternatively, you could add validation to ensure the parameters follow the expected JSON Schema structure.
46-49: Remove trailing whitespace and consider future extensibility.

The `Tool` class structure is well-designed using discriminated unions. Minor formatting issue: remove the trailing whitespace on line 49.

```python
class Tool(RegistryObjectBaseModel):
    type: Literal["function"]
    function: ToolFunction
```

The design with `Literal["function"]` is excellent as it allows for future extension with other tool types while maintaining type safety.
59-60: Consider more specific typing for the tool_choice field.

The implementation correctly adds optional tool support to the model configuration. However, the `tool_choice` field could benefit from more specific typing. According to OpenAI's API specification, `tool_choice` typically accepts specific values:

```diff
- tool_choice: Optional[str] = None
+ tool_choice: Optional[Literal["none", "auto", "required"]] = None
```

Alternatively, if you need to support specifying a particular tool, you might need a Union type to handle both string literals and tool specification objects.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Important
Looks good to me! 👍
Reviewed c6c2a3c in 38 seconds.
- Reviewed 36 lines of code in 1 file
- Skipped 0 files when reviewing
- Skipped posting 3 draft comments; view those below
- Modify your settings and rules to customize what types of comments Ellipsis leaves, and don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/traceloop/sdk/prompts/model.py:1
   - Draft comment: Consider grouping and ordering imports consistently. Moving 'from typing_extensions import Annotated' after the pydantic import may conflict with common import grouping practices.
   - Reason this comment was not posted: confidence changes required (0%) <= threshold (50%).
2. packages/traceloop-sdk/traceloop/sdk/prompts/model.py:41
   - Draft comment: The additional blank line between ToolFunction and Tool improves readability. Ensure consistency with the rest of the codebase.
   - Reason this comment was not posted: confidence changes required (0%) <= threshold (50%).
3. packages/traceloop-sdk/traceloop/sdk/prompts/model.py:61
   - Draft comment: A blank line was added after the 'tools' field in ModelConfig. Verify that this formatting aligns with your project's style guidelines.
   - Reason this comment was not posted: confidence changes required (0%) <= threshold (50%).
Workflow ID: wflow_S4OwX1ksrrlZZ1uL
Important
Looks good to me! 👍
Reviewed cb6905f in 38 seconds.
- Reviewed 17 lines of code in 1 file
- Skipped 0 files when reviewing
- Skipped posting 1 draft comment; view it below
- Modify your settings and rules to customize what types of comments Ellipsis leaves, and don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/traceloop/sdk/prompts/client.py:78
   - Draft comment: The if condition redundantly checks for an empty tools list. Since an empty list (or None) already evaluates as False, the 'or len(prompt_version.llm_config.tools) == 0' check is unnecessary. Consider simplifying the condition and adding a brief comment explaining why tool_choice is reset to None when no tools are provided.
   - Reason this comment was not posted: confidence changes required (33%) <= threshold (50%).
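To see the reviewer's point concretely, a quick illustration of why the falsy check alone covers both cases:

```python
# Both None and [] are falsy, so `not tools` already handles the two cases
# that the extra `len(tools) == 0` check was guarding against.
for tools in (None, [], [{"type": "function"}]):
    print(repr(tools), "->", not tools)
# None -> True
# [] -> True
# [{'type': 'function'}] -> False
```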
Workflow ID: wflow_kAhs2sacAQvA2lFS
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- packages/traceloop-sdk/traceloop/sdk/prompts/client.py (1 hunks)
- packages/traceloop-sdk/traceloop/sdk/prompts/model.py (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- packages/traceloop-sdk/traceloop/sdk/prompts/model.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Lint
- GitHub Check: Build Packages (3.11)
```python
if (
    not prompt_version.llm_config.tools
    or len(prompt_version.llm_config.tools) == 0
) and prompt_version.llm_config.tool_choice is not None:
    prompt_version.llm_config.tool_choice = None
```
Simplify the redundant condition.

The current implementation can be simplified:

- Redundant condition: the `not prompt_version.llm_config.tools` check already handles both `None` and empty-list cases, so the `len(prompt_version.llm_config.tools) == 0` branch is dead code. (Because `or` short-circuits, the length check can never even be reached when `tools` is `None`, so there is no crash risk here, just redundancy.)

Apply this diff to simplify the condition and improve readability:

```diff
- if (
-     not prompt_version.llm_config.tools
-     or len(prompt_version.llm_config.tools) == 0
- ) and prompt_version.llm_config.tool_choice is not None:
-     prompt_version.llm_config.tool_choice = None
+ if (
+     not prompt_version.llm_config.tools
+     and prompt_version.llm_config.tool_choice is not None
+ ):
+     prompt_version.llm_config.tool_choice = None
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
if (
    not prompt_version.llm_config.tools
    and prompt_version.llm_config.tool_choice is not None
):
    prompt_version.llm_config.tool_choice = None
```
🤖 Prompt for AI Agents
In packages/traceloop-sdk/traceloop/sdk/prompts/client.py around lines 78 to 83, the condition checking whether tools is empty or None uses both a falsiness check and a length check, which is redundant. Simplify the condition by removing the length check and relying only on the falsiness check on prompt_version.llm_config.tools, which covers both None and the empty list, then set tool_choice to None as before.
…lude response_format
Important
Looks good to me! 👍
Reviewed 6a1a99c in 35 seconds.
- Reviewed 23 lines of code in 1 file
- Skipped 0 files when reviewing
- Skipped posting 2 draft comments; view those below
- Modify your settings and rules to customize what types of comments Ellipsis leaves, and don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/traceloop/sdk/prompts/model.py:52
   - Draft comment: Consider adding a docstring to ResponseFormat to clarify its purpose and structure, including details for the json_schema field.
   - Reason this comment was not posted: confidence changes required (33%) <= threshold (50%).
2. packages/traceloop-sdk/traceloop/sdk/prompts/model.py:67
   - Draft comment: Document the response_format field in ModelConfig to explain when and how it should be used.
   - Reason this comment was not posted: confidence changes required (33%) <= threshold (50%).
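For context, a hypothetical sketch of the ResponseFormat shape implied by these two comments; only the json_schema field and the response_format field on ModelConfig are named in the review, so the type field and its value are assumptions modeled on OpenAI's structured-output format:

```python
from typing import Any, Dict

from pydantic import BaseModel


class ResponseFormat(BaseModel):
    """Assumed shape; the real class extends the SDK's RegistryObjectBaseModel."""

    type: str  # e.g. "json_schema" (assumed value)
    json_schema: Dict[str, Any]  # JSON Schema the model output must satisfy


# Per the review, ModelConfig then gains an optional field along these lines:
# response_format: Optional[ResponseFormat] = None
```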
Workflow ID: wflow_dHUk7EOEF7kr2H7w
…eloop/openllmetry into gz/add-tool-function-to-modal-types
Caution
Changes requested ❌
Reviewed 77e55f6 in 1 minute and 33 seconds.
- Reviewed 142 lines of code in 1 file
- Skipped 0 files when reviewing
- Skipped posting 2 draft comments; view those below
- Modify your settings and rules to customize what types of comments Ellipsis leaves, and don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/tests/test_prompt_management.py:67
   - Draft comment: New 'prompts_with_tools_json' block added. Ensure the JSON structure exactly matches the expected schema for tool configuration.
   - Reason this comment was not posted: the comment did not seem useful (usefulness confidence 0% <= threshold 50%). It asks the PR author to ensure the JSON structure matches the expected schema, which falls under asking the author to double-check something and is against the rules.
2. packages/traceloop-sdk/tests/test_prompt_management.py:170
   - Draft comment: The new test 'test_prompt_management_with_tools' properly validates tool-related attributes. Consider adding negative tests for missing or invalid tool config fields in the future.
   - Reason this comment was not posted: decided after close inspection that this draft was likely not actionable (usefulness confidence 20% <= threshold 50%). While adding negative tests is reasonable, it is a general best-practice suggestion rather than a required change; the current happy-path test is valid and complete for its scope.
Workflow ID: wflow_DCBYEhtfQGGkzQTu
```json
      }
    }
  }
]
```
File ends without a newline. Please add a newline at EOF to adhere to best practices.
Caution
Changes requested ❌
Reviewed 908e0b7 in 1 minute and 38 seconds.
- Reviewed 145 lines of code in 1 file
- Skipped 0 files when reviewing
- Skipped posting 2 draft comments; view those below
- Modify your settings and rules to customize what types of comments Ellipsis leaves, and don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/tests/test_prompt_management.py:142
   - Draft comment: The added JSON fixture (prompts_with_response_format_json) starts with a leading newline. Consider removing it or trimming whitespace to prevent unintended formatting issues.
   - Reason this comment was not posted: decided after close inspection that this draft was likely not actionable (usefulness confidence 10% <= threshold 50%). json.loads() handles leading whitespace without problems and the test that uses the string works fine, so this is a minor style nitpick with no practical impact.
2. packages/traceloop-sdk/tests/test_prompt_management.py:251
   - Draft comment: The test for tool function support now only asserts the tool call name. If the detailed prompt metadata is important, consider adding checks for key attributes (e.g. prompt key, version, etc.) to ensure nothing regresses.
   - Reason this comment was not posted: the comment was not on a location in the diff, so it can't be submitted as a review comment.
Workflow ID: wflow_iEjAmmAaLOJCYc6B
```python
spans = exporter.get_finished_spans()
open_ai_span = spans[0]
completion = open_ai_span.attributes.get(f"{SpanAttributes.LLM_COMPLETIONS}.0.content")
try:
```
Instead of using a try/except block with a redundant 'assert True', consider asserting that the parsed JSON includes expected keys (e.g., 'joke') to strengthen the test's coverage.
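A sketch of that suggestion ('joke' as a key is the reviewer's example, not a confirmed schema; the completion value here is a stand-in for the span attribute read in the test):

```python
import json

# Stand-in for the completion content read from the span attributes above.
completion = '{"joke": "Why did the parrot cross the road?"}'

payload = json.loads(completion)  # fails loudly if the content is not JSON
assert "joke" in payload  # assert on expected keys instead of a bare assert True
```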
Actionable comments posted: 1
♻️ Duplicate comments (1)
packages/traceloop-sdk/tests/test_prompt_management.py (1)
268-268: File still ends without a newline.
🧹 Nitpick comments (1)
packages/traceloop-sdk/tests/test_prompt_management.py (1)
243-252: Test correctly validates tool function integration.

The test properly verifies that tool calls are captured in tracing spans. Consider enhancing the assertion to also validate tool call parameters for more comprehensive coverage.

```diff
- completion = open_ai_span.attributes.get(f"{SpanAttributes.LLM_COMPLETIONS}.0.tool_calls.0.name")
- assert completion == "get_joke"
+ tool_call_name = open_ai_span.attributes.get(f"{SpanAttributes.LLM_COMPLETIONS}.0.tool_calls.0.name")
+ tool_call_args = open_ai_span.attributes.get(f"{SpanAttributes.LLM_COMPLETIONS}.0.tool_calls.0.function.arguments")
+ assert tool_call_name == "get_joke"
+ assert "style" in tool_call_args  # Verify parameters are passed
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- packages/traceloop-sdk/tests/test_prompt_management.py (2 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
packages/traceloop-sdk/tests/test_prompt_management.py (6)
- packages/traceloop-sdk/tests/conftest.py (1): exporter (18-26)
- packages/opentelemetry-instrumentation-openai/tests/conftest.py (1): openai_client (42-43)
- packages/traceloop-sdk/traceloop/sdk/prompts/client.py (1): PromptRegistryClient (37-149)
- packages/traceloop-sdk/traceloop/sdk/prompts/__init__.py (1): get_prompt (4-5)
- packages/traceloop-sdk/traceloop/sdk/utils/in_memory_span_exporter.py (1): get_finished_spans (40-43)
- packages/opentelemetry-semantic-conventions-ai/opentelemetry/semconv_ai/__init__.py (1): SpanAttributes (36-229)
🪛 Ruff (0.12.2)
packages/traceloop-sdk/tests/test_prompt_management.py
266-266: Do not `assert False` (`python -O` removes these calls); raise `AssertionError()` instead. Replace `assert False`. (B011)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Build Packages (3.11)
- GitHub Check: Lint
🔇 Additional comments (1)
packages/traceloop-sdk/tests/test_prompt_management.py (1)
67-216: Well-structured JSON prompt definitions for new features.

The JSON configurations correctly extend the original prompt structure to support tool functions and response formats. The tool configuration properly defines the function schema, and the response format uses the appropriate JSON Schema structure for OpenAI's structured output.
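As a rough illustration only: the tools portion of such a fixture's llm_config plausibly looks like the sketch below. The get_joke name and the style parameter come from the tests in this PR; every other value is an assumption.

```python
# Hypothetical llm_config fragment for the prompts_with_tools_json fixture.
llm_config_tools_fragment = {
    "tool_choice": "auto",  # assumed value
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_joke",  # asserted by the test via the span attribute
                "description": "Return a joke in the requested style",  # assumed
                "parameters": {  # JSON Schema for the function arguments
                    "type": "object",
                    "properties": {"style": {"type": "string"}},
                    "required": ["style"],
                },
            },
        }
    ],
}
```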
```python
@pytest.mark.vcr
def test_prompt_management_with_response_format(exporter, openai_client):
    PromptRegistryClient()._registry.load(prompts_json=json.loads(prompts_with_response_format_json))
    prompt_args = get_prompt(key="joke_generator", variables={"style": "pirate"})
    openai_client.chat.completions.create(**prompt_args)

    spans = exporter.get_finished_spans()
    open_ai_span = spans[0]
    completion = open_ai_span.attributes.get(f"{SpanAttributes.LLM_COMPLETIONS}.0.content")
    try:
        json.loads(completion)
    except json.JSONDecodeError:
        assert False
    assert True
```
Fix assertion pattern for better test reliability.
The test correctly validates JSON response format, but the assertion pattern should be improved for reliability.
```diff
-    try:
-        json.loads(completion)
-    except json.JSONDecodeError:
-        assert False
-    assert True
+    try:
+        json.loads(completion)
+    except json.JSONDecodeError as e:
+        pytest.fail(f"Response is not valid JSON: {e}")
```

This addresses the static analysis warning about `assert False` being removed by `python -O` and makes the test failure more informative.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
@pytest.mark.vcr
def test_prompt_management_with_response_format(exporter, openai_client):
    PromptRegistryClient()._registry.load(prompts_json=json.loads(prompts_with_response_format_json))
    prompt_args = get_prompt(key="joke_generator", variables={"style": "pirate"})
    openai_client.chat.completions.create(**prompt_args)

    spans = exporter.get_finished_spans()
    open_ai_span = spans[0]
    completion = open_ai_span.attributes.get(f"{SpanAttributes.LLM_COMPLETIONS}.0.content")
    try:
        json.loads(completion)
    except json.JSONDecodeError as e:
        pytest.fail(f"Response is not valid JSON: {e}")
```
🧰 Tools
🪛 Ruff (0.12.2)
266-266: Do not `assert False` (`python -O` removes these calls); raise `AssertionError()` instead. Replace `assert False`. (B011)
🤖 Prompt for AI Agents
In packages/traceloop-sdk/tests/test_prompt_management.py around lines 254 to 267, the test uses a try/except block with `assert False` inside the except clause to check JSON decoding; this is unreliable because `python -O` strips assert statements. Replace the pattern by calling pytest.fail with a descriptive message inside the except block (as in the committable suggestion above), which makes the test failure more informative and reliable.
Important
Looks good to me! 👍
Reviewed 3203be0 in 1 minute and 4 seconds.
- Reviewed 42 lines of code in 1 file
- Skipped 0 files when reviewing
- Skipped posting 2 draft comments; view those below
- Modify your settings and rules to customize what types of comments Ellipsis leaves, and don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/tests/test_prompt_management.py:245
   - Draft comment: Multiline formatting improvements look good. Consider adding a trailing comma after the last argument (e.g., after json.loads(prompts_with_tools_json)) for consistency and to ease future diffs.
   - Reason this comment was not posted: confidence changes required (33%) <= threshold (50%).
2. packages/traceloop-sdk/tests/test_prompt_management.py:272
   - Draft comment: In the response format test, instead of using 'assert False' in the except block and a redundant 'assert True' after, consider using pytest.fail() with a meaningful message for clarity.
   - Reason this comment was not posted: comment was on unchanged code.
Workflow ID: wflow_vgoH0bqkMF5vAjsI
Important
Looks good to me! 👍
Reviewed 2249327 in 34 seconds.
- Reviewed 11 lines of code in 1 file
- Skipped 0 files when reviewing
- Skipped posting 1 draft comment; view it below
- Modify your settings and rules to customize what types of comments Ellipsis leaves, and don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/tests/test_prompt_management.py:274
   - Draft comment: Good change: using pytest.fail with a descriptive message improves clarity over assert False. Optionally, the final 'assert True' is redundant since the test passes if no exception is raised.
   - Reason this comment was not posted: confidence changes required (0%) <= threshold (50%).
Workflow ID: wflow_scOqExnVqdOkcEZP
…readability and update request/response details
Important
Looks good to me! 👍
Reviewed 0b3cb15 in 49 seconds.
- Reviewed 478 lines of code in 3 files
- Skipped 0 files when reviewing
- Skipped posting 3 draft comments; view those below
- Modify your settings and rules to customize what types of comments Ellipsis leaves, and don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/tests/cassettes/test_prompt_management/test_prompt_management.yaml:18
   - Draft comment: Confirm that updated header values (e.g., traceparent, user-agent, runtime version) are intentional and match the new configuration.
   - Reason this comment was not posted: confidence changes required (0%) <= threshold (50%).
2. packages/traceloop-sdk/tests/cassettes/test_prompt_management/test_prompt_management_with_response_format.yaml:4
   - Draft comment: Ensure that the response_format JSON schema is correctly structured and comprehensive for the intended response validation.
   - Reason this comment was not posted: confidence changes required (33%) <= threshold (50%).
3. packages/traceloop-sdk/tests/cassettes/test_prompt_management/test_prompt_management_with_tools.yaml:4
   - Draft comment: Verify that the tool support configuration (tool_choice and tools array) follows the expected schema and includes all required fields.
   - Reason this comment was not posted: confidence changes required (33%) <= threshold (50%).
Workflow ID: wflow_hCOQROses5EEeL6R
```python
if (
    not prompt_version.llm_config.tools
    or len(prompt_version.llm_config.tools) == 0
) and prompt_version.llm_config.tool_choice is not None:
    prompt_version.llm_config.tool_choice = None
```
Please add an explanation for this. How could tool_choice be set to something while tools is empty? And if tool_choice is None but tools has values, is that OK?
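One plausible answer, for context (an assumption, not confirmed in this thread): OpenAI's chat completions API rejects requests that set tool_choice without a non-empty tools array, so a registry prompt whose tools were removed but whose tool_choice was left behind would fail at request time; the reverse (tool_choice unset, tools present) is valid, since the API then decides on its own whether to call a tool. A sketch of the guard with that rationale spelled out:

```python
# Assumed rationale: a stale tool_choice with no tools would make the
# downstream OpenAI request fail, so the client normalizes it away.
# tool_choice=None with tools present is fine -- the API then picks
# tools automatically.
if (
    not prompt_version.llm_config.tools
    and prompt_version.llm_config.tool_choice is not None
):
    prompt_version.llm_config.tool_choice = None
```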
Caution
Changes requested ❌
Reviewed c8a1433 in 1 minute and 16 seconds.
- Reviewed 13 lines of code in 1 file
- Skipped 0 files when reviewing
- Skipped posting 1 draft comment; view it below
- Modify your settings and rules to customize what types of comments Ellipsis leaves, and don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/traceloop-sdk/traceloop/sdk/prompts/client.py:78
   - Draft comment: The added comment contains grammatical errors. Consider rephrasing for clarity. For example, change "setted its throw an error" to "set, it throws an error".
   - Reason this comment was not posted: confidence changes required (33%) <= threshold (50%).
Workflow ID: wflow_ZzjeUtS96l9pGvv9
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
feat(instrumentation): ... or fix(instrumentation): ...

Important
Add support for tool functions and JSON schema response formats in prompt management, with corresponding tests.
- Added tool_choice and tools to ModelConfig in model.py for specifying tool functions.
- Added ResponseFormat to ModelConfig in model.py for JSON schema response formats.
- Updated render_prompt() in client.py to handle tool functions and clear tool_choice if no tools are configured.
- Added test_prompt_management_with_tools() and test_prompt_management_with_response_format() in test_prompt_management.py to validate tool function and response format handling.
to validate tool function and response format handling.This description was created by
for c8a1433. You can customize this summary. It will automatically update as commits are pushed.
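To make the summary concrete, here is a hypothetical end-to-end usage of the feature, following the pattern in the tests above (the prompt key and variable come from the tests; the OpenAI client setup and prior Traceloop SDK initialization are assumed):

```python
from openai import OpenAI

from traceloop.sdk.prompts import get_prompt

# Assumes Traceloop.init(...) has already run and the prompt registry is loaded.
client = OpenAI()

# Render a registry prompt whose llm_config defines tools; the returned kwargs
# (messages, model, tools, tool_choice, ...) feed the client call directly.
prompt_args = get_prompt(key="joke_generator", variables={"style": "pirate"})
response = client.chat.completions.create(**prompt_args)

# If the model chose to call a tool, the call is surfaced as usual.
for tool_call in response.choices[0].message.tool_calls or []:
    print(tool_call.function.name, tool_call.function.arguments)
```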