
Conversation

@dangusev
Contributor

@dangusev dangusev commented Nov 18, 2025

Code for parsing instructions is scattered around the repo, even though the logic is the same everywhere.

To fix that:

  • Added a new vision_agents.core.instructions.Instructions class that holds the initial prompt and parses @ markdown references
  • Removed the old instructions-parsing code.

Summary by CodeRabbit

  • New Features

    • Added an Instructions type that expands @filename.md references by including referenced markdown into prompts.
    • LLMs and realtime flows now accept Instructions via a public set_instructions API and use its combined text as the system prompt.
  • Documentation

    • Examples updated to reflect the simplified system-prompt flow using the new instructions handling.
  • Tests

    • Added tests covering markdown reference parsing and instruction propagation.

@coderabbitai

coderabbitai bot commented Nov 18, 2025

Walkthrough

This PR adds a dedicated Instructions class that resolves @file.md references into a composed full_reference, replaces inline markdown-parsing utilities with that class, and updates agents, LLM internals, plugins, docs, and tests to accept and propagate Instructions instances (LLMs store the resolved full_reference in _instructions).

Changes

  • Instructions class (new): agents-core/vision_agents/core/instructions.py
    Adds Instructions, which parses @filename.md references, reads the referenced .md files from a base dir, and exposes full_reference combining the original input and the referenced contents; includes guards for hidden files, extensions, and base_dir containment.
  • Core agent and LLM: agents-core/vision_agents/core/agents/agents.py, agents-core/vision_agents/core/llm/llm.py
    Agent now wraps instruction text in an Instructions object at init; LLM consolidates instruction storage into the private _instructions and exposes set_instructions(Instructions), which stores instructions.full_reference. Agent join now passes instructions.full_reference when creating conversations.
  • Utility removal: agents-core/vision_agents/core/utils/utils.py
    Removes the prior inline instruction-parsing utilities and the Instructions dataclass from utils; parsing logic is consolidated into the new Instructions module.
  • Plugin LLM / Realtime changes: plugins/aws/.../aws_llm.py, plugins/aws/.../aws_realtime.py, plugins/gemini/.../gemini_llm.py, plugins/gemini/.../gemini_realtime.py, plugins/openai/.../openai_llm.py, plugins/openai/.../openai_realtime.py, plugins/openai/.../chat_completions/*.py
    Replaces calls to _build_enhanced_instructions() with direct use of self._instructions; updates the realtime/openai/gemini/aws code paths to accept Instructions via set_instructions and to use _instructions for system messages.
  • Tests updated / added: tests/test_instructions.py, plugins/*/tests/*
    Adds comprehensive tests for Instructions; updates various plugin tests to call the public set_instructions with Instructions objects instead of the private _set_instructions with raw strings.
  • Docs examples: docs/ai/instructions/ai-llm.md, docs/ai/instructions/ai-realtime-llm.md
    Updates the examples to use self._instructions directly for setting the system message instead of an enhanced-builder step.

Sequence Diagram(s)

sequenceDiagram
    participant Agent
    participant Instructions
    participant LLM
    participant Plugin

    Agent->>Instructions: Instructions(input_text, base_dir)
    activate Instructions
    Instructions->>Instructions: extract `@file.md` refs, read files, build full_reference
    Instructions-->>Agent: full_reference
    deactivate Instructions

    Agent->>LLM: set_instructions(Instructions)
    activate LLM
    LLM->>LLM: _instructions = Instructions.full_reference
    deactivate LLM

    Plugin->>LLM: request (converse / realtime)
    activate LLM
    alt _instructions present
        LLM-->>Plugin: provide _instructions as system message
    else
        LLM-->>Plugin: no system message
    end
    deactivate LLM
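The same flow can be sketched in Python. Class and attribute names follow the walkthrough above; the message-dict shape and build_messages helper are assumptions for illustration, not the project's actual API:

```python
class Instructions:
    """Stand-in for the real class; here full_reference is just the raw text."""

    def __init__(self, input_text: str) -> None:
        self.input_text = input_text
        self.full_reference = input_text  # the real class expands @file.md refs


class LLM:
    def __init__(self) -> None:
        self._instructions = None

    def set_instructions(self, instructions: Instructions) -> None:
        # Store only the resolved text, as described in the walkthrough.
        self._instructions = instructions.full_reference

    def build_messages(self, user_text: str) -> list:
        # The alt branch of the diagram: a system message only if instructions
        # were set, otherwise just the user message.
        messages = []
        if self._instructions:
            messages.append({"role": "system", "content": self._instructions})
        messages.append({"role": "user", "content": user_text})
        return messages


llm = LLM()
llm.set_instructions(Instructions("You are a helpful agent."))
```

After set_instructions, every request carries the resolved instruction text as the system message; before it, no system message is emitted.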

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

  • Attention areas:
    • Verify Instructions path resolution and security guards (hidden files, .md extension, base_dir containment).
    • Confirm all call sites now expect Agent.instructions to be an Instructions object and that no callers assume a raw string.
    • Check plugin replacements for removed _build_enhanced_instructions() preserve intended system-message behavior.
    • Ensure tests updated to use Instructions match new constructor/base_dir semantics.

Possibly related PRs

Suggested reviewers

  • Nash0x7E2
  • tschellenbach

Poem

The instruction folds like paper in a drawer,
I prod its seam — a dotted filename breathes;
It opens into rooms of markdown lore,
The lamp of full_reference trembles, wreathes,
A quiet map of files, stitched into death.

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage (⚠️ Warning): docstring coverage is 52.63%, below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve coverage.
✅ Passed checks (2 passed)
  • Description Check (✅ Passed): check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check (✅ Passed): the title accurately reflects the PR's main objective of consolidating scattered instruction-parsing code into a single Instructions class and removing the duplicate logic.

📜 Recent review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 7deee3b and 29c8585.

📒 Files selected for processing (4)
  • plugins/aws/tests/test_aws.py (2 hunks)
  • plugins/aws/tests/test_aws_realtime.py (3 hunks)
  • plugins/gemini/tests/test_gemini_llm.py (2 hunks)
  • plugins/openai/tests/test_openai_realtime.py (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • plugins/aws/tests/test_aws.py
🧰 Additional context used
🧬 Code graph analysis (3)
plugins/openai/tests/test_openai_realtime.py (4)
agents-core/vision_agents/core/instructions.py (1)
  • Instructions (15-115)
plugins/aws/tests/test_aws_realtime.py (1)
  • realtime (21-34)
agents-core/vision_agents/core/llm/llm.py (1)
  • set_instructions (179-180)
plugins/openai/vision_agents/plugins/openai/openai_realtime.py (1)
  • set_instructions (471-473)
plugins/gemini/tests/test_gemini_llm.py (3)
agents-core/vision_agents/core/instructions.py (1)
  • Instructions (15-115)
agents-core/vision_agents/core/llm/llm.py (1)
  • set_instructions (179-180)
plugins/openai/vision_agents/plugins/openai/openai_realtime.py (1)
  • set_instructions (471-473)
plugins/aws/tests/test_aws_realtime.py (4)
agents-core/vision_agents/core/instructions.py (1)
  • Instructions (15-115)
plugins/openai/tests/test_openai_realtime.py (1)
  • realtime (21-31)
agents-core/vision_agents/core/llm/llm.py (1)
  • set_instructions (179-180)
plugins/openai/vision_agents/plugins/openai/openai_realtime.py (1)
  • set_instructions (471-473)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: unit / Test "not integration"
  • GitHub Check: unit / Mypy
  • GitHub Check: unit / Test "not integration"
🔇 Additional comments (7)
plugins/aws/tests/test_aws_realtime.py (3)

7-7: Correctly importing Instructions for the new API

Importing Instructions from vision_agents.core.instructions matches the new unified instructions abstraction and is required for the updated set_instructions signature.


28-30: Fixture now passes an Instructions object, matching set_instructions contract

Wrapping the prompt string in Instructions(...) ensures Realtime.set_instructions receives the expected type and can safely use instructions.full_reference, eliminating the prior AttributeError risk.


40-42: Consistent Instructions usage when updating instructions in the test

Using Instructions(...) here keeps the test aligned with the public set_instructions API and consistent with other realtime tests (e.g., OpenAI), while allowing per-test instruction overrides.

plugins/gemini/tests/test_gemini_llm.py (2)

6-6: LGTM! Import aligns with the new Instructions API.

The import correctly brings in the Instructions class needed for the refactored instruction-passing pattern.


88-88: Correctly adopts the new Instructions API.

The migration from the private _set_instructions to the public set_instructions method, with the instruction string wrapped in an Instructions object, correctly follows the refactored pattern. Since the instruction text contains no @ markdown references, the behavior remains equivalent to the previous implementation.

plugins/openai/tests/test_openai_realtime.py (2)

5-5: LGTM! Import aligns with the API refactoring.

The import is correctly added to support the new Instructions-based API.


27-27: LGTM! API usage correctly updated.

The test fixture now properly wraps the instruction string in an Instructions object, matching the refactored API signature and the pattern established in other realtime tests.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@dangusev dangusev force-pushed the fix/instructions-parsing branch from f2e38e9 to 7deee3b on November 18, 2025 at 22:56

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (6)
agents-core/vision_agents/core/instructions.py (2)

12-12: Regex pattern may not handle all edge cases.

The pattern r"@([^\s@]+\.md)" matches @filename.md, but a few cases are worth noting:

  • @path/to/file.md - captures the entire path (likely intended)
  • @@file.md - still matches starting at the second @, capturing file.md
  • @file.md, - captures file.md without the comma, since the match itself must end in .md (good)
  • @file.md.backup - still captures file.md, because the character class backtracks to the nearest .md suffix, so the reference resolves to a file the author may not have meant

Consider a negative lookahead if partial captures like the last case are a concern:

-_MD_PATTERN = re.compile(r"@([^\s@]+\.md)")
+_MD_PATTERN = re.compile(r"@([^\s@]+\.md)(?!\.?\w)")

87-99: Ensure path traversal protection is robust.

The security check on line 98 uses is_relative_to(self._base_dir), which correctly prevents directory traversal attacks. However, the check order leaves dead code: is_file() is tested before exists(), and since is_file() already returns False for nonexistent paths, the later exists() branch can never fire.

Consider reordering for clarity and efficiency:

         # Check if the path is a file, it exists, and it's a markdown file.
         skip_reason = ""
-        if not full_path.is_file():
-            skip_reason = "path is not a file"
+        if not full_path.exists():
+            skip_reason = "file not found"
         elif full_path.name.startswith("."):
             skip_reason = "path is invalid"
-        elif not full_path.exists():
-            skip_reason = "file not found"
+        elif not full_path.is_file():
+            skip_reason = "path is not a file"
         elif full_path.suffix != ".md":
             skip_reason = "file is not .md"
         # The markdown file also must be inside the base_dir
         elif not full_path.is_relative_to(self._base_dir):
             skip_reason = "file outside the base directory"
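The containment guard itself can be exercised in isolation. This is a sketch of the check, not the module's code, and Path.is_relative_to requires Python 3.9+:

```python
from pathlib import Path


def inside_base_dir(base_dir: Path, candidate: str) -> bool:
    """Resolve first, then require the result to stay under base_dir."""
    full_path = (base_dir / candidate).resolve()
    return full_path.is_relative_to(base_dir.resolve())


# A relative reference stays inside; a traversal attempt resolves outside.
assert inside_base_dir(Path("/tmp/project"), "docs/readme.md")
assert not inside_base_dir(Path("/tmp/project"), "../../etc/passwd")
```

Resolving before the is_relative_to check is what defeats ../ sequences: the comparison runs on the final absolute path, not the raw user-supplied string.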
tests/test_instructions.py (2)

6-26: Parametrized success cases look solid; consider one case with differing file contents

The parametrized test_parse_success nicely covers empty, plain-text, and multi-file happy paths, and the expected full_reference matches the current _extract_full_reference behavior (including the triple newline before ## Referenced Documentation:). To guard against regressions where different files might accidentally share or overwrite content, you could add (or tweak) a case where file1.md and file2.md contain different text so that the mapping by filename is also implicitly checked.


55-65: Hidden .md files produce a placeholder entry—confirm this matches the intended “ignore” semantics

This test encodes that @.file1.md yields a ### .file1.md heading with the generic “File not found or could not be read” placeholder, even when the file exists. That matches _read_md_file returning "" for names starting with ".", but it means hidden files are still surfaced in the Referenced Documentation section, just without contents. If “ignore files starting with '.'” is meant to hide them entirely (no section at all), you may want to adjust either the implementation or this expectation; otherwise, this test is a clear spec of the current behavior.

agents-core/vision_agents/core/llm/llm.py (2)

27-31: Participant type hint now comes from getstream’s protobuf; consider aligning with the edge Participant type

Here Participant is imported from getstream.video.rtc.pb...models_pb2, whereas other parts of the system (e.g., vision_agents.core.edge.types and the OpenAI Realtime implementation) use a different Participant type. Since this is only a type hint it won’t break runtime behavior, but if you care about static checking it might be worth standardizing on a single Participant abstraction (or using a Protocol/alias) so overrides of simple_response/simple_audio_response remain type‑compatible.


377-388: Harden _sanitize_tool_output against non‑JSON‑serializable values

Right now, _sanitize_tool_output calls json.dumps(value) for all non‑string values, which will raise TypeError for common Python objects (e.g., datetimes, custom classes) and could turn tool execution into an unexpected failure at the sanitization step. Since this helper is meant to be defensive, it’s safer to fall back to str() if JSON encoding fails or to use default=str in json.dumps.

You could make it more robust along these lines:

-        s = value if isinstance(value, str) else json.dumps(value)
+        if isinstance(value, str):
+            s = value
+        else:
+            try:
+                s = json.dumps(value, default=str)
+            except TypeError:
+                # Fallback for objects json can't handle
+                s = str(value)

This keeps the length‑limiting behavior but avoids crashing on rich tool results.
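A runnable version of the hardened helper might look like this; the length cap, its default value, and the function name are assumptions, not the real method:

```python
import json
from datetime import datetime


def sanitize_tool_output(value, max_len: int = 4000) -> str:
    """Stringify a tool result without raising on non-JSON-serializable values."""
    if isinstance(value, str):
        s = value
    else:
        try:
            s = json.dumps(value, default=str)
        except TypeError:
            # Fallback for objects json can't handle even with default=str
            s = str(value)
    return s[:max_len]


# Plain strings pass through; datetimes no longer raise TypeError.
print(sanitize_tool_output({"when": datetime(2025, 11, 18)}))
```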

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 55b2b5a and 7deee3b.

📒 Files selected for processing (20)
  • agents-core/vision_agents/core/agents/agents.py (3 hunks)
  • agents-core/vision_agents/core/instructions.py (1 hunks)
  • agents-core/vision_agents/core/llm/llm.py (4 hunks)
  • agents-core/vision_agents/core/utils/utils.py (0 hunks)
  • docs/ai/instructions/ai-llm.md (1 hunks)
  • docs/ai/instructions/ai-realtime-llm.md (1 hunks)
  • plugins/aws/tests/test_aws.py (1 hunks)
  • plugins/aws/tests/test_aws_realtime.py (2 hunks)
  • plugins/aws/vision_agents/plugins/aws/aws_llm.py (2 hunks)
  • plugins/aws/vision_agents/plugins/aws/aws_realtime.py (1 hunks)
  • plugins/gemini/tests/test_gemini_llm.py (1 hunks)
  • plugins/gemini/vision_agents/plugins/gemini/gemini_llm.py (1 hunks)
  • plugins/gemini/vision_agents/plugins/gemini/gemini_realtime.py (1 hunks)
  • plugins/openai/tests/test_openai_realtime.py (1 hunks)
  • plugins/openai/vision_agents/plugins/openai/chat_completions/chat_completions_llm.py (1 hunks)
  • plugins/openai/vision_agents/plugins/openai/chat_completions/chat_completions_vlm.py (1 hunks)
  • plugins/openai/vision_agents/plugins/openai/openai_llm.py (3 hunks)
  • plugins/openai/vision_agents/plugins/openai/openai_realtime.py (2 hunks)
  • plugins/openrouter/tests/test_openrouter_llm.py (1 hunks)
  • tests/test_instructions.py (1 hunks)
💤 Files with no reviewable changes (1)
  • agents-core/vision_agents/core/utils/utils.py
🧰 Additional context used
🧬 Code graph analysis (10)
plugins/openai/vision_agents/plugins/openai/openai_llm.py (3)
agents-core/vision_agents/core/edge/sfu_events.py (1)
  • Participant (229-270)
agents-core/vision_agents/core/llm/llm.py (2)
  • LLM (50-388)
  • LLMResponseEvent (39-43)
agents-core/vision_agents/core/llm/llm_types.py (2)
  • NormalizedToolCallItem (107-111)
  • ToolSchema (64-67)
plugins/aws/tests/test_aws_realtime.py (3)
plugins/openai/tests/test_openai_realtime.py (1)
  • realtime (20-30)
agents-core/vision_agents/core/llm/llm.py (1)
  • set_instructions (179-180)
plugins/openai/vision_agents/plugins/openai/openai_realtime.py (1)
  • set_instructions (471-473)
plugins/openai/vision_agents/plugins/openai/openai_realtime.py (2)
agents-core/vision_agents/core/instructions.py (1)
  • Instructions (15-115)
agents-core/vision_agents/core/llm/llm.py (1)
  • set_instructions (179-180)
plugins/gemini/tests/test_gemini_llm.py (2)
agents-core/vision_agents/core/llm/llm.py (1)
  • set_instructions (179-180)
plugins/openai/vision_agents/plugins/openai/openai_realtime.py (1)
  • set_instructions (471-473)
plugins/openai/tests/test_openai_realtime.py (3)
plugins/aws/tests/test_aws_realtime.py (1)
  • realtime (20-33)
agents-core/vision_agents/core/llm/llm.py (1)
  • set_instructions (179-180)
plugins/openai/vision_agents/plugins/openai/openai_realtime.py (1)
  • set_instructions (471-473)
agents-core/vision_agents/core/llm/llm.py (4)
agents-core/vision_agents/core/instructions.py (1)
  • Instructions (15-115)
agents-core/vision_agents/core/llm/events.py (2)
  • ToolEndEvent (126-135)
  • ToolStartEvent (116-122)
agents-core/vision_agents/core/agents/agents.py (1)
  • Agent (75-1317)
plugins/openai/vision_agents/plugins/openai/openai_realtime.py (1)
  • set_instructions (471-473)
plugins/aws/tests/test_aws.py (2)
agents-core/vision_agents/core/llm/llm.py (1)
  • set_instructions (179-180)
plugins/openai/vision_agents/plugins/openai/openai_realtime.py (1)
  • set_instructions (471-473)
agents-core/vision_agents/core/agents/agents.py (1)
agents-core/vision_agents/core/instructions.py (1)
  • Instructions (15-115)
tests/test_instructions.py (1)
agents-core/vision_agents/core/instructions.py (1)
  • Instructions (15-115)
plugins/openrouter/tests/test_openrouter_llm.py (2)
agents-core/vision_agents/core/llm/llm.py (1)
  • set_instructions (179-180)
plugins/openai/vision_agents/plugins/openai/openai_realtime.py (1)
  • set_instructions (471-473)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: unit / Test "not integration"
  • GitHub Check: unit / Test "not integration"
🔇 Additional comments (23)
plugins/gemini/vision_agents/plugins/gemini/gemini_realtime.py (1)

311-336: LGTM!

The change from _build_enhanced_instructions() to direct use of self._instructions correctly consolidates instruction handling. This aligns with the new Instructions abstraction where full_reference is precomputed and stored via set_instructions.

docs/ai/instructions/ai-realtime-llm.md (1)

49-50: LGTM!

The documentation example correctly demonstrates the simplified instruction handling pattern using self._instructions directly.

docs/ai/instructions/ai-llm.md (1)

25-26: LGTM!

Documentation example is consistent with the new instruction handling approach.

plugins/aws/vision_agents/plugins/aws/aws_llm.py (2)

130-131: LGTM!

Correctly replaced enhanced instruction construction with direct use of self._instructions in the converse method.


356-357: LGTM!

Correctly replaced enhanced instruction construction with direct use of self._instructions in the converse_stream method, consistent with the converse method.

agents-core/vision_agents/core/instructions.py (1)

107-115: LGTM!

The file reading implementation correctly handles potential errors with appropriate exception handling and logging.

plugins/openai/vision_agents/plugins/openai/chat_completions/chat_completions_llm.py (1)

170-171: LGTM!

Correctly migrated from public instructions attribute to private _instructions attribute, consistent with the new Instructions API pattern.

plugins/aws/vision_agents/plugins/aws/aws_realtime.py (1)

232-236: LGTM! Cleaner instruction handling.

The refactor simplifies the flow by directly using self._instructions instead of calling a separate _build_enhanced_instructions() method. The explicit check ensures instructions are set before connection, which is good defensive programming for AWS Bedrock's requirements.

plugins/aws/tests/test_aws.py (1)

149-149: LGTM! Public API usage.

Good migration from the private _set_instructions to the public set_instructions API. This aligns with the broader refactor introducing standardized instruction handling.

plugins/gemini/vision_agents/plugins/gemini/gemini_llm.py (1)

165-165: LGTM! Simplified initialization.

The change removes the indirect _build_enhanced_instructions() call in favor of directly using self._instructions as the system instruction. This is cleaner and aligns with the PR's goal of consolidating instruction parsing logic into the Instructions class.

agents-core/vision_agents/core/agents/agents.py (2)

29-29: LGTM! Introduction of Instructions abstraction.

The Agent now wraps the instruction string in an Instructions object, which parses markdown file references (like @file.md) and resolves them into a full_reference. This centralizes instruction parsing logic as intended by the PR.

Also applies to: 143-143


510-510: LGTM! Proper use of full_reference.

When passing instructions to edge.create_conversation, the code correctly extracts self.instructions.full_reference (the composed string with resolved markdown) rather than passing the Instructions object itself. This maintains a clean interface.

plugins/gemini/tests/test_gemini_llm.py (1)

87-87: LGTM! Public API migration.

Correctly migrated from the private _set_instructions to the public set_instructions method, consistent with the broader refactor.

plugins/openai/tests/test_openai_realtime.py (1)

26-26: LGTM! Public API adoption.

Test correctly uses the public set_instructions method instead of the private _set_instructions, aligning with the standardized instruction handling across the codebase.

plugins/openai/vision_agents/plugins/openai/openai_llm.py (2)

101-101: LGTM! Streamlined instruction handling.

The refactor removes complex conditional logic for building enhanced instructions and instead relies directly on self._instructions. This is much cleaner and aligns with the centralized Instructions class approach. The instruction flow is now: Agent wraps string in Instructions → set_instructions extracts full_reference → LLM uses it directly.

Also applies to: 134-136


4-4: No issues found with public API surface.

The newly imported types (Participant, OpenAIResponse, NormalizedToolCallItem, ToolSchema) are used internally within the module and in method signatures, but they are not re-exported in plugins/openai/vision_agents/plugins/openai/__init__.py. Only OpenAILLM is exposed as the public API (aliased as LLM). Using types in method signatures is standard Python practice and does not constitute an accidental API expansion, as users interact with the OpenAILLM class directly without needing to import these types themselves.

plugins/openai/vision_agents/plugins/openai/chat_completions/chat_completions_vlm.py (1)

248-252: No issues found—API change is safe.

Verification confirms that no external code, test files, or other parts of the codebase access the public instructions attribute on ChatCompletionsVLM. The refactoring from self.instructions (public) to self._instructions (private) introduces no public API breakage.

tests/test_instructions.py (3)

27-46: Good coverage of “not a file” vs “missing file” fallback behavior

These two tests clearly assert that both a directory at file1.md and a non-existent file1.md yield the same placeholder *(File not found or could not be read)* in the Referenced Documentation section, which matches the current _read_md_file behavior (returning "" for any non-file path). This is a useful contract to pin down so future refactors don’t accidentally raise or silently drop these references.


47-54: Non‑markdown references correctly remain untouched

test_parse_file_not_md ensures that @file1.txt does not trigger any Referenced Documentation section and that full_reference is exactly the original input_text. This aligns with the _MD_PATTERN behavior (only *.md is matched) and is a good regression test for “don’t over‑interpret arbitrary @ tokens.”


66-81: Outside‑base‑dir absolute path handling matches design

test_parse_file_outside_base_dir nicely nails the edge case where the user mentions an absolute path outside base_dir: content from that path is intentionally not read (due to the is_relative_to(self._base_dir) guard), but the original path string is preserved in the heading with the generic “could not be read” placeholder. This is a good, explicit test for the “don’t read outside base_dir, but keep the reference visible” policy.

plugins/openai/vision_agents/plugins/openai/openai_realtime.py (1)

37-37: Realtime now correctly propagates Instructions.full_reference into the OpenAI session

Overriding set_instructions to accept an Instructions instance, delegate to the base LLM, and then assign self.realtime_session["instructions"] = self._instructions keeps the public API uniform while ensuring the composed full_reference actually reaches the OpenAI Realtime session. This matches the new Instructions‑centric flow and avoids duplicating parsing logic here.

Also applies to: 471-473

agents-core/vision_agents/core/llm/llm.py (2)

19-22: Instructions plumbing via _instructions and set_instructions is coherent and centralizes parsing

Importing Instructions, initializing self._instructions once in __init__, invoking self.set_instructions(agent.instructions) in _attach_agent, and implementing set_instructions to store instructions.full_reference gives you a single, consistent place where raw agent instructions are transformed into the composed reference string. Subclasses (like provider‑specific LLMs) can rely on _instructions without worrying about parsing details, and the Agent still retains the full Instructions object if anyone needs richer access.

Also applies to: 56-65, 160-166, 179-181


242-315: Tool execution path with events, threading, and timeouts looks robust

_run_one_tool and _execute_tools do a good job of:

  • Normalizing arguments from either "arguments_json" or "arguments".
  • Fetching callables via function_registry.get_callable when available, and offloading sync callables to asyncio.to_thread to avoid blocking the loop.
  • Wrapping execution in asyncio.wait_for for per‑tool timeouts.
  • Emitting ToolStartEvent and ToolEndEvent with arguments, success flag, result/error, and execution time, which should be very helpful for observability.
    The deduplication logic in _dedup_and_execute via _tc_key is also straightforward and avoids redundant runs.

Also applies to: 316-375
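The dedup-by-key idea can be illustrated with a small sketch; the key fields and function names here are guesses, and the real _tc_key may hash different fields:

```python
import json


def tc_key(call: dict) -> tuple:
    """Stable identity for a tool call: name plus canonicalized arguments."""
    args = call.get("arguments", {})
    return (call.get("name"), json.dumps(args, sort_keys=True))


def dedup(calls: list) -> list:
    """Drop repeated tool calls while preserving first-seen order."""
    seen: set = set()
    unique = []
    for call in calls:
        key = tc_key(call)
        if key not in seen:
            seen.add(key)
            unique.append(call)
    return unique
```

Serializing the arguments with sort_keys=True makes two calls with the same name and the same argument values compare equal even if their dicts were built in different key orders.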

@Nash0x7E2 Nash0x7E2 merged commit 7b9d596 into main Nov 18, 2025
8 checks passed
@Nash0x7E2 Nash0x7E2 deleted the fix/instructions-parsing branch November 18, 2025 23:46