Conversation

@vishnurk6247 (Member) commented Nov 28, 2025

Summary by CodeRabbit

  • Refactor

    • Updated API to accept string inputs directly instead of lists in workflow execution methods.
    • Changed agent and workflow return values from strings to message-based objects for richer context.
    • Standardized LLM configuration parameters across provider implementations for consistency.
    • Expanded node type system to support additional workflow patterns.
  • Bug Fixes

    • Added validation to prevent missing LLM configuration errors early in initialization.
    • Enhanced error handling with stricter parameter requirements and guards.
  • Chores

    • Updated development dependencies.

✏️ Tip: You can customize this high-level summary in your review settings.

@coderabbitai bot commented Nov 28, 2025

Walkthrough

This pull request refactors the framework's type system and APIs across multiple layers. Key changes include: updating example files to pass string inputs instead of lists to build_and_run() methods; replacing BaseMemory with MessageMemory in core arium; introducing AriumNodeType as a unified node type; changing Agent.run() to return List[BaseMessage] instead of strings; and refining parameter types to use Optional[str] consistently across LLM implementations.
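The shift is easiest to see at a call site. Below is a minimal, illustrative sketch using stand-in classes shaped after how this review describes the messages (role/content attributes); these are not the actual flo_ai types:

from dataclasses import dataclass
from typing import List

@dataclass
class BaseMessage:
    role: str
    content: str

def final_text(messages: List[BaseMessage]) -> str:
    # Callers that previously treated Agent.run() output as a str now need to
    # pull the content out of the last message in the returned list.
    return messages[-1].content if messages else ''

print(final_text([BaseMessage(role='assistant', content='Hello!')]))  # -> Hello!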

Changes

  • Example input type consolidation
    Files: flo_ai/examples/arium_examples.py, arium_linear_usage.py, arium_yaml_example.py, custom_plan_execute_demo.py, llm_router_example.py, simple_yaml_workflow.py, telemetry_example.py, variables_workflow_example.py, variables_workflow_yaml_example.py
    Updated all invocations to pass single string inputs instead of lists to build_and_run() and arium.run(); replaced UserMessage(TextMessageContent(...)) wrappers with plain strings.
  • Example agent and mock setup
    Files: flo_ai/examples/arium_examples.py, example_graph_visualization.py, simple_working_demo.py
    Replaced simple string prompts with richer Agent initializations using the system_prompt field and an explicit llm with gpt-4o-mini; introduced a BaseLLM-derived MockLLM; added imports for MessageMemoryItem and UserMessage.
  • Example utility functions
    Files: flo_ai/examples/flo_tool_example.py, multi_tool_example.py, vertexai_agent_example.py
    Changed the get_weather function's default parameter from country: str = None to country: str = ''.
  • Example validation
    Files: flo_ai/examples/vllm_agent_usage.py
    Added early validation guards to ensure the vLLM base URL is set in multiple example functions.
  • Core arium memory system
    Files: flo_ai/flo_ai/arium/arium.py, base.py, memory.py
    Replaced BaseMemory with MessageMemory; refactored MessageMemoryItem initialization (result is now required and the parameter order changed); updated method signatures to use Dict[str, Any] = {} instead of Optional[Dict[str, Any]] = None for variables; transitioned PlanAwareMemory to dict-based storage.
  • Core arium node typing
    Files: flo_ai/flo_ai/arium/base.py, nodes.py
    Introduced AriumNodeType as a unified type alias; updated node-related signatures to accept AriumNodeType instead of ExecutableNode; refactored add_end_to to create dedicated end nodes per target.
  • AriumBuilder API updates
    Files: flo_ai/flo_ai/arium/builder.py
    Updated memory typing to MessageMemory; changed node parameters to AriumNodeType; refined input handling in build_and_run() to support both lists and single values; enhanced YAML loading with optional registries and nested YAML support; improved error messages and path handling.
  • Arium routing layer
    Files: flo_ai/flo_ai/arium/llm_router.py
    Tightened type hints for the execution_context parameter from dict to Optional[dict] across all router classes; added guarded None checks before dereferencing; filtered conversation content to string messages only.
  • Agent return type refactoring
    Files: flo_ai/flo_ai/models/agent.py, base_agent.py
    Changed the run() return type from str to List[BaseMessage]; updated the internal _run_conversational() and _run_with_tools() to return message lists; adjusted the variable-handling signature from Optional[Dict[str, Any]] = None to Dict[str, Any] = {}.
  • LLM implementation type refinements
    Files: flo_ai/flo_ai/llm/anthropic_llm.py, base_llm.py, gemini_llm.py, ollama_llm.py, openai_llm.py, openai_vllm.py, rootflo_llm.py, vertexai_llm.py
    Updated constructor and method signatures to use Optional[str] for nullable parameters (api_key, base_url); refined the generate() signature to include functions and output_schema parameters; changed the format_image_in_message() return type from str to implementation-specific types; added initialization guards in delegation methods.
  • Configuration and parsing updates
    Files: flo_ai/flo_ai/builder/agent_builder.py, formatter/yaml_format_parser.py, helpers/llm_factory.py
    Added a yaml_file parameter to AgentBuilder.from_yaml(); refactored array/literal type construction in the YAML parser; added validation guards for required base URLs in the LLM factory.
  • Framework utilities
    Files: flo_ai/flo_ai/tool/flo_tool.py, utils/variable_extractor.py, telemetry/telemetry.py
    Added resilience to missing attributes via getattr() fallbacks and type-ignore hints; updated the resolve_variables() signature to accept plain strings with a default empty dict; refined parameter types to explicit Optional[str] in telemetry configuration.
  • Dependencies
    Files: flo_ai/pyproject.toml
    Added dev dependency "ty" >= 0.0.1a28.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant AriumBuilder
    participant Arium
    participant Agent
    participant Memory

    User->>AriumBuilder: build_and_run(inputs: str)
    AriumBuilder->>AriumBuilder: build() if needed
    AriumBuilder->>Arium: run(inputs: str, variables: Dict)
    Arium->>Arium: _resolve_inputs(inputs: str) → UserMessage
    Arium->>Memory: add(MessageMemoryItem)
    Arium->>Agent: run(messages: List[BaseMessage])
    Agent->>Agent: _run_conversational() | _run_with_tools()
    Agent->>Memory: query history
    Agent-->>Arium: List[BaseMessage]
    Arium->>Memory: add(MessageMemoryItem) for results
    Arium-->>AriumBuilder: execution result
    AriumBuilder-->>User: final output

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Areas requiring extra attention:

  • Agent return type changes (flo_ai/flo_ai/models/agent.py, base_agent.py): Verify that all call sites handling Agent.run() output are updated to handle List[BaseMessage] instead of strings; trace impact on downstream processors.
  • Memory system refactoring (flo_ai/flo_ai/arium/memory.py, arium.py): Confirm that MessageMemoryItem initialization changes (required result parameter, shifted parameter order) don't break existing memory workflows; validate dict-based storage in PlanAwareMemory.
  • AriumNodeType introduction (flo_ai/flo_ai/arium/base.py, builder.py, nodes.py): Ensure type alias covers all node variants correctly and that node resolution logic handles AriumNodeType | str uniformly across builder methods.
  • LLM implementations (flo_ai/flo_ai/llm/*.py): Verify that Optional[str] parameter changes (especially api_key, base_url) are handled consistently; confirm that new output_schema and functions parameter handling in generate() doesn't break existing implementations.
  • Builder API changes (flo_ai/flo_ai/arium/builder.py): Validate that single string vs. list handling in build_and_run() and improved YAML/registry resolution work correctly with nested workflows and external file references.

🐰 A framework reborn, with types refined,
Strings flow single, not wrapped in a line,
Nodes now speak a language so clear,
Messages in lists, the old strings disappear!
With memory refreshed and routers so bright,
This hop-skip-jump refactor feels just right!

Pre-merge checks and finishing touches

❌ Failed checks (1 warning, 1 inconclusive)
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 60.13%, below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve coverage.
  • Title check (❓ Inconclusive): The PR title 'fix: add ty and fix types' is vague; it mentions adding the 'ty' package and fixing types, but not the primary scope of the refactoring across example files and core modules. Consider a more descriptive title, such as 'refactor: migrate API to string inputs and update type signatures across examples and core modules' or 'refactor: update Agent/Arium API with type safety improvements'.
✅ Passed checks (1 passed)
  • Description Check (✅ Passed): Check skipped because CodeRabbit's high-level summary is enabled.
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch fix/types

Tip

📝 Customizable high-level summaries are now available in beta!

You can now customize how CodeRabbit generates the high-level summary in your pull requests — including its content, structure, tone, and formatting.

  • Provide your own instructions using the high_level_summary_instructions setting.
  • Format the summary however you like (bullet lists, tables, multi-section layouts, contributor stats, etc.).
  • Use high_level_summary_in_walkthrough to move the summary from the description to the walkthrough section.

Example instruction:

"Divide the high-level summary into five sections:

  1. 📝 Description — Summarize the main change in 50–60 words, explaining what was done.
  2. 📓 References — List relevant issues, discussions, documentation, or related PRs.
  3. 📦 Dependencies & Requirements — Mention any new/updated dependencies, environment variable changes, or configuration updates.
  4. 📊 Contributor Summary — Include a Markdown table showing contributions:
    | Contributor | Lines Added | Lines Removed | Files Changed |
  5. ✔️ Additional Notes — Add any extra reviewer context.
    Keep each section concise (under 200 words) and use bullet or numbered lists for clarity."

Note: This feature is currently in beta for Pro-tier users, and pricing will be announced later.


Comment @coderabbitai help to get the list of available commands and usage tips.

@coderabbitai bot left a comment

Actionable comments posted: 12

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
flo_ai/examples/arium_yaml_example.py (1)

557-558: Invalid Python syntax: enumerate[MessageMemoryItem] will cause a TypeError.

The syntax enumerate[MessageMemoryItem](result) is invalid. enumerate is not a generic class and doesn't support subscript notation. This will raise TypeError: 'type' object is not subscriptable at runtime.

Apply this fix:

-    for i, message in enumerate[MessageMemoryItem](result):
+    for i, message in enumerate(result):

Note: This same issue appears at lines 587, 609, 637, 668, 700, 723, and 833.

flo_ai/flo_ai/arium/builder.py (1)

642-644: Variable shadowing: agents shadows the function parameter.

The local variable agents at line 643 shadows the agents parameter from the from_yaml method signature at line 291. This can cause confusion and bugs.

Apply this diff to rename the local variable:

             elif router_type == 'plan_execute':
-                agents = router_config.get('agents', {})
-                if not agents:
+                plan_agents = router_config.get('agents', {})
+                if not plan_agents:
                     raise ValueError(
                         f'Plan-Execute router {router_name} must specify agents'
                     )

                 router_fn = create_llm_router(
                     router_type='plan_execute',
-                    agents=agents,
+                    agents=plan_agents,
                     llm=router_llm,
                     **settings,
                 )
🧹 Nitpick comments (9)
flo_ai/examples/vllm_agent_usage.py (1)

67-69: Avoid repeating the same vllm_base_url validation in every example

The three blocks repeat the same guard logic already used elsewhere in this file. Consider centralizing this into a small helper (e.g. def ensure_vllm_base_url() -> str: ...) or a single check in main() so that:

  • The validation logic and error message live in one place.
  • Future changes to how the base URL is resolved or validated don’t require touching four call sites.

This keeps the examples a bit cleaner while preserving the same behavior.

Also applies to: 123-125, 222-224
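
A possible shape for that helper; the function name and error text below are suggestions from this review, not code from the PR:

import os

def ensure_vllm_base_url() -> str:
    """Fail fast, in one place, when the vLLM endpoint is not configured."""
    base_url = os.environ.get('VLLM_BASE_URL')
    if not base_url:
        raise ValueError('VLLM_BASE_URL must be set to run the vLLM examples')
    return base_url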

flo_ai/flo_ai/utils/variable_extractor.py (1)

110-137: Avoid mutable default for variables and confirm new empty‑text behavior

Two points here:

  1. The mutable default {} for variables is a classic Python footgun, even if the function doesn’t currently mutate it. Safer to use None and normalize:
-def resolve_variables(text: str, variables: Dict[str, Any] = {}) -> str:
+def resolve_variables(text: str, variables: Dict[str, Any] | None = None) -> str:
@@
-    if not text:
-        raise ValueError('Text is required to resolve variables')
+    if not text:
+        raise ValueError('Text is required to resolve variables')
+
+    if variables is None:
+        variables = {}

This keeps runtime semantics the same while removing the mutable default.

  2. if not text: raise ValueError(...) is a behavior change: previously empty strings (or other falsy values) likely passed through. Please confirm all call sites are prepared for this exception, or consider only rejecting None and treating "" as a valid no‑op if backward compatibility is important.
flo_ai/flo_ai/builder/agent_builder.py (1)

189-201: Signature and docstring mostly consistent; consider minor docstring tweak

The new yaml_str: Optional[str] and yaml_file: Optional[str] parameters give a clean, backwards‑compatible way to support both string and file inputs. Types and defaults look good and should not break existing positional callers.

One nit: the leading docstring line still says “from a YAML configuration string”, even though you now also support files. Consider updating it for clarity, e.g.:

-        """Create an agent builder from a YAML configuration string
+        """Create an agent builder from a YAML configuration string or file
flo_ai/examples/arium_examples.py (1)

112-112: Inconsistent input format across examples.

This call uses a plain string input, but example_linear_workflow (line 50) and example_convenience_function (line 201) still use [UserMessage(...)] list format. If both formats are supported, consider adding a comment clarifying this, or standardize the examples for consistency.

flo_ai/flo_ai/models/base_agent.py (2)

79-79: Unnecessary cast() when extending a list.

The cast() is redundant here. If input_message is already typed as List[BaseMessage], extending the list doesn't require casting. The type checker should infer the correct type from the isinstance check.

         if isinstance(input_message, list):
-            self.conversation_history.extend(cast(List[BaseMessage], input_message))
+            self.conversation_history.extend(input_message)

105-105: Type ignore comments may mask underlying type issues.

While # type: ignore suppresses the warnings, consider whether the method signatures in BaseLLM could be updated to accept MediaMessageContent directly, rather than silencing the type checker. If the types are intentionally broader, consider using a cast() or assert for documentation purposes.

Also applies to: 113-113

flo_ai/flo_ai/llm/vertexai_llm.py (1)

8-26: base_url parameter is accepted but never used.

The base_url parameter is defined in the constructor signature but is neither stored as an instance attribute nor passed to the genai.Client. If this parameter is intentionally unused for VertexAI (since it uses project/location instead), consider documenting this or removing it from the signature to avoid confusion.

If base_url should be ignored for VertexAI, add a comment:

     def __init__(
         self,
         model: str = 'gemini-2.5-flash',
         temperature: float = 0.7,
         api_key: Optional[str] = None,
-        base_url: Optional[str] = None,
+        base_url: Optional[str] = None,  # Unused; VertexAI uses project/location instead
         project: Optional[str] = None,
         location: Optional[str] = None,
         **kwargs,
     ):
flo_ai/flo_ai/models/agent.py (1)

58-58: Explicit str() cast is safe but may mask type issues.

If enhanced_prompt is already a string, this is redundant. If it's an AssistantMessage, the str() representation may not be the expected content.

Consider extracting the content explicitly:

if isinstance(enhanced_prompt, AssistantMessage):
    enhanced_prompt = enhanced_prompt.content
super().__init__(
    name=name,
    system_prompt=enhanced_prompt,
    ...
)
flo_ai/flo_ai/arium/base.py (1)

23-23: Consider using consistent dict instantiation syntax.

Lines 23 and 24 instantiate with dict[str, ...]() while the annotations use Dict from typing. The builtin-generic syntax is valid in Python 3.9+, but mixing typing.Dict annotations with lowercase dict instantiation in the same file could be confusing.

For consistency, consider using either:

-        self.nodes: Dict[str, AriumNodeType] = dict[str, AriumNodeType]()
-        self.edges: Dict[str, Edge] = dict[str, Edge]()
+        self.nodes: Dict[str, AriumNodeType] = {}
+        self.edges: Dict[str, Edge] = {}
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 31c3e68 and 6d2bae0.

⛔ Files ignored due to path filters (1)
  • flo_ai/uv.lock is excluded by !**/*.lock
📒 Files selected for processing (39)
  • flo_ai/examples/arium_examples.py (6 hunks)
  • flo_ai/examples/arium_linear_usage.py (2 hunks)
  • flo_ai/examples/arium_yaml_example.py (3 hunks)
  • flo_ai/examples/custom_plan_execute_demo.py (1 hunks)
  • flo_ai/examples/example_graph_visualization.py (2 hunks)
  • flo_ai/examples/flo_tool_example.py (1 hunks)
  • flo_ai/examples/llm_router_example.py (5 hunks)
  • flo_ai/examples/multi_tool_example.py (1 hunks)
  • flo_ai/examples/simple_working_demo.py (4 hunks)
  • flo_ai/examples/simple_yaml_workflow.py (1 hunks)
  • flo_ai/examples/telemetry_example.py (1 hunks)
  • flo_ai/examples/variables_workflow_example.py (2 hunks)
  • flo_ai/examples/variables_workflow_yaml_example.py (2 hunks)
  • flo_ai/examples/vertexai_agent_example.py (1 hunks)
  • flo_ai/examples/vllm_agent_usage.py (4 hunks)
  • flo_ai/flo_ai/arium/arium.py (8 hunks)
  • flo_ai/flo_ai/arium/base.py (6 hunks)
  • flo_ai/flo_ai/arium/builder.py (14 hunks)
  • flo_ai/flo_ai/arium/llm_router.py (12 hunks)
  • flo_ai/flo_ai/arium/memory.py (7 hunks)
  • flo_ai/flo_ai/arium/models.py (1 hunks)
  • flo_ai/flo_ai/arium/nodes.py (4 hunks)
  • flo_ai/flo_ai/builder/agent_builder.py (3 hunks)
  • flo_ai/flo_ai/formatter/yaml_format_parser.py (6 hunks)
  • flo_ai/flo_ai/helpers/llm_factory.py (3 hunks)
  • flo_ai/flo_ai/llm/anthropic_llm.py (2 hunks)
  • flo_ai/flo_ai/llm/base_llm.py (3 hunks)
  • flo_ai/flo_ai/llm/gemini_llm.py (4 hunks)
  • flo_ai/flo_ai/llm/ollama_llm.py (1 hunks)
  • flo_ai/flo_ai/llm/openai_llm.py (3 hunks)
  • flo_ai/flo_ai/llm/openai_vllm.py (2 hunks)
  • flo_ai/flo_ai/llm/rootflo_llm.py (3 hunks)
  • flo_ai/flo_ai/llm/vertexai_llm.py (1 hunks)
  • flo_ai/flo_ai/models/agent.py (5 hunks)
  • flo_ai/flo_ai/models/base_agent.py (4 hunks)
  • flo_ai/flo_ai/telemetry/telemetry.py (1 hunks)
  • flo_ai/flo_ai/tool/flo_tool.py (3 hunks)
  • flo_ai/flo_ai/utils/variable_extractor.py (2 hunks)
  • flo_ai/pyproject.toml (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (16)
flo_ai/examples/vertexai_agent_example.py (2)
flo_ai/examples/flo_tool_example.py (1)
  • get_weather (64-80)
flo_ai/examples/multi_tool_example.py (1)
  • get_weather (68-83)
flo_ai/examples/multi_tool_example.py (3)
flo_ai/examples/flo_tool_example.py (1)
  • get_weather (64-80)
flo_ai/examples/vertexai_agent_example.py (1)
  • get_weather (48-51)
flo_ai/examples/tool_usage.py (1)
  • get_weather (28-30)
flo_ai/examples/flo_tool_example.py (2)
flo_ai/examples/multi_tool_example.py (1)
  • get_weather (68-83)
flo_ai/examples/vertexai_agent_example.py (1)
  • get_weather (48-51)
flo_ai/examples/telemetry_example.py (1)
flo_ai/flo_ai/arium/arium.py (1)
  • run (34-177)
flo_ai/examples/llm_router_example.py (2)
flo_ai/flo_ai/arium/memory.py (1)
  • BaseMemory (106-130)
flo_ai/flo_ai/arium/builder.py (1)
  • build_and_run (243-261)
flo_ai/examples/arium_linear_usage.py (1)
flo_ai/flo_ai/arium/builder.py (1)
  • build_and_run (243-261)
flo_ai/examples/simple_working_demo.py (2)
flo_ai/flo_ai/arium/memory.py (5)
  • MessageMemory (133-153)
  • MessageMemoryItem (22-33)
  • add (108-109)
  • add (143-147)
  • add (165-166)
flo_ai/flo_ai/models/chat_message.py (1)
  • UserMessage (65-70)
flo_ai/flo_ai/models/agent.py (1)
flo_ai/flo_ai/models/chat_message.py (2)
  • BaseMessage (49-52)
  • FunctionMessage (85-102)
flo_ai/examples/example_graph_visualization.py (1)
flo_ai/flo_ai/llm/base_llm.py (7)
  • BaseLLM (9-143)
  • generate (23-30)
  • stream (33-40)
  • get_message_content (104-106)
  • format_tool_for_llm (109-111)
  • format_tools_for_llm (114-116)
  • format_image_in_message (119-121)
flo_ai/flo_ai/arium/arium.py (2)
flo_ai/flo_ai/arium/memory.py (2)
  • MessageMemory (133-153)
  • MessageMemoryItem (22-33)
flo_ai/flo_ai/utils/variable_extractor.py (1)
  • resolve_variables (110-137)
flo_ai/flo_ai/llm/base_llm.py (7)
flo_ai/flo_ai/llm/anthropic_llm.py (1)
  • format_image_in_message (241-243)
flo_ai/flo_ai/llm/gemini_llm.py (1)
  • format_image_in_message (245-263)
flo_ai/flo_ai/llm/ollama_llm.py (1)
  • format_image_in_message (166-168)
flo_ai/flo_ai/llm/openai_llm.py (1)
  • format_image_in_message (186-210)
flo_ai/flo_ai/llm/rootflo_llm.py (1)
  • format_image_in_message (333-337)
flo_ai/tests/unit-tests/test_base_llm.py (1)
  • format_image_in_message (58-59)
flo_ai/flo_ai/models/chat_message.py (1)
  • ImageMessageContent (21-28)
flo_ai/flo_ai/arium/base.py (4)
flo_ai/flo_ai/arium/nodes.py (3)
  • AriumNode (12-50)
  • ForEachNode (53-155)
  • FunctionNode (158-206)
flo_ai/flo_ai/arium/protocols.py (1)
  • ExecutableNode (5-39)
flo_ai/flo_ai/tool/base_tool.py (1)
  • Tool (12-50)
flo_ai/flo_ai/arium/models.py (4)
  • StartNode (15-17)
  • EndNode (21-23)
  • Edge (27-34)
  • default_router (7-11)
flo_ai/flo_ai/llm/rootflo_llm.py (3)
flo_ai/flo_ai/llm/base_llm.py (3)
  • generate (23-30)
  • stream (33-40)
  • format_image_in_message (119-121)
flo_ai/flo_ai/llm/gemini_llm.py (3)
  • generate (48-148)
  • stream (151-206)
  • format_image_in_message (245-263)
flo_ai/flo_ai/llm/openai_llm.py (3)
  • generate (41-117)
  • stream (120-150)
  • format_image_in_message (186-210)
flo_ai/flo_ai/arium/builder.py (4)
flo_ai/flo_ai/arium/memory.py (5)
  • MessageMemory (133-153)
  • MessageMemoryItem (22-33)
  • get (112-113)
  • get (149-153)
  • get (168-175)
flo_ai/flo_ai/arium/nodes.py (4)
  • AriumNode (12-50)
  • run (35-50)
  • run (98-111)
  • run (179-206)
flo_ai/flo_ai/arium/base.py (1)
  • add_edge (93-164)
flo_ai/flo_ai/models/chat_message.py (2)
  • UserMessage (65-70)
  • BaseMessage (49-52)
flo_ai/flo_ai/arium/llm_router.py (1)
flo_ai/flo_ai/arium/memory.py (5)
  • MessageMemory (133-153)
  • MessageMemoryItem (22-33)
  • get (112-113)
  • get (149-153)
  • get (168-175)
flo_ai/flo_ai/arium/memory.py (1)
flo_ai/flo_ai/models/chat_message.py (1)
  • BaseMessage (49-52)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: build
🔇 Additional comments (77)
flo_ai/examples/vertexai_agent_example.py (1)

48-51: Consistent optional country default; behavior preserved

Switching country’s default to '' keeps the location formatting logic identical for omitted values (if country still treats it as falsy) while aligning this example with the other weather tools’ signatures and the Tool schema for an optional string. No issues from a typing or runtime perspective.

flo_ai/pyproject.toml (1)

50-50: The bot attempted to verify the new "ty" dev dependency (a Python type checker on PyPI) and its pre-release constraint >= 0.0.1a28, but its verification scripts did not complete and no conclusion was posted.

flo_ai/examples/vllm_agent_usage.py (1)

18-20: Early guard for missing vLLM base URL is a good addition

This check prevents confusing runtime failures by failing fast when VLLM_BASE_URL is not configured; the error message is clear and close to where the LLM is constructed.

flo_ai/examples/telemetry_example.py (2)

185-185: LGTM!

The input type change from list to string aligns with the PR's goal of normalizing input shapes across the codebase. This makes the API more intuitive.


180-182: The workflow.name assignment is fully supported and working as intended.

The type: ignore comment is appropriate for suppressing type checker warnings about dynamically added attributes. The Arium class deliberately allows setting the name attribute after instantiation (see Arium.run() line 75: getattr(self, 'name', 'unnamed_workflow')), and this pattern is used throughout the codebase. The attribute is not read-only or incorrectly typed—it is simply not initialized in __init__, which is a valid Python pattern.

flo_ai/examples/multi_tool_example.py (1)

68-68: LGTM!

Changing the default from None to '' is a good improvement that simplifies the conditional logic on line 82 and maintains consistency across the codebase.

flo_ai/examples/flo_tool_example.py (1)

64-64: LGTM!

This change maintains consistency with the same function signature updates in other example files and simplifies downstream string handling.

flo_ai/flo_ai/telemetry/telemetry.py (1)

142-143: Essential type annotation fix.

Using Optional[str] instead of str = None is the correct way to type optional string parameters. The previous annotations were technically incorrect since str cannot be None.

flo_ai/flo_ai/arium/memory.py (4)

69-71: Excellent defensive coding with walrus operator.

The walrus operator combined with a None check prevents potential AttributeError from accessing .status on a None value. This is a good improvement over the previous implementation.
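
For illustration, the general shape of that guard (a standalone sketch with assumed names, not the memory module's actual code):

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Step:
    status: str

def current_step(steps: List[Step]) -> Optional[Step]:
    return steps[-1] if steps else None

steps: List[Step] = []
# Bind and None-check in one expression; .status is only read on a real Step.
if (step := current_step(steps)) is not None:
    print(step.status)
else:
    print('no active step')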


108-109: LGTM!

Renaming the parameter from m to message improves code readability and makes the intent clearer.


23-23: The review comment's concern is not supported by the codebase evidence.

I've searched for all MessageMemoryItem instantiations in the repository. All 6 found instances explicitly provide the result parameter:

  • 2 instances in test files (test_llm_router.py)
  • 1 instance in example code (simple_working_demo.py)
  • 3 instances in the core arium.py implementation

No code in this repository instantiates MessageMemoryItem without providing the result parameter. Making it required does not break existing code.


83-88: All call sites to mark_step_completed and mark_step_failed are compliant with the required parameters.

The methods are not called anywhere in the codebase, so there are no callers to update. This is not a breaking change.

flo_ai/flo_ai/formatter/yaml_format_parser.py (3)

92-94: LGTM!

Using Literal.__class_getitem__() for runtime construction of Literal types is the correct approach when building types dynamically.


96-101: LGTM!

Extracting array type creation into a dedicated method improves code organization and makes the type resolution logic clearer.


158-159: LGTM!

Adding explicit encoding='utf-8' is a best practice that ensures consistent file reading across different platforms and locales.

Also applies to: 203-204

flo_ai/flo_ai/arium/models.py (1)

32-33: LGTM!

The type: ignore comment is appropriate here since accessing .__name__ on functools.partial.func is valid at runtime but may not be fully typed in the type checker's perspective.

flo_ai/examples/simple_yaml_workflow.py (1)

107-116: LGTM!

Converting user_input from a list to a plain string simplifies the code and aligns with the PR's goal of normalizing input types across the framework. The updated print statement correctly reflects this change.

flo_ai/examples/variables_workflow_example.py (1)

145-148: String inputs usage matches updated AriumBuilder.build_and_run API

Both workflow invocations now pass a plain string to build_and_run(inputs=...), which aligns with the inputs: List[BaseMessage] | str signature and is equivalent to the prior single-element list usage. No behavior change, just cleaner API use.

Also applies to: 207-210

flo_ai/examples/example_graph_visualization.py (1)

11-12: MockLLM correctly implements the BaseLLM interface for the fallback path

Importing BaseLLM and having MockLLM subclass it with concrete generate, stream, get_message_content, and tool/image formatting methods cleanly satisfies the abstract interface and keeps the visualization example working even without OpenAI. The minimal implementations are appropriate for this mock context.

Also applies to: 44-65

flo_ai/flo_ai/llm/ollama_llm.py (1)

15-20: api_key type narrowed to Optional[str] without runtime impact

Updating api_key to Optional[str] = None in OllamaLLM.__init__ matches the BaseLLM constructor and other LLMs in the codebase, improving type correctness without changing behavior.

flo_ai/flo_ai/helpers/llm_factory.py (1)

141-145: Stricter config validation looks correct but may break existing setups

The new checks requiring:

  • base_url for VertexAI,
  • api_key for openai_vllm, and
  • base_url for RootFlo (even when using an access token),

tighten configuration correctness and should surface misconfigured environments early with clear error messages.

However, these will fail any existing configs/tests that previously relied on implicit defaults or partial settings. Please confirm that:

  • all production and example configs set these fields (or the relevant env vars), and
  • documentation for these providers is updated to call out the now-required parameters.

Also applies to: 175-179, 215-219

flo_ai/examples/variables_workflow_yaml_example.py (1)

200-203: YAML workflow inputs now correctly use single-string prompts

Both multi-agent YAML workflows now call build_and_run with inputs as a plain string rather than a single-element list, matching the updated builder API and keeping behavior equivalent.

Also applies to: 290-293

flo_ai/examples/llm_router_example.py (1)

100-108: Router examples now use string inputs consistently with builder/run APIs

  • All build_and_run calls now take a single string (or multi-line string) instead of a list of strings, and the task-classifier example passes task directly. This matches the AriumBuilder.build_and_run signature and keeps the semantics of the previous examples intact.
  • The content_workflow_router(memory: BaseMemory) -> Literal['editor', 'reviewer'] # type: ignore declaration keeps the router’s intent and return type explicit while pragmatically silencing the decorator’s typing mismatch, which is acceptable for example code.

No functional issues spotted with these adjustments.

Also applies to: 137-137, 173-187, 305-328, 387-397

flo_ai/examples/custom_plan_execute_demo.py (1)

100-107: Ensure arium.run supports string inputs in line with other examples

task is now a plain string passed directly to arium.run(task), which is cleaner and consistent with other examples that send string inputs into Arium flows. This depends on arium.run handling str appropriately (e.g., wrapping it as a UserMessage internally).

Please confirm that arium.run has been updated to accept str inputs (and is covered by tests) so this example doesn’t regress at runtime.

flo_ai/examples/arium_linear_usage.py (1)

29-37: Linear usage examples now align with single-string build_and_run inputs

Both AriumBuilder flows now pass a single string (or multi-line string) into build_and_run, matching the updated inputs type while preserving the original behavior of a single initial user turn. The changes are consistent with other examples in this PR.

Also applies to: 84-96

flo_ai/flo_ai/tool/flo_tool.py (2)

49-49: LGTM - Type ignore comments are appropriate for dynamic attribute assignment.

Dynamic attribute assignment to function objects is a valid Python pattern, and the # type: ignore[attr-defined] annotations correctly suppress the type checker warnings while documenting the intentional dynamic behavior.

Also applies to: 62-62, 65-65


82-89: Defensive getattr usage is appropriate, though unlikely to be needed for standard callables.

The fallback to 'unknown' handles edge cases where the callable might not have __name__ or __doc__ (e.g., certain wrapped objects or proxies). For standard functions, these attributes are always present, but this defensive approach improves resilience.
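
functools.partial objects are a concrete case where the fallback matters, since they carry no __name__. A standalone sketch:

from functools import partial

def greet(name: str) -> str:
    """Say hello."""
    return f'Hello, {name}!'

wrapped = partial(greet, 'world')

# Plain functions always expose __name__; partials (and some proxies) do not.
print(getattr(greet, '__name__', 'unknown'))    # -> greet
print(getattr(wrapped, '__name__', 'unknown'))  # -> unknown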

flo_ai/flo_ai/llm/anthropic_llm.py (2)

21-22: LGTM - Optional parameters allow environment-based configuration.

Making api_key and base_url optional with None defaults allows the AsyncAnthropic client to fall back to environment variables (ANTHROPIC_API_KEY, etc.), which is the standard pattern.


178-178: LGTM - Explicit type annotation improves code clarity.

Adding the Dict[str, Any] type annotation to anthropic_kwargs makes the code more explicit and helps with static analysis.

flo_ai/examples/arium_examples.py (2)

61-76: LGTM - Agent initialization is now more explicit.

The updated Agent initialization with explicit system_prompt and llm parameters improves clarity and consistency with the new API design.


232-244: The review comment is incorrect — the codebase already requires Python 3.10+.

The pyproject.toml specifies requires-python = ">=3.10,<4.0" and setup.py specifies python_requires='>=3.10'. Since the codebase's minimum supported Python version is 3.10, the union syntax List[BaseMessage] | str is appropriate and does not require migration to Union[List[BaseMessage], str].

flo_ai/examples/arium_yaml_example.py (2)

583-583: LGTM - Simplified to single string input.

The change from list-based input to a single string aligns with the PR's goal of standardizing the input API.


823-830: LGTM - Multi-line string input is cleaner.

Using a triple-quoted multi-line string instead of a list of concatenated strings improves readability.

flo_ai/flo_ai/models/base_agent.py (1)

45-45: Return type change from str to List[BaseMessage] is not a breaking change within this codebase.

The base_agent.py file is a new addition to this commit, not a modification of existing code. Both the abstract method and all implementing subclasses (such as Agent) were introduced together with the correct List[BaseMessage] return type. All examples and usage patterns already expect and handle this return type, indicating the entire codebase was designed consistently around this interface from the start.

flo_ai/flo_ai/llm/openai_vllm.py (1)

11-11: LGTM on the Optional type annotation.

Making api_key optional aligns with the base class signature and allows flexibility for configurations where the API key might be provided via environment variables.

flo_ai/flo_ai/llm/base_llm.py (3)

10-16: LGTM on the constructor signature update.

The Optional[str] type for api_key properly reflects that the API key may be optional (e.g., when provided via environment variables or when using local models).


47-56: Good defensive programming for function call extraction.

The added guards ensure both name and arguments attributes exist before constructing the result. This prevents potential AttributeError exceptions when dealing with incomplete function call objects.


118-121: Return type broadened appropriately.

The Any return type correctly reflects that subclass implementations return different types: dict (OpenAI), types.Part (Gemini), str (Anthropic/Ollama). This allows each LLM implementation to use its native format.

flo_ai/examples/simple_working_demo.py (4)

14-17: LGTM on the updated imports.

The imports align with the new memory model that uses MessageMemoryItem and UserMessage for type-safe message handling.


190-196: LGTM on memory item access patterns.

The access patterns (msg.result.role, msg.result.content, result[-1].result.content) correctly match the MessageMemoryItem structure where result is a BaseMessage with role and content attributes.


247-249: LGTM on MessageMemoryItem construction.

The usage correctly constructs MessageMemoryItem with node='user' and result=UserMessage(content=...), matching the expected signature from the memory module.


179-179: arium.run() correctly accepts string input per its type signature.

Verification confirms the run() method signature at line 34 of flo_ai/flo_ai/arium/arium.py explicitly accepts inputs: List[BaseMessage] | str, which allows both list and string inputs. The call at line 179 in simple_working_demo.py passing a string task is valid and matches the updated API.

flo_ai/flo_ai/llm/vertexai_llm.py (1)

13-15: LGTM on the Optional type annotations.

The Optional[str] types for base_url, project, and location align with the broader type system updates across LLM implementations.

flo_ai/flo_ai/arium/nodes.py (3)

9-9: LGTM on TYPE_CHECKING import.

Using TYPE_CHECKING to import AriumNodeType correctly avoids circular dependency issues at runtime while maintaining proper type hints.


60-63: LGTM on broadened node type.

Changing execute_node type to AriumNodeType allows the ForEachNode to accept any valid node type in the workflow, including ExecutableNode, AriumNode, FunctionNode, etc.


131-131: LGTM on isinstance check.

The isinstance(self.execute_node, AriumNode) check properly narrows the type to access AriumNode-specific attributes like arium.memory.

flo_ai/flo_ai/llm/openai_llm.py (3)

21-23: LGTM on type signature updates.

The change to Optional[str] for api_key and base_url properly reflects that these can be nullable, aligning with the broader PR goal of consistent typing across LLM implementations.


31-36: Client initialization simplified.

The direct initialization of AsyncOpenAI with explicit parameters is cleaner and more maintainable. Note that passing **kwargs directly may forward unexpected parameters to the client.

Ensure that the **kwargs passed to AsyncOpenAI are compatible with the client's accepted parameters. Invalid kwargs could cause runtime errors.


155-157: Improved null-safety in get_message_content.

The explicit check for response.content is not None before accessing it prevents returning the string "None" when content is absent, which is a good defensive fix.

flo_ai/flo_ai/arium/arium.py (3)

54-56: LGTM on explicit type annotation.

The inline type annotation inputs: list[BaseMessage] makes the code more readable and helps with type checking after the string-to-list conversion.


195-195: Verify behavior change in _emit_event.

The condition now requires events_filter to be truthy (if callback and events_filter and ...). Previously, if events_filter was an empty list, events would still be checked against it. Now, an empty events_filter list will suppress all events.

Confirm this is the intended behavior. If users pass events_filter=[] expecting no events, this is correct. However, if they expect the default behavior (all events), this could be a breaking change since line 68-69 sets the default to all event types only when None.
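
Reduced to a toy predicate (names assumed from the description above), the difference is:

from typing import Callable, List, Optional

def should_emit(
    callback: Optional[Callable],
    events_filter: Optional[List[str]],
    event_type: str,
) -> bool:
    # The new condition requires events_filter to be truthy, so [] mutes everything.
    return bool(callback and events_filter and event_type in events_filter)

print(should_emit(print, ['node_started'], 'node_started'))  # True
print(should_emit(print, [], 'node_started'))                # False: empty filter suppresses all
print(should_emit(print, None, 'node_started'))              # False here; defaults replace None only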


406-409: Type narrowing with AriumNodeType is a good improvement.

Using the union type alias AriumNodeType instead of listing all node types inline improves maintainability. However, the mutable default on line 409 should still be fixed.

flo_ai/flo_ai/llm/rootflo_llm.py (4)

196-197: Good defensive check for JWT generation.

This explicit check prevents a confusing error from the jwt.encode call if app_secret is None.


292-293: Defensive null check after initialization.

While _ensure_initialized() should set _llm, this explicit check provides a clear error message if something goes wrong.


321-337: Initialization guards on synchronous methods may fail.

format_tool_for_llm, format_tools_for_llm, and format_image_in_message are synchronous methods that check _llm is None but cannot call _ensure_initialized() (which is async). If these are called before generate() or stream(), they will raise RuntimeError.

Consider documenting this requirement or deferring to the base class implementation when _llm is None. Alternatively, provide a synchronous initialization path.
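
In miniature, the deferral option could look like this (a sketch with assumed shapes, not RootFloLLM's actual code):

class BaseFormatter:
    def format_tool_for_llm(self, tool: dict) -> dict:
        return tool  # generic, provider-agnostic formatting

class LazyClientFormatter(BaseFormatter):
    def __init__(self):
        self._llm = None  # populated later by an async _ensure_initialized()

    def format_tool_for_llm(self, tool: dict) -> dict:
        if self._llm is None:
            # Defer to the generic base formatting instead of raising RuntimeError.
            return super().format_tool_for_llm(tool)
        return self._llm.format_tool_for_llm(tool)

print(LazyClientFormatter().format_tool_for_llm({'name': 'search'}))  # usable pre-init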


298-310: Remove the # type: ignore[override] comment; RootFloLLM.stream() signature aligns perfectly with BaseLLM.stream().

The original review comment's premise is incorrect. Both BaseLLM.stream() and RootFloLLM.stream() have identical signatures with no **kwargs parameter. The # type: ignore[override] comment appears unnecessary—the methods match exactly.

Note: Other LLM implementations (OpenAI, OpenAIVLLM, Gemini) incorrectly add **kwargs to their stream() methods, which violates the base class contract. RootFloLLM correctly adheres to the defined interface.

flo_ai/flo_ai/models/agent.py (2)

77-77: Return type change aligns with broader PR refactoring.

Returning List[BaseMessage] instead of str provides richer context for downstream consumers, including the full conversation history.


204-206: Explicit return path added for loop exhaustion.

This ensures the method always returns a value even if the while loop completes without an early return (e.g., after max retries).

flo_ai/flo_ai/llm/gemini_llm.py (3)

25-25: LGTM on Optional base_url.

Consistent with the other LLM implementations in this PR.


245-260: Return type and validation improvements.

The return type change from dict to types.Part is more accurate. The added validation for mime_type when base64 is provided prevents a confusing error from the Gemini SDK.


51-51: The bot attempted to verify the type change to the functions parameter of generate() against all call sites (including any use of types.FunctionDeclaration), but its verification scripts did not complete and no conclusion was posted.

flo_ai/flo_ai/arium/base.py (4)

14-16: AriumNodeType definition looks good.

The type alias correctly consolidates all node types that can be used in the Arium workflow. This provides better type safety and clearer API contracts.


233-235: Type ignore comments are appropriate for optional visualization dependencies.

The # type: ignore comments are correctly applied for matplotlib and networkx imports, which are optional dependencies used only for visualization.


376-389: Path handling refactor is clean and appropriate.

Using created_output_path as a Path object and calling .parent.mkdir(parents=True, exist_ok=True) ensures the output directory exists before saving. The refactor maintains the same functionality while improving clarity.


96-96: Breaking change confirmed: to_nodes parameter removed default value and all callers already pass it explicitly.

Verification shows the parameter at line 96 is indeed List[str] without a default value. Examining all 20+ add_edge calls across the codebase (unit tests, examples, builder, and configuration loaders) confirms every caller passes to_nodes explicitly—no code relies on a missing default value.

flo_ai/flo_ai/arium/llm_router.py (5)

104-106: Good improvement: route method now explicitly typed as async with Optional execution_context.

The signature change properly reflects that execution_context is optional and the method is asynchronous.


457-464: Conversation filtering logic correctly handles non-string content.

Filtering to only include messages where msg.result.content is a string prevents type errors when joining the conversation text.


467-471: Null-safe handling of execution_context.

The conditional check if execution_context is not None before calling _get_next_step_in_pattern prevents potential issues when execution_context is None.


948-951: Type ignore comments are necessary for dynamic Literal construction.

The # type: ignore comments are appropriate here since Literal[option_names] with a dynamic tuple is not statically verifiable by type checkers.


967-969: Good use of cast for dynamic attribute assignment.

Using cast(Any, router_function) properly handles the dynamic attribute assignment without triggering type checker errors.
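
The pattern in isolation, as a minimal runnable sketch:

from typing import Any, cast

def router_function(memory: list) -> str:
    return 'editor'

# Function objects accept arbitrary attributes at runtime, but static checkers
# reject the assignment; cast(Any, ...) marks the dynamic attribute as intentional.
cast(Any, router_function).route_options = ('editor', 'reviewer')
print(router_function.route_options)  # type: ignore[attr-defined]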

flo_ai/flo_ai/arium/builder.py (7)

32-39: Type annotations properly updated for the new type system.

The updates to use MessageMemory and AriumNodeType align with the broader refactoring across the codebase.


251-260: Good improvement: Input handling now supports both list and single string inputs.

The logic correctly handles different input types, wrapping strings in UserMessage and validating list items.


267-269: Visualization now auto-builds if needed.

The change to build the arium if self._arium is None is a good UX improvement, allowing users to call visualize() without explicitly calling build() first.


452-453: Good defensive check for yaml_file parameter.

The explicit check and error message provide clearer feedback when the yaml_file is unexpectedly None.


541-545: Safe function registry access pattern.

The conditional check if function_registry is not None before calling .get() prevents AttributeError when function_registry is None.


914-918: Role extraction handles None values appropriately.

The conditional expression ensures role is always a string, defaulting to empty string when not provided.


960-977: Output schema handling with safe defaults.

The Optional[Dict[str, Any]] type for output_schema and the fallback to {} at line 977 ensure the agent builder receives a valid schema.

     self,
     inputs: List[BaseMessage] | str,
-    variables: Optional[Dict[str, Any]] = None,
+    variables: Dict[str, Any] = {},

⚠️ Potential issue | 🟠 Major

Mutable default argument is a Python anti-pattern.

Using {} as a default argument creates a single shared dictionary instance that persists across all calls. If the dictionary is ever mutated, the mutation persists.

Apply this diff to fix the mutable default argument:

     async def run(
         self,
         inputs: List[BaseMessage] | str,
-        variables: Dict[str, Any] = {},
+        variables: Optional[Dict[str, Any]] = None,
         event_callback: Optional[Callable[[AriumEvent], None]] = None,
         events_filter: Optional[List[AriumEventType]] = None,
     ):

Then at the start of the method body:

variables = variables if variables is not None else {}

The same issue exists on lines 204, 361, and 409.

🤖 Prompt for AI Agents
In flo_ai/flo_ai/arium/arium.py around lines 37, 204, 361 and 409, several
function/method parameters use a mutable default argument like {} which creates
a single shared dict across calls; change each default from {} to None (e.g.,
variables: Optional[Dict[str, Any]] = None) and at the start of each function
body set variables = variables if variables is not None else {} to ensure a
fresh dict per call, adding Optional to typing imports if needed.

     event_callback: Optional[Callable[[AriumEvent], None]] = None,
     events_filter: Optional[List[AriumEventType]] = None,
-    variables: Optional[Dict[str, Any]] = None,
+    variables: Dict[str, Any] = {},

⚠️ Potential issue | 🟠 Major

Mutable default argument - same issue.

Same anti-pattern as line 37. Use Optional[Dict[str, Any]] = None and initialize inside the method.

🤖 Prompt for AI Agents
In flo_ai/flo_ai/arium/arium.py around line 204, the function signature uses a
mutable default argument variables: Dict[str, Any] = {} which is an
anti-pattern; change the parameter to Optional[Dict[str, Any]] = None and inside
the function initialize it with an empty dict when None (e.g., if variables is
None: variables = {}), ensuring no shared mutable state across calls.

     self,
     inputs: List[BaseMessage],
-    variables: Dict[str, Any],
+    variables: Dict[str, Any] = {},

⚠️ Potential issue | 🟠 Major

Mutable default argument - same issue.

Same anti-pattern as line 37.

🤖 Prompt for AI Agents
In flo_ai/flo_ai/arium/arium.py around line 361, the function parameter uses a
mutable default argument "variables: Dict[str, Any] = {}" which causes
shared-state bugs; change the signature to use None as the default (variables:
Optional[Dict[str, Any]] = None) and then inside the function initialize
variables = {} if variables is None (mirroring the fix applied at line 37) so
each call gets a fresh dict.

     self,
     inputs: List[BaseMessage] | str,
-    variables: Optional[Dict[str, Any]] = None,
+    variables: Dict[str, Any] = {},

⚠️ Potential issue | 🟠 Major

Mutable default argument: variables: Dict[str, Any] = {}.

Using a mutable default argument like {} is a well-known Python anti-pattern. If the dictionary is ever mutated, the changes persist across calls.

Apply this diff to fix the mutable default argument:

     async def build_and_run(
         self,
         inputs: List[BaseMessage] | str,
-        variables: Dict[str, Any] = {},
+        variables: Optional[Dict[str, Any]] = None,
     ) -> List[MessageMemoryItem]:
         """Build the Arium and run it with the given inputs and optional runtime variables."""
         arium = self.build()
         new_inputs = []
+        if variables is None:
+            variables = {}
         if isinstance(inputs, list):
🤖 Prompt for AI Agents
In flo_ai/flo_ai/arium/builder.py around line 246, the function signature uses a
mutable default argument variables: Dict[str, Any] = {}, which can retain
mutations across calls; change the signature to use variables:
Optional[Dict[str, Any]] = None and inside the function replace any use of the
default with variables = {} if variables is None (or variables = dict(variables)
if you need a copy), ensuring you create a new dict per call.

 def get_routing_prompt(
     self,
-    memory: MessageMemory,
+    memory: MessageMemory[MessageMemoryItem],

⚠️ Potential issue | 🟠 Major

Invalid generic type annotation: MessageMemory is not a generic class.

Based on the provided context from flo_ai/flo_ai/arium/memory.py, MessageMemory is a plain class without generic type parameters. The annotation MessageMemory[MessageMemoryItem] will cause a TypeError at runtime.

Apply this diff to fix the type annotation:

     def get_routing_prompt(
         self,
-        memory: MessageMemory[MessageMemoryItem],
+        memory: MessageMemory,
         options: Dict[str, str],
         execution_context: Optional[dict] = None,
     ) -> str:
🤖 Prompt for AI Agents
In flo_ai/flo_ai/arium/llm_router.py around line 586, the parameter type uses an
invalid generic annotation MessageMemory[MessageMemoryItem]; replace that with
the non-generic class name MessageMemory (i.e., change memory:
MessageMemory[MessageMemoryItem] to memory: MessageMemory) so the annotation
matches the actual class definition and avoids a runtime TypeError.

Comment on lines +136 to +137
def create(yaml_dict: Optional[Dict] = None, yaml_path: Optional[str] = None):
return FloYamlParser.Builder(yaml_dict=yaml_dict, yaml_path=yaml_path).build()

⚠️ Potential issue | 🟡 Minor

Confusing API: FloJsonParser.create accepts YAML parameters.

FloJsonParser.create() now accepts yaml_dict and yaml_path parameters, which is counterintuitive. This creates confusion between FloJsonParser and FloYamlParser.

Consider one of the following:

  1. Remove this method and require users to call FloYamlParser.create() directly for YAML inputs
  2. Rename the parameters to dict and path to be format-agnostic
  3. Add a clear deprecation notice if this is for backward compatibility
🤖 Prompt for AI Agents
In flo_ai/flo_ai/formatter/yaml_format_parser.py around lines 136-137,
FloJsonParser.create currently accepts yaml-specific parameters (yaml_dict,
yaml_path) which is confusing; change the API to be format-agnostic by renaming
those parameters to dict and path (update signature, docstring and internal
references) and update all call sites/tests to use the new names; if you must
preserve the old names for backward compatibility, add a thin wrapper that
accepts yaml_dict/yaml_path, issues a deprecation warning, maps them to
dict/path, and forwards to the new signature.

Comment on lines +30 to +35
self.client = AsyncAnthropic(
    api_key=self.api_key,
    base_url=base_url,
    default_headers=custom_headers,
    **kwargs,
)

⚠️ Potential issue | 🟡 Minor

Potential duplicate keyword argument conflict with **kwargs.

If kwargs contains keys like api_key, base_url, or default_headers, Python will raise a TypeError: got multiple values for keyword argument. Consider filtering out these keys from kwargs before spreading.

+        # Filter out keys that are explicitly passed to avoid duplicates
+        filtered_kwargs = {k: v for k, v in kwargs.items() 
+                          if k not in ('api_key', 'base_url', 'default_headers')}
         self.client = AsyncAnthropic(
             api_key=self.api_key,
             base_url=base_url,
             default_headers=custom_headers,
-            **kwargs,
+            **filtered_kwargs,
         )
🤖 Prompt for AI Agents
In flo_ai/flo_ai/llm/anthropic_llm.py around lines 30 to 35, passing **kwargs
directly into AsyncAnthropic can cause a TypeError if kwargs contains keys like
api_key, base_url, or default_headers; remove those keys from kwargs before
calling AsyncAnthropic (e.g., create a filtered_kwargs that excludes 'api_key',
'base_url', and 'default_headers' or pop them out) and pass filtered_kwargs
instead so you don’t supply duplicate keyword arguments.
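
The failure mode is independent of the Anthropic SDK and can be reproduced with any callable:

```python
def make_client(api_key=None, base_url=None, default_headers=None, **kwargs):
    """Stand-in for a constructor like AsyncAnthropic."""
    return api_key, base_url, default_headers, kwargs

extra = {'api_key': 'from-config'}  # caller-supplied kwargs

try:
    make_client(api_key='explicit', **extra)
except TypeError as e:
    print(e)  # got multiple values for keyword argument 'api_key'

# Filtering the explicitly passed keys first avoids the collision:
filtered = {
    k: v for k, v in extra.items()
    if k not in ('api_key', 'base_url', 'default_headers')
}
make_client(api_key='explicit', **filtered)  # OK
```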

Comment on lines 27 to +32
     async def generate(
-        self, messages: list[dict], output_schema: dict = None, **kwargs
+        self,
+        messages: list[dict],
+        functions: Optional[List[Dict[str, Any]]] = None,
+        output_schema: Optional[Dict[str, Any]] = None,
+        **kwargs,

⚠️ Potential issue | 🟡 Minor

Unused functions parameter in generate method.

The functions parameter is added to the method signature but is never used in the method body. Compare this with the stream method (lines 92-93), which does use the functions parameter.

Apply this diff to use the functions parameter consistently:

         # Prepare OpenAI API parameters
         vllm_openai_kwargs = {
             'model': self.model,
             'messages': messages,
             'temperature': self.temperature,
             **kwargs,
             **self.kwargs,
         }
+
+        if functions:
+            vllm_openai_kwargs['functions'] = functions

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In flo_ai/flo_ai/llm/openai_vllm.py around lines 27 to 32, the generate method
accepts a functions parameter but never uses it; update the method to pass the
functions argument into the underlying request/response logic exactly as the
stream method does (e.g., include functions in the payload or call that builds
the model request), ensure any schema/validation logic still applies, and keep
the signature and behavior consistent with stream so functions-based calls are
honored.
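
In isolation, the pattern the diff applies looks like the sketch below: build the payload first, then attach optional keys only when present. The key names follow the snippet above; the actual client call is omitted:

```python
from typing import Any, Dict, List, Optional

def build_payload(
    model: str,
    messages: List[Dict[str, Any]],
    temperature: float,
    functions: Optional[List[Dict[str, Any]]] = None,
    **kwargs: Any,
) -> Dict[str, Any]:
    payload: Dict[str, Any] = {
        'model': model,
        'messages': messages,
        'temperature': temperature,
        **kwargs,
    }
    # Attach functions only when provided, so callers that never pass
    # tools don't send a functions=None field to the server.
    if functions:
        payload['functions'] = functions
    return payload

print(build_payload('m', [{'role': 'user', 'content': 'hi'}], 0.2))
print(build_payload('m', [], 0.2, functions=[{'name': 'f'}]))
```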

Comment on lines +425 to +433
return [
FunctionMessage(
content=str(
'The final result based on the tool executions is: \n'
+ str(function_response)
),
name=function_name,
)
]

⚠️ Potential issue | 🔴 Critical

function_response and function_name may be undefined.

This fallback return references function_response and function_name, which are only defined inside the tool execution loop. If no tool calls were made (e.g., tool_call_count is 0 when entering line 402), these variables will be undefined, causing a NameError.

Consider returning the conversation history instead, or handling the case where no tools were called:

-                return [
-                    FunctionMessage(
-                        content=str(
-                            'The final result based on the tool executions is: \n'
-                            + str(function_response)
-                        ),
-                        name=function_name,
-                    )
-                ]
+                # No assistant message extracted, return conversation history
+                return self.conversation_history

Alternatively, initialize sentinel values before the loop.

🤖 Prompt for AI Agents
In flo_ai/flo_ai/models/agent.py around lines 425 to 433, the fallback return
uses function_response and function_name which can be undefined if no tool calls
occurred; fix by either initializing function_response and function_name to
sentinel defaults before the tool-execution loop (e.g., None or empty strings)
and branch the fallback to produce a safe message when they are unset, or
short-circuit earlier and return the conversation history (or a clear
FunctionMessage saying "no tools were called") when tool_call_count is 0; ensure
the final return never references uninitialized variables by checking their
existence or using the initialized sentinels.
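
A self-contained sketch of the sentinel approach; tool execution is stubbed, only the control flow matters:

```python
from typing import Any, Dict, List, Optional

def run_tools(tool_calls: List[Dict[str, Any]],
              history: List[str]) -> List[str]:
    # Sentinels bound before the loop: the fallback below can never hit
    # an unbound name, even when tool_calls is empty.
    function_name: Optional[str] = None
    function_response: Any = None

    for call in tool_calls:
        function_name = call['name']
        function_response = call['result']  # stand-in for real execution

    if function_name is None:
        return history  # no tools ran; return the conversation as-is

    return [
        f'The final result based on the tool executions is: \n{function_response}'
    ]

print(run_tools([], ['earlier messages']))              # fallback path
print(run_tools([{'name': 'add', 'result': 3}], []))    # normal path
```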

         self.conversation_history = []
 
-    async def _get_message_history(self, variables: Optional[Dict[str, Any]] = None):
+    async def _get_message_history(self, variables: Dict[str, Any] = {}):

⚠️ Potential issue | 🟠 Major

Mutable default argument {} is a Python anti-pattern.

Using a mutable default argument like {} can cause subtle bugs. The same dictionary object is reused across all calls that don't provide an explicit variables argument. If the dictionary is ever mutated within the function, those mutations persist across calls.

Apply this fix using None with a runtime default:

-    async def _get_message_history(self, variables: Dict[str, Any] = {}):
+    async def _get_message_history(self, variables: Optional[Dict[str, Any]] = None):
+        if variables is None:
+            variables = {}
         message_history = []

This requires adding Optional to the imports if not already present.

🤖 Prompt for AI Agents
In flo_ai/flo_ai/models/base_agent.py around line 87, the method signature uses a
mutable default argument async def _get_message_history(self, variables:
Dict[str, Any] = {}): which can cause state to persist across calls; change the
default to None (variables: Optional[Dict[str, Any]] = None) and inside the
function set variables = {} if variables is None. Also ensure Optional is
imported from typing (add it to the imports if missing).
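
The shared-state behavior is easy to observe in isolation:

```python
from typing import Any, Dict, Optional

def buggy(key: str, variables: Dict[str, Any] = {}) -> Dict[str, Any]:
    variables[key] = True          # mutates the single shared default
    return variables

print(buggy('a'))  # {'a': True}
print(buggy('b'))  # {'a': True, 'b': True}  <- 'a' leaked from the first call

def safe(key: str, variables: Optional[Dict[str, Any]] = None) -> Dict[str, Any]:
    if variables is None:
        variables = {}             # fresh dict on every call
    variables[key] = True
    return variables

print(safe('a'))   # {'a': True}
print(safe('b'))   # {'b': True}
```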
