fix: add ty and fix types #173
base: develop
Conversation
Walkthrough

This pull request refactors the framework's type system and APIs across multiple layers. Key changes include updating example files to pass string inputs instead of lists.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant AriumBuilder
    participant Arium
    participant Agent
    participant Memory
    User->>AriumBuilder: build_and_run(inputs: str)
    AriumBuilder->>AriumBuilder: build() if needed
    AriumBuilder->>Arium: run(inputs: str, variables: Dict)
    Arium->>Arium: _resolve_inputs(inputs: str) → UserMessage
    Arium->>Memory: add(MessageMemoryItem)
    Arium->>Agent: run(messages: List[BaseMessage])
    Agent->>Agent: _run_conversational() | _run_with_tools()
    Agent->>Memory: query history
    Agent-->>Arium: List[BaseMessage]
    Arium->>Memory: add(MessageMemoryItem) for results
    Arium-->>AriumBuilder: execution result
    AriumBuilder-->>User: final output
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes. Areas requiring extra attention:
Pre-merge checks and finishing touches

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
Actionable comments posted: 12
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
flo_ai/examples/arium_yaml_example.py (1)
557-558: Subscripting `enumerate` as `enumerate[MessageMemoryItem]` is unnecessary and error-prone.

The expression `enumerate[MessageMemoryItem](result)` subscripts `enumerate` at a call site. On older Python versions this raises `TypeError: 'type' object is not subscriptable` at runtime, and even where newer versions tolerate it, the subscript adds nothing and is misleading. Apply this fix:

```diff
- for i, message in enumerate[MessageMemoryItem](result):
+ for i, message in enumerate(result):
```

Note: This same issue appears at lines 587, 609, 637, 668, 700, 723, and 833.
flo_ai/flo_ai/arium/builder.py (1)
642-644: Variable shadowing: `agents` shadows the function parameter.

The local variable `agents` at line 643 shadows the `agents` parameter from the `from_yaml` method signature at line 291. This can cause confusion and bugs. Apply this diff to rename the local variable:

```diff
 elif router_type == 'plan_execute':
-    agents = router_config.get('agents', {})
-    if not agents:
+    plan_agents = router_config.get('agents', {})
+    if not plan_agents:
         raise ValueError(
             f'Plan-Execute router {router_name} must specify agents'
         )
     router_fn = create_llm_router(
         router_type='plan_execute',
-        agents=agents,
+        agents=plan_agents,
         llm=router_llm,
         **settings,
     )
```
🧹 Nitpick comments (9)
flo_ai/examples/vllm_agent_usage.py (1)
67-69: Avoid repeating the same `vllm_base_url` validation in every example.

The three blocks repeat the same guard logic already used elsewhere in this file. Consider centralizing this into a small helper (e.g. `def ensure_vllm_base_url() -> str: ...`) or a single check in `main()` so that:

- The validation logic and error message live in one place.
- Future changes to how the base URL is resolved or validated don't require touching four call sites.

This keeps the examples a bit cleaner while preserving the same behavior.
Also applies to: 123-125, 222-224
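The suggested helper might look like this. This is only a sketch: the function name follows the suggestion above, `VLLM_BASE_URL` is the environment variable the examples reference, and the exact error wording is illustrative:

```python
import os


def ensure_vllm_base_url() -> str:
    # Single place for the guard logic and error message shared by all
    # vLLM examples; call sites just use the returned URL.
    base_url = os.environ.get('VLLM_BASE_URL')
    if not base_url:
        raise RuntimeError(
            'VLLM_BASE_URL is not set; export it before running the vLLM examples.'
        )
    return base_url
```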
flo_ai/flo_ai/utils/variable_extractor.py (1)
110-137: Avoid a mutable default for `variables` and confirm the new empty-text behavior.

Two points here:

- The mutable default `{}` for `variables` is a classic Python footgun, even if the function doesn't currently mutate it. Safer to use `None` and normalize:

```diff
-def resolve_variables(text: str, variables: Dict[str, Any] = {}) -> str:
+def resolve_variables(text: str, variables: Dict[str, Any] | None = None) -> str:
@@
     if not text:
         raise ValueError('Text is required to resolve variables')
+
+    if variables is None:
+        variables = {}
```

This keeps runtime semantics the same while removing the mutable default.

- `if not text: raise ValueError(...)` is a behavior change: previously empty strings (or other falsy values) likely passed through. Please confirm all call sites are prepared for this exception, or consider only rejecting `None` and treating `""` as a valid no-op if backward compatibility is important.

flo_ai/flo_ai/builder/agent_builder.py (1)
189-201: Signature and docstring mostly consistent; consider a minor docstring tweak.

The new `yaml_str: Optional[str]` and `yaml_file: Optional[str]` parameters give a clean, backwards-compatible way to support both string and file inputs. Types and defaults look good and should not break existing positional callers.

One nit: the leading docstring line still says "from a YAML configuration string", even though you now also support files. Consider updating it for clarity, e.g.:

```diff
-    """Create an agent builder from a YAML configuration string
+    """Create an agent builder from a YAML configuration string or file
```

flo_ai/examples/arium_examples.py (1)
112-112: Inconsistent input format across examples.

This call uses a plain string input, but `example_linear_workflow` (line 50) and `example_convenience_function` (line 201) still use the `[UserMessage(...)]` list format. If both formats are supported, consider adding a comment clarifying this, or standardize the examples for consistency.

flo_ai/flo_ai/models/base_agent.py (2)
79-79: Unnecessary `cast()` when extending a list.

The `cast()` is redundant here. If `input_message` is already typed as `List[BaseMessage]`, extending the list doesn't require casting. The type checker should infer the correct type from the `isinstance` check.

```diff
 if isinstance(input_message, list):
-    self.conversation_history.extend(cast(List[BaseMessage], input_message))
+    self.conversation_history.extend(input_message)
```
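For illustration, the narrowing pattern in isolation (a self-contained sketch with simplified types and hypothetical names, not the actual `BaseAgent` code):

```python
from typing import List, Union


def extend_history(history: List[str], input_message: Union[str, List[str]]) -> List[str]:
    if isinstance(input_message, list):
        # isinstance() already narrows input_message to List[str] in this
        # branch, so no cast() is needed for the type checker.
        history.extend(input_message)
    else:
        history.append(input_message)
    return history
```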
105-105: Type ignore comments may mask underlying type issues.

While `# type: ignore` suppresses the warnings, consider whether the method signatures in `BaseLLM` could be updated to accept `MediaMessageContent` directly, rather than silencing the type checker. If the types are intentionally broader, consider using a `cast()` or `assert` for documentation purposes.

Also applies to: 113-113
flo_ai/flo_ai/llm/vertexai_llm.py (1)
8-26: `base_url` parameter is accepted but never used.

The `base_url` parameter is defined in the constructor signature but is neither stored as an instance attribute nor passed to the `genai.Client`. If this parameter is intentionally unused for VertexAI (since it uses project/location instead), consider documenting this or removing it from the signature to avoid confusion.

If `base_url` should be ignored for VertexAI, add a comment:

```diff
 def __init__(
     self,
     model: str = 'gemini-2.5-flash',
     temperature: float = 0.7,
     api_key: Optional[str] = None,
-    base_url: Optional[str] = None,
+    base_url: Optional[str] = None,  # Unused; VertexAI uses project/location instead
     project: Optional[str] = None,
     location: Optional[str] = None,
     **kwargs,
 ):
```

flo_ai/flo_ai/models/agent.py (1)
58-58: Explicit `str()` cast is safe but may mask type issues.

If `enhanced_prompt` is already a string, this is redundant. If it's an `AssistantMessage`, the `str()` representation may not be the expected content. Consider extracting the content explicitly:

```python
if isinstance(enhanced_prompt, AssistantMessage):
    enhanced_prompt = enhanced_prompt.content

super().__init__(
    name=name,
    system_prompt=enhanced_prompt,
    ...
)
```

flo_ai/flo_ai/arium/base.py (1)
23-23: Consider using consistent dict instantiation syntax.

Line 23 uses `dict[str, AriumNodeType]()` for initialization while line 24 uses the same pattern. This syntax is valid in Python 3.9+, but mixing `Dict` (from `typing`) with lowercase `dict` in the same file could be confusing. For consistency, consider:

```diff
-        self.nodes: Dict[str, AriumNodeType] = dict[str, AriumNodeType]()
-        self.edges: Dict[str, Edge] = dict[str, Edge]()
+        self.nodes: Dict[str, AriumNodeType] = {}
+        self.edges: Dict[str, Edge] = {}
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`flo_ai/uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (39)
- flo_ai/examples/arium_examples.py (6 hunks)
- flo_ai/examples/arium_linear_usage.py (2 hunks)
- flo_ai/examples/arium_yaml_example.py (3 hunks)
- flo_ai/examples/custom_plan_execute_demo.py (1 hunks)
- flo_ai/examples/example_graph_visualization.py (2 hunks)
- flo_ai/examples/flo_tool_example.py (1 hunks)
- flo_ai/examples/llm_router_example.py (5 hunks)
- flo_ai/examples/multi_tool_example.py (1 hunks)
- flo_ai/examples/simple_working_demo.py (4 hunks)
- flo_ai/examples/simple_yaml_workflow.py (1 hunks)
- flo_ai/examples/telemetry_example.py (1 hunks)
- flo_ai/examples/variables_workflow_example.py (2 hunks)
- flo_ai/examples/variables_workflow_yaml_example.py (2 hunks)
- flo_ai/examples/vertexai_agent_example.py (1 hunks)
- flo_ai/examples/vllm_agent_usage.py (4 hunks)
- flo_ai/flo_ai/arium/arium.py (8 hunks)
- flo_ai/flo_ai/arium/base.py (6 hunks)
- flo_ai/flo_ai/arium/builder.py (14 hunks)
- flo_ai/flo_ai/arium/llm_router.py (12 hunks)
- flo_ai/flo_ai/arium/memory.py (7 hunks)
- flo_ai/flo_ai/arium/models.py (1 hunks)
- flo_ai/flo_ai/arium/nodes.py (4 hunks)
- flo_ai/flo_ai/builder/agent_builder.py (3 hunks)
- flo_ai/flo_ai/formatter/yaml_format_parser.py (6 hunks)
- flo_ai/flo_ai/helpers/llm_factory.py (3 hunks)
- flo_ai/flo_ai/llm/anthropic_llm.py (2 hunks)
- flo_ai/flo_ai/llm/base_llm.py (3 hunks)
- flo_ai/flo_ai/llm/gemini_llm.py (4 hunks)
- flo_ai/flo_ai/llm/ollama_llm.py (1 hunks)
- flo_ai/flo_ai/llm/openai_llm.py (3 hunks)
- flo_ai/flo_ai/llm/openai_vllm.py (2 hunks)
- flo_ai/flo_ai/llm/rootflo_llm.py (3 hunks)
- flo_ai/flo_ai/llm/vertexai_llm.py (1 hunks)
- flo_ai/flo_ai/models/agent.py (5 hunks)
- flo_ai/flo_ai/models/base_agent.py (4 hunks)
- flo_ai/flo_ai/telemetry/telemetry.py (1 hunks)
- flo_ai/flo_ai/tool/flo_tool.py (3 hunks)
- flo_ai/flo_ai/utils/variable_extractor.py (2 hunks)
- flo_ai/pyproject.toml (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (16)
flo_ai/examples/vertexai_agent_example.py (2)
- flo_ai/examples/flo_tool_example.py (1): `get_weather` (64-80)
- flo_ai/examples/multi_tool_example.py (1): `get_weather` (68-83)
flo_ai/examples/multi_tool_example.py (3)
- flo_ai/examples/flo_tool_example.py (1): `get_weather` (64-80)
- flo_ai/examples/vertexai_agent_example.py (1): `get_weather` (48-51)
- flo_ai/examples/tool_usage.py (1): `get_weather` (28-30)
flo_ai/examples/flo_tool_example.py (2)
- flo_ai/examples/multi_tool_example.py (1): `get_weather` (68-83)
- flo_ai/examples/vertexai_agent_example.py (1): `get_weather` (48-51)
flo_ai/examples/telemetry_example.py (1)
- flo_ai/flo_ai/arium/arium.py (1): `run` (34-177)
flo_ai/examples/llm_router_example.py (2)
- flo_ai/flo_ai/arium/memory.py (1): `BaseMemory` (106-130)
- flo_ai/flo_ai/arium/builder.py (1): `build_and_run` (243-261)
flo_ai/examples/arium_linear_usage.py (1)
- flo_ai/flo_ai/arium/builder.py (1): `build_and_run` (243-261)
flo_ai/examples/simple_working_demo.py (2)
- flo_ai/flo_ai/arium/memory.py (5): `MessageMemory` (133-153), `MessageMemoryItem` (22-33), `add` (108-109), `add` (143-147), `add` (165-166)
- flo_ai/flo_ai/models/chat_message.py (1): `UserMessage` (65-70)
flo_ai/flo_ai/models/agent.py (1)
- flo_ai/flo_ai/models/chat_message.py (2): `BaseMessage` (49-52), `FunctionMessage` (85-102)
flo_ai/examples/example_graph_visualization.py (1)
- flo_ai/flo_ai/llm/base_llm.py (7): `BaseLLM` (9-143), `generate` (23-30), `stream` (33-40), `get_message_content` (104-106), `format_tool_for_llm` (109-111), `format_tools_for_llm` (114-116), `format_image_in_message` (119-121)
flo_ai/flo_ai/arium/arium.py (2)
- flo_ai/flo_ai/arium/memory.py (2): `MessageMemory` (133-153), `MessageMemoryItem` (22-33)
- flo_ai/flo_ai/utils/variable_extractor.py (1): `resolve_variables` (110-137)
flo_ai/flo_ai/llm/base_llm.py (7)
- flo_ai/flo_ai/llm/anthropic_llm.py (1): `format_image_in_message` (241-243)
- flo_ai/flo_ai/llm/gemini_llm.py (1): `format_image_in_message` (245-263)
- flo_ai/flo_ai/llm/ollama_llm.py (1): `format_image_in_message` (166-168)
- flo_ai/flo_ai/llm/openai_llm.py (1): `format_image_in_message` (186-210)
- flo_ai/flo_ai/llm/rootflo_llm.py (1): `format_image_in_message` (333-337)
- flo_ai/tests/unit-tests/test_base_llm.py (1): `format_image_in_message` (58-59)
- flo_ai/flo_ai/models/chat_message.py (1): `ImageMessageContent` (21-28)
flo_ai/flo_ai/arium/base.py (4)
- flo_ai/flo_ai/arium/nodes.py (3): `AriumNode` (12-50), `ForEachNode` (53-155), `FunctionNode` (158-206)
- flo_ai/flo_ai/arium/protocols.py (1): `ExecutableNode` (5-39)
- flo_ai/flo_ai/tool/base_tool.py (1): `Tool` (12-50)
- flo_ai/flo_ai/arium/models.py (4): `StartNode` (15-17), `EndNode` (21-23), `Edge` (27-34), `default_router` (7-11)
flo_ai/flo_ai/llm/rootflo_llm.py (3)
- flo_ai/flo_ai/llm/base_llm.py (3): `generate` (23-30), `stream` (33-40), `format_image_in_message` (119-121)
- flo_ai/flo_ai/llm/gemini_llm.py (3): `generate` (48-148), `stream` (151-206), `format_image_in_message` (245-263)
- flo_ai/flo_ai/llm/openai_llm.py (3): `generate` (41-117), `stream` (120-150), `format_image_in_message` (186-210)
flo_ai/flo_ai/arium/builder.py (4)
- flo_ai/flo_ai/arium/memory.py (5): `MessageMemory` (133-153), `MessageMemoryItem` (22-33), `get` (112-113), `get` (149-153), `get` (168-175)
- flo_ai/flo_ai/arium/nodes.py (4): `AriumNode` (12-50), `run` (35-50), `run` (98-111), `run` (179-206)
- flo_ai/flo_ai/arium/base.py (1): `add_edge` (93-164)
- flo_ai/flo_ai/models/chat_message.py (2): `UserMessage` (65-70), `BaseMessage` (49-52)
flo_ai/flo_ai/arium/llm_router.py (1)
- flo_ai/flo_ai/arium/memory.py (5): `MessageMemory` (133-153), `MessageMemoryItem` (22-33), `get` (112-113), `get` (149-153), `get` (168-175)

flo_ai/flo_ai/arium/memory.py (1)
- flo_ai/flo_ai/models/chat_message.py (1): `BaseMessage` (49-52)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: build
🔇 Additional comments (77)
flo_ai/examples/vertexai_agent_example.py (1)
48-51: Consistent optional `country` default; behavior preserved.

Switching `country`'s default to `''` keeps the `location` formatting logic identical for omitted values (`if country` still treats it as falsy) while aligning this example with the other weather tools' signatures and the Tool schema for an optional string. No issues from a typing or runtime perspective.

flo_ai/pyproject.toml (1)
50-50: Verify the new `ty` dependency: confirm that the pinned pre-release version constraint resolves on PyPI, and check whether `ty` is referenced elsewhere in `flo_ai/pyproject.toml`.

flo_ai/examples/vllm_agent_usage.py (1)
18-20: Early guard for missing vLLM base URL is a good addition.

This check prevents confusing runtime failures by failing fast when `VLLM_BASE_URL` is not configured; the error message is clear and close to where the LLM is constructed.

flo_ai/examples/telemetry_example.py (2)
185-185: LGTM!

The input type change from list to string aligns with the PR's goal of normalizing input shapes across the codebase. This makes the API more intuitive.
180-182: The `workflow.name` assignment is fully supported and working as intended.

The `type: ignore` comment is appropriate for suppressing type checker warnings about dynamically added attributes. The Arium class deliberately allows setting the `name` attribute after instantiation (see `Arium.run()` line 75: `getattr(self, 'name', 'unnamed_workflow')`), and this pattern is used throughout the codebase. The attribute is not read-only or incorrectly typed; it is simply not initialized in `__init__`, which is a valid Python pattern.

flo_ai/examples/multi_tool_example.py (1)
68-68: LGTM!

Changing the default from `None` to `''` is a good improvement that simplifies the conditional logic on line 82 and maintains consistency across the codebase.
64-64: LGTM!

This change maintains consistency with the same function signature updates in other example files and simplifies downstream string handling.
flo_ai/flo_ai/telemetry/telemetry.py (1)
142-143: Essential type annotation fix.

Using `Optional[str]` instead of `str = None` is the correct way to type optional string parameters. The previous annotations were technically incorrect since `str` cannot be `None`.

flo_ai/flo_ai/arium/memory.py (4)
69-71: Excellent defensive coding with the walrus operator.

The walrus operator combined with a None check prevents a potential `AttributeError` from accessing `.status` on a None value. This is a good improvement over the previous implementation.
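The pattern can be sketched in isolation; the class and field names below are stand-ins for the memory types, not the actual implementation:

```python
from typing import Dict, Optional


class Step:
    def __init__(self, status: Optional[str]) -> None:
        self.status = status


def step_status(steps: Dict[str, Step], key: str) -> Optional[str]:
    # The walrus operator binds the lookup result, and the None check
    # guards the .status access, avoiding an AttributeError on a miss.
    if (step := steps.get(key)) is not None:
        return step.status
    return None
```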
108-109: LGTM!

Renaming the parameter from `m` to `message` improves code readability and makes the intent clearer.
23-23: The review comment's concern is not supported by the codebase evidence.

All 6 `MessageMemoryItem` instantiations found in the repository explicitly provide the `result` parameter:

- 2 instances in test files (test_llm_router.py)
- 1 instance in example code (simple_working_demo.py)
- 3 instances in the core arium.py implementation

No code in this repository instantiates `MessageMemoryItem` without providing the `result` parameter, so making it required does not break existing code.
83-88: All call sites of `mark_step_completed` and `mark_step_failed` are compliant with the required parameters.

The methods are not called anywhere in the codebase, so there are no callers to update. This is not a breaking change.
flo_ai/flo_ai/formatter/yaml_format_parser.py (3)
92-94: LGTM!

Using `Literal.__class_getitem__()` for runtime construction of Literal types is the correct approach when building types dynamically.
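As a hedged illustration of runtime `Literal` construction: subscripting with a tuple is equivalent to listing the values inline, which lets the allowed values come from runtime data. `make_literal` below is a hypothetical helper, not the parser's actual method:

```python
from typing import Literal, get_args


def make_literal(values):
    # Literal[('a', 'b')] is the same as Literal['a', 'b'], so a tuple
    # built from runtime data (e.g. a YAML enum list) can be the subscript.
    return Literal[tuple(values)]


Color = make_literal(['red', 'green', 'blue'])
```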
96-101: LGTM!

Extracting array type creation into a dedicated method improves code organization and makes the type resolution logic clearer.
158-159: LGTM!

Adding explicit `encoding='utf-8'` is a best practice that ensures consistent file reading across different platforms and locales.

Also applies to: 203-204
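A minimal sketch of the practice (the helper name is illustrative):

```python
def read_text_utf8(path: str) -> str:
    # Explicit encoding avoids depending on the platform/locale default,
    # which is not UTF-8 everywhere (e.g. some Windows configurations).
    with open(path, 'r', encoding='utf-8') as f:
        return f.read()
```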
flo_ai/flo_ai/arium/models.py (1)
32-33: LGTM!

The `type: ignore` comment is appropriate here since accessing `.__name__` on `functools.partial.func` is valid at runtime but may not be fully typed from the type checker's perspective.

flo_ai/examples/simple_yaml_workflow.py (1)
107-116: LGTM!

Converting `user_input` from a list to a plain string simplifies the code and aligns with the PR's goal of normalizing input types across the framework. The updated print statement correctly reflects this change.

flo_ai/examples/variables_workflow_example.py (1)
145-148: String `inputs` usage matches the updated `AriumBuilder.build_and_run` API.

Both workflow invocations now pass a plain string to `build_and_run(inputs=...)`, which aligns with the `inputs: List[BaseMessage] | str` signature and is equivalent to the prior single-element list usage. No behavior change, just cleaner API use.

Also applies to: 207-210
flo_ai/examples/example_graph_visualization.py (1)
11-12: MockLLM correctly implements the `BaseLLM` interface for the fallback path.

Importing `BaseLLM` and having `MockLLM` subclass it with concrete `generate`, `stream`, `get_message_content`, and tool/image formatting methods cleanly satisfies the abstract interface and keeps the visualization example working even without OpenAI. The minimal implementations are appropriate for this mock context.

Also applies to: 44-65
flo_ai/flo_ai/llm/ollama_llm.py (1)
15-20: `api_key` type narrowed to `Optional[str]` without runtime impact.

Updating `api_key` to `Optional[str] = None` in `OllamaLLM.__init__` matches the `BaseLLM` constructor and other LLMs in the codebase, improving type correctness without changing behavior.

flo_ai/flo_ai/helpers/llm_factory.py (1)
141-145: Stricter config validation looks correct but may break existing setups.

The new checks requiring:

- `base_url` for VertexAI,
- `api_key` for `openai_vllm`, and
- `base_url` for RootFlo (even when using an access token),

tighten configuration correctness and should surface misconfigured environments early with clear error messages. However, these will fail any existing configs/tests that previously relied on implicit defaults or partial settings. Please confirm that:

- all production and example configs set these fields (or the relevant env vars), and
- documentation for these providers is updated to call out the now-required parameters.

Also applies to: 175-179, 215-219
flo_ai/examples/variables_workflow_yaml_example.py (1)
200-203: YAML workflow `inputs` now correctly use single-string prompts.

Both multi-agent YAML workflows now call `build_and_run` with `inputs` as a plain string rather than a single-element list, matching the updated builder API and keeping behavior equivalent.

Also applies to: 290-293
flo_ai/examples/llm_router_example.py (1)
100-108: Router examples now use string inputs consistently with builder/run APIs.

- All `build_and_run` calls now take a single string (or multi-line string) instead of a list of strings, and the task-classifier example passes `task` directly. This matches the `AriumBuilder.build_and_run` signature and keeps the semantics of the previous examples intact.
- The `content_workflow_router(memory: BaseMemory) -> Literal['editor', 'reviewer']  # type: ignore` declaration keeps the router's intent and return type explicit while pragmatically silencing the decorator's typing mismatch, which is acceptable for example code.

No functional issues spotted with these adjustments.
Also applies to: 137-137, 173-187, 305-328, 387-397
flo_ai/examples/custom_plan_execute_demo.py (1)
100-107: Ensure `arium.run` supports string inputs in line with other examples.

`task` is now a plain string passed directly to `arium.run(task)`, which is cleaner and consistent with other examples that send string inputs into Arium flows. This depends on `arium.run` handling `str` appropriately (e.g., wrapping it as a `UserMessage` internally). Please confirm that `arium.run` has been updated to accept `str` inputs (and is covered by tests) so this example doesn't regress at runtime.

flo_ai/examples/arium_linear_usage.py (1)
29-37: Linear usage examples now align with single-string `build_and_run` inputs.

Both AriumBuilder flows now pass a single string (or multi-line string) into `build_and_run`, matching the updated `inputs` type while preserving the original behavior of a single initial user turn. The changes are consistent with other examples in this PR.

Also applies to: 84-96
flo_ai/flo_ai/tool/flo_tool.py (2)
49-49: LGTM - type ignore comments are appropriate for dynamic attribute assignment.

Dynamic attribute assignment to function objects is a valid Python pattern, and the `# type: ignore[attr-defined]` annotations correctly suppress the type checker warnings while documenting the intentional dynamic behavior.
# type: ignore[attr-defined]annotations correctly suppress the type checker warnings while documenting the intentional dynamic behavior.Also applies to: 62-62, 65-65
82-89: Defensive `getattr` usage is appropriate, though unlikely to be needed for standard callables.

The fallback to `'unknown'` handles edge cases where the callable might not have `__name__` or `__doc__` (e.g., certain wrapped objects or proxies). For standard functions, these attributes are always present, but this defensive approach improves resilience.

flo_ai/flo_ai/llm/anthropic_llm.py (2)
21-22: LGTM - optional parameters allow environment-based configuration.

Making `api_key` and `base_url` optional with `None` defaults allows the `AsyncAnthropic` client to fall back to environment variables (`ANTHROPIC_API_KEY`, etc.), which is the standard pattern.

178-178: LGTM - explicit type annotation improves code clarity.

Adding the `Dict[str, Any]` type annotation to `anthropic_kwargs` makes the code more explicit and helps with static analysis.

flo_ai/examples/arium_examples.py (2)
61-76: LGTM - agent initialization is now more explicit.

The updated Agent initialization with explicit `system_prompt` and `llm` parameters improves clarity and consistency with the new API design.
232-244: The review comment is incorrect: the codebase already requires Python 3.10+.

The `pyproject.toml` specifies `requires-python = ">=3.10,<4.0"` and `setup.py` specifies `python_requires='>=3.10'`. Since the codebase's minimum supported Python version is 3.10, the union syntax `List[BaseMessage] | str` is appropriate and does not require migration to `Union[List[BaseMessage], str]`.

flo_ai/examples/arium_yaml_example.py (2)
583-583: LGTM - simplified to single string input.

The change from list-based input to a single string aligns with the PR's goal of standardizing the input API.
823-830: LGTM - multi-line string input is cleaner.

Using a triple-quoted multi-line string instead of a list of concatenated strings improves readability.

flo_ai/flo_ai/models/base_agent.py (1)
flo_ai/flo_ai/models/base_agent.py (1)
45-45: Return type change fromstrtoList[BaseMessage]is not a breaking change within this codebase.The
base_agent.pyfile is a new addition to this commit, not a modification of existing code. Both the abstract method and all implementing subclasses (such asAgent) were introduced together with the correctList[BaseMessage]return type. All examples and usage patterns already expect and handle this return type, indicating the entire codebase was designed consistently around this interface from the start.flo_ai/flo_ai/llm/openai_vllm.py (1)
11-11: LGTM on the Optional type annotation.

Making `api_key` optional aligns with the base class signature and allows flexibility for configurations where the API key might be provided via environment variables.

flo_ai/flo_ai/llm/base_llm.py (3)
10-16: LGTM on the constructor signature update.

The `Optional[str]` type for `api_key` properly reflects that the API key may be optional (e.g., when provided via environment variables or when using local models).
47-56: Good defensive programming for function call extraction.

The added guards ensure both `name` and `arguments` attributes exist before constructing the result. This prevents potential `AttributeError` exceptions when dealing with incomplete function call objects.
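A sketch of the guard pattern under these assumptions: the attribute names follow the comment above, while the function name and return shape are illustrative rather than the actual `BaseLLM` code:

```python
from types import SimpleNamespace
from typing import Any, Dict, Optional


def extract_function_call(call: Any) -> Optional[Dict[str, Any]]:
    # Check both attributes before building the result, so a partially
    # populated call object yields None instead of raising AttributeError.
    name = getattr(call, 'name', None)
    arguments = getattr(call, 'arguments', None)
    if name is None or arguments is None:
        return None
    return {'name': name, 'arguments': arguments}
```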
118-121: Return type broadened appropriately.

The `Any` return type correctly reflects that subclass implementations return different types: `dict` (OpenAI), `types.Part` (Gemini), `str` (Anthropic/Ollama). This allows each LLM implementation to use its native format.

flo_ai/examples/simple_working_demo.py (4)
14-17: LGTM on the updated imports.

The imports align with the new memory model that uses `MessageMemoryItem` and `UserMessage` for type-safe message handling.
190-196: LGTM on memory item access patterns.

The access patterns (`msg.result.role`, `msg.result.content`, `result[-1].result.content`) correctly match the `MessageMemoryItem` structure where `result` is a `BaseMessage` with `role` and `content` attributes.
247-249: LGTM on MessageMemoryItem construction.

The usage correctly constructs `MessageMemoryItem` with `node='user'` and `result=UserMessage(content=...)`, matching the expected signature from the memory module.
179-179: `arium.run()` correctly accepts string input per its type signature.

The `run()` method signature at line 34 of `flo_ai/flo_ai/arium/arium.py` explicitly accepts `inputs: List[BaseMessage] | str`, which allows both list and string inputs. The call at line 179 in `simple_working_demo.py` passing a string `task` is valid and matches the updated API.

flo_ai/flo_ai/llm/vertexai_llm.py (1)
13-15: LGTM on the Optional type annotations.

The `Optional[str]` types for `base_url`, `project`, and `location` align with the broader type system updates across LLM implementations.

flo_ai/flo_ai/arium/nodes.py (3)
9-9: LGTM on the TYPE_CHECKING import.

Using `TYPE_CHECKING` to import `AriumNodeType` correctly avoids circular dependency issues at runtime while maintaining proper type hints.
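The pattern in isolation (the module path below is hypothetical; a quoted annotation keeps the name usable for type checkers without a runtime import):

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Evaluated only by static type checkers; never runs at runtime,
    # so it cannot introduce a circular import.
    from flo_ai.arium.base import AriumNodeType  # hypothetical path


def node_kind(node: 'AriumNodeType') -> str:
    # The quoted (forward-reference) annotation means AriumNodeType does
    # not need to be importable when this module is executed.
    return type(node).__name__
```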
60-63: LGTM on the broadened node type.

Changing the `execute_node` type to `AriumNodeType` allows the `ForEachNode` to accept any valid node type in the workflow, including `ExecutableNode`, `AriumNode`, `FunctionNode`, etc.
131-131: LGTM on the isinstance check.

The `isinstance(self.execute_node, AriumNode)` check properly narrows the type to access `AriumNode`-specific attributes like `arium.memory`.

flo_ai/flo_ai/llm/openai_llm.py (3)
21-23: LGTM on type signature updates.

The change to `Optional[str]` for `api_key` and `base_url` properly reflects that these can be nullable, aligning with the broader PR goal of consistent typing across LLM implementations.
31-36: Client initialization simplified.

The direct initialization of `AsyncOpenAI` with explicit parameters is cleaner and more maintainable. Note that passing `**kwargs` directly may forward unexpected parameters to the client. Ensure that the `**kwargs` passed to `AsyncOpenAI` are compatible with the client's accepted parameters; invalid kwargs could cause runtime errors.
155-157: Improved null-safety in `get_message_content`.

The explicit check for `response.content is not None` before accessing it prevents returning the string `"None"` when content is absent, which is a good defensive fix.

flo_ai/flo_ai/arium/arium.py (3)
54-56: LGTM on the explicit type annotation.

The inline type annotation `inputs: list[BaseMessage]` makes the code more readable and helps with type checking after the string-to-list conversion.
195-195: Verify the behavior change in `_emit_event`.

The condition now requires `events_filter` to be truthy (`if callback and events_filter and ...`). Previously, if `events_filter` was an empty list, events would still be checked against it; now an empty `events_filter` list will suppress all events. Confirm this is the intended behavior. If users pass `events_filter=[]` expecting no events, this is correct. However, if they expect the default behavior (all events), this could be a breaking change, since lines 68-69 set the default to all event types only when `None`.
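The distinction can be sketched with a hypothetical helper mirroring the condition (not the actual `_emit_event` code):

```python
from typing import Callable, List, Optional


def should_emit(callback: Optional[Callable],
                events_filter: Optional[List[str]],
                event: str) -> bool:
    # `and events_filter` makes an empty list falsy: it suppresses every
    # event. Only None is expected to mean "default to all events", which
    # the caller handles by substituting the full event list for None.
    return bool(callback and events_filter and event in events_filter)
```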
406-409: Type narrowing with `AriumNodeType` is a good improvement.

Using the union type alias `AriumNodeType` instead of listing all node types inline improves maintainability. However, the mutable default on line 409 should still be fixed.

flo_ai/flo_ai/llm/rootflo_llm.py (4)
196-197: Good defensive check for JWT generation.

This explicit check prevents a confusing error from the `jwt.encode` call if `app_secret` is `None`.
292-293: Defensive null check after initialization.

While `_ensure_initialized()` should set `_llm`, this explicit check provides a clear error message if something goes wrong.
321-337: Initialization guards on synchronous methods may fail.

`format_tool_for_llm`, `format_tools_for_llm`, and `format_image_in_message` are synchronous methods that check `_llm is None` but cannot call `_ensure_initialized()` (which is async). If these are called before `generate()` or `stream()`, they will raise `RuntimeError`. Consider documenting this requirement or deferring to the base class implementation when `_llm` is `None`. Alternatively, provide a synchronous initialization path.
298-310: Remove the `# type: ignore[override]` comment; `RootFloLLM.stream()` aligns with `BaseLLM.stream()`.

The original review comment's premise is incorrect: both `BaseLLM.stream()` and `RootFloLLM.stream()` have identical signatures with no `**kwargs` parameter, so the `# type: ignore[override]` comment appears unnecessary. Note: other LLM implementations (OpenAI, OpenAIVLLM, Gemini) incorrectly add `**kwargs` to their `stream()` methods, which violates the base class contract; RootFloLLM correctly adheres to the defined interface.

flo_ai/flo_ai/models/agent.py (2)
77-77: Return type change aligns with the broader PR refactoring.

Returning `List[BaseMessage]` instead of `str` provides richer context for downstream consumers, including the full conversation history.
204-206: Explicit return path added for loop exhaustion.

This ensures the method always returns a value even if the while loop completes without an early return (e.g., after max retries).
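The shape of the fix, sketched generically (the function name and retry policy below are illustrative, not the actual `Agent` code):

```python
from typing import Callable, Optional


def run_with_retries(attempt: Callable[[], Optional[str]],
                     max_retries: int = 3) -> Optional[str]:
    retries = 0
    while retries < max_retries:
        result = attempt()
        if result is not None:
            return result
        retries += 1
    # Explicit return after the loop: without it the function would fall
    # off the end and implicitly return None, which type checkers flag
    # when the annotation promises a value on every path.
    return None
```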
flo_ai/flo_ai/llm/gemini_llm.py (3)
25-25: LGTM on Optional `base_url`.

Consistent with the other LLM implementations in this PR.
245-260: Return type and validation improvements.

The return type change from `dict` to `types.Part` is more accurate. The added validation for `mime_type` when `base64` is provided prevents a confusing error from the Gemini SDK.
flo_ai/flo_ai/arium/base.py (4)
14-16: `AriumNodeType` definition looks good.

The type alias correctly consolidates all node types that can be used in the Arium workflow. This provides better type safety and clearer API contracts.
233-235: Type ignore comments are appropriate for optional visualization dependencies.

The `# type: ignore` comments are correctly applied for matplotlib and networkx imports, which are optional dependencies used only for visualization.
376-389: Path handling refactor is clean and appropriate.

Using `created_output_path` as a `Path` object and calling `.parent.mkdir(parents=True, exist_ok=True)` ensures the output directory exists before saving. The refactor maintains the same functionality while improving clarity.
96-96: Breaking change confirmed: the `to_nodes` parameter's default value was removed and all callers already pass it explicitly.

Verification shows the parameter at line 96 is indeed `List[str]` without a default value. Examining all 20+ `add_edge` calls across the codebase (unit tests, examples, builder, and configuration loaders) confirms every caller passes `to_nodes` explicitly; no code relies on a missing default value.

flo_ai/flo_ai/arium/llm_router.py (5)
104-106: Good improvement: `route` method now explicitly typed as async with Optional `execution_context`.

The signature change properly reflects that `execution_context` is optional and the method is asynchronous.
457-464: Conversation filtering logic correctly handles non-string content.

Filtering to only include messages where `msg.result.content` is a string prevents type errors when joining the conversation text.
467-471: Null-safe handling of `execution_context`.

The conditional check `if execution_context is not None` before calling `_get_next_step_in_pattern` prevents potential issues when `execution_context` is `None`.
948-951: Type ignore comments are necessary for dynamic Literal construction.

The `# type: ignore` comments are appropriate here since `Literal[option_names]` with a dynamic tuple is not statically verifiable by type checkers.
967-969: Good use of `cast` for dynamic attribute assignment.

Using `cast(Any, router_function)` properly handles the dynamic attribute assignment without triggering type checker errors.

flo_ai/flo_ai/arium/builder.py (7)
32-39: Type annotations properly updated for the new type system.

The updates to use `MessageMemory` and `AriumNodeType` align with the broader refactoring across the codebase.
251-260: Good improvement: Input handling now supports both list and single string inputs.

The logic correctly handles different input types, wrapping strings in `UserMessage` and validating list items.
267-269: Visualization now auto-builds if needed.

The change to build the arium if `self._arium` is `None` is a good UX improvement, allowing users to call `visualize()` without explicitly calling `build()` first.
452-453: Good defensive check for `yaml_file` parameter.

The explicit check and error message provide clearer feedback when the `yaml_file` is unexpectedly `None`.
541-545: Safe function registry access pattern.

The conditional check `if function_registry is not None` before calling `.get()` prevents `AttributeError` when `function_registry` is `None`.
914-918: Role extraction handles None values appropriately.

The conditional expression ensures `role` is always a string, defaulting to empty string when not provided.
960-977: Output schema handling with safe defaults.

The `Optional[Dict[str, Any]]` type for `output_schema` and the fallback to `{}` at line 977 ensure the agent builder receives a valid schema.
```diff
     self,
     inputs: List[BaseMessage] | str,
-    variables: Optional[Dict[str, Any]] = None,
+    variables: Dict[str, Any] = {},
```
Mutable default argument is a Python anti-pattern.

Using `{}` as a default argument creates a single shared dictionary instance that persists across all calls. If the dictionary is ever mutated, the mutation persists.

Apply this diff to fix the mutable default argument:
```diff
 async def run(
     self,
     inputs: List[BaseMessage] | str,
-    variables: Dict[str, Any] = {},
+    variables: Optional[Dict[str, Any]] = None,
     event_callback: Optional[Callable[[AriumEvent], None]] = None,
     events_filter: Optional[List[AriumEventType]] = None,
 ):
```

Then at the start of the method body:

```python
variables = variables if variables is not None else {}
```

The same issue exists on lines 204, 361, and 409.
🤖 Prompt for AI Agents
In flo_ai/flo_ai/arium/arium.py around lines 37, 204, 361 and 409, several
function/method parameters use a mutable default argument like {} which creates
a single shared dict across calls; change each default from {} to None (e.g.,
variables: Optional[Dict[str, Any]] = None) and at the start of each function
body set variables = variables if variables is not None else {} to ensure a
fresh dict per call, adding Optional to typing imports if needed.
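The pitfall being flagged is easy to reproduce in isolation. The sketch below is purely illustrative; the parameter name mirrors the reviewed `run` signature, but none of this is flo_ai code:

```python
from typing import Any, Dict, Optional


def run_buggy(variables: Dict[str, Any] = {}) -> Dict[str, Any]:
    # All calls that omit `variables` share this single dict object,
    # which is created once when the function is defined.
    variables['calls'] = variables.get('calls', 0) + 1
    return variables


def run_fixed(variables: Optional[Dict[str, Any]] = None) -> Dict[str, Any]:
    # A fresh dict is created on every call, so no state leaks.
    variables = variables if variables is not None else {}
    variables['calls'] = variables.get('calls', 0) + 1
    return variables


print(run_buggy()['calls'])  # 1
print(run_buggy()['calls'])  # 2: the mutation persisted across calls
print(run_fixed()['calls'])  # 1
print(run_fixed()['calls'])  # 1: each call starts clean
```

The shared-state bug only appears on the second call, which is why mutable defaults often survive casual testing.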
```diff
     event_callback: Optional[Callable[[AriumEvent], None]] = None,
     events_filter: Optional[List[AriumEventType]] = None,
-    variables: Optional[Dict[str, Any]] = None,
+    variables: Dict[str, Any] = {},
```
Mutable default argument: same issue.

Same anti-pattern as line 37. Use `Optional[Dict[str, Any]] = None` and initialize inside the method.
🤖 Prompt for AI Agents
In flo_ai/flo_ai/arium/arium.py around line 204, the function signature uses a
mutable default argument variables: Dict[str, Any] = {} which is an
anti-pattern; change the parameter to Optional[Dict[str, Any]] = None and inside
the function initialize it with an empty dict when None (e.g., if variables is
None: variables = {}), ensuring no shared mutable state across calls.
```diff
     self,
     inputs: List[BaseMessage],
-    variables: Dict[str, Any],
+    variables: Dict[str, Any] = {},
```
Mutable default argument: same issue.

Same anti-pattern as line 37.
🤖 Prompt for AI Agents
In flo_ai/flo_ai/arium/arium.py around line 361, the function parameter uses a
mutable default argument "variables: Dict[str, Any] = {}" which causes
shared-state bugs; change the signature to use None as the default (variables:
Optional[Dict[str, Any]] = None) and then inside the function initialize
variables = {} if variables is None (mirroring the fix applied at line 37) so
each call gets a fresh dict.
```diff
     self,
     inputs: List[BaseMessage] | str,
-    variables: Optional[Dict[str, Any]] = None,
+    variables: Dict[str, Any] = {},
```
Mutable default argument: `variables: Dict[str, Any] = {}`.

Using a mutable default argument like `{}` is a well-known Python anti-pattern. If the dictionary is ever mutated, the changes persist across calls.

Apply this diff to fix the mutable default argument:
```diff
 async def build_and_run(
     self,
     inputs: List[BaseMessage] | str,
-    variables: Dict[str, Any] = {},
+    variables: Optional[Dict[str, Any]] = None,
 ) -> List[MessageMemoryItem]:
     """Build the Arium and run it with the given inputs and optional runtime variables."""
     arium = self.build()
     new_inputs = []
+    if variables is None:
+        variables = {}
     if isinstance(inputs, list):
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
async def build_and_run(
    self,
    inputs: List[BaseMessage] | str,
    variables: Optional[Dict[str, Any]] = None,
) -> List[MessageMemoryItem]:
    """Build the Arium and run it with the given inputs and optional runtime variables."""
    arium = self.build()
    new_inputs = []
    if variables is None:
        variables = {}
    if isinstance(inputs, list):
```
🤖 Prompt for AI Agents
In flo_ai/flo_ai/arium/builder.py around line 246, the function signature uses a
mutable default argument variables: Dict[str, Any] = {}, which can retain
mutations across calls; change the signature to use variables:
Optional[Dict[str, Any]] = None and inside the function replace any use of the
default with variables = {} if variables is None (or variables = dict(variables)
if you need a copy), ensuring you create a new dict per call.
```diff
 def get_routing_prompt(
     self,
-    memory: MessageMemory,
+    memory: MessageMemory[MessageMemoryItem],
```
Invalid generic type annotation: `MessageMemory` is not a generic class.

Based on the provided context from flo_ai/flo_ai/arium/memory.py, `MessageMemory` is a plain class without generic type parameters. The annotation `MessageMemory[MessageMemoryItem]` will cause a `TypeError` at runtime.
Apply this diff to fix the type annotation:

```diff
 def get_routing_prompt(
     self,
-    memory: MessageMemory[MessageMemoryItem],
+    memory: MessageMemory,
     options: Dict[str, str],
     execution_context: Optional[dict] = None,
 ) -> str:
```

📝 Committable suggestion
```python
def get_routing_prompt(
    self,
    memory: MessageMemory,
    options: Dict[str, str],
    execution_context: Optional[dict] = None,
) -> str:
```
🤖 Prompt for AI Agents
In flo_ai/flo_ai/arium/llm_router.py around line 586, the parameter type uses an
invalid generic annotation MessageMemory[MessageMemoryItem]; replace that with
the non-generic class name MessageMemory (i.e., change memory:
MessageMemory[MessageMemoryItem] to memory: MessageMemory) so the annotation
matches the actual class definition and avoids a runtime TypeError.
```python
def create(yaml_dict: Optional[Dict] = None, yaml_path: Optional[str] = None):
    return FloYamlParser.Builder(yaml_dict=yaml_dict, yaml_path=yaml_path).build()
```
Confusing API: `FloJsonParser.create` accepts YAML parameters.

`FloJsonParser.create()` now accepts `yaml_dict` and `yaml_path` parameters, which is counterintuitive. This creates confusion between `FloJsonParser` and `FloYamlParser`.

Consider one of the following:

- Remove this method and require users to call `FloYamlParser.create()` directly for YAML inputs
- Rename the parameters to `dict` and `path` to be format-agnostic
- Add a clear deprecation notice if this is for backward compatibility
🤖 Prompt for AI Agents
In flo_ai/flo_ai/formatter/yaml_format_parser.py around lines 136-137,
FloJsonParser.create currently accepts yaml-specific parameters (yaml_dict,
yaml_path) which is confusing; change the API to be format-agnostic by renaming
those parameters to dict and path (update signature, docstring and internal
references) and update all call sites/tests to use the new names; if you must
preserve the old names for backward compatibility, add a thin wrapper that
accepts yaml_dict/yaml_path, issues a deprecation warning, maps them to
dict/path, and forwards to the new signature.
```python
self.client = AsyncAnthropic(
    api_key=self.api_key,
    base_url=base_url,
    default_headers=custom_headers,
    **kwargs,
)
```
Potential duplicate keyword argument conflict with `**kwargs`.

If `kwargs` contains keys like `api_key`, `base_url`, or `default_headers`, Python will raise a `TypeError: got multiple values for keyword argument`. Consider filtering out these keys from `kwargs` before spreading.
```diff
+# Filter out keys that are explicitly passed to avoid duplicates
+filtered_kwargs = {k: v for k, v in kwargs.items()
+                   if k not in ('api_key', 'base_url', 'default_headers')}
 self.client = AsyncAnthropic(
     api_key=self.api_key,
     base_url=base_url,
     default_headers=custom_headers,
-    **kwargs,
+    **filtered_kwargs,
 )
```

📝 Committable suggestion
```python
# Filter out keys that are explicitly passed to avoid duplicates
filtered_kwargs = {k: v for k, v in kwargs.items()
                   if k not in ('api_key', 'base_url', 'default_headers')}
self.client = AsyncAnthropic(
    api_key=self.api_key,
    base_url=base_url,
    default_headers=custom_headers,
    **filtered_kwargs,
)
```
🤖 Prompt for AI Agents
In flo_ai/flo_ai/llm/anthropic_llm.py around lines 30 to 35, passing **kwargs
directly into AsyncAnthropic can cause a TypeError if kwargs contains keys like
api_key, base_url, or default_headers; remove those keys from kwargs before
calling AsyncAnthropic (e.g., create a filtered_kwargs that excludes 'api_key',
'base_url', and 'default_headers' or pop them out) and pass filtered_kwargs
instead so you don’t supply duplicate keyword arguments.
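The failure mode and the fix can be demonstrated without the SDK. `FakeClient` below is a stand-in for `AsyncAnthropic` (only its constructor shape matters), and the header value is illustrative:

```python
from typing import Any, Dict, Optional


class FakeClient:
    """Stand-in for AsyncAnthropic; only the constructor shape matters."""

    def __init__(self, api_key: str, base_url: Optional[str] = None,
                 default_headers: Optional[Dict[str, str]] = None,
                 timeout: float = 30.0) -> None:
        self.timeout = timeout


def build_buggy(api_key: str, **kwargs: Any) -> FakeClient:
    headers = {'anthropic-version': '2023-06-01'}
    # If kwargs also carries default_headers, the call below raises
    # TypeError: got multiple values for keyword argument.
    return FakeClient(api_key=api_key, default_headers=headers, **kwargs)


def build_fixed(api_key: str, **kwargs: Any) -> FakeClient:
    headers = {'anthropic-version': '2023-06-01'}
    # Drop keys that are passed explicitly before spreading.
    filtered = {k: v for k, v in kwargs.items()
                if k not in ('api_key', 'base_url', 'default_headers')}
    return FakeClient(api_key=api_key, default_headers=headers, **filtered)


try:
    build_buggy('key', default_headers={'x': '1'}, timeout=5.0)
except TypeError as exc:
    print(f'buggy: {exc}')

client = build_fixed('key', default_headers={'x': '1'}, timeout=5.0)
print(client.timeout)  # 5.0
```

Note the trade-off: filtering silently discards the caller's `default_headers`; an alternative is merging the caller's headers into the explicit ones before the call.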
```diff
 async def generate(
-    self, messages: list[dict], output_schema: dict = None, **kwargs
+    self,
+    messages: list[dict],
+    functions: Optional[List[Dict[str, Any]]] = None,
+    output_schema: Optional[Dict[str, Any]] = None,
+    **kwargs,
```
Unused `functions` parameter in `generate` method.

The `functions` parameter is added to the method signature but is never used in the method body. Compare this with the `stream` method (lines 92-93), which does use the `functions` parameter.
Apply this diff to use the `functions` parameter consistently:

```diff
 # Prepare OpenAI API parameters
 vllm_openai_kwargs = {
     'model': self.model,
     'messages': messages,
     'temperature': self.temperature,
     **kwargs,
     **self.kwargs,
 }
+
+if functions:
+    vllm_openai_kwargs['functions'] = functions
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In flo_ai/flo_ai/llm/openai_vllm.py around lines 27 to 32, the generate method
accepts a functions parameter but never uses it; update the method to pass the
functions argument into the underlying request/response logic exactly as the
stream method does (e.g., include functions in the payload or call that builds
the model request), ensure any schema/validation logic still applies, and keep
the signature and behavior consistent with stream so functions-based calls are
honored.
```python
return [
    FunctionMessage(
        content=str(
            'The final result based on the tool executions is: \n'
            + str(function_response)
        ),
        name=function_name,
    )
]
```
`function_response` and `function_name` may be undefined.

This fallback return references `function_response` and `function_name`, which are only defined inside the tool execution loop. If no tool calls were made (e.g., `tool_call_count` is 0 when entering line 402), these variables will be undefined, causing a `NameError`.
Consider returning the conversation history instead, or handling the case where no tools were called:

```diff
-return [
-    FunctionMessage(
-        content=str(
-            'The final result based on the tool executions is: \n'
-            + str(function_response)
-        ),
-        name=function_name,
-    )
-]
+# No assistant message extracted, return conversation history
+return self.conversation_history
```

Alternatively, initialize sentinel values before the loop.
📝 Committable suggestion
```python
# No assistant message extracted, return conversation history
return self.conversation_history
```
🤖 Prompt for AI Agents
In flo_ai/flo_ai/models/agent.py around lines 425 to 433, the fallback return
uses function_response and function_name which can be undefined if no tool calls
occurred; fix by either initializing function_response and function_name to
sentinel defaults before the tool-execution loop (e.g., None or empty strings)
and branch the fallback to produce a safe message when they are unset, or
short-circuit earlier and return the conversation history (or a clear
FunctionMessage saying "no tools were called") when tool_call_count is 0; ensure
the final return never references uninitialized variables by checking their
existence or using the initialized sentinels.
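The sentinel-initialization alternative can be sketched in isolation. `run_with_tools` below is a hypothetical stand-in for the agent method, not the real flo_ai code:

```python
from typing import List, Optional


def run_with_tools(tool_calls: List[str]) -> List[str]:
    # Sentinel initialization: both names are bound even when the loop
    # body never executes, so the fallback below cannot hit NameError.
    function_name: Optional[str] = None
    function_response: Optional[str] = None

    for name in tool_calls:
        function_name = name
        function_response = f'result of {name}'

    if function_name is None:
        # Zero tool calls: return an explicit message instead of
        # referencing unbound variables.
        return ['no tools were called']
    return [f'{function_name}: {function_response}']


print(run_with_tools([]))          # ['no tools were called']
print(run_with_tools(['search']))  # ['search: result of search']
```

Either fix works; the sentinel version keeps the final `FunctionMessage` path, while returning the conversation history avoids fabricating a tool result when none exists.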
```diff
 self.conversation_history = []


-async def _get_message_history(self, variables: Optional[Dict[str, Any]] = None):
+async def _get_message_history(self, variables: Dict[str, Any] = {}):
```
Mutable default argument `{}` is a Python anti-pattern.

Using a mutable default argument like `{}` can cause subtle bugs. The same dictionary object is reused across all calls that don't provide an explicit `variables` argument. If the dictionary is ever mutated within the function, those mutations persist across calls.
Apply this fix using `None` with a runtime default:

```diff
-async def _get_message_history(self, variables: Dict[str, Any] = {}):
+async def _get_message_history(self, variables: Optional[Dict[str, Any]] = None):
+    if variables is None:
+        variables = {}
     message_history = []
```

This requires adding `Optional` to the imports if not already present.
🤖 Prompt for AI Agents
In flo_ai/flo_ai/models/base_agent.py around line 87 the method signature uses a
mutable default argument async def _get_message_history(self, variables:
Dict[str, Any] = {}): which can cause state to persist across calls; change the
default to None (variables: Optional[Dict[str, Any]] = None) and inside the
function set variables = {} if variables is None. Also ensure Optional is
imported from typing (add it to the imports if missing).
Summary by CodeRabbit
Refactor
Bug Fixes
Chores