Add integration test for upserting graph in StateManager #353
Conversation
NiveditJain commented on Sep 4, 2025
- Introduced a new test case in test_upsert_graph.py to validate the upsert_graph functionality of the StateManager.
- Implemented a PrintNode class to handle input messages and verify the execution of graph nodes.
- Utilized asyncio for non-blocking runtime management and ensured proper cleanup of tasks after execution (a rough sketch of the resulting test follows below).
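For context, a minimal sketch of the test these notes describe, assuming the `exospherehost` symbols referenced later in this thread (`BaseNode`, `Runtime`, `StateManager`, `GraphNodeModel`), the `running_server` fixture from `conftest.py`, a pytest-asyncio style marker, and illustrative constructor and field names wherever the real signatures do not appear in the review:

```python
import asyncio

import pytest
from pydantic import BaseModel

from exospherehost import BaseNode, GraphNodeModel, Runtime, StateManager


class PrintNode(BaseNode):
    class Inputs(BaseModel):
        message: str

    async def execute(self):
        # Print the incoming message so node execution is visible in test output.
        print(self.inputs.message)


@pytest.mark.asyncio
async def test_upsert_graph(running_server):
    state_manager_url = running_server.base_url

    # Keyword arguments below (namespace, name, nodes, node_name, inputs, ...)
    # are illustrative; only state_manager_uri is confirmed by the review diffs.
    runtime = Runtime(
        namespace="integration-tests",
        name="test_runtime",
        nodes=[PrintNode],
        state_manager_uri=state_manager_url,
    )
    # This revision drives the runtime from an asyncio task; a later commit in
    # this PR switches to a daemon thread (see below).
    runtime_task = asyncio.create_task(runtime.start())
    await asyncio.sleep(2)  # crude wait for node registration

    try:
        state_manager = StateManager(
            namespace="integration-tests",
            state_manager_uri=state_manager_url,
        )
        result = await state_manager.upsert_graph(
            "test_graph",
            [GraphNodeModel(node_name="PrintNode", inputs={"message": "hello"})],
        )
        assert result is not None  # upsert returns a validation status
    finally:
        # Ensure proper cleanup of the runtime task.
        if not runtime_task.done():
            runtime_task.cancel()
            try:
                await runtime_task
            except asyncio.CancelledError:
                pass  # expected when cancelling
```

The review comments below suggest tightening this further: unique graph names via uuid, deterministic registration instead of the fixed sleep, and asserting `self.inputs` instead of a type ignore.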
Caution: Review failed. The pull request is closed.

Summary by CodeRabbit

📝 Walkthrough
Adds two asynchronous integration tests: one upserts a single-node graph ("test_graph") using a PrintNode and asserts validation; the other upserts a graph with a node that emits 10 outputs, triggers a run, and validates run results via the runtime API.
Sequence Diagram(s)
sequenceDiagram
autonumber
actor Test
participant Runtime
participant StateManager
participant Node
rect rgba(200,230,255,0.35)
Test->>Runtime: start runtime (daemon thread)
Test-->>Test: await short sleep (init)
end
rect rgba(220,255,220,0.35)
Test->>StateManager: upsert_graph("test_graph", [GraphNodeModel(Node, inputs)])
StateManager->>Runtime: register/translate graph
Runtime->>Node: register/instantiate node
Node->>Node: execute(inputs) (async)
StateManager-->>Test: return upsert result (validation status)
end
rect rgba(255,245,215,0.35)
Test->>Runtime: trigger run (test_fan_out)
Runtime->>Node: fan-out execution (produce 10 outputs)
Runtime-->>Test: run_id
Test->>Runtime: poll runs API -> retrieve run details
Runtime-->>Test: run status and metrics
end
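The last phase of this diagram (trigger a run, then poll for run details) could be expressed in test code roughly as follows. The endpoint path and status values are assumptions made for illustration; the walkthrough only says the test validates run results via the runtime API.

```python
import asyncio

import httpx


async def wait_for_run(base_url: str, run_id: str, timeout: float = 30.0) -> dict:
    """Poll a runs endpoint until the run reaches a terminal status or the timeout expires."""
    loop = asyncio.get_running_loop()
    deadline = loop.time() + timeout
    async with httpx.AsyncClient(base_url=base_url) as client:
        while True:
            # The path below is hypothetical; substitute the actual runs endpoint.
            response = await client.get(f"/v0/runs/{run_id}")
            response.raise_for_status()
            run = response.json()
            if run.get("status") in {"SUCCESS", "FAILED"}:  # assumed terminal statuses
                return run
            if loop.time() > deadline:
                raise TimeoutError(f"run {run_id} did not finish within {timeout}s")
            await asyncio.sleep(0.5)  # back off between polls
```

The fan-out test can then assert on the returned status and on the number of executed states (the 10 outputs produced by the fan-out node).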
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Summary of Changes
Hello @NiveditJain, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request introduces a new integration test to ensure the correct functionality of the StateManager's upsert_graph method. The test validates the ability to dynamically add or update graph configurations within the system, utilizing asynchronous operations for efficient execution and proper resource management.
Highlights
- New Integration Test: A new test case test_upsert_graph has been added to validate the upsert_graph functionality of the StateManager.
- PrintNode Implementation: A PrintNode class was implemented to simulate graph node execution and handle input messages within the test.
- Asynchronous Runtime Management: The test leverages asyncio for non-blocking runtime management and includes robust cleanup mechanisms for asynchronous tasks.
Code Review
This pull request adds an integration test for StateManager.upsert_graph. The test is a good foundation. My feedback provides a single, comprehensive suggestion to refactor the test to be more robust and complete. The proposed changes include using public APIs, making the test less reliant on fixed waits, and adding verification for the end-to-end execution of the graph, which significantly increases the test's value.
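One concrete way to act on the "less reliant on fixed waits" point is a small readiness poll instead of a hard-coded sleep. The helper below is a sketch; the predicate in the usage comment is hypothetical, since the SDK's actual readiness signal is not shown in this thread.

```python
import asyncio
from typing import Awaitable, Callable


async def wait_until(condition: Callable[[], Awaitable[bool]],
                     timeout: float = 10.0, interval: float = 0.2) -> None:
    """Poll an async condition until it returns True, or raise TimeoutError."""
    loop = asyncio.get_running_loop()
    deadline = loop.time() + timeout
    while not await condition():
        if loop.time() > deadline:
            raise TimeoutError("condition not met before timeout")
        await asyncio.sleep(interval)


# Illustrative usage: replace `await asyncio.sleep(2)` after starting the runtime
# with a poll against whatever readiness check the SDK exposes (hypothetical method):
#   await wait_until(lambda: state_manager.node_registered("PrintNode"))
```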
Actionable comments posted: 6
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
integration-tests/test_upsert_graph.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
integration-tests/test_upsert_graph.py (5)
- python-sdk/exospherehost/node/BaseNode.py (1): BaseNode (6-100)
- python-sdk/exospherehost/runtime.py (2): Runtime (46-461), _start (431-445)
- python-sdk/exospherehost/statemanager.py (1): upsert_graph (128-188)
- python-sdk/exospherehost/models.py (1): GraphNodeModel (16-75)
- integration-tests/conftest.py (2): running_server (99-107), base_url (93-95)
🔇 Additional comments (2)
integration-tests/test_upsert_graph.py (2)
57-64: Remove task-cancel cleanup when not starting the runtime
If you adopt the registration-only approach above, this block is dead code.
-    finally:
-        # Ensure proper cleanup of the runtime task
-        if runtime_task and not runtime_task.done():
-            runtime_task.cancel()
-            try:
-                await runtime_task
-            except asyncio.CancelledError:
-                pass  # Expected when cancelling
+    finally:
+        pass

Likely an incorrect or invalid review comment.
4-4: Imports valid: exospherehost/__init__.py exports required symbols
BaseNode, Runtime, StateManager, and GraphNodeModel are all imported and included in __all__, so the top-level imports in the test are correct.
- Replaced asyncio task with a threading approach for starting the runtime, ensuring proper initialization without blocking (sketched below).
- Simplified the upsert_graph test case by removing unnecessary cleanup logic related to asyncio tasks.
- Maintained the functionality of the test while improving the structure and readability.
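A minimal sketch of that threading-based startup; `runtime.start` as a blocking entry point is taken from the diff quoted later in this thread, and the two-second sleep is kept as in the PR even though a readiness poll would be more robust.

```python
import asyncio
import threading


async def start_runtime_in_background(runtime) -> threading.Thread:
    """Run the blocking runtime.start() in a daemon thread so the test coroutine is not blocked."""
    thread = threading.Thread(target=runtime.start, daemon=True)
    thread.start()
    # Give the runtime a short window to register its nodes with the state manager.
    await asyncio.sleep(2)
    return thread
```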
Actionable comments posted: 2
♻️ Duplicate comments (5)
integration-tests/test_upsert_graph.py (5)
10-15: Drop type ignore; assert inputs are set
Assert self.inputs instead of using a type ignore.

 async def execute(self):
-    print(self.inputs.message)  # type: ignore
+    assert self.inputs is not None
+    print(self.inputs.message)
18-19: Use consistent naming: state_manager_url
Align variable naming with API args.

-    state_machine_url = running_server.base_url
+    state_manager_url = running_server.base_url
@@
-        state_manager_uri=state_machine_url,
+        state_manager_uri=state_manager_url,
@@
-        state_manager_uri=state_machine_url,
+        state_manager_uri=state_manager_url,

Also applies to: 25-25, 35-35
1-4: Import uuid for unique graph names
Prevents CI collisions across runs.

 import pytest
 import asyncio
+import uuid
 from pydantic import BaseModel
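With uuid imported, the graph name used in the test could be made unique per run along these lines (the variable name is illustrative):

```python
import uuid

# A per-run suffix prevents parallel or repeated CI runs from colliding on "test_graph".
graph_name = f"test_graph_{uuid.uuid4().hex[:8]}"
```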
7-15: This test doesn't verify node execution
If execution coverage is desired, add a separate test that calls StateManager.trigger(...) and asserts a node-completed signal/output.
Would you like a follow-up test that triggers the graph and asserts PrintNode ran using stdout capture or runtime status?
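A rough shape such a follow-up test could take; only the method name StateManager.trigger comes from this comment, so the arguments, the pytest-asyncio marker, and the stdout-capture approach are all assumptions.

```python
import asyncio

import pytest

from exospherehost import StateManager


@pytest.mark.asyncio
async def test_print_node_executes(running_server, capsys):
    # Assumes a runtime with PrintNode registered is already running, e.g. via a fixture.
    state_manager = StateManager(
        namespace="integration-tests",
        state_manager_uri=running_server.base_url,
    )
    # Argument names are illustrative.
    await state_manager.trigger("test_graph", inputs={"message": "hello from trigger"})

    # Crude wait for the node to run; polling the run status would be more robust.
    await asyncio.sleep(2)
    assert "hello from trigger" in capsys.readouterr().out
```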
28-31: Replace thread+sleep with deterministic registration to avoid flakiness and leaks
Start-up threading and fixed sleep are brittle; directly register nodes for validation.

-    thread = threading.Thread(target=runtime.start, daemon=True)
-    thread.start()
-
-    await asyncio.sleep(2)
+    # Deterministically register node schemas; no worker loop needed for upsert
+    await runtime._register()
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
integration-tests/test_upsert_graph.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
integration-tests/test_upsert_graph.py (5)
- python-sdk/exospherehost/node/BaseNode.py (1): BaseNode (6-100)
- python-sdk/exospherehost/runtime.py (1): Runtime (46-461)
- python-sdk/exospherehost/statemanager.py (2): StateManager (9-188), upsert_graph (128-188)
- python-sdk/exospherehost/models.py (1): GraphNodeModel (16-75)
- integration-tests/conftest.py (2): running_server (99-107), base_url (93-95)
- Introduced a new test case in test_fan_out.py to validate the fan-out behavior of the StateManager.
- Implemented a Node1 class to handle input messages and return a count of executions (sketched below).
- Utilized threading for runtime management and ensured proper initialization without blocking.
- Enhanced assertions to verify the success of the graph execution and the correctness of run details.
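A sketch of what the Node1 fan-out node described above might look like; the Outputs model and the convention of returning a list to fan out into multiple states are assumptions, since only the behaviour (emit 10 outputs, report a count) is described in this thread.

```python
from pydantic import BaseModel

from exospherehost import BaseNode


class Node1(BaseNode):
    class Inputs(BaseModel):
        message: str

    class Outputs(BaseModel):
        count: str

    async def execute(self):
        # Returning a list of outputs is assumed to be how a node fans out into
        # multiple downstream states; here it produces 10 of them.
        return [self.Outputs(count=str(i)) for i in range(10)]
```

The test then upserts a graph containing Node1, triggers a run of test_fan_out, and asserts via the run details that all 10 resulting states executed successfully.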