fix: decouple detection from video display for smooth FPS #232
base: main
Conversation
- Add detection_fps parameter to control detection rate independently
- Run detection asynchronously in background tasks
- Cache detection results and overlay on frames without waiting
- CloudDetectionProcessor defaults to 5.0 detection FPS
- LocalDetectionProcessor defaults to 10.0 detection FPS
- Video frames now pass through at full FPS regardless of detection speed
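The core mechanism these bullets describe can be sketched as follows. This is a minimal, hedged illustration of the decoupling pattern, not the plugin's actual API: names like run_inference, annotate, and publish_frame are placeholders for the processor's real inference, drawing, and output hooks.

```python
import asyncio
import time


class DecoupledDetectionSketch:
    """Sketch: publish every frame immediately, run detection at detection_fps."""

    def __init__(self, detection_fps: float = 5.0):
        self.detection_fps = detection_fps
        self._interval = 1.0 / detection_fps if detection_fps > 0 else float("inf")
        self._cached_result = {"detections": []}   # latest known detections
        self._in_progress = False
        self._last_detection_time = 0.0

    async def on_frame(self, frame):
        # Overlay whatever was detected last; never block the video path on inference.
        annotated = self.annotate(frame, self._cached_result)
        await self.publish_frame(annotated)

        now = time.monotonic()
        if not self._in_progress and now - self._last_detection_time >= self._interval:
            self._in_progress = True
            self._last_detection_time = now
            asyncio.create_task(self._detect_in_background(frame))

    async def _detect_in_background(self, frame):
        try:
            self._cached_result = await self.run_inference(frame)
        finally:
            self._in_progress = False

    # Placeholder hooks (assumptions, not the real plugin interface).
    async def run_inference(self, frame): ...
    def annotate(self, frame, result): return frame
    async def publish_frame(self, frame): ...
```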
Walkthrough
Adds asynchronous background detection to multiple processors: frames are published immediately and annotated from the latest cached result, while background tasks run inference at a configurable detection rate (detection_fps).
Sequence Diagram(s)
sequenceDiagram
autonumber
participant Frame as Frame Source
participant Proc as Processor (Local/Cloud/Pose)
participant Cache as Result Cache
participant BG as Background Task
participant Out as Video Output
Frame->>Proc: deliver frame
Proc->>Cache: read latest cached result
Cache-->>Proc: return cached detections (if any)
Proc->>Out: annotate with cached result (if present) and publish frame
alt detection interval elapsed and not running
Proc->>BG: spawn _run_detection_background(image, request_time)
BG->>BG: run inference (async)
BG->>Cache: update cache with new results + timestamp
BG-->>Proc: (update state / emit DetectionCompletedEvent)
end
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
plugins/moondream/vision_agents/plugins/moondream/detection/moondream_local_processor.py (1)
280-283: Inconsistent error return and broad exception handling. Two issues here:
1. Returns empty {} on error, but _cached_results expects {"detections": []}. This inconsistency could cause issues when accessing results.get('detections', []) later.
2. Uses except Exception as e, which violates coding guidelines.

  except Exception as e:
      logger.exception(f"❌ Local inference failed: {e}")
-     return {}
+     return {"detections": []}

For the exception type, consider using more specific exceptions per coding guidelines.
🧹 Nitpick comments (6)
plugins/moondream/vision_agents/plugins/moondream/detection/moondream_cloud_processor.py (4)
228-265: Unhandled fire-and-forget task may silently swallow exceptions. The task created via asyncio.create_task on line 245 is not stored or tracked. While exceptions are caught inside _run_detection_background, if the task is garbage collected before completion, it could cause issues in some edge cases. Consider storing the task reference to ensure proper lifecycle management (a fuller sketch follows below):

+ self._detection_task: Optional[asyncio.Task] = None

Then in _process_and_add_frame:

- asyncio.create_task(self._run_detection_background(frame_array.copy()))
+ self._detection_task = asyncio.create_task(self._run_detection_background(frame_array.copy()))
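One way to implement the suggested lifecycle management, sketched under the assumption that a coroutine like _run_detection_background exists as in the diff above: keep strong references to spawned tasks in a set, observe their exceptions in a done callback, and drain them on shutdown. This is illustrative, not the processor's actual code.

```python
import asyncio
import logging

logger = logging.getLogger(__name__)


class DetectionTaskTracker:
    """Sketch: explicit lifecycle management for fire-and-forget detection tasks."""

    def __init__(self):
        self._detection_tasks: set[asyncio.Task] = set()

    def spawn(self, coro) -> asyncio.Task:
        task = asyncio.create_task(coro)
        self._detection_tasks.add(task)              # strong reference prevents GC
        task.add_done_callback(self._on_done)
        return task

    def _on_done(self, task: asyncio.Task) -> None:
        self._detection_tasks.discard(task)
        if not task.cancelled() and task.exception() is not None:
            logger.warning("Background detection failed", exc_info=task.exception())

    async def aclose(self) -> None:
        # Cancel and await any in-flight detections on shutdown.
        tasks = list(self._detection_tasks)
        for task in tasks:
            task.cancel()
        await asyncio.gather(*tasks, return_exceptions=True)
```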
267-270: Avoid bare except Exception as e. Per coding guidelines, use specific exception handling instead of catching the broad Exception type. Consider catching more specific exceptions like ValueError, RuntimeError, or exceptions from the av/PIL libraries.

- except Exception as e:
-     logger.exception(f"❌ Frame processing failed: {e}")
+ except (ValueError, RuntimeError, av.error.FFmpegError) as e:
+     logger.exception(f"❌ Frame processing failed: {e}")

Based on coding guidelines, which state: "Never write except Exception as e - use specific exception handling".
272-282: Avoid bare except Exception as e in background detection. Same guideline violation here. Consider using specific exception types.

- except Exception as e:
-     logger.warning(f"⚠️ Background detection failed: {e}")
+ except (ValueError, RuntimeError, ConnectionError) as e:
+     logger.warning(f"⚠️ Background detection failed: {e}")

Based on coding guidelines.
284-289: Avoid hasattr() per coding guidelines. The coding guidelines state to prefer normal attribute access over hasattr. Use a try/except with AttributeError or check against None if the attribute is optional.

  def close(self):
      """Clean up resources."""
      self._shutdown = True
-     if hasattr(self, "executor"):
-         self.executor.shutdown(wait=False)
+     if self.executor is not None:
+         self.executor.shutdown(wait=False)

Since executor is always initialized in __init__, the hasattr check is unnecessary anyway. Based on coding guidelines.
plugins/moondream/vision_agents/plugins/moondream/detection/moondream_local_processor.py (2)
357-359: Avoid bare except Exception as e. Per coding guidelines, use specific exception handling.

- except Exception as e:
-     logger.exception(f"❌ Frame processing failed: {e}")
+ except (ValueError, RuntimeError, av.error.FFmpegError) as e:
+     logger.exception(f"❌ Frame processing failed: {e}")

Based on coding guidelines.
361-371: Avoid bare except Exception as e in background detection. Same guideline violation. The implementation otherwise matches the cloud processor, which is good for consistency.

- except Exception as e:
-     logger.warning(f"⚠️ Background detection failed: {e}")
+ except (ValueError, RuntimeError) as e:
+     logger.warning(f"⚠️ Background detection failed: {e}")

Based on coding guidelines.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (2)
- plugins/moondream/vision_agents/plugins/moondream/detection/moondream_cloud_processor.py (8 hunks)
- plugins/moondream/vision_agents/plugins/moondream/detection/moondream_local_processor.py (7 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (.cursor/rules/python.mdc)
**/*.py: Never adjust sys.path in Python code
Never write except Exception as e - use specific exception handling
Avoid using getattr, hasattr, delattr and setattr; prefer normal attribute access in Python
Docstrings should follow the Google style guide for docstrings
Files:
plugins/moondream/vision_agents/plugins/moondream/detection/moondream_cloud_processor.py
plugins/moondream/vision_agents/plugins/moondream/detection/moondream_local_processor.py
🧬 Code graph analysis (2)
plugins/moondream/vision_agents/plugins/moondream/detection/moondream_cloud_processor.py (2)
- plugins/moondream/vision_agents/plugins/moondream/detection/moondream_local_processor.py (2): _run_detection_background (361-371), _run_inference (268-283)
- plugins/moondream/vision_agents/plugins/moondream/moondream_utils.py (1): annotate_detections (48-111)
plugins/moondream/vision_agents/plugins/moondream/detection/moondream_local_processor.py (2)
- plugins/moondream/vision_agents/plugins/moondream/detection/moondream_cloud_processor.py (2): _run_detection_background (272-282), _run_inference (184-196)
- plugins/moondream/vision_agents/plugins/moondream/moondream_utils.py (1): annotate_detections (48-111)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: unit / Test "not integration"
- GitHub Check: unit / Mypy
- GitHub Check: unit / Mypy
- GitHub Check: unit / Test "not integration"
🔇 Additional comments (5)
plugins/moondream/vision_agents/plugins/moondream/detection/moondream_cloud_processor.py (2)
44-56: Docstring updates look good. The updated class and parameter documentation clearly explains the async detection behavior and the purpose of detection_fps.
87-90: Async detection state initialization is correct. The state variables for tracking background detection are properly initialized with sensible defaults.
plugins/moondream/vision_agents/plugins/moondream/detection/moondream_local_processor.py (3)
42-58: Docstring updates align well with cloud processor. The async detection documentation is clear and consistent with the cloud processor implementation.
104-107: Async detection state initialization is consistent with cloud processor. Good consistency between the two processors.
318-359: Frame processing logic is correct and consistent. The async detection triggering and frame annotation using cached results follows the same pattern as the cloud processor. The logic for determining when to start detection is sound.
Minor: Same fire-and-forget task concern as cloud processor (line 335).
- Remove fps parameter from moondream processors (use detection_fps only)
- Add async detection pattern to YOLOPoseProcessor with detection_fps=15.0
- Fix YOLOPoseVideoTrack to output at 30 FPS (was 1 FPS)
- Add async detection pattern to RoboflowCloudDetectionProcessor with detection_fps=5.0
- Add async detection pattern to RoboflowLocalDetectionProcessor with detection_fps=10.0
- All processors now pass frames through at full FPS while detection runs in background
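The "30 FPS (was 1 FPS)" fix is about pacing the output track independently of how often detection updates the frame. A generic pacing loop, written as a hedged sketch rather than the actual YOLOPoseVideoTrack implementation (send_frame and the queue are assumed hooks):

```python
import asyncio


async def pace_output(frames: asyncio.Queue, send_frame, fps: float = 30.0) -> None:
    """Sketch: emit the most recent frame at a fixed rate, regardless of how
    often the upstream producer (detection/annotation) pushes updates."""
    interval = 1.0 / fps
    loop = asyncio.get_running_loop()
    next_deadline = loop.time()
    latest = None
    while True:
        # Drain the queue so we always hold the newest frame.
        while not frames.empty():
            latest = frames.get_nowait()
        if latest is not None:
            await send_frame(latest)
        next_deadline += interval
        await asyncio.sleep(max(0.0, next_deadline - loop.time()))
```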
Actionable comments posted: 16
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (6)
plugins/ultralytics/vision_agents/plugins/ultralytics/yolo_pose_processor.py (2)
25-28: Duplicate constant definitions – lines 25-26 are dead code. The constants DEFAULT_WIDTH and DEFAULT_HEIGHT are defined twice in succession. Lines 25-26 are immediately shadowed by lines 27-28, rendering them useless. Remove the duplicates to avoid confusion.

  DEFAULT_WIDTH = 640
  DEFAULT_HEIGHT = 480
- DEFAULT_WIDTH = 1920
- DEFAULT_HEIGHT = 1080

Or, if the 1920×1080 values are intended:

- DEFAULT_WIDTH = 640
- DEFAULT_HEIGHT = 480
  DEFAULT_WIDTH = 1920
  DEFAULT_HEIGHT = 1080
521-525: Avoid hasattr – prefer normal attribute access. Per coding guidelines, avoid hasattr. Since self.executor is always initialized in __init__, direct access is safe. If guarding against partial initialization, use explicit None checks.

  def close(self):
      """Clean up resources."""
      self._shutdown = True
-     if hasattr(self, "executor"):
+     if self.executor is not None:
          self.executor.shutdown(wait=False)

plugins/moondream/vision_agents/plugins/moondream/detection/moondream_local_processor.py (4)
193-197: Use specific exception types instead of broad Exception. As per coding guidelines, avoid catching broad Exception. For model compilation failures, catch specific exceptions like RuntimeError or torch.jit.CompilationError.

- except Exception as compile_error:
+ except (RuntimeError, AttributeError) as compile_error:

200-218: Use specific exception types instead of broad Exception. As per coding guidelines, avoid catching broad Exception. Consider catching specific Hugging Face/transformers exceptions like OSError, HTTPError, or requests.exceptions.RequestException.

- except Exception as e:
+ except (OSError, ValueError, ImportError) as e:

277-279: Use specific exception types instead of broad Exception. As per coding guidelines, avoid catching broad Exception. For inference failures, consider specific exceptions like RuntimeError, torch.cuda.OutOfMemoryError, or ValueError.

- except Exception as e:
+ except (RuntimeError, ValueError, OSError) as e:

305-307: Use specific exception types instead of broad Exception. As per coding guidelines, avoid catching broad Exception. For model detection failures, catch specific exceptions like RuntimeError, ValueError, or AttributeError.

- except Exception as e:
+ except (RuntimeError, ValueError, AttributeError) as e:
🧹 Nitpick comments (6)
plugins/roboflow/vision_agents/plugins/roboflow/roboflow_local_processor.py (1)
262-266: Consider storing task reference to prevent premature garbage collection. The fire-and-forget
asyncio.create_task without storing the reference can theoretically allow the task to be garbage collected before completion. While the try/finally block in _run_detection_background mitigates state corruption, consider storing the task reference in an instance variable for explicit lifecycle management.

+ self._detection_task: Optional[asyncio.Task] = None

  ...
  if should_detect:
      # Start detection in background (don't await)
      self._detection_in_progress = True
      self._last_detection_time = now
-     asyncio.create_task(self._run_detection_background(image.copy()))
+     self._detection_task = asyncio.create_task(self._run_detection_background(image.copy()))

plugins/moondream/vision_agents/plugins/moondream/detection/moondream_cloud_processor.py (2)
44-46: Ensure detection_fps default matches Moondream Cloud rate‑limit expectations. The docstring still notes a 2 RPS default limit for the Cloud API, but the new
detection_fps default is 5.0 and each detection can fan out into multiple SDK calls (one per object_type). That may easily exceed the documented default quota and turn into frequent 429s / warning logs in the background task. Consider either:
- Lowering the default
detection_fps for CloudDetectionProcessor to something that’s safely within the typical rate limit, or
detection_fps=5.0assumes a raised quota, and perhaps warning/logging when users configure values that are likely to exceed the default API limits.Also applies to: 54-55, 67-67, 75-75, 122-122
222-261: Background detection scheduling and cached overlays look solid; small asyncio/time nit.The detection cadence logic (
detection_interval+_detection_in_progress) and use of_cached_resultsfor non‑blocking annotation are sound and should keep frames flowing at full FPS while bounding detection load. The copy passed into the background task avoids mutation races on the frame array.Two minor points to consider:
- Inside an
asyncfunction,asyncio.get_running_loop().time()is preferred overasyncio.get_event_loop().time()(Line 226) to avoid relying on the global‑loop semantics and to align with modern asyncio usage.- If background detection repeatedly fails or stalls,
_cached_resultswill preserve very old detections indefinitely; depending on UX, you may want an age/TTL check using_last_frame_timeor_last_detection_timeto eventually clear stale overlays.plugins/moondream/vision_agents/plugins/moondream/detection/moondream_local_processor.py (3)
240-256: LGTM with minor comment clarification needed.The forwarder handling correctly uses a shared forwarder when provided or creates its own. However, the comment at line 248 mentions "default FPS" but no FPS parameter is passed to VideoForwarder—this aligns with the PR objective to decouple detection from display, so the comment could be clarified.
Consider updating the comment for clarity:
- # Create our own VideoForwarder at default FPS + # Create our own VideoForwarder (runs at native stream FPS)
333-333: Track background tasks to prevent silent failures.The
asyncio.create_task()creates a task without retaining a reference, which can lead to silent failures if the task raises an exception and there's no exception handler. Consider storing the task reference or using a task group to track completion.One approach is to store the task and add a done callback:
- asyncio.create_task(self._run_detection_background(frame_array.copy())) + task = asyncio.create_task(self._run_detection_background(frame_array.copy())) + task.add_done_callback(lambda t: t.exception() if not t.cancelled() else None)Alternatively, for Python 3.11+, use a task group (though this would require restructuring).
321-323: Consider precomputing detection interval in__init__.The detection interval calculation at line 322 is computed on every frame. Since
detection_fpsis constant, consider precomputingself._detection_intervalin__init__to avoid repeated division operations on the hot path.In
__init__after line 88:self.detection_fps = detection_fps + self._detection_interval = ( + 1.0 / detection_fps if detection_fps > 0 else float("inf") + )Then at line 321-323:
- detection_interval = ( - 1.0 / self.detection_fps if self.detection_fps > 0 else float("inf") - ) - should_detect = ( + should_detect = ( not self._detection_in_progress - and (now - self._last_detection_time) >= detection_interval + and (now - self._last_detection_time) >= self._detection_interval )
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (5)
plugins/moondream/vision_agents/plugins/moondream/detection/moondream_cloud_processor.py(7 hunks)plugins/moondream/vision_agents/plugins/moondream/detection/moondream_local_processor.py(8 hunks)plugins/roboflow/vision_agents/plugins/roboflow/roboflow_cloud_processor.py(7 hunks)plugins/roboflow/vision_agents/plugins/roboflow/roboflow_local_processor.py(6 hunks)plugins/ultralytics/vision_agents/plugins/ultralytics/yolo_pose_processor.py(6 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (.cursor/rules/python.mdc)
**/*.py: Never adjust sys.path in Python code
Never writeexcept Exception as e- use specific exception handling
Avoid using getattr, hasattr, delattr and setattr; prefer normal attribute access in Python
Docstrings should follow the Google style guide for docstrings
Files:
plugins/moondream/vision_agents/plugins/moondream/detection/moondream_cloud_processor.pyplugins/roboflow/vision_agents/plugins/roboflow/roboflow_cloud_processor.pyplugins/ultralytics/vision_agents/plugins/ultralytics/yolo_pose_processor.pyplugins/roboflow/vision_agents/plugins/roboflow/roboflow_local_processor.pyplugins/moondream/vision_agents/plugins/moondream/detection/moondream_local_processor.py
🧬 Code graph analysis (2)
plugins/roboflow/vision_agents/plugins/roboflow/roboflow_cloud_processor.py (3)
plugins/roboflow/vision_agents/plugins/roboflow/roboflow_local_processor.py (4)
_process_frame(242-288)_run_detection_background(290-327)_run_inference(329-361)events(204-207)plugins/roboflow/vision_agents/plugins/roboflow/utils.py (1)
annotate_image(8-36)plugins/roboflow/vision_agents/plugins/roboflow/events.py (2)
DetectedObject(7-12)DetectionCompletedEvent(16-31)
plugins/ultralytics/vision_agents/plugins/ultralytics/yolo_pose_processor.py (3)
agents-core/vision_agents/core/processors/base_processor.py (1)
AudioVideoProcessor(117-146)agents-core/vision_agents/core/utils/video_forwarder.py (1)
add_frame_handler(57-85)agents-core/vision_agents/core/utils/video_track.py (1)
add_frame(42-54)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: unit / Test "not integration"
- GitHub Check: unit / Mypy
- GitHub Check: unit / Ruff
- GitHub Check: unit / Mypy
- GitHub Check: unit / Test "not integration"
🔇 Additional comments (10)
plugins/roboflow/vision_agents/plugins/roboflow/roboflow_local_processor.py (2)
59-99: Docstring is well-structured and comprehensive. The updated docstring clearly documents the asynchronous behavior and provides a helpful usage example. The Args section follows Google style guidelines appropriately.
147-160: LGTM on the async state initialization and video track setup. The detection state variables are properly initialized, and the 30 FPS video track with matching buffer size is a sensible choice for smooth playback.
plugins/roboflow/vision_agents/plugins/roboflow/roboflow_cloud_processor.py (2)
84-84: Sensible default for cloud detection rate. The 5.0 FPS default for detection_fps is appropriate for cloud inference, balancing responsiveness with API call costs and rate limits.
246-274: Background detection implementation is well-structured. The _run_detection_background method properly caches both detections and the class mapping, and the finally block ensures _detection_in_progress is always reset. The event emission logic is correct.
plugins/ultralytics/vision_agents/plugins/ultralytics/yolo_pose_processor.py (5)
40-44: LGTM! Clean and minimal track class initialization with explicit FPS.
47-112: LGTM – well-structured async detection state. The new detection_fps parameter and cached state fields are properly initialized. The Google-style docstring is clear. Note that this processor defaults to 15.0 FPS while the PR description mentions different defaults for Cloud (5.0) and Local (10.0) processors – verify this is the intended value for this YOLO pose variant.
127-138: LGTM! Subscribing without an fps limit allows frames to pass through at the forwarder's native rate, achieving the smooth passthrough goal.
188-211: LGTM! Clean annotation logic with proper array copying. The threshold filtering and skeleton/wrist drawing are consistent with the synchronous processing path.
213-222: LGTM! Retaining the legacy method for backward compatibility is a good practice.
plugins/moondream/vision_agents/plugins/moondream/detection/moondream_cloud_processor.py (1)
140-151: Shared vs. owned VideoForwarder wiring looks correct. Subscribing to a provided shared_forwarder at its native FPS and only creating your own VideoForwarder (with max_buffer=30) when needed is a clean separation. The handler registration with a named consumer (name="moondream") reads clearly and shouldn’t introduce extra buffering or throttling beyond what the forwarder is already doing.
async def _run_detection_background(self, frame_array: np.ndarray):
    """Run detection in background and update cached results."""
    try:
        results = await self._run_inference(frame_array)
        self._cached_results = results
        self._last_results = results
        logger.debug(
            f"🔍 Detection complete: {len(results.get('detections', []))} objects"
        )
    except Exception as e:
        logger.warning(f"⚠️ Background detection failed: {e}")
    finally:
        self._detection_in_progress = False
🧩 Analysis chain
🏁 Script executed:
find . -type f -name "moondream_cloud_processor.py" | head -20Repository: GetStream/Vision-Agents
Length of output: 157
🏁 Script executed:
cat -n "plugins/moondream/vision_agents/plugins/moondream/detection/moondream_cloud_processor.py" | sed -n '260,290p'Repository: GetStream/Vision-Agents
Length of output: 1425
🏁 Script executed:
cat -n "plugins/moondream/vision_agents/plugins/moondream/detection/moondream_cloud_processor.py" | grep -A 30 "def _run_inference"Repository: GetStream/Vision-Agents
Length of output: 1478
🏁 Script executed:
cat -n "plugins/moondream/vision_agents/plugins/moondream/detection/moondream_cloud_processor.py" | sed -n '1,20p'Repository: GetStream/Vision-Agents
Length of output: 675
🏁 Script executed:
grep -n "logger\." "plugins/moondream/vision_agents/plugins/moondream/detection/moondream_cloud_processor.py" | head -15Repository: GetStream/Vision-Agents
Length of output: 1035
Avoid broad except Exception as e in background detection helper.
The _run_detection_background method at line 277-278 catches a blanket Exception that duplicates error handling already present in _run_inference (line 188-190), which logs exceptions and returns a safe default {"detections": []}. The broad exception catch conflicts with Python coding guidelines requiring specific exception handling.
Simplify by removing the except block and keeping only try/finally to guarantee the _detection_in_progress flag is reset:
  async def _run_detection_background(self, frame_array: np.ndarray):
      """Run detection in background and update cached results."""
      try:
          results = await self._run_inference(frame_array)
          self._cached_results = results
          self._last_results = results
          logger.debug(
-             f"🔍 Detection complete: {len(results.get('detections', []))} objects"
+             "🔍 Detection complete: %d objects",
+             len(results.get("detections", [])),
          )
      finally:
          self._detection_in_progress = False

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In
plugins/moondream/vision_agents/plugins/moondream/detection/moondream_cloud_processor.py
around lines 268 to 281, remove the broad "except Exception as e" block and its
logger.warning call, leaving a try/finally so that _run_inference handles errors
and returns the safe default; keep assigning self._cached_results and
self._last_results and the debug log inside the try, and ensure the finally
still sets self._detection_in_progress = False.
try:
    frame_array = frame.to_ndarray(format="rgb24")
    results = await self._run_inference(frame_array)
    now = asyncio.get_event_loop().time()
🛠️ Refactor suggestion | 🟠 Major
Use asyncio.get_running_loop() instead of deprecated get_event_loop().
The asyncio.get_event_loop() method is soft-deprecated in Python 3.10+ when called from a running async context. Use asyncio.get_running_loop().time() for better compatibility.
- now = asyncio.get_event_loop().time()
+ now = asyncio.get_running_loop().time()📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
- now = asyncio.get_event_loop().time()
+ now = asyncio.get_running_loop().time()
🤖 Prompt for AI Agents
In
plugins/moondream/vision_agents/plugins/moondream/detection/moondream_local_processor.py
around line 318, replace the deprecated asyncio.get_event_loop().time() call
with asyncio.get_running_loop().time(); specifically, call
asyncio.get_running_loop().time() when inside an async context to obtain the
current loop time, ensuring the function runs only while the event loop is
running and avoids deprecated behavior.
# Start detection in background (don't await)
self._detection_in_progress = True
self._last_detection_time = now
asyncio.create_task(self._run_detection_background(frame_array.copy()))
🧩 Analysis chain
🏁 Script executed:
cd plugins/moondream/vision_agents/plugins/moondream/detection && \
wc -l moondream_local_processor.pyRepository: GetStream/Vision-Agents
Length of output: 99
🏁 Script executed:
cd plugins/moondream/vision_agents/plugins/moondream/detection && \
sed -n '320,350p' moondream_local_processor.pyRepository: GetStream/Vision-Agents
Length of output: 1384
🏁 Script executed:
rg -n 'VideoFrame|to_ndarray|from_ndarray' --type=py plugins/moondream/Repository: GetStream/Vision-Agents
Length of output: 4811
🏁 Script executed:
rg -n 'import.*av|from av' --type=py plugins/moondream/Repository: GetStream/Vision-Agents
Length of output: 783
🏁 Script executed:
cd plugins/moondream/vision_agents/plugins/moondream/detection && \
sed -n '1,100p' moondream_local_processor.py | grep -E 'import|from'Repository: GetStream/Vision-Agents
Length of output: 790
🏁 Script executed:
cd plugins/moondream/vision_agents/plugins/moondream/detection && \
sed -n '355,381p' moondream_local_processor.pyRepository: GetStream/Vision-Agents
Length of output: 1135
🏁 Script executed:
rg -n '_run_detection_background' --type=py plugins/moondream/vision_agents/plugins/moondream/detection/moondream_local_processor.pyRepository: GetStream/Vision-Agents
Length of output: 234
🏁 Script executed:
cd plugins/moondream/vision_agents/plugins/moondream/detection && \
rg -A 20 'async def _run_detection_background' moondream_local_processor.pyRepository: GetStream/Vision-Agents
Length of output: 897
🏁 Script executed:
rg -n 'def annotate_detections' --type=py plugins/moondream/Repository: GetStream/Vision-Agents
Length of output: 163
🏁 Script executed:
rg -B 2 -A 15 'def annotate_detections' --type=py plugins/moondream/Repository: GetStream/Vision-Agents
Length of output: 1756
🌐 Web query:
PyAV av.VideoFrame.to_ndarray returns view or copy numpy array
💡 Result:
Short answer: to_ndarray returns a new (copied) NumPy array, not a view into the frame's internal buffers.
Reasoning / pointers:
- The docs describe VideoFrame.to_ndarray as "Get a numpy array of this frame" and show it as the public conversion path (i.e. a conversion/formatting operation via the VideoReformatter), which implies creation of array data rather than exposing internal plane buffers. [1][2]
- In practice, if you need zero-copy access you should work with the frame's planes/buffers (VideoFrame.planes / VideoPlane) and construct a NumPy array with np.frombuffer or memoryview yourself; to_ndarray is intended as the convenient conversion and will produce an independent array. [2][3]
References:
- PyAV VideoFrame.to_ndarray docs. [1]
- PyAV VideoFrame / VideoReformatter docs (conversion/reformat behavior). [2]
- PyAV cookbook example using frame.to_ndarray (shows conversion usage). [3]
References:
[1] https://pyav.org/docs/8.0.1/api/video.html
[2] https://pyav.basswood-io.com/docs/stable/api/video.html
[3] https://pyav.org/docs/stable/cookbook/numpy.html
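A small example of the copy semantics described above (hedged: it assumes an arbitrary decodable file path, and is not code from this repository):

```python
import av

with av.open("input.mp4") as container:           # hypothetical input file
    frame = next(container.decode(video=0))
    arr = frame.to_ndarray(format="rgb24")        # independent copy of the pixels
    arr[0, 0] = [255, 0, 0]                       # safe: does not touch the decoder buffer
    # Zero-copy access would instead go through frame.planes / np.frombuffer,
    # where the underlying memory may be reused by the next decoded frame.
    print(arr.shape, arr.dtype)
```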
🌐 Web query:
PyAV frame buffer lifecycle reuse next frame
💡 Result:
Short answer: PyAV may reuse underlying FFmpeg AVFrame buffers when decoding; a Frame object you get from container.decode() can point to reused memory until you copy or force writability. To keep or reuse data safely you must either copy the frame (e.g., convert to a numpy/bytes/image) or call Frame.make_writable() before modifying — otherwise the next decoded frame may overwrite the same buffer. See PyAV Frame docs for make_writable and the cookbook for decode/iteration patterns. [1][2]
Practical notes:
- If you need a persistent numpy array do arr = frame.to_ndarray() (or frame.to_image()/copy of frame.planes) which performs a copy. [1][2]
- If you want to modify a decoded frame in-place, call frame.make_writable() first; that will ensure the frame has its own buffer rather than a shared/reused one. [1]
- Iterating next(container.decode(...)) returns Frame objects that may share/transfer buffers internally — don’t retain references to frame.planes memory expecting it to remain unchanged unless you copied it. [1][2]
Sources:
- PyAV Frame API (make_writable(), pts, frame lifecycle). [PyAV docs]. [1]
- PyAV Basics / cookbook (decode iteration; examples). [PyAV docs]. [2]
🏁 Script executed:
cd plugins/moondream/vision_agents/plugins/moondream/detection && \
sed -n '314,355p' moondream_local_processor.pyRepository: GetStream/Vision-Agents
Length of output: 1907
🏁 Script executed:
rg -n 'def _process_and_add_frame' -A 50 --type=py plugins/moondream/vision_agents/plugins/moondream/detection/moondream_local_processor.py | head -60Repository: GetStream/Vision-Agents
Length of output: 2528
🏁 Script executed:
asyncio.create_task(self._run_detection_background(frame_array))Repository: GetStream/Vision-Agents
Length of output: 247
Remove redundant frame_array.copy() call—the array is already independent.
The to_ndarray() call at line 317 already returns a new NumPy array copy (per PyAV's design), so the additional .copy() at line 333 is unnecessary. The array is never modified after being passed to the async task, making the second copy purely wasteful (~6MB per detection for 1080p RGB24).
🤖 Prompt for AI Agents
In
plugins/moondream/vision_agents/plugins/moondream/detection/moondream_local_processor.py
around line 333, remove the redundant .copy() when scheduling the background
task—pass the frame_array directly to
asyncio.create_task(self._run_detection_background(frame_array)) because
to_ndarray() (line 317) already returns a fresh NumPy array and the frame is not
modified by the task; simply delete the .copy() call to avoid the extra memory
copy and associated allocation cost.
detected_objects = [
    DetectedObject(
        label=self._model.class_names[class_id],
        x1=x1,
        y1=y1,
        x2=x2,
        y2=y2,
    )
    for class_id, (x1, y1, x2, y2) in zip(
        detections.class_id, detections.xyxy.astype(float)
    )
]
Type mismatch: DetectedObject expects int coordinates, but floats are passed.
The DetectedObject TypedDict (from events.py) defines x1, y1, x2, y2 as int, but detections.xyxy.astype(float) produces floats. This will cause type inconsistency.
for class_id, (x1, y1, x2, y2) in zip(
- detections.class_id, detections.xyxy.astype(float)
+ detections.class_id, detections.xyxy.astype(int)
)

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
  detected_objects = [
      DetectedObject(
          label=self._model.class_names[class_id],
          x1=x1,
          y1=y1,
          x2=x2,
          y2=y2,
      )
      for class_id, (x1, y1, x2, y2) in zip(
-         detections.class_id, detections.xyxy.astype(float)
+         detections.class_id, detections.xyxy.astype(int)
      )
  ]
🤖 Prompt for AI Agents
In plugins/roboflow/vision_agents/plugins/roboflow/roboflow_local_processor.py
around lines 303 to 314, the DetectedObject TypedDict requires integer
coordinates but the code uses floats from detections.xyxy.astype(float); convert
the coordinates to ints before constructing DetectedObject: either convert the
numpy array to integer type (e.g., detections.xyxy.astype(int)) or explicitly
cast/round each coordinate (e.g., int(round(value))) when creating the
DetectedObject so x1, y1, x2, y2 are ints and types align.
except Exception as e:
    logger.warning(f"⚠️ Background detection failed: {e}")
Use specific exception handling instead of bare Exception.
As per the coding guidelines, avoid except Exception as e. Consider catching specific exceptions that might occur during inference (e.g., RuntimeError, ValueError, or library-specific exceptions from rfdetr).
- except Exception as e:
- logger.warning(f"⚠️ Background detection failed: {e}")
+ except (RuntimeError, ValueError) as e:
+ logger.warning(f"⚠️ Background detection failed: {e}")📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
- except Exception as e:
-     logger.warning(f"⚠️ Background detection failed: {e}")
+ except (RuntimeError, ValueError) as e:
+     logger.warning(f"⚠️ Background detection failed: {e}")
🤖 Prompt for AI Agents
In plugins/roboflow/vision_agents/plugins/roboflow/roboflow_local_processor.py
around lines 324-325, replace the broad "except Exception as e" with targeted
exception handling for the likely inference errors (e.g., RuntimeError,
ValueError, and the specific exception class(es) from the rfdetr library or
whatever inference backend is used); catch those exceptions explicitly, log them
as before (including exception type and message), and allow unexpected
exceptions to propagate (or re-raise after logging) so they aren't silently
swallowed. Also import any library-specific exception classes at the top of the
file and, if useful, add a final generic except that re-raises after logging to
preserve behavior for truly unknown errors.
# Start detection in background (don't await)
self._detection_in_progress = True
self._last_detection_time = now
asyncio.create_task(self._run_detection_background(frame_array.copy()))
Fire-and-forget task may lose exceptions or be garbage-collected prematurely.
The task created by asyncio.create_task() is not stored, which can lead to:
- Unobserved exceptions being silently swallowed
- The task being garbage-collected before completion in edge cases
Consider storing the task reference and handling completion/errors appropriately.
+ self._detection_task: Optional[asyncio.Task] = None

Then in the method:
- asyncio.create_task(self._run_detection_background(frame_array.copy()))
+ self._detection_task = asyncio.create_task(
+ self._run_detection_background(frame_array.copy())
+ )
+ self._detection_task.add_done_callback(
+ lambda t: t.exception() if not t.cancelled() else None
+ )

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In plugins/ultralytics/vision_agents/plugins/ultralytics/yolo_pose_processor.py
around line 159, the fire-and-forget asyncio.create_task call is not stored
which can swallow exceptions or allow premature GC; capture the returned Task,
add it to a persistent container (e.g., self._bg_tasks set or list) and attach a
done callback that logs or handles exceptions and removes the task from the
container, and ensure that on shutdown/stop you await or cancel remaining tasks
to guarantee proper completion/cleanup.
except Exception as e:
    logger.exception(f"❌ Frame processing failed: {e}")
Avoid broad except Exception as e – use specific exceptions.
Per coding guidelines, avoid catching bare Exception. Consider catching specific exceptions like av.error.FFmpegError, ValueError, or similar, and let unexpected errors propagate.
- except Exception as e:
- logger.exception(f"❌ Frame processing failed: {e}")
+ except (av.error.FFmpegError, ValueError) as e:
+ logger.exception(f"❌ Frame processing failed: {e}")📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
- except Exception as e:
-     logger.exception(f"❌ Frame processing failed: {e}")
+ except (av.error.FFmpegError, ValueError) as e:
+     logger.exception(f"❌ Frame processing failed: {e}")
🤖 Prompt for AI Agents
In plugins/ultralytics/vision_agents/plugins/ultralytics/yolo_pose_processor.py
around lines 171-172, the code currently catches a broad Exception; replace this
with targeted exception handling (e.g., catch av.error.FFmpegError for
FFmpeg/decoding errors, ValueError/TypeError for data issues, and
OSError/IOError for I/O problems). Import the specific exception classes at the
top of the file, add separate except blocks that log the exception via
logger.exception with contextual messages, and for any truly unexpected errors
either omit a broad catch or re-raise the exception after logging so they
propagate instead of being silently swallowed.
except Exception as e:
    logger.warning(f"⚠️ Background pose detection failed: {e}")
Broad exception handling – use specific exception types.
Same as above: per coding guidelines, prefer specific exception types over except Exception as e.
- except Exception as e:
- logger.warning(f"⚠️ Background pose detection failed: {e}")
+ except (asyncio.TimeoutError, RuntimeError) as e:
+ logger.warning(f"⚠️ Background pose detection failed: {e}")📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
- except Exception as e:
-     logger.warning(f"⚠️ Background pose detection failed: {e}")
+ except (asyncio.TimeoutError, RuntimeError) as e:
+     logger.warning(f"⚠️ Background pose detection failed: {e}")
…processors
- Add detection_fps parameter to control inference rate independently of video FPS
- Remove fps parameter from processors (video now runs at native rate)
- Implement parallel detection with timestamp-based result ordering
- Reduce queue sizes to minimize latency (max_buffer=5, max_queue_size=5)
- Remove redundant MoondreamVideoTrack and YOLOPoseVideoTrack classes
- Reduce ThreadPoolExecutor max_workers from 10/24 to 2
- Fix indentation in roboflow_cloud_processor.py
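The "timestamp-based result ordering" bullet can be sketched like this: each background detection records when it was requested, and a result that finishes late only replaces the cache if nothing newer has landed in the meantime. Names are illustrative, not the plugins' exact attributes.

```python
import time


class OrderedResultCache:
    """Sketch: keep only the newest detection result when tasks finish out of order."""

    def __init__(self):
        self.cached = {"detections": []}
        self._last_result_time = 0.0

    async def detect(self, frame, run_inference):
        request_time = time.monotonic()
        results = await run_inference(frame)
        # An older request finishing late must not overwrite a newer result.
        if request_time > self._last_result_time:
            self.cached = results
            self._last_result_time = request_time
        return self.cached
```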
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
plugins/roboflow/vision_agents/plugins/roboflow/roboflow_local_processor.py (1)
298-331: Critical indentation error in _run_detection_background. Lines 304-327 appear to have incorrect indentation—the code inside the if block starting at line 299 is not properly indented. This will cause a syntax error or incorrect logic flow.

  # Emit detection event if objects found
  if (
      self._model is not None
      and detections.class_id is not None
      and detections.class_id.size > 0
  ):
- img_height, img_width = image.shape[0:2]
- detected_objects = [
-     DetectedObject(
-         label=self._model.class_names[class_id],
-         x1=x1,
-         y1=y1,
-         x2=x2,
-         y2=y2,
-     )
-     for class_id, (x1, y1, x2, y2) in zip(
-         detections.class_id, detections.xyxy.astype(float)
-     )
- ]
- self.events.send(
+     img_height, img_width = image.shape[0:2]
+     detected_objects = [
+         DetectedObject(
+             label=self._model.class_names[class_id],
+             x1=int(x1),
+             y1=int(y1),
+             x2=int(x2),
+             y2=int(y2),
+         )
+         for class_id, (x1, y1, x2, y2) in zip(
+             detections.class_id, detections.xyxy.astype(int)
+         )
+     ]
+     self.events.send(

Also note: DetectedObject TypedDict expects int coordinates, but astype(float) produces floats.
plugins/ultralytics/vision_agents/plugins/ultralytics/yolo_pose_processor.py (1)
105-116: Missing fallback when shared_forwarder is None. Unlike other processors in this PR, process_video doesn't create its own VideoForwarder when shared_forwarder is None. This will cause an AttributeError when add_frame_handler is called on None.

  async def process_video(
      self,
      incoming_track: aiortc.mediastreams.MediaStreamTrack,
      participant: Any,
      shared_forwarder=None,
  ):
-     # Use the shared forwarder at its native FPS
-     self._video_forwarder = shared_forwarder
-     logger.info("🎥 YOLO subscribing to shared VideoForwarder")
-     self._video_forwarder.add_frame_handler(
-         self._process_and_add_frame, name="yolo"
-     )
+     if shared_forwarder is not None:
+         self._video_forwarder = shared_forwarder
+         logger.info("🎥 YOLO subscribing to shared VideoForwarder")
+     else:
+         self._video_forwarder = VideoForwarder(
+             incoming_track,
+             max_buffer=5,
+             name="yolo_forwarder",
+         )
+         logger.info("🎥 YOLO creating own VideoForwarder")
+     self._video_forwarder.add_frame_handler(
+         self._process_and_add_frame, name="yolo"
+     )
♻️ Duplicate comments (6)
plugins/moondream/vision_agents/plugins/moondream/detection/moondream_cloud_processor.py (1)
255-272: Avoid broad except Exception as e in background detection helper. Per coding guidelines, use specific exception handling. Since
_run_inferencealready handles its own exceptions and returns a safe default, consider usingtry/finallyto guarantee the state cleanup, or catch specific exceptions likeRuntimeError,ValueError.async def _run_detection_background( self, frame_array: np.ndarray, request_time: float ): """Run detection in background and update cached results if newer.""" try: results = await self._run_inference(frame_array) if request_time > self._last_result_time: self._cached_results = results self._last_result_time = request_time logger.debug( f"🔍 Detection complete: {len(results.get('detections', []))} objects" ) else: logger.debug("🔍 Detection complete but discarded (newer result exists)") - except Exception as e: - logger.warning(f"⚠️ Background detection failed: {e}") + except (RuntimeError, ValueError, OSError) as e: + logger.warning(f"⚠️ Background detection failed: {e}")plugins/moondream/vision_agents/plugins/moondream/detection/moondream_local_processor.py (2)
312-312: Use asyncio.get_running_loop() instead of deprecated get_event_loop().
- now = asyncio.get_event_loop().time()
+ now = asyncio.get_running_loop().time()
362-363: Use specific exception types instead of broadException.Per coding guidelines, catch specific exceptions rather than bare
Exception.- except Exception as e: - logger.warning(f"⚠️ Background detection failed: {e}") + except (RuntimeError, ValueError, OSError) as e: + logger.warning(f"⚠️ Background detection failed: {e}")plugins/ultralytics/vision_agents/plugins/ultralytics/yolo_pose_processor.py (1)
147-148: Use specific exception types instead of broad Exception. Both exception handlers at lines 147-148 and 166-167 catch bare
Exception. Per coding guidelines, use specific types.- except Exception as e: - logger.exception(f"❌ Frame processing failed: {e}") + except (av.error.FFmpegError, ValueError, RuntimeError) as e: + logger.exception(f"❌ Frame processing failed: {e}")- except Exception as e: - logger.warning(f"⚠️ Background pose detection failed: {e}") + except (asyncio.TimeoutError, RuntimeError, ValueError) as e: + logger.warning(f"⚠️ Background pose detection failed: {e}")Also applies to: 166-167
plugins/roboflow/vision_agents/plugins/roboflow/roboflow_cloud_processor.py (2)
255-262: Type mismatch:DetectedObjectexpectsintcoordinates.The
DetectedObjectTypedDict definesx1,y1,x2,y2asint, butdetections.xyxy.astype(float)produces floats.for class_id, (x1, y1, x2, y2) in zip( - detections.class_id, detections.xyxy.astype(float) + detections.class_id, detections.xyxy.astype(int) )
276-277: Use specific exception handling instead of bareException.Per coding guidelines, catch specific exceptions from the inference SDK or network operations.
- except Exception as ex: - logger.warning(f"⚠️ Background detection failed: {ex}") + except (RuntimeError, ValueError, ConnectionError) as ex: + logger.warning(f"⚠️ Background detection failed: {ex}")
🧹 Nitpick comments (4)
plugins/moondream/vision_agents/plugins/moondream/detection/moondream_cloud_processor.py (1)
215-253: Useasyncio.get_running_loop()instead of deprecatedget_event_loop().The
asyncio.get_event_loop()at line 219 is soft-deprecated in Python 3.10+ when called from a running async context.- now = asyncio.get_event_loop().time() + now = asyncio.get_running_loop().time()plugins/roboflow/vision_agents/plugins/roboflow/roboflow_local_processor.py (1)
251-257: Useasyncio.get_running_loop()instead of deprecatedget_event_loop().- now = asyncio.get_event_loop().time() + now = asyncio.get_running_loop().time()plugins/ultralytics/vision_agents/plugins/ultralytics/yolo_pose_processor.py (1)
122-122: Useasyncio.get_running_loop()instead of deprecatedget_event_loop().- now = asyncio.get_event_loop().time() + now = asyncio.get_running_loop().time()plugins/roboflow/vision_agents/plugins/roboflow/roboflow_cloud_processor.py (1)
208-208: Useasyncio.get_running_loop()instead of deprecatedget_event_loop().- now = asyncio.get_event_loop().time() + now = asyncio.get_running_loop().time()
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (10)
plugins/moondream/tests/test_moondream.py(4 hunks)plugins/moondream/vision_agents/plugins/moondream/__init__.py(0 hunks)plugins/moondream/vision_agents/plugins/moondream/detection/moondream_cloud_processor.py(7 hunks)plugins/moondream/vision_agents/plugins/moondream/detection/moondream_local_processor.py(10 hunks)plugins/moondream/vision_agents/plugins/moondream/detection/moondream_video_track.py(0 hunks)plugins/roboflow/example/roboflow_example.py(1 hunks)plugins/roboflow/vision_agents/plugins/roboflow/roboflow_cloud_processor.py(8 hunks)plugins/roboflow/vision_agents/plugins/roboflow/roboflow_local_processor.py(7 hunks)plugins/ultralytics/vision_agents/plugins/ultralytics/__init__.py(1 hunks)plugins/ultralytics/vision_agents/plugins/ultralytics/yolo_pose_processor.py(5 hunks)
💤 Files with no reviewable changes (2)
- plugins/moondream/vision_agents/plugins/moondream/init.py
- plugins/moondream/vision_agents/plugins/moondream/detection/moondream_video_track.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (.cursor/rules/python.mdc)
**/*.py: Never adjust sys.path in Python code
Never writeexcept Exception as e- use specific exception handling
Avoid using getattr, hasattr, delattr and setattr; prefer normal attribute access in Python
Docstrings should follow the Google style guide for docstrings
Files:
plugins/moondream/vision_agents/plugins/moondream/detection/moondream_cloud_processor.pyplugins/ultralytics/vision_agents/plugins/ultralytics/__init__.pyplugins/roboflow/vision_agents/plugins/roboflow/roboflow_local_processor.pyplugins/moondream/tests/test_moondream.pyplugins/ultralytics/vision_agents/plugins/ultralytics/yolo_pose_processor.pyplugins/moondream/vision_agents/plugins/moondream/detection/moondream_local_processor.pyplugins/roboflow/example/roboflow_example.pyplugins/roboflow/vision_agents/plugins/roboflow/roboflow_cloud_processor.py
**/*test*.py
📄 CodeRabbit inference engine (.cursor/rules/python.mdc)
**/*test*.py: Never mock in tests; use pytest for testing
Mark integration tests with @pytest.mark.integration decorator
@pytest.mark.asyncio is not needed - it is automatic
Files:
plugins/moondream/tests/test_moondream.py
🧠 Learnings (1)
📚 Learning: 2025-11-24T17:04:43.030Z
Learnt from: CR
Repo: GetStream/Vision-Agents PR: 0
File: .cursor/rules/python.mdc:0-0
Timestamp: 2025-11-24T17:04:43.030Z
Learning: Applies to **/*.py : Never write `except Exception as e` - use specific exception handling
Applied to files:
plugins/roboflow/vision_agents/plugins/roboflow/roboflow_local_processor.pyplugins/ultralytics/vision_agents/plugins/ultralytics/yolo_pose_processor.pyplugins/moondream/vision_agents/plugins/moondream/detection/moondream_local_processor.pyplugins/roboflow/vision_agents/plugins/roboflow/roboflow_cloud_processor.py
🧬 Code graph analysis (6)
plugins/moondream/vision_agents/plugins/moondream/detection/moondream_cloud_processor.py (3)
agents-core/vision_agents/core/utils/video_track.py (1)
QueuedVideoTrack(15-90)agents-core/vision_agents/core/utils/video_forwarder.py (2)
VideoForwarder(25-185)add_frame_handler(57-85)plugins/roboflow/vision_agents/plugins/roboflow/roboflow_cloud_processor.py (2)
_run_detection_background(241-277)_run_inference(279-333)
plugins/roboflow/vision_agents/plugins/roboflow/roboflow_local_processor.py (6)
agents-core/vision_agents/core/utils/video_track.py (2)
QueuedVideoTrack(15-90)add_frame(42-54)agents-core/vision_agents/core/utils/video_forwarder.py (3)
remove_frame_handler(87-107)VideoForwarder(25-185)add_frame_handler(57-85)plugins/roboflow/vision_agents/plugins/roboflow/roboflow_cloud_processor.py (3)
_process_frame(202-239)_run_detection_background(241-277)_run_inference(279-333)plugins/moondream/vision_agents/plugins/moondream/detection/moondream_local_processor.py (2)
_run_detection_background(347-363)_run_inference(258-273)plugins/roboflow/vision_agents/plugins/roboflow/utils.py (1)
annotate_image(8-36)plugins/roboflow/vision_agents/plugins/roboflow/events.py (1)
DetectedObject(7-12)
plugins/moondream/tests/test_moondream.py (2)
agents-core/vision_agents/core/utils/video_track.py (1)
QueuedVideoTrack(15-90)plugins/moondream/vision_agents/plugins/moondream/detection/moondream_cloud_processor.py (1)
publish_video_track(153-155)
plugins/ultralytics/vision_agents/plugins/ultralytics/yolo_pose_processor.py (2)
agents-core/vision_agents/core/utils/video_forwarder.py (2)
VideoForwarder(25-185)add_frame_handler(57-85)agents-core/vision_agents/core/utils/video_track.py (2)
QueuedVideoTrack(15-90)add_frame(42-54)
plugins/moondream/vision_agents/plugins/moondream/detection/moondream_local_processor.py (3)
agents-core/vision_agents/core/utils/video_track.py (2)
QueuedVideoTrack(15-90)add_frame(42-54)agents-core/vision_agents/core/utils/video_forwarder.py (1)
add_frame_handler(57-85)plugins/moondream/vision_agents/plugins/moondream/moondream_utils.py (1)
annotate_detections(48-111)
plugins/roboflow/vision_agents/plugins/roboflow/roboflow_cloud_processor.py (3)
agents-core/vision_agents/core/utils/video_track.py (2)
QueuedVideoTrack(15-90)add_frame(42-54)plugins/roboflow/vision_agents/plugins/roboflow/utils.py (1)
annotate_image(8-36)plugins/roboflow/vision_agents/plugins/roboflow/events.py (2)
DetectedObject(7-12)DetectionCompletedEvent(16-31)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: unit / Test "not integration"
- GitHub Check: unit / Test "not integration"
🔇 Additional comments (8)
plugins/roboflow/example/roboflow_example.py (1)
22-26: LGTM!The parameter update from
fpstodetection_fps=15aligns with the new API for background detection. This value provides a good balance between detection responsiveness and CPU/GPU load for local inference.plugins/moondream/vision_agents/plugins/moondream/detection/moondream_cloud_processor.py (1)
60-116: Well-structured initialization with the new async detection pattern.The addition of
detection_fpsparameter and parallel detection state (_last_detection_time,_last_result_time,_cached_results) properly supports the background detection architecture. The default of 5.0 FPS is sensible for cloud API rate limits.plugins/ultralytics/vision_agents/plugins/ultralytics/__init__.py (1)
7-12: LGTM!Streamlined public API surface by removing
YOLOPoseVideoTrackexport. This aligns with the architectural shift toQueuedVideoTrackand keeps the plugin interface clean.plugins/moondream/tests/test_moondream.py (2)
48-57: LGTM!Test correctly updated to use
QueuedVideoTrackwith matching parameters (fps=30, max_queue_size=5) that align with the processor's internal video track configuration.
234-237: Test assertion correctly updated to check_cached_results.The assertion now validates the new caching mechanism used by the background detection pattern.
plugins/moondream/vision_agents/plugins/moondream/detection/moondream_local_processor.py (1)
67-129: Solid initialization with async detection support.The
detection_fpsparameter and parallel detection state are well-structured. Default of 10.0 FPS is appropriate for local inference which is typically faster than cloud APIs.plugins/roboflow/vision_agents/plugins/roboflow/roboflow_cloud_processor.py (2)
1-28: LGTM!The
asyncioimport is correctly placed at module level. Clean import organization.
79-150: Well-structured async detection initialization.The parallel detection state and
detection_fpsconfiguration are properly set up. Default of 5.0 FPS is appropriate for cloud APIs with rate limits.