Format code with ruff #200
Conversation
Warning: Rate limit exceeded
@dangusev has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 5 minutes and 13 seconds before requesting another review.
⌛ How to resolve this issue?
After the wait time has elapsed, a review can be triggered using the review command. We recommend that you space out your commits to avoid hitting the rate limit.
🚦 How do rate limits work?
CodeRabbit enforces hourly rate limits for each developer per organization. Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout. Please see our FAQ for further information.
📒 Files selected for processing (2)
Walkthrough
Adds Ruff formatting checks and a formatter CI step, introduces LLMTurn and several new event fields, exports EventManager, tracks scheduled event handler tasks, changes VideoForwarder to stop when the last handler is removed, converts one MCPToolConverter staticmethod to an instance method, and adds Inworld TTS error handling and a dev check command.
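The dev check command mentioned here would, in spirit, run something like the following. This is a hedged sketch using Ruff's standard CLI (`ruff check` and `ruff format --check`); the actual `dev.py` command in this PR may be structured differently.

```python
import subprocess
import sys

def check() -> int:
    """Run lint and formatting checks; a non-zero return means the check failed."""
    for cmd in (["ruff", "check", "."], ["ruff", "format", "--check", "."]):
        result = subprocess.run(cmd)
        if result.returncode != 0:
            return result.returncode
    return 0

if __name__ == "__main__":
    sys.exit(check())
```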
Changes

Sequence Diagram(s)

sequenceDiagram
    participant App as Application
    participant EventMgr as EventManager
    participant Handler as Handler Task
    Note right of App: schedule_handler({...})
    App->>EventMgr: schedule_handler(handler)
    EventMgr->>Handler: create async task
    EventMgr->>EventMgr: store task in _handler_tasks[id]
    Handler->>EventMgr: on completion -> remove from _handler_tasks
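The first diagram is the usual asyncio bookkeeping pattern: keep a strong reference to each scheduled task and drop it on completion. A minimal, self-contained sketch (the `_handler_tasks` name follows the diagram; everything else is assumed, not the actual EventManager code):

```python
import asyncio
from typing import Any, Awaitable, Callable, Dict

class EventManagerSketch:
    def __init__(self) -> None:
        # Strong references keep scheduled handler tasks from being garbage collected.
        self._handler_tasks: Dict[int, asyncio.Task] = {}

    def schedule_handler(self, handler: Callable[[], Awaitable[Any]]) -> asyncio.Task:
        task = asyncio.create_task(handler())
        self._handler_tasks[id(task)] = task
        # On completion (success or failure) the task removes itself from the registry.
        task.add_done_callback(lambda t: self._handler_tasks.pop(id(t), None))
        return task

async def main() -> None:
    mgr = EventManagerSketch()

    async def handler() -> None:
        await asyncio.sleep(0.01)

    task = mgr.schedule_handler(handler)
    await task
    await asyncio.sleep(0)  # let the done callback run
    assert not mgr._handler_tasks  # cleared by the done callback

asyncio.run(main())
```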
sequenceDiagram
    participant App as Application
    participant VF as VideoForwarder
    participant H as Handlers
    App->>VF: add_frame_handler(h1)
    VF->>H: register h1
    App->>VF: remove_frame_handler(h1)
    VF->>H: unregister h1
    Note over VF: if no handlers remain
    VF->>VF: await self.stop()
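The second diagram's stop-on-last-handler behavior, sketched with a hypothetical class (only the shutdown condition mirrors the diagram; the real VideoForwarder has a frame loop and more state):

```python
import asyncio
from typing import Callable, List

class VideoForwarderSketch:
    def __init__(self) -> None:
        self._handlers: List[Callable] = []
        self.running = True

    def add_frame_handler(self, handler: Callable) -> None:
        self._handlers.append(handler)

    async def remove_frame_handler(self, handler: Callable) -> None:
        self._handlers.remove(handler)
        # New behavior: once no consumers remain, shut the forwarder down.
        if not self._handlers:
            await self.stop()

    async def stop(self) -> None:
        self.running = False

async def main() -> None:
    fwd = VideoForwarderSketch()
    handler = lambda frame: None
    fwd.add_frame_handler(handler)
    await fwd.remove_frame_handler(handler)
    assert fwd.running is False

asyncio.run(main())
```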
Estimated code review effort
🎯 4 (Complex) | ⏱️ ~60 minutes
Areas requiring extra attention:
Possibly related PRs
Suggested reviewers
Poem
Pre-merge checks and finishing touches
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
agents-core/vision_agents/core/llm/wrap_method.py (1)
54-60: Fix the example code or add clarifying comments.
Both issues in the example are confirmed:
- Line 55: `mc.native_method(mc, ...)` explicitly passes `self` a second time. When calling an instance method, Python automatically passes the instance as the first argument; the extra `mc` becomes the first positional argument to the wrapper, causing a parameter mismatch in `_native_method` (see the sketch below).
- Line 59: `max_tokens="42"` passes a string when the type hint on line 15 expects `int`.
Since this file has `# mypy: ignore-errors` and appears to be example/documentation code, either:
- Correct the example to be valid: remove the explicit `mc` argument and use `max_tokens=42` (int)
- Or add a comment explaining this demonstrates buggy usage intentionally
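To make the first point concrete, a standalone illustration with invented names (not the wrap_method.py example itself):

```python
class Client:
    def native_method(self, prompt: str, max_tokens: int = 100) -> str:
        # Instance method: Python binds `self` automatically on attribute access.
        return prompt[:max_tokens]

c = Client()
c.native_method("hello world", max_tokens=5)  # OK: returns "hello"

# Passing the instance again shifts every positional argument by one:
# `c` binds to `prompt` and "hello world" binds to `max_tokens`, so the
# call fails because `prompt` is now a Client instance, not a string.
try:
    c.native_method(c, "hello world")
except TypeError as exc:
    print(exc)  # 'Client' object is not subscriptable
```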
🧹 Nitpick comments (5)
agents-core/vision_agents/core/events/manager.py (1)
553-557: Improved handler logging; optional loop hoist
Including `module_name` in the debug log for each handler call is useful for tracing where subscriptions originate and looks correct. If you ever profile this hot path, consider hoisting `loop = asyncio.get_running_loop()` outside the handler loop to avoid repeated lookups, but it's purely an optional micro-optimization.
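The suggested hoist in isolation (illustrative code only, not the actual manager.py dispatch loop):

```python
import asyncio
from typing import Callable, Iterable

def dispatch(handlers: Iterable[Callable], event: object) -> None:
    # Hoisted: resolve the running loop once instead of once per handler.
    loop = asyncio.get_running_loop()
    for handler in handlers:
        loop.call_soon(handler, event)

async def main() -> None:
    received: list = []
    dispatch([received.append, received.append], "evt")
    await asyncio.sleep(0)  # give the call_soon callbacks a chance to run
    assert received == ["evt", "evt"]

asyncio.run(main())
```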
agents-core/vision_agents/core/agents/agent_session.py (1)
29-32: Constructor now matches documented AgentSessionContextManager usage
Adding `__init__(self, agent: Agent, connection_cm=None)` cleanly wires the documented `agent` and optional `connection_cm` into the context manager and keeps existing enter/exit behavior intact. If you want to tighten typing later, you could optionally annotate `connection_cm` as an appropriate optional context manager type, but the current form is fine and backward compatible.
agents-core/vision_agents/core/stt/stt.py (1)
67-75: Event emission helpers remain consistent; consider exposing eager_end_of_turn on transcripts
The refactored calls to `self.events.send(...)` remain functionally equivalent and correctly populate `session_id`, `plugin_name`, `participant`, and error metadata. One optional enhancement: `STTTranscriptEvent` now has an `eager_end_of_turn` field, but `_emit_transcript_event` doesn't expose or forward it yet, whereas `_emit_turn_ended_event` does. If you intend consumers to correlate "eager" turn endings on both events, consider adding an optional parameter and wiring it through.
Also applies to: 82-89, 105-113, 125-135
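If you do wire it through, the shape could look roughly like this (hypothetical, trimmed-down signatures; the real helpers and event class take more arguments):

```python
from dataclasses import dataclass

@dataclass
class STTTranscriptEventSketch:
    text: str
    eager_end_of_turn: bool = False  # default keeps existing callers working

def _emit_transcript_event(text: str, eager_end_of_turn: bool = False) -> STTTranscriptEventSketch:
    # Forward the flag so consumers can correlate "eager" turn endings
    # across transcript and turn-ended events.
    return STTTranscriptEventSketch(text=text, eager_end_of_turn=eager_end_of_turn)

event = _emit_transcript_event("hello", eager_end_of_turn=True)
assert event.eager_end_of_turn
```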
agents-core/vision_agents/core/events/__init__.py (1)
10-10: EventManager re-export is reasonable public API expansion
Exposing `EventManager` via `vision_agents.core.events` simplifies imports and matches how other event types are exported. Just be aware this cements `EventManager` as public surface, so future breaking changes there should follow your normal deprecation process.
Also applies to: 122-131
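For reference, the import this enables (assuming the package installs as `vision_agents`, as the module paths in this review suggest):

```python
# Deep module path, still valid:
from vision_agents.core.events.manager import EventManager

# Package-level re-export added in this PR:
from vision_agents.core.events import EventManager
```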
agents-core/vision_agents/core/stt/events.py (1)
15-25: STTTranscriptEvent eager_end_of_turn field is a sensible addition
Adding `eager_end_of_turn: bool = False` gives STT transcripts parity with `TurnEndedEvent` for turn-detection semantics, while keeping a backwards-compatible default. The convenience properties still delegate cleanly to `TranscriptResponse`. Looks good.
Also applies to: 26-49
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (107)
- .git-blame-ignore-revs (1 hunks)
- .github/workflows/run_tests.yml (1 hunks)
- agents-core/vision_agents/_generate_sfu_events.py (10 hunks)
- agents-core/vision_agents/core/__init__.py (1 hunks)
- agents-core/vision_agents/core/agents/agent_launcher.py (2 hunks)
- agents-core/vision_agents/core/agents/agent_session.py (1 hunks)
- agents-core/vision_agents/core/agents/agent_types.py (1 hunks)
- agents-core/vision_agents/core/cli.py (0 hunks)
- agents-core/vision_agents/core/cli/cli_runner.py (0 hunks)
- agents-core/vision_agents/core/edge/sfu_events.py (5 hunks)
- agents-core/vision_agents/core/events/__init__.py (1 hunks)
- agents-core/vision_agents/core/events/manager.py (2 hunks)
- agents-core/vision_agents/core/llm/function_registry.py (4 hunks)
- agents-core/vision_agents/core/llm/llm_test.py (0 hunks)
- agents-core/vision_agents/core/llm/wrap_method.py (4 hunks)
- agents-core/vision_agents/core/mcp/__init__.py (1 hunks)
- agents-core/vision_agents/core/mcp/mcp_server_local.py (3 hunks)
- agents-core/vision_agents/core/mcp/mcp_server_remote.py (3 hunks)
- agents-core/vision_agents/core/mcp/tool_converter.py (2 hunks)
- agents-core/vision_agents/core/profiling/base.py (2 hunks)
- agents-core/vision_agents/core/stt/events.py (5 hunks)
- agents-core/vision_agents/core/stt/stt.py (5 hunks)
- agents-core/vision_agents/core/tts/tts.py (2 hunks)
- agents-core/vision_agents/core/turn_detection/events.py (2 hunks)
- agents-core/vision_agents/core/turn_detection/turn_detection.py (2 hunks)
- agents-core/vision_agents/core/utils/__init__.py (0 hunks)
- agents-core/vision_agents/core/utils/audio_forwarder.py (0 hunks)
- agents-core/vision_agents/core/utils/audio_queue.py (8 hunks)
- agents-core/vision_agents/core/utils/examples.py (2 hunks)
- agents-core/vision_agents/core/utils/video_forwarder.py (7 hunks)
- agents-core/vision_agents/core/utils/video_queue.py (2 hunks)
- agents-core/vision_agents/core/utils/video_track.py (4 hunks)
- agents-core/vision_agents/core/vad/events.py (6 hunks)
- agents-core/vision_agents/core/vad/vad.py (4 hunks)
- conftest.py (3 hunks)
- dev.py (1 hunks)
- examples/01_simple_agent_example/simple_agent_example.py (1 hunks)
- examples/other_examples/07_function_calling_example/claude_example.py (2 hunks)
- examples/other_examples/07_function_calling_example/gemini_example.py (2 hunks)
- examples/other_examples/07_function_calling_example/openai_example.py (2 hunks)
- examples/other_examples/openai_realtime_webrtc/openai_realtime_example.py (0 hunks)
- examples/other_examples/plugins_examples/audio_moderation/main.py (2 hunks)
- examples/other_examples/plugins_examples/mcp/main.py (3 hunks)
- examples/other_examples/plugins_examples/stt_deepgram_transcription/main.py (2 hunks)
- examples/other_examples/plugins_examples/stt_moonshine_transcription/main.py (2 hunks)
- examples/other_examples/plugins_examples/tts_cartesia/main.py (1 hunks)
- examples/other_examples/plugins_examples/tts_elevenlabs/main.py (1 hunks)
- examples/other_examples/plugins_examples/tts_kokoro/main.py (1 hunks)
- examples/other_examples/plugins_examples/vad_silero/main.py (2 hunks)
- examples/other_examples/plugins_examples/video_moderation/main.py (3 hunks)
- examples/other_examples/plugins_examples/wizper_stt_translate/main.py (3 hunks)
- plugins/anthropic/vision_agents/plugins/anthropic/anthropic_llm.py (15 hunks)
- plugins/anthropic/vision_agents/plugins/anthropic/events.py (1 hunks)
- plugins/aws/example/aws_llm_function_calling_example.py (3 hunks)
- plugins/aws/example/aws_qwen_example.py (2 hunks)
- plugins/aws/example/aws_realtime_function_calling_example.py (4 hunks)
- plugins/aws/example/aws_realtime_nova_example.py (1 hunks)
- plugins/aws/vision_agents/plugins/aws/aws_realtime.py (0 hunks)
- plugins/aws/vision_agents/plugins/aws/events.py (1 hunks)
- plugins/deepgram/tests/test_deepgram_stt.py (1 hunks)
- plugins/deepgram/vision_agents/plugins/deepgram/__init__.py (0 hunks)
- plugins/deepgram/vision_agents/plugins/deepgram/deepgram_stt.py (8 hunks)
- plugins/elevenlabs/example/elevenlabs_example.py (1 hunks)
- plugins/elevenlabs/tests/test_elevenlabs_stt.py (3 hunks)
- plugins/elevenlabs/vision_agents/plugins/elevenlabs/__init__.py (0 hunks)
- plugins/elevenlabs/vision_agents/plugins/elevenlabs/stt.py (7 hunks)
- plugins/fast_whisper/example/fast_whisper_example.py (2 hunks)
- plugins/fast_whisper/tests/test_fast_whisper_stt.py (1 hunks)
- plugins/fast_whisper/vision_agents/plugins/fast_whisper/__init__.py (0 hunks)
- plugins/fish/example/fish_example.py (2 hunks)
- plugins/fish/tests/test_fish_stt.py (2 hunks)
- plugins/fish/vision_agents/__init__.py (0 hunks)
- plugins/fish/vision_agents/plugins/fish/__init__.py (0 hunks)
- plugins/fish/vision_agents/plugins/fish/stt.py (1 hunks)
- plugins/gemini/tests/test_realtime_function_calling.py (10 hunks)
- plugins/gemini/vision_agents/plugins/gemini/events.py (1 hunks)
- plugins/getstream/tests/test_getstream_plugin.py (1 hunks)
- plugins/getstream/tests/test_message_chunking.py (10 hunks)
- plugins/getstream/tests/test_stream_conversation.py (19 hunks)
- plugins/getstream/vision_agents/plugins/getstream/__init__.py (0 hunks)
- plugins/heygen/example/avatar_example.py (3 hunks)
- plugins/heygen/example/avatar_realtime_example.py (2 hunks)
- plugins/heygen/tests/test_heygen_plugin.py (5 hunks)
- plugins/heygen/vision_agents/plugins/heygen/__init__.py (0 hunks)
- plugins/heygen/vision_agents/plugins/heygen/heygen_rtc_manager.py (9 hunks)
- plugins/heygen/vision_agents/plugins/heygen/heygen_session.py (8 hunks)
- plugins/heygen/vision_agents/plugins/heygen/heygen_types.py (1 hunks)
- plugins/heygen/vision_agents/plugins/heygen/heygen_video_track.py (3 hunks)
- plugins/inworld/example/inworld_tts_example.py (2 hunks)
- plugins/inworld/tests/test_tts.py (1 hunks)
- plugins/inworld/vision_agents/plugins/inworld/__init__.py (0 hunks)
- plugins/inworld/vision_agents/plugins/inworld/tts.py (6 hunks)
- plugins/moondream/example/moondream_vlm_example.py (1 hunks)
- plugins/moondream/tests/test_moondream.py (19 hunks)
- plugins/moondream/tests/test_moondream_local.py (5 hunks)
- plugins/moondream/tests/test_moondream_local_vlm.py (4 hunks)
- plugins/moondream/tests/test_moondream_vlm.py (4 hunks)
- plugins/moondream/vision_agents/plugins/moondream/__init__.py (1 hunks)
- plugins/moondream/vision_agents/plugins/moondream/detection/moondream_cloud_processor.py (9 hunks)
- plugins/moondream/vision_agents/plugins/moondream/detection/moondream_local_processor.py (7 hunks)
- plugins/moondream/vision_agents/plugins/moondream/detection/moondream_video_track.py (2 hunks)
- plugins/moondream/vision_agents/plugins/moondream/moondream_utils.py (4 hunks)
- plugins/moondream/vision_agents/plugins/moondream/vlm/moondream_cloud_vlm.py (8 hunks)
- plugins/moondream/vision_agents/plugins/moondream/vlm/moondream_local_vlm.py (12 hunks)
- plugins/openai/tests/test_openai_llm.py (1 hunks)
- plugins/openai/tests/test_openai_realtime.py (1 hunks)
- plugins/openai/vision_agents/plugins/openai/events.py (1 hunks)
⛔ Files not processed due to max files limit (30)
- plugins/openai/vision_agents/plugins/openai/rtc_manager.py
- plugins/openrouter/example/openrouter_example.py
- plugins/openrouter/vision_agents/plugins/openrouter/__init__.py
- plugins/openrouter/vision_agents/plugins/openrouter/openrouter_llm.py
- plugins/sample_plugin/example/my_example.py
- plugins/smart_turn/vision_agents/plugins/smart_turn/__init__.py
- plugins/smart_turn/vision_agents/plugins/smart_turn/smart_turn_detection.py
- plugins/ultralytics/tests/test_ultralytics.py
- plugins/ultralytics/vision_agents/plugins/ultralytics/yolo_pose_processor.py
- plugins/vogent/example/basic_usage.py
- plugins/vogent/tests/test_vogent_td.py
- plugins/vogent/vision_agents/plugins/vogent/__init__.py
- plugins/vogent/vision_agents/plugins/vogent/vogent_turn_detection.py
- plugins/wizper/tests/test_wizper_stt.py
- plugins/wizper/vision_agents/plugins/wizper/stt.py
- plugins/xai/vision_agents/plugins/xai/events.py
- plugins/xai/vision_agents/plugins/xai/llm.py
- tests/test_agent.py
- tests/test_agent_tracks.py
- tests/test_audio_forwarder.py
- tests/test_audio_queue.py
- tests/test_conversation.py
- tests/test_events.py
- tests/test_function_calling.py
- tests/test_mcp_integration.py
- tests/test_openai_function_calling_integration.py
- tests/test_pyproject_sources.py
- tests/test_queue_and_video_forwarder.py
- tests/test_queued_video_track.py
- tests/test_vad_base.py
💤 Files with no reviewable changes (15)
- agents-core/vision_agents/core/utils/audio_forwarder.py
- agents-core/vision_agents/core/utils/__init__.py
- plugins/fish/vision_agents/__init__.py
- plugins/deepgram/vision_agents/plugins/deepgram/__init__.py
- plugins/getstream/vision_agents/plugins/getstream/__init__.py
- plugins/inworld/vision_agents/plugins/inworld/__init__.py
- plugins/fish/vision_agents/plugins/fish/__init__.py
- agents-core/vision_agents/core/cli.py
- plugins/fast_whisper/vision_agents/plugins/fast_whisper/__init__.py
- plugins/heygen/vision_agents/plugins/heygen/__init__.py
- plugins/elevenlabs/vision_agents/plugins/elevenlabs/__init__.py
- plugins/aws/vision_agents/plugins/aws/aws_realtime.py
- agents-core/vision_agents/core/cli/cli_runner.py
- agents-core/vision_agents/core/llm/llm_test.py
- examples/other_examples/openai_realtime_webrtc/openai_realtime_example.py
🧰 Additional context used
🧠 Learnings (3)
📚 Learning: 2025-11-05T10:57:48.661Z
Learnt from: yarikdevcom
Repo: GetStream/Vision-Agents PR: 132
File: agents-core/vision_agents/core/profiling/base.py:19-23
Timestamp: 2025-11-05T10:57:48.661Z
Learning: In the Profiler class in agents-core/vision_agents/core/profiling/base.py, synchronous file I/O is acceptable for writing the profile HTML output in the on_finish method since it executes at agent shutdown and blocking is not a concern.
Applied to files:
agents-core/vision_agents/core/profiling/base.py
📚 Learning: 2025-10-13T22:00:34.300Z
Learnt from: dangusev
Repo: GetStream/Vision-Agents PR: 98
File: plugins/deepgram/vision_agents/plugins/deepgram/stt.py:135-150
Timestamp: 2025-10-13T22:00:34.300Z
Learning: In the Deepgram STT plugin (plugins/deepgram/vision_agents/plugins/deepgram/stt.py), the `started()` method is designed to wait for the connection attempt to complete, not to guarantee a successful connection. It's acceptable for the connection attempt to fail, and downstream code handles the case where `self.dg_connection` is `None`. The `_connected_once` event is set in the `finally` block intentionally to signal attempt completion.
Applied to files:
plugins/deepgram/vision_agents/plugins/deepgram/deepgram_stt.py
📚 Learning: 2025-11-13T21:25:18.084Z
Learnt from: Nash0x7E2
Repo: GetStream/Vision-Agents PR: 179
File: plugins/inworld/vision_agents/plugins/inworld/tts.py:73-76
Timestamp: 2025-11-13T21:25:18.084Z
Learning: For Inworld AI TTS API authentication in plugins/inworld/vision_agents/plugins/inworld/tts.py, the Authorization header format `f"Basic {self.api_key}"` is correct. The API key obtained from Inworld's Dashboard is already in the proper format and does not require manual Base64 encoding of key:secret.
Applied to files:
plugins/inworld/vision_agents/plugins/inworld/tts.py
🧬 Code graph analysis (63)
plugins/moondream/tests/test_moondream.py (1)
plugins/moondream/vision_agents/plugins/moondream/detection/moondream_cloud_processor.py (1)
CloudDetectionProcessor(35-251)
examples/other_examples/plugins_examples/tts_kokoro/main.py (7)
plugins/anthropic/vision_agents/plugins/anthropic/anthropic_llm.py (1)
simple_response(74-94)plugins/xai/vision_agents/plugins/xai/llm.py (1)
simple_response(70-95)agents-core/vision_agents/core/agents/agents.py (2)
simple_response(446-457)subscribe(468-480)agents-core/vision_agents/core/events/manager.py (1)
subscribe(301-375)examples/other_examples/plugins_examples/tts_cartesia/main.py (1)
handle_tts_audio(55-58)examples/other_examples/plugins_examples/tts_elevenlabs/main.py (1)
handle_tts_audio(55-58)agents-core/vision_agents/core/tts/events.py (1)
TTSAudioEvent(11-19)
plugins/aws/example/aws_realtime_nova_example.py (4)
plugins/aws/tests/test_aws.py (1)
llm(35-39)plugins/aws/vision_agents/plugins/aws/aws_realtime.py (1)
simple_response(270-285)agents-core/vision_agents/core/agents/agents.py (1)
simple_response(446-457)plugins/aws/vision_agents/plugins/aws/aws_llm.py (1)
simple_response(93-113)
examples/01_simple_agent_example/simple_agent_example.py (2)
agents-core/vision_agents/core/llm/llm.py (2)
LLM(49-418)register_function(212-225)agents-core/vision_agents/core/edge/types.py (1)
User(15-18)
plugins/aws/example/aws_realtime_function_calling_example.py (4)
agents-core/vision_agents/core/llm/llm.py (1)
register_function(212-225)plugins/aws/example/aws_llm_function_calling_example.py (2)
get_weather(40-45)calculate(50-59)plugins/aws/vision_agents/plugins/aws/aws_realtime.py (1)
simple_response(270-285)plugins/aws/vision_agents/plugins/aws/aws_llm.py (1)
simple_response(93-113)
examples/other_examples/plugins_examples/stt_moonshine_transcription/main.py (4)
plugins/anthropic/vision_agents/plugins/anthropic/anthropic_llm.py (1)
simple_response(74-94)plugins/aws/vision_agents/plugins/aws/aws_realtime.py (1)
simple_response(270-285)plugins/xai/vision_agents/plugins/xai/llm.py (1)
simple_response(70-95)agents-core/vision_agents/core/agents/agents.py (1)
simple_response(446-457)
examples/other_examples/plugins_examples/mcp/main.py (3)
agents-core/vision_agents/core/mcp/mcp_base.py (1)
MCPBaseServer(10-189)agents-core/vision_agents/core/agents/agents.py (2)
say(844-875)finish(588-621)agents-core/vision_agents/core/cli/cli_runner.py (1)
cli(25-140)
agents-core/vision_agents/core/utils/video_track.py (1)
agents-core/vision_agents/core/utils/video_queue.py (1)
VideoLatestNQueue(7-30)
plugins/moondream/tests/test_moondream_vlm.py (2)
plugins/moondream/tests/test_moondream_local_vlm.py (1)
golf_frame(30-32)plugins/moondream/vision_agents/plugins/moondream/vlm/moondream_cloud_vlm.py (2)
CloudVLM(27-253)simple_response(201-225)
plugins/fish/example/fish_example.py (2)
plugins/aws/vision_agents/plugins/aws/aws_realtime.py (1)
simple_response(270-285)agents-core/vision_agents/core/agents/agents.py (1)
simple_response(446-457)
plugins/openai/vision_agents/plugins/openai/events.py (1)
agents-core/vision_agents/core/events/base.py (1)
PluginBaseEvent(52-54)
plugins/elevenlabs/tests/test_elevenlabs_stt.py (2)
plugins/elevenlabs/vision_agents/plugins/elevenlabs/stt.py (1)
process_audio(87-121)conftest.py (8)
mia_audio_16khz(142-181)participant(136-138)wait_for_result(100-113)get_full_transcript(115-121)mia_audio_48khz(185-224)STTSession(63-121)mia_audio_48khz_chunked(246-296)silence_2s_48khz(228-242)
plugins/moondream/tests/test_moondream_local.py (1)
plugins/moondream/vision_agents/plugins/moondream/detection/moondream_local_processor.py (1)
LocalDetectionProcessor(34-340)
examples/other_examples/plugins_examples/stt_deepgram_transcription/main.py (2)
agents-core/vision_agents/core/stt/events.py (2)
processing_time_ms(40-41)processing_time_ms(71-72)agents-core/vision_agents/core/agents/agents.py (1)
say(844-875)
examples/other_examples/07_function_calling_example/gemini_example.py (3)
agents-core/vision_agents/core/llm/llm.py (2)
LLM(49-418)register_function(212-225)examples/other_examples/07_function_calling_example/claude_example.py (2)
calculate_sum(30-32)main(12-52)examples/other_examples/07_function_calling_example/openai_example.py (2)
calculate_sum(30-32)main(12-52)
plugins/moondream/vision_agents/plugins/moondream/detection/moondream_video_track.py (1)
agents-core/vision_agents/core/utils/video_queue.py (1)
VideoLatestNQueue(7-30)
examples/other_examples/plugins_examples/audio_moderation/main.py (2)
agents-core/vision_agents/core/stt/events.py (4)
confidence(32-33)confidence(63-64)processing_time_ms(40-41)processing_time_ms(71-72)examples/other_examples/plugins_examples/video_moderation/main.py (1)
moderate(90-104)
agents-core/vision_agents/core/mcp/mcp_server_local.py (2)
agents-core/vision_agents/core/mcp/mcp_server_remote.py (3)
connect(44-98)_cleanup_connection(121-140)disconnect(100-119)agents-core/vision_agents/core/mcp/mcp_base.py (3)
_update_activity(41-43)_start_timeout_monitor(45-50)_stop_timeout_monitor(65-69)
plugins/elevenlabs/vision_agents/plugins/elevenlabs/stt.py (2)
agents-core/vision_agents/core/stt/stt.py (1)
_emit_transcript_event(53-75)plugins/gemini/vision_agents/plugins/gemini/gemini_realtime.py (1)
_should_reconnect(71-89)
examples/other_examples/plugins_examples/tts_cartesia/main.py (7)
plugins/anthropic/vision_agents/plugins/anthropic/anthropic_llm.py (1)
simple_response(74-94)plugins/xai/vision_agents/plugins/xai/llm.py (1)
simple_response(70-95)agents-core/vision_agents/core/agents/agents.py (2)
simple_response(446-457)subscribe(468-480)agents-core/vision_agents/core/events/manager.py (1)
subscribe(301-375)examples/other_examples/plugins_examples/tts_elevenlabs/main.py (1)
handle_tts_audio(55-58)examples/other_examples/plugins_examples/tts_kokoro/main.py (1)
handle_tts_audio(55-58)agents-core/vision_agents/core/tts/events.py (1)
TTSAudioEvent(11-19)
examples/other_examples/plugins_examples/tts_elevenlabs/main.py (3)
agents-core/vision_agents/core/agents/agents.py (1)
simple_response(446-457)examples/other_examples/plugins_examples/tts_cartesia/main.py (1)
handle_tts_audio(55-58)agents-core/vision_agents/core/tts/events.py (1)
TTSAudioEvent(11-19)
plugins/moondream/tests/test_moondream_local_vlm.py (4)
agents-core/vision_agents/core/agents/agent_launcher.py (1)
warmup(47-117)plugins/moondream/vision_agents/plugins/moondream/detection/moondream_local_processor.py (1)
warmup(132-134)plugins/moondream/vision_agents/plugins/moondream/vlm/moondream_local_vlm.py (3)
warmup(100-103)simple_response(327-351)LocalVLM(31-366)plugins/moondream/tests/test_moondream_vlm.py (1)
golf_frame(34-36)
plugins/heygen/tests/test_heygen_plugin.py (5)
plugins/heygen/vision_agents/plugins/heygen/heygen_session.py (1)
HeyGenSession(11-229)plugins/heygen/vision_agents/plugins/heygen/heygen_types.py (1)
VideoQuality(6-11)plugins/heygen/vision_agents/plugins/heygen/heygen_video_track.py (2)
HeyGenVideoTrack(14-187)stop(178-187)plugins/heygen/vision_agents/plugins/heygen/heygen_rtc_manager.py (2)
HeyGenRTCManager(19-271)is_connected(256-258)plugins/heygen/vision_agents/plugins/heygen/heygen_avatar_publisher.py (3)
AvatarPublisher(21-409)publish_video_track(358-372)state(374-386)
plugins/heygen/vision_agents/plugins/heygen/heygen_rtc_manager.py (2)
plugins/heygen/vision_agents/plugins/heygen/heygen_session.py (3)
HeyGenSession(11-229)start_session(88-134)close(219-229)plugins/heygen/vision_agents/plugins/heygen/heygen_avatar_publisher.py (1)
close(388-409)
plugins/fish/tests/test_fish_stt.py (1)
conftest.py (4)
STTSession(63-121)mia_audio_16khz(142-181)wait_for_result(100-113)mia_audio_48khz(185-224)
plugins/getstream/tests/test_message_chunking.py (1)
plugins/getstream/vision_agents/plugins/getstream/stream_conversation.py (2)
_smart_chunk(206-301)_split_large_block(303-332)
plugins/openai/tests/test_openai_realtime.py (3)
plugins/gemini/tests/test_gemini_realtime.py (1)
realtime(15-23)plugins/openai/vision_agents/plugins/openai/openai_realtime.py (1)
_set_instructions(470-474)agents-core/vision_agents/core/llm/llm.py (1)
_set_instructions(206-210)
plugins/getstream/tests/test_getstream_plugin.py (2)
agents-core/vision_agents/core/events/manager.py (2)
EventManager(56-561)wait(483-496)agents-core/vision_agents/core/edge/events.py (2)
TrackAddedEvent(18-24)TrackRemovedEvent(28-34)
plugins/deepgram/vision_agents/plugins/deepgram/deepgram_stt.py (1)
agents-core/vision_agents/core/stt/stt.py (1)
_emit_turn_ended_event(77-89)
plugins/gemini/vision_agents/plugins/gemini/events.py (1)
agents-core/vision_agents/core/events/base.py (1)
PluginBaseEvent(52-54)
plugins/aws/example/aws_llm_function_calling_example.py (4)
agents-core/vision_agents/core/llm/llm.py (1)
register_function(212-225)plugins/aws/example/aws_realtime_function_calling_example.py (1)
calculate(71-96)agents-core/vision_agents/core/agents/agents.py (1)
simple_response(446-457)plugins/aws/vision_agents/plugins/aws/aws_llm.py (1)
simple_response(93-113)
plugins/deepgram/tests/test_deepgram_stt.py (3)
conftest.py (5)
mia_audio_48khz(185-224)silence_2s_48khz(228-242)STTSession(63-121)participant(136-138)wait_for_result(100-113)agents-core/vision_agents/core/stt/stt.py (1)
process_audio(138-143)agents-core/vision_agents/core/edge/types.py (1)
Participant(22-24)
examples/other_examples/plugins_examples/video_moderation/main.py (2)
agents-core/vision_agents/core/stt/events.py (4)
confidence(32-33)confidence(63-64)processing_time_ms(40-41)processing_time_ms(71-72)examples/other_examples/plugins_examples/audio_moderation/main.py (1)
moderate(45-59)
plugins/anthropic/vision_agents/plugins/anthropic/anthropic_llm.py (5)
plugins/openai/tests/test_openai_llm.py (1)
llm(41-43)agents-core/vision_agents/core/llm/events.py (2)
LLMResponseChunkEvent(87-102)LLMResponseCompletedEvent(106-112)agents-core/vision_agents/core/llm/llm.py (3)
_dedup_and_execute(371-405)get_available_functions(227-229)LLMResponseEvent(38-42)agents-core/vision_agents/core/llm/llm_types.py (2)
NormalizedToolCallItem(107-111)ToolSchema(64-67)plugins/anthropic/vision_agents/plugins/anthropic/events.py (1)
ClaudeStreamEvent(7-11)
agents-core/vision_agents/core/llm/function_registry.py (1)
agents-core/vision_agents/core/llm/llm_types.py (1)
ToolSchema(64-67)
plugins/getstream/tests/test_stream_conversation.py (3)
plugins/getstream/tests/test_message_chunking.py (3)
mock_channel(224-250)conversation(16-28)conversation(253-260)agents-core/vision_agents/core/agents/conversation.py (2)
send_message(80-120)upsert_message(122-210)plugins/getstream/vision_agents/plugins/getstream/stream_conversation.py (1)
StreamConversation(17-353)
conftest.py (2)
agents-core/vision_agents/core/stt/events.py (3)
STTTranscriptEvent(17-49)STTErrorEvent(84-96)STTPartialTranscriptEvent(53-80)agents-core/vision_agents/core/edge/types.py (1)
Participant(22-24)
agents-core/vision_agents/core/stt/stt.py (4)
agents-core/vision_agents/core/events/manager.py (1)
send(437-481)agents-core/vision_agents/core/stt/events.py (3)
STTTranscriptEvent(17-49)STTPartialTranscriptEvent(53-80)STTErrorEvent(84-96)agents-core/vision_agents/core/edge/sfu_events.py (13)
participant(1496-1501)participant(1504-1507)participant(1545-1550)participant(1553-1556)participant(1625-1630)participant(1633-1636)participant(2100-2105)participant(2108-2111)participant(2156-2161)participant(2164-2167)Participant(229-270)error(935-940)error(1906-1910)agents-core/vision_agents/core/turn_detection/events.py (1)
TurnEndedEvent(29-46)
plugins/heygen/vision_agents/plugins/heygen/heygen_video_track.py (4)
agents-core/vision_agents/core/utils/video_queue.py (2)
VideoLatestNQueue(7-30)put_latest_nowait(24-30)agents-core/vision_agents/core/utils/video_track.py (2)
recv(48-74)stop(76-77)plugins/moondream/vision_agents/plugins/moondream/detection/moondream_video_track.py (2)
recv(48-77)stop(79-81)agents-core/vision_agents/core/utils/video_forwarder.py (1)
stop(118-128)
examples/other_examples/plugins_examples/vad_silero/main.py (2)
plugins/anthropic/vision_agents/plugins/anthropic/anthropic_llm.py (1)
simple_response(74-94)agents-core/vision_agents/core/agents/agents.py (1)
simple_response(446-457)
agents-core/vision_agents/core/turn_detection/turn_detection.py (1)
agents-core/vision_agents/core/turn_detection/events.py (1)
TurnStartedEvent(11-25)
agents-core/vision_agents/core/vad/vad.py (2)
agents-core/vision_agents/core/edge/sfu_events.py (1)
Participant(229-270)agents-core/vision_agents/core/edge/types.py (1)
Participant(22-24)
agents-core/vision_agents/core/agents/agent_launcher.py (9)
plugins/moondream/example/moondream_vlm_example.py (1)
create_agent(16-28)examples/other_examples/openai_realtime_webrtc/openai_realtime_example.py (1)
create_agent(23-37)examples/other_examples/gemini_live_realtime/gemini_live_example.py (1)
create_agent(19-29)agents-core/vision_agents/core/stt/stt.py (1)
warmup(43-51)agents-core/vision_agents/core/tts/tts.py (1)
warmup(73-81)agents-core/vision_agents/core/turn_detection/turn_detection.py (1)
warmup(33-41)plugins/moondream/vision_agents/plugins/moondream/detection/moondream_local_processor.py (1)
warmup(132-134)plugins/moondream/vision_agents/plugins/moondream/vlm/moondream_local_vlm.py (1)
warmup(100-103)agents-core/vision_agents/core/llm/llm.py (1)
warmup(65-73)
plugins/moondream/example/moondream_vlm_example.py (1)
agents-core/vision_agents/core/edge/types.py (1)
User(15-18)
plugins/gemini/tests/test_realtime_function_calling.py (4)
agents-core/vision_agents/core/llm/events.py (2)
RealtimeResponseEvent(46-54)RealtimeAudioOutputEvent(37-42)plugins/gemini/vision_agents/plugins/gemini/gemini_realtime.py (4)
_convert_tools_to_provider_format(448-470)connect(265-286)simple_response(158-179)get_config(311-336)agents-core/vision_agents/core/events/manager.py (1)
subscribe(301-375)agents-core/vision_agents/core/llm/llm.py (1)
register_function(212-225)
plugins/fast_whisper/example/fast_whisper_example.py (1)
plugins/moondream/vision_agents/plugins/moondream/detection/moondream_local_processor.py (1)
device(128-130)
agents-core/vision_agents/core/mcp/tool_converter.py (2)
agents-core/vision_agents/core/llm/llm_types.py (1)
ToolSchema(64-67)examples/other_examples/plugins_examples/mcp/main.py (1)
call_tool(54-59)
plugins/moondream/vision_agents/plugins/moondream/detection/moondream_cloud_processor.py (5)
plugins/moondream/vision_agents/plugins/moondream/moondream_utils.py (2)
annotate_detections(48-111)parse_detection_bbox(14-30)plugins/moondream/vision_agents/plugins/moondream/detection/moondream_video_track.py (1)
MoondreamVideoTrack(16-81)agents-core/vision_agents/core/processors/base_processor.py (3)
VideoProcessorMixin(66-73)VideoPublisherMixin(84-86)publish_video_track(85-86)agents-core/vision_agents/core/utils/video_forwarder.py (2)
VideoForwarder(26-167)add_frame_handler(58-86)plugins/moondream/vision_agents/plugins/moondream/detection/moondream_local_processor.py (3)
_process_and_add_frame(305-330)publish_video_track(251-253)_run_detection_sync(272-303)
agents-core/vision_agents/core/stt/events.py (1)
agents-core/vision_agents/core/events/base.py (1)
PluginBaseEvent(52-54)
examples/other_examples/07_function_calling_example/claude_example.py (1)
plugins/anthropic/vision_agents/plugins/anthropic/anthropic_llm.py (1)
create_message(96-342)
agents-core/vision_agents/core/mcp/mcp_server_remote.py (2)
agents-core/vision_agents/core/mcp/mcp_server_local.py (3)
connect(47-89)_cleanup_connection(112-131)disconnect(91-110)agents-core/vision_agents/core/mcp/mcp_base.py (3)
_update_activity(41-43)_start_timeout_monitor(45-50)_stop_timeout_monitor(65-69)
agents-core/vision_agents/core/utils/video_forwarder.py (3)
agents-core/vision_agents/core/utils/video_queue.py (1)
VideoLatestNQueue(7-30)agents-core/vision_agents/core/utils/video_track.py (1)
recv(48-74)conftest.py (2)
recv(326-349)recv(372-377)
examples/other_examples/07_function_calling_example/openai_example.py (2)
agents-core/vision_agents/core/llm/llm.py (2)
LLM(49-418)register_function(212-225)plugins/xai/vision_agents/plugins/xai/llm.py (1)
create_response(97-155)
plugins/aws/example/aws_qwen_example.py (3)
plugins/aws/vision_agents/plugins/aws/aws_realtime.py (1)
simple_response(270-285)agents-core/vision_agents/core/agents/agents.py (1)
simple_response(446-457)plugins/aws/vision_agents/plugins/aws/aws_llm.py (1)
simple_response(93-113)
plugins/heygen/vision_agents/plugins/heygen/heygen_session.py (2)
plugins/heygen/vision_agents/plugins/heygen/heygen_rtc_manager.py (1)
close(260-271)plugins/heygen/vision_agents/plugins/heygen/heygen_avatar_publisher.py (1)
close(388-409)
agents-core/vision_agents/_generate_sfu_events.py (1)
agents-core/vision_agents/core/edge/sfu_events.py (1)
name(2197-2201)
plugins/heygen/example/avatar_realtime_example.py (1)
plugins/openai/vision_agents/plugins/openai/openai_realtime.py (1)
Realtime(48-561)
plugins/moondream/vision_agents/plugins/moondream/vlm/moondream_local_vlm.py (5)
agents-core/vision_agents/core/agents/agent_types.py (1)
AgentOptions(15-25)plugins/moondream/vision_agents/plugins/moondream/moondream_utils.py (1)
handle_device(7-11)agents-core/vision_agents/core/utils/video_queue.py (1)
VideoLatestNQueue(7-30)agents-core/vision_agents/core/utils/video_forwarder.py (1)
VideoForwarder(26-167)agents-core/vision_agents/core/llm/llm.py (1)
LLMResponseEvent(38-42)
plugins/moondream/vision_agents/plugins/moondream/detection/moondream_local_processor.py (5)
plugins/moondream/vision_agents/plugins/moondream/moondream_utils.py (3)
parse_detection_bbox(14-30)annotate_detections(48-111)handle_device(7-11)plugins/moondream/vision_agents/plugins/moondream/detection/moondream_video_track.py (1)
MoondreamVideoTrack(16-81)agents-core/vision_agents/core/processors/base_processor.py (3)
AudioVideoProcessor(117-146)VideoProcessorMixin(66-73)VideoPublisherMixin(84-86)agents-core/vision_agents/core/agents/agent_types.py (1)
AgentOptions(15-25)plugins/moondream/vision_agents/plugins/moondream/detection/moondream_cloud_processor.py (1)
_process_and_add_frame(215-244)
plugins/fast_whisper/tests/test_fast_whisper_stt.py (2)
plugins/fast_whisper/vision_agents/plugins/fast_whisper/stt.py (1)
process_audio(88-140)agents-core/vision_agents/core/tts/testing.py (1)
errors(69-70)
plugins/inworld/example/inworld_tts_example.py (2)
plugins/inworld/tests/test_tts.py (1)
tts(14-15)plugins/inworld/vision_agents/plugins/inworld/tts.py (1)
TTS(19-172)
plugins/moondream/vision_agents/plugins/moondream/vlm/moondream_cloud_vlm.py (4)
agents-core/vision_agents/core/utils/video_queue.py (1)
VideoLatestNQueue(7-30)agents-core/vision_agents/core/utils/video_forwarder.py (1)
VideoForwarder(26-167)plugins/moondream/vision_agents/plugins/moondream/vlm/moondream_local_vlm.py (1)
_process_frame(240-318)agents-core/vision_agents/core/llm/llm.py (1)
LLMResponseEvent(38-42)
plugins/heygen/example/avatar_example.py (1)
agents-core/vision_agents/core/edge/types.py (1)
User(15-18)
🪛 GitHub Actions: CI (unit)
plugins/anthropic/vision_agents/plugins/anthropic/anthropic_llm.py
[error] 332-332: Argument 1 to "LLMResponseEvent" has incompatible type "Any | AsyncStream[Any]"; expected "Message"
[error] 453-453: Signature of "_extract_tool_calls_from_stream_chunk" incompatible with supertype "vision_agents.core.llm.llm.LLM" (override)
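Both are standard mypy complaints; a generic illustration of the two failure patterns with invented names (not the plugin's actual code):

```python
from typing import Union

class Message: ...
class AsyncStream: ...

class Event:
    def __init__(self, message: Message) -> None:
        self.message = message

def build_event(result: Union[Message, AsyncStream]) -> Event:
    # Pattern behind error 1: passing the un-narrowed union straight to Event().
    # mypy accepts it only after an isinstance() narrow (or an explicit cast).
    if not isinstance(result, Message):
        raise TypeError("streaming results must be consumed before building the event")
    return Event(result)

class Base:
    def extract(self, chunk: dict) -> list:
        return []

class Override(Base):
    # Pattern behind error 2: an override that changes parameter or return types
    # (e.g. `chunk: str`) is flagged as incompatible with the supertype;
    # keep the signature compatible instead.
    def extract(self, chunk: dict) -> list:
        return []
```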
3c84778 to 3ee11e2 (compare)
Summary by CodeRabbit
New Features
Bug Fixes
Chores