
Commit 87674bc

Apply pre-commit fixes
1 parent 0198bfb commit 87674bc

65 files changed: 130 additions, 161 deletions


.coderabbit.yaml

Lines changed: 5 additions & 5 deletions
@@ -4,9 +4,9 @@ reviews:
     - path: "/**/test_*.py"
       instructions: |
         Review the following unit test code written using the pytest test library. Ensure that:
-        - The code adheres to best practices associated with Pytest. The project uses uv.
+        - The code adheres to best practices associated with Pytest. The project uses uv.
         - Descriptive test names are used to clearly convey the intent of each test.
-        - For Integration tests, we should avoid mocking. Integration tests of classes should use the actual client.
-        - The test should convey actual value. Tests which initialise a class and test the values of those classes should be discouraged
-        - Discourage verbosity and redundant tests
-tone_instructions: "If you must write a poem, write it in the style of Sylvia Plath."
+        - For Integration tests, we should avoid mocking. Integration tests of classes should use the actual client.
+        - The test should convey actual value. Tests which initialise a class and test the values of those classes should be discouraged
+        - Discourage verbosity and redundant tests
+tone_instructions: "If you must write a poem, write it in the style of Sylvia Plath."

.cursor/rules/python.mdc

Lines changed: 2 additions & 2 deletions
@@ -4,8 +4,8 @@ globs:
 alwaysApply: true
 ---
 
-never mock.
-tests use pytest.
+never mock.
+tests use pytest.
 never adjust sys.path.
 
 never write: except Exception as e

.github/workflows/release.yml

Lines changed: 1 addition & 1 deletion
@@ -56,4 +56,4 @@ jobs:
 
       - name: Install core PyPI using uv
         run: |
-          uv pip install vision-agents==${{ needs.build-core.outputs.version }}
+          uv pip install vision-agents==${{ needs.build-core.outputs.version }}

PRODUCTION.md

Lines changed: 1 addition & 1 deletion
@@ -17,4 +17,4 @@ A distributed approach where agents are available in many regions is ideal
 
 - Fly
 - Digital Ocean
-- AWS
+- AWS

README.md

Lines changed: 1 addition & 1 deletion
@@ -225,7 +225,7 @@ While building the integrations, here are the limitations we've noticed (Dec 202
 * Longer videos can cause the AI to lose context. For instance if it's watching a soccer match it will get confused after 30 seconds
 * Most applications require a combination of small specialized models like Yolo/Roboflow/Moondream, API calls to get more context and larger models like gemini/openAI
 * Image size & FPS need to stay relatively low due to performance constraints
-* Video doesn’t trigger responses in realtime models. You always need to send audio/text to trigger a response.
+* Video doesn’t trigger responses in realtime models. You always need to send audio/text to trigger a response.
 
 ## Star History
 
SECURITY.md

Lines changed: 1 addition & 1 deletion
@@ -25,5 +25,5 @@ While we appreciate any information that you are willing to provide, please make
 
 ### Scope
 
-Only code in this repository is in scope.
+Only code in this repository is in scope.
 Third-party services (hosted demo, npm registry, etc.) are handled separately.

agents-core/README.md

Lines changed: 1 addition & 1 deletion
@@ -10,4 +10,4 @@ Build Vision Agents quickly with any model or video provider.
 
 Created by Stream, uses [Stream's edge network](https://getstream.io/video/) for ultra-low latency.
 
-See [Github](https://github.com/GetStream/Vision-Agents).
+See [Github](https://github.com/GetStream/Vision-Agents).

agents-core/vision_agents/PROTOBUF_GENERATION.md

Lines changed: 4 additions & 5 deletions
@@ -59,7 +59,7 @@ class Participant(DataClassJsonMixin):
     is_speaking: Optional[bool] = None
     audio_level: Optional[float] = None
     # ... all other fields
-
+
     @classmethod
     def from_proto(cls, proto_obj) -> 'Participant':
         """Create from protobuf Participant."""
@@ -243,7 +243,7 @@ The EventManager has been updated to seamlessly handle the new protobuf events:
 ```python
 from vision_agents.core.events.manager import EventManager
 from vision_agents.core.edge.sfu_events import AudioLevelEvent
-
+
 manager = EventManager()
 manager.register(AudioLevelEvent)
 ```
@@ -255,13 +255,13 @@ The EventManager has been updated to seamlessly handle the new protobuf events:
 event = AudioLevelEvent.from_proto(proto, session_id='session123')
 manager.send(event)  # BaseEvent fields preserved
 ```
-
+
 - Send raw protobuf messages (auto-wrapped):
 ```python
 proto = events_pb2.AudioLevel(user_id='user456', level=0.95)
 manager.send(proto)  # Automatically wrapped in AudioLevelEvent
 ```
-
+
 - Create events without payload (all fields optional):
 ```python
 event = AudioLevelEvent()  # No protobuf payload needed
@@ -283,4 +283,3 @@ The EventManager has been updated to seamlessly handle the new protobuf events:
 - **Simplified logic**: Single check distinguishes raw protobuf from wrapped events
 - **Type safety**: All generated events properly inherit from BaseEvent
 - **Flexible usage**: Use raw protobuf or wrapped events interchangeably
-
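
The edits in this file are whitespace only, but the surrounding documentation describes how EventManager accepts either a wrapped event or a raw protobuf message and dispatches both the same way. A minimal, self-contained sketch of that dispatch idea (toy classes, not the actual vision_agents implementation; FakeAudioLevelProto stands in for the generated events_pb2.AudioLevel) could look like this:

```python
from dataclasses import dataclass
from typing import Any, Optional


@dataclass
class BaseEvent:
    """Toy stand-in for the library's BaseEvent."""
    session_id: Optional[str] = None


@dataclass
class AudioLevelEvent(BaseEvent):
    """All fields optional, so AudioLevelEvent() works without a protobuf payload."""
    user_id: Optional[str] = None
    level: Optional[float] = None

    @classmethod
    def from_proto(cls, proto: Any, session_id: Optional[str] = None) -> "AudioLevelEvent":
        # Copy the fields we care about off the protobuf(-like) object.
        return cls(session_id=session_id,
                   user_id=getattr(proto, "user_id", None),
                   level=getattr(proto, "level", None))


class EventManager:
    """One isinstance() check distinguishes wrapped events from raw protobuf."""

    def __init__(self) -> None:
        self._registered: set[type] = set()

    def register(self, event_cls: type) -> None:
        self._registered.add(event_cls)

    def send(self, event_or_proto: Any) -> BaseEvent:
        if isinstance(event_or_proto, BaseEvent):
            event = event_or_proto                              # already wrapped
        else:
            event = AudioLevelEvent.from_proto(event_or_proto)  # auto-wrap raw protobuf
        print("dispatching", event)
        return event


class FakeAudioLevelProto:
    """Stands in for the generated events_pb2.AudioLevel message."""
    def __init__(self, user_id: str, level: float) -> None:
        self.user_id, self.level = user_id, level


manager = EventManager()
manager.register(AudioLevelEvent)
manager.send(AudioLevelEvent.from_proto(FakeAudioLevelProto("user456", 0.95), session_id="session123"))
manager.send(FakeAudioLevelProto("user456", 0.95))  # raw message, wrapped automatically
manager.send(AudioLevelEvent())                     # no payload needed
```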

docs/ai/instructions/ai-events-example.md

Lines changed: 32 additions & 32 deletions
@@ -47,7 +47,7 @@ class MyPlugin:
     def __init__(self):
         # EventManager is automatically available
         self.events = EventManager()
-
+
         # Register your custom events
         self.events.register_events_from_module(events)
 ```
@@ -121,7 +121,7 @@ class MyValidatedEvent(PluginBaseEvent):
     type: str = field(default='plugin.myplugin.validated', init=False)
     text: str = ""
     confidence: float = 0.0
-
+
     def __post_init__(self):
         if not self.text:
             raise ValueError("Text cannot be empty")
@@ -160,17 +160,17 @@ class MyPlugin:
                 message="Processing started",
                 data=data
             ))
-
+
             # Do processing
             result = await self._process(data)
-
+
            # Emit success event
             self.events.send(MyPluginEvent(
                 plugin_name="myplugin",
                 message="Processing completed",
                 data=result
             ))
-
+
         except Exception as e:
             # Emit error event
             self.events.send(MyPluginErrorEvent(
@@ -205,21 +205,21 @@ class MyPlugin:
         super().__init__()
         self.events.register_events_from_module(events)
         self._setup_event_handlers()
-
+
     def _setup_event_handlers(self):
         """Set up event handlers for the plugin."""
-
+
         @self.events.subscribe
         async def handle_stt_transcript(event: STTTranscriptEvent):
             """Handle speech-to-text transcripts."""
             if event.is_final:
                 await self._process_transcript(event.text)
-
+
         @self.events.subscribe
        async def handle_llm_response(event: LLMResponseEvent):
             """Handle LLM responses."""
             await self._process_llm_response(event.text)
-
+
         @self.events.subscribe
         async def handle_error_events(event: MyPluginErrorEvent | STTErrorEvent):
             """Handle error events."""
@@ -268,16 +268,16 @@ class MyPlugin(PluginBase):
         super().__init__()
         # Register custom events
         self.events.register_events_from_module(events)
-
+
     async def process(self, data):
         # Send custom events
         self.events.send(events.MyPluginStartEvent(
             plugin_name="myplugin",
             config=self.config
         ))
-
+
         result = await self._process_data(data)
-
+
         self.events.send(events.MyPluginDataEvent(
             plugin_name="myplugin",
             data=result,
@@ -296,21 +296,21 @@ class SimpleSTT(STT):
     def __init__(self):
         super().__init__()
         # No need to register custom events - use base class events
-
+
     async def transcribe(self, audio_data: bytes) -> str:
         try:
             result = await self._call_api(audio_data)
-
+
             # Send base class event
             self.events.send(STTTranscriptEvent(
                 plugin_name="simple_stt",
                 text=result.text,
                 confidence=result.confidence,
                 is_final=True
             ))
-
+
             return result.text
-
+
         except Exception as e:
             # Send error event
             self.events.send(STTErrorEvent(
@@ -332,32 +332,32 @@ class MyAgent(Agent):
     def __init__(self, **kwargs):
         super().__init__(**kwargs)
         self._setup_agent_handlers()
-
+
     def _setup_agent_handlers(self):
         @self.events.subscribe
         async def handle_agent_say(event: AgentSayEvent):
             """Handle when agent wants to say something."""
             print(f"Agent wants to say: {event.text}")
-
+
             # Process the speech request
             await self._process_speech_request(event)
-
+
     # Three ways to send events:
-
+
     # Method 1: Direct event sending
     def send_custom_event(self, data):
         self.events.send(MyCustomEvent(
             plugin_name="agent",
             data=data
         ))
-
+
     # Method 2: Convenience method
     def send_event_convenience(self, data):
         self.send(MyCustomEvent(
             plugin_name="agent",
             data=data
         ))
-
+
     # Method 3: High-level speech
     async def make_agent_speak(self, text):
         await self.say(text, metadata={"source": "custom_handler"})
@@ -423,7 +423,7 @@ class ValidatedEvent(PluginBaseEvent):
     type: str = field(default='plugin.myplugin.validated', init=False)
     text: str = ""
     confidence: float = 0.0
-
+
     def __post_init__(self):
         if not self.text.strip():
             raise ValueError("Text cannot be empty")
@@ -440,20 +440,20 @@ from my_plugin import MyPlugin
 @pytest.mark.asyncio
 async def test_plugin_events():
     plugin = MyPlugin()
-
+
     # Track events
     events_received = []
-
+
     @plugin.events.subscribe
     async def track_events(event):
         events_received.append(event)
-
+
     # Trigger event
     await plugin.process_data("test")
-
+
     # Wait for events
     await plugin.events.wait()
-
+
     # Verify events
     assert len(events_received) > 0
     assert any(isinstance(e, MyPluginEvent) for e in events_received)
@@ -483,15 +483,15 @@ class OpenAILLM(LLM):
         super().__init__()
         self.events.register_events_from_module(events)
         self.model = model
-
+
     def _standardize_and_emit_event(self, event: ResponseStreamEvent):
         # Send raw OpenAI event
         self.events.send(events.OpenAIStreamEvent(
             plugin_name="openai",
             event_type=event.type,
             event_data=event
         ))
-
+
         if event.type == "response.error":
             self.events.send(events.LLMErrorEvent(
                 plugin_name="openai",
@@ -519,10 +519,10 @@ class Realtime(realtime.Realtime):
     def __init__(self, **kwargs):
         super().__init__(**kwargs)
         self.events.register_events_from_module(events)
-
+
     async def connect(self):
         # ... connection logic ...
-
+
         # Emit connection event
         self.events.send(events.GeminiConnectedEvent(
             plugin_name="gemini",

docs/ai/instructions/ai-llm.md

Lines changed: 8 additions & 8 deletions
@@ -14,19 +14,19 @@ class MyLLM(LLM):
         super().__init__()
         self.model = model
         self.client = client
-
-
+
+
     # native method wrapped. wrap the native method, every llm has its own name for this
     # openai calls it create response, anthropic create message. so the name depends on your llm
     async def mynativemethod(self, *args, **kwargs):
-
+
         # some details to get right here...
         # ensure conversation history is maintained. typically by passing it ie:
         if self._instructions:
             kwargs["system"] = [{"text": self._instructions}]
-
+
         response_iterator = await self.client.mynativemethod(self, *args, **kwargs)
-
+
         # while receiving streaming do this
         total_text = ""
         for chunk in response_iterator:
@@ -39,7 +39,7 @@ class MyLLM(LLM):
                 delta=chunk.text,
             ))
             total_text += chunk.text
-
+
         llm_response = LLMResponseEvent(response_iterator, total_text)
         # and when completed
         self.events.send(LLMResponseCompletedEvent(
@@ -57,7 +57,7 @@ class MyLLM(LLM):
         # call the LLM with the given text
         # be sure to use the streaming version
         self.mynativemethod(...)
-
+
     @staticmethod
     def _normalize_message(my_input) -> List["Message"]:
         # convert the message to a list of messages so our conversation storage gets it
@@ -77,4 +77,4 @@ class MyLLM(LLM):
 If you need more examples look in
 
 - gemini_llm.py
-- aws_llm.py (AWS Bedrock implementation)
+- aws_llm.py (AWS Bedrock implementation)
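
Again the changes here are whitespace only, but the hunk context walks through the doc's streaming recipe: thread the system instructions into the native call, iterate the stream, emit a delta event per chunk, and emit a completed event with the accumulated text. A self-contained sketch of just that loop (ToyClient, DeltaEvent and CompletedEvent are illustrative; the per-chunk event class name is not visible in the hunk) might be:

```python
import asyncio
from dataclasses import dataclass


@dataclass
class DeltaEvent:
    """Illustrative per-chunk event (the real class name is not shown in the hunk)."""
    delta: str


@dataclass
class CompletedEvent:
    """Illustrative stand-in for LLMResponseCompletedEvent."""
    text: str


class ToyClient:
    """Pretend native SDK that streams a reply one word at a time."""

    async def create_response(self, prompt: str, **kwargs):
        for word in ("hello", "from", "the", "model"):
            await asyncio.sleep(0)  # simulate network latency
            yield word + " "


async def stream_reply(client: ToyClient, prompt: str, instructions: str | None = None) -> str:
    kwargs = {}
    if instructions:
        # keep the system / conversation context attached to every native call
        kwargs["system"] = [{"text": instructions}]

    total_text = ""
    async for chunk in client.create_response(prompt, **kwargs):
        print(DeltaEvent(delta=chunk))      # per-chunk delta event
        total_text += chunk
    print(CompletedEvent(text=total_text))  # completed event with the accumulated text
    return total_text


if __name__ == "__main__":
    asyncio.run(stream_reply(ToyClient(), "hi", instructions="be brief"))
```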
