A resilient, edge-optimized middleware that bridges the gap between Frigate NVR's object detection and local LLMs (via Ollama) to provide rich, narrative summaries of security events to Home Assistant.
## Features

- Edge Optimized: Designed specifically for consumer hardware (e.g., NVIDIA GTX 1060 6GB).
- Fail-Over Inference: Automatically switches from a high-quality 7B model (LLaVA) to a lightweight 1.8B model (Moondream) if VRAM is exhausted, ensuring continuous operation.
- Home Assistant Integration: Automatically discovered via MQTT, pushing a daily "AI Digest" and granular event statistics.
- Batch Processing: Intelligently batches requests to prevent thermal throttling and maintain system stability.
- Frigate Write-back: (Optional) Automatically updates Frigate events with rich AI-generated descriptions or sub-labels.
- Context Aware: Correlates visual events with Home Assistant sensor data (e.g., "Person detected" + "Door opened").
## Architecture

```mermaid
graph LR
    Frigate[Frigate NVR] -- Review Items & Snapshots --> Agent[Aegis Analyst]
    Agent -- Tier 1/3 Requests --> Ollama[Ollama LLM]
    Agent -- Tier 2 Context --> HA[Home Assistant]
    Agent -- Discovery & States --> MQTT[MQTT Broker]
    MQTT --> HA
```
## How It Works

```mermaid
graph TD
    subgraph "Tier 1: Perception"
        F[Frigate Review Item] -- Native GenAI or VLM --> Desc[Visual Description]
    end
    subgraph "Tier 2: Correlation"
        Desc -- + HA History --> Corr[Semantic Narrative]
    end
    subgraph "Tier 3: Synthesis"
        Corr -- Text LLM --> Brief[Daily Security Briefing]
    end
    Brief -- MQTT --> HA_S[Home Assistant Sensors]
```
- Perception (Tier 1): Frigate NVR detects objects and (ideally) generates a description using its native Generative AI. The agent retrieves these Review Items, prioritizing existing descriptions; if a description is missing, it falls back to its own VLM logic.
- Analysis (Tier 2): The agent performs Semantic Correlation, combining the visual description with data from Home Assistant's History API (e.g., "Person detected" + "Door opened") to produce a cohesive narrative (see the sketch after this list).
- Reporting (Tier 3): A Text LLM (e.g., Llama 3) synthesizes the day's enriched narratives into a dense, narrative Daily Report.
- Presentation: Results are published via MQTT as granular sensors for easy graphing and dashboarding.
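To make Tier 2 concrete, the history lookup can be as simple as the following sketch. It is illustrative only: the Home Assistant URL, the token handling, and the five-minute lookback window are assumptions, not this project's exact code.

```python
# Minimal sketch (illustrative): fetch recent state changes for one
# context entity around the time of a Frigate event.
from datetime import datetime, timedelta

import requests

HA_URL = "http://192.168.1.50:8123"  # assumption: your HA base URL
HA_TOKEN = "..."                     # long-lived access token

def context_for(event_time: datetime, entity_id: str) -> list[dict]:
    """Return state changes for entity_id in the 5 minutes before the event."""
    start = (event_time - timedelta(minutes=5)).isoformat()
    resp = requests.get(
        f"{HA_URL}/api/history/period/{start}",
        params={"filter_entity_id": entity_id,
                "end_time": event_time.isoformat()},
        headers={"Authorization": f"Bearer {HA_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    history = resp.json()  # one list of state changes per requested entity
    return history[0] if history else []

# e.g. changes = context_for(datetime.now().astimezone(), "binary_sensor.front_door")
```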
## Requirements

- Inference Backend (Ollama):
  - NVIDIA GPU (recommended: 6GB+ VRAM).
  - Ollama (v0.1.30+) running on the host or in a separate container.
- Frigate NVR (v0.14+):
  - Enabling Frigate's native Generative AI for Tier 1 analysis is highly recommended.
To offload perception to Frigate, add the following to your Frigate `config.yml`:

```yaml
genai:
  enabled: True
  provider: ollama
  base_url: http://ollama:11434
  model: llava:7b # Or any supported VLM
```

Aegis Analyst will automatically detect these descriptions and skip its internal VLM step, focusing entirely on Home Assistant correlation.
## Model Strategy

This project is optimized for edge hardware with limited VRAM.

- Primary Vision Model: `llava:7b` (quantized Q4_K_M takes ~4.8GB VRAM).
- Fallback Model: `moondream` (takes ~1.6GB VRAM).
- Fail-over Pattern: If the primary model causes an Out-Of-Memory (OOM) error, the system automatically degrades to the fallback model to ensure reporting continuity, as sketched below.
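The fail-over itself can be expressed in a few lines. The sketch below targets Ollama's `/api/generate` endpoint with the model names configured in this project; treating any failed response as a cue to degrade is an assumption, since Ollama reports CUDA OOM as a request error rather than a dedicated status code.

```python
# Illustrative fail-over sketch: try llava:7b, degrade to moondream if
# Ollama rejects the request (e.g., CUDA out of memory on a 6GB card).
import base64

import requests

OLLAMA_URL = "http://localhost:11434"  # assumption: local Ollama instance
PRIMARY, FALLBACK = "llava:7b", "moondream"

def describe_snapshot(image_bytes: bytes) -> str:
    image_b64 = base64.b64encode(image_bytes).decode()
    for model in (PRIMARY, FALLBACK):
        resp = requests.post(
            f"{OLLAMA_URL}/api/generate",
            json={
                "model": model,
                "prompt": "Describe this security camera snapshot.",
                "images": [image_b64],
                "stream": False,
            },
            timeout=120,
        )
        if resp.ok:
            return resp.json()["response"]
        # A failed request falls through to the lightweight fallback model
        # instead of aborting the run.
    resp.raise_for_status()  # both models failed; surface the last error
    return ""
```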
## Installation

Run these commands on your host machine to pull the required models:

```bash
# Vision Model
ollama pull llava:7b
# Analyst Model
ollama pull llama3
# Fallback Model (edge optimized)
ollama pull moondream
```

### Docker (Recommended)

Create a `docker-compose.yml` file and mount your configuration:
```yaml
services:
  aegis-analyst:
    image: ghcr.io/michaelwoods/aegis-analyst:latest
    container_name: aegis-analyst
    restart: unless-stopped
    volumes:
      - ./config.yaml:/app/config.yaml:ro
      - ./data:/app/data
    environment:
      - TZ=America/New_York
```

Start the agent:

```bash
docker compose up -d aegis-analyst
```

### Manual Installation

Clone the repository and set up your environment:
```bash
git clone https://github.com/your-username/aegis-analyst.git
cd aegis-analyst
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
cp config.example.yaml config.yaml
```

## Configuration

Edit `config.yaml` to match your network environment:
- Frigate URL: e.g., `http://192.168.1.50:5000`
- Ollama:
  - `vision_model`: e.g., `llava:7b`
  - `text_model`: e.g., `llama3`
  - `fallback_model`: e.g., `moondream`
- MQTT Broker: e.g., `192.168.1.50`
- Home Assistant:
  - `context_entities`: List of sensors (e.g., `binary_sensor.front_door`) to query for historical context.
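Putting these together, a minimal `config.yaml` might look like the sketch below. The key nesting is an assumption (the authoritative schema is `config.example.yaml`); `batch_limit` is the throttling knob referenced under Troubleshooting.

```yaml
# Illustrative sketch only; consult config.example.yaml for the real schema.
frigate:
  url: http://192.168.1.50:5000
ollama:
  url: http://192.168.1.50:11434
  vision_model: llava:7b
  text_model: llama3
  fallback_model: moondream
batch_limit: 4  # placement is illustrative; lower this if you hit CUDA OOM
mqtt:
  broker: 192.168.1.50
home_assistant:
  url: http://192.168.1.50:8123
  context_entities:
    - binary_sensor.front_door
```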
## Scheduling (Cron)

To run the agent on a schedule (e.g., every hour) without Docker, you can use cron:

```bash
# Example: Run every hour
0 * * * * /path/to/aegis-analyst/.venv/bin/python /path/to/aegis-analyst/agent.py >> /var/log/aegis_analyst.log 2>&1
```

## Home Assistant Integration

The agent uses MQTT Discovery to automatically create devices and entities:

- `sensor.aegis_analyst_status`: Current operational state.
- `sensor.aegis_analyst_daily_security_report`: Full narrative summary in its `content` attribute.
- `sensor.aegis_analyst_[label]_count`: Individual count sensors for each object type.
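For reference, a discovery announcement for one of these sensors looks roughly like the following sketch, using paho-mqtt (>= 2.0). The discovery topic and payload fields follow Home Assistant's MQTT Discovery convention; the state topics shown are hypothetical, not the agent's actual internal topics.

```python
# Illustrative MQTT Discovery announcement for one count sensor.
import json

import paho.mqtt.client as mqtt

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)  # paho-mqtt >= 2.0
client.connect("192.168.1.50", 1883)

config = {
    "name": "Person Count",
    "unique_id": "aegis_analyst_person_count",
    "state_topic": "aegis-analyst/person_count/state",  # hypothetical topic
    "device": {"identifiers": ["aegis_analyst"], "name": "Aegis Analyst"},
}
# Retained messages let Home Assistant rediscover the sensor after a restart.
client.publish("homeassistant/sensor/aegis_analyst_person_count/config",
               json.dumps(config), retain=True)
client.publish("aegis-analyst/person_count/state", "3", retain=True)
client.disconnect()
```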
Visualize the report using a Vertical Stack card:

```yaml
type: vertical-stack
cards:
  - type: entities
    entities:
      - entity: sensor.aegis_analyst_status
        name: Status
      - entity: sensor.aegis_analyst_bird_count
        name: Birds Detected
      - entity: sensor.aegis_analyst_car_count
        name: Cars Detected
      - entity: sensor.aegis_analyst_person_count
        name: People Detected
  - type: markdown
    title: Daily Security Briefing
    content: |
      {{ state_attr('sensor.aegis_analyst_daily_security_report', 'content') }}
```

## Development

We use a Makefile to enforce quality standards.
```bash
make install     # Set up environment
make lint        # Run Ruff linter
make type-check  # Run MyPy
make test        # Run all tests
make check       # Run all of the above
```

## Troubleshooting

- `requests.exceptions.ConnectionError`: Ensure Ollama is running and binding to `0.0.0.0` if the agent is running inside Docker.
- `Ollama error: CUDA out of memory`: The agent should auto-switch to `moondream`. If it still fails, reduce `batch_limit` in `config.yaml`.
- Home Assistant shows "Unknown": The sensors are only created after the first successful run.
## Contributing

- Fork the repository.
- Create a feature branch (`git checkout -b feature/amazing-feature`).
- Commit your changes.
- Run tests (`make check`).
- Open a Pull Request.
## License

MIT License - Copyright (c) 2026 Michael Woods
