detect.py now tries 3 import paths:
1. Bundled copy (scripts/env_config.py)
2. Repo path (skills/lib/env_config.py)
3. Inline PyTorch-only fallback (no optimization)
deploy.sh copies env_config.py into scripts/ during install. Fixes smoke test timeout in Aegis SkillHandler.
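The three-tier import chain can be sketched roughly like this (the relative paths and the fallback class are illustrative, not the exact detect.py code):

```python
import os
import sys

def load_env_config():
    """Return (module, source_label), trying each tier in order."""
    here = os.path.dirname(os.path.abspath(__file__)) if "__file__" in globals() else os.getcwd()
    # Tier 1: bundled copy shipped inside the skill (scripts/env_config.py).
    # Tier 2: repo-checkout layout (skills/lib/env_config.py).
    for label, path in (("bundled", here),
                        ("repo", os.path.join(here, "..", "..", "lib"))):
        sys.path.insert(0, path)
        try:
            import env_config
            return env_config, label
        except ImportError:
            pass
    # Tier 3: inline PyTorch-only fallback -- no backend optimization.
    class _InlineFallback:
        @staticmethod
        def load_optimized(model_path):
            from ultralytics import YOLO  # plain PyTorch path, no ONNX/CoreML
            return YOLO(model_path)
    return _InlineFallback(), "inline-fallback"
```

The point of tier 3 is that detection still works (just unoptimized) even when neither copy of env_config.py made it into the install.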
…built-in py_compile
…ative $LIB_DIR
All 3 inline Python blocks now use $ENV_CONFIG_DIR, which checks:
1. $SKILL_DIR/scripts/env_config.py (bundled copy)
2. $LIB_DIR/env_config.py (repo lib/)
Fixes ModuleNotFoundError when installed at ~/.aegis-ai/skills/
…lib/ When Aegis installs the skill, only the skill directory is copied. skills/lib/ is outside the skill folder and never shipped. This commit includes env_config.py directly in scripts/ so it's always available at runtime and during deploy.
… >=8.3, torch >=2.4)
Expand HomeSec-Bench VLM Scene Analysis from 35 to 47 tests with a new Indoor Safety Hazards category covering fire, electrical, trip/fall, child safety, and blocked-exit scenarios using AI-generated indoor security camera frames.
New test scenarios:
- Stove smoke, candle near curtain, space heater near drapes, iron left on
- Overloaded power strip, frayed electrical cord
- Toys on stairs, wet floor
- Person fallen, items on high shelf
- Open cabinet with chemicals
- Cluttered/blocked exit
Total benchmark: 131 → 143 tests
VLM suite: 35 → 47 tests
Version: 2.0.0 → 2.1.0
New DeepCamera analysis skill wrapping the SmartHome-Bench dataset for evaluating VLM performance on video anomaly detection.
- SKILL.md with YAML manifest, params, and protocol docs
- config.yaml with mode/maxVideos/categories params
- run-benchmark.cjs: video download (yt-dlp), frame sampling (ffmpeg), multi-image VLM evaluation, binary anomaly scoring, JSONL protocol
- generate-report.cjs: HTML report with confusion matrix, per-category metrics (accuracy/precision/recall/F1), model comparison
- fixtures/annotations.json: 99 curated clips across 7 categories (Wildlife, Senior Care, Baby Monitoring, Pet Monitoring, Home Security, Package Delivery, General Activity)
- deploy.sh: system dep checks + npm install
- Added to skills.json registry and README catalog
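The per-category metrics in generate-report.cjs reduce to standard binary-classification math over the confusion counts. A minimal sketch (the real report code is Node; this is an illustrative Python port):

```python
def binary_metrics(results):
    """results: list of (predicted_anomaly, actual_anomaly) booleans.

    Returns accuracy/precision/recall/F1 from the confusion counts,
    with zero-division guarded to 0.0.
    """
    tp = sum(1 for p, a in results if p and a)          # true positives
    fp = sum(1 for p, a in results if p and not a)      # false alarms
    fn = sum(1 for p, a in results if not p and a)      # missed anomalies
    tn = sum(1 for p, a in results if not p and not a)  # correct rejections
    accuracy = (tp + tn) / len(results) if results else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}
```

Running this once per category (and once over all 99 clips) yields the per-category table and the overall row of the HTML report.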
- New HomeSafe-Bench skill: 40 indoor safety VLM tests across 5 categories (fire/smoke, electrical, trip/fall, child safety, falling objects)
- 26/40 AI-generated fixture frames (remaining pending image gen quota)
- Runtime disk space pre-check in SmartHome-Bench (15 GB full / 2 GB subset)
- Register homesafe-bench in skills.json
- All datasets download at runtime, not during deployment
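The disk-space pre-check amounts to comparing free space against the mode's threshold before any download starts. A sketch, assuming the 15 GB / 2 GB thresholds from the commit message (function name is illustrative):

```python
import shutil

GB = 1024 ** 3

def check_disk_space(path, mode="subset"):
    """Return (ok, free_gb) for the requested download mode.

    Thresholds: 15 GB for the full SmartHome-Bench dataset,
    2 GB for the subset mode.
    """
    required = 15 * GB if mode == "full" else 2 * GB
    free = shutil.disk_usage(path).free
    return free >= required, free / GB
```

Failing this check up front gives a clear error instead of a partially downloaded dataset on a full disk.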
Add remaining 14 AI-generated frames via Vertex AI Imagen 3: - child_01, child_04-08 (child safety category) - falling_01-08 (falling objects category) All 40 test scenarios now have matching fixture images: 8 fire/smoke, 8 electrical, 8 trip/fall, 8 child safety, 8 falling objects
numpy 2.x breaks the coremltools PyTorch→MIL converter with:
'only 0-dimensional arrays can be converted to Python scalars'
- Pin numpy>=1.24.0,<2.0.0 in requirements.txt and requirements_mps.txt
- Add runtime numpy version guard in env_config.py export_model() that detects numpy 2.x and gracefully skips CoreML export
- Track homesafe-bench package-lock.json
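The runtime guard only needs the numpy major version. A minimal sketch (the real guard lives in env_config.export_model(); these names are illustrative):

```python
def numpy_2_or_newer(version_string):
    """True if a numpy version string like '2.1.0' is 2.x or later."""
    return int(version_string.split(".")[0]) >= 2

def maybe_export_coreml(model, numpy_version):
    """Skip CoreML export under numpy 2.x instead of crashing."""
    if numpy_2_or_newer(numpy_version):
        # coremltools' PyTorch->MIL converter fails here with:
        # "only 0-dimensional arrays can be converted to Python scalars"
        print("numpy >= 2.0 detected; skipping CoreML export")
        return None
    return model  # placeholder for the actual coremltools export call
```

In the real code the version comes from `numpy.__version__`; the pin in requirements.txt is the first line of defense, and this guard covers environments where a different numpy was already installed.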
Feature/smarthome bench
- Remove vlm-scene-analysis (planned, not implemented)
- Remove dinov3-grounding (planned, not implemented)
- Remove person-recognition (planned, not implemented)
- Add homesafe-bench to skill catalog table (was missing)
- Add amd-smi static --json as primary detection strategy (ROCm 6.3+/7.x)
- Keep rocm-smi as fallback for legacy ROCm <6.3
- Add multi-GPU selection: pick GPU with most VRAM, set HIP_VISIBLE_DEVICES
- Add _check_rocm_runtime() to verify ROCmExecutionProvider in onnxruntime (prevents CPU-only onnxruntime from shadowing onnxruntime-rocm)
- Update deploy.sh heuristic to also check for amd-smi
- Document GPU package conflict in requirements.txt
- Add 14 unit tests for all ROCm detection paths
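The detection and GPU-selection steps above can be sketched as follows (the JSON shape returned by amd-smi varies by ROCm version, so `detect_amd_gpus` here only shows the command-and-fallback pattern, and `pick_gpu` assumes a pre-normalized `{'index', 'vram_mb'}` list):

```python
import json
import os
import subprocess

def detect_amd_gpus():
    """Primary strategy: amd-smi (ROCm 6.3+/7.x); None -> caller tries rocm-smi."""
    try:
        out = subprocess.run(["amd-smi", "static", "--json"],
                             capture_output=True, text=True, check=True)
        return json.loads(out.stdout)
    except (FileNotFoundError, subprocess.CalledProcessError, json.JSONDecodeError):
        return None  # fall back to rocm-smi for legacy ROCm <6.3

def pick_gpu(gpus):
    """Multi-GPU selection: choose the GPU with the most VRAM and pin it.

    gpus: list of {'index': int, 'vram_mb': int} (normalized upstream).
    """
    best = max(gpus, key=lambda g: g["vram_mb"])
    os.environ["HIP_VISIBLE_DEVICES"] = str(best["index"])
    return best["index"]
```

Setting HIP_VISIBLE_DEVICES before PyTorch initializes makes the chosen GPU appear as device 0 to the HIP runtime.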
feat(env_config): fix ROCm GPU detection for ROCm 7.2+
- load_optimized() now catches device='cuda' failures on ROCm systems where PyTorch-ROCm is not installed and degrades to CPU gracefully
- deploy.sh removes CPU-only onnxruntime before installing onnxruntime-rocm to prevent the shadowing bug
- _try_rocm() checks torch.cuda.is_available() before setting device='cuda'. If PyTorch-ROCm is not installed, device stays 'cpu' from the start
- load_optimized() fallback pre-checks torch.cuda instead of catching NVIDIA driver exceptions reactively (cleaner logs, no crash)
- Added test: no-PyTorch-ROCm falls back to cpu device (15 tests total)
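The proactive pre-check boils down to asking PyTorch up front instead of catching a driver exception later. A sketch with the probe injected so the logic is testable without PyTorch (the real code calls torch.cuda.is_available() directly):

```python
def select_device(cuda_available):
    """Pick the device before loading anything.

    ROCm builds of PyTorch expose the HIP backend under the 'cuda'
    device name; if PyTorch-ROCm is not installed this probe is False
    and we start on CPU rather than crashing on a missing NVIDIA driver.
    """
    return "cuda" if cuda_available else "cpu"

def load_optimized(load_fn, cuda_available):
    """Load a model on the pre-checked device (load_fn is illustrative)."""
    return load_fn(device=select_device(cuda_available))
```

Compared with the reactive try/except version, the logs stay clean: there is never a CUDA initialization error to swallow.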
Root cause: ultralytics AutoUpdate detects onnx/onnxslim/onnxruntime as missing during ONNX export and auto-installs CPU onnxruntime, re-shadowing onnxruntime-rocm.
Three-layer defense:
- requirements_rocm.txt: pre-install onnx + onnxslim so ultralytics doesn't trigger AutoUpdate for ONNX export deps
- deploy.sh: set YOLO_AUTOINSTALL=0 during export step
- deploy.sh: post-export cleanup removes CPU onnxruntime if present
Instead of installing wrong packages then cleaning up:
- Phase 1: PyTorch from ROCm --index-url (forces ROCm build, not CUDA)
- Phase 2: remaining packages incl. onnxruntime-rocm, onnx, onnxslim
- YOLO_AUTOINSTALL=0 prevents ultralytics from auto-installing CPU onnxruntime
Removed: pre-install onnxruntime cleanup and post-export onnxruntime cleanup (no longer needed when packages are installed correctly)
deploy.sh now reads ROCm version from /opt/rocm/.info/version, amd-smi, or rocminfo and constructs the PyTorch index URL dynamically (e.g. rocm7.2 instead of hardcoded rocm6.2). Falls back to 6.2 only if version detection fails.
PyTorch only publishes wheels for specific ROCm versions (e.g. 6.2, 7.0, 7.1) — not every point release. For ROCm 7.2, deploy now tries: 7.2 → 7.1 → 7.0 → 6.4 → 6.3 → 6.2 → 6.1 → 6.0 Stops at first successful install. Falls back to PyPI CPU torch if no ROCm wheels found at all.
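The walk-down can be sketched as a generator of index URLs to try in order (the URL pattern `download.pytorch.org/whl/rocmX.Y` is PyTorch's real ROCm index scheme; the behavior for an unrecognized detected version is an assumption):

```python
# Versions the deploy script walks through, newest first,
# matching the chain in the commit message.
CANDIDATES = ["7.2", "7.1", "7.0", "6.4", "6.3", "6.2", "6.1", "6.0"]

def index_urls_for(detected):
    """Yield PyTorch index URLs to try, newest-compatible first.

    The installer stops at the first index that succeeds; if the
    detected version is not in the list, start from the newest.
    """
    start = CANDIDATES.index(detected) if detected in CANDIDATES else 0
    for ver in CANDIDATES[start:]:
        yield f"https://download.pytorch.org/whl/rocm{ver}"
    # Last resort: plain PyPI, which serves CPU-only torch wheels.
    yield "https://pypi.org/simple"
```

Each candidate URL would be passed to `pip install --index-url …`; the chain ends at PyPI CPU torch only when no ROCm index has matching wheels.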
Ultralytics' ONNX loader only supports CUDAExecutionProvider (NVIDIA). On ROCm, it falls back to CPU even though ROCMExecutionProvider is available. PyTorch + HIP runs natively on AMD GPUs via device='cuda'.
- Change ROCm BackendSpec: onnx → pytorch (skip ONNX export entirely)
- Set YOLO_AUTOINSTALL=0 in detect.py to prevent ultralytics from auto-installing onnxruntime-gpu (NVIDIA) at runtime
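The detect.py guard is a one-time environment setup that must run before the ultralytics import. A sketch (YOLO_AUTOINSTALL is a real ultralytics setting; the function name is illustrative):

```python
import os

def configure_rocm_runtime():
    """Disable ultralytics AutoUpdate and return the PyTorch device.

    Must run before `import ultralytics`, which reads the env var
    at import time and would otherwise pip-install NVIDIA
    onnxruntime-gpu on a ROCm host.
    """
    os.environ["YOLO_AUTOINSTALL"] = "0"
    # PyTorch's HIP backend reuses the 'cuda' device string on AMD
    # GPUs, so the ROCm path runs PyTorch directly, skipping ONNX.
    return "cuda"
```

With the BackendSpec switched to pytorch, no ONNX artifacts are exported on ROCm at all, so the CUDAExecutionProvider limitation never comes into play.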
Feature/rocm gpu detection