AnnotateANU combines the power of Meta's SAM3 for instant segmentation with a strictly local-first architecture.
Your images never leave your browser. Free, open-source, and built for high-performance computer vision workflows.
- ⚡ Automated Segmentation: SAM3 inference runs locally or via optimized endpoints to auto-segment objects instantly. Use text prompts or bounding boxes to get pixel-perfect masks in milliseconds.
- 🎯 Manual Precision: Need to tweak the AI's work? Use our pixel-perfect pen, rectangle, and polygon tools to fine-tune annotations with complete control.
- 📦 Batch Workflow: Load hundreds of images at once. Our interface handles batch processing without browser lag, making large-dataset annotation a breeze.
- ⌨️ Lightning Shortcuts: Designed for power users. Comprehensive keyboard shortcuts keep your hands on the keyboard so you can annotate without breaking flow.
- 💾 Export Ready: Export to COCO JSON, YOLO format, or ZIP archives with one click. Industry-standard formats ready for your ML pipelines.
- 🔒 Local-First Storage: Your data stays local in IndexedDB, with no server uploads and total privacy. All processing happens in your browser or on your local backend.
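To make the COCO and YOLO export targets concrete, here is a rough sketch of what a single bounding-box annotation looks like in each format. This is illustrative only; the field names follow the public format specs, and the helper functions are hypothetical, not AnnotateANU's actual exporter.

```python
# Sketch: one bounding-box annotation in COCO JSON vs. the YOLO text format.
# Illustrative only; the app's real exporter may differ in detail.

def to_coco(ann_id, image_id, category_id, bbox):
    """COCO uses absolute [x, y, width, height] pixel coordinates."""
    x, y, w, h = bbox
    return {
        "id": ann_id,
        "image_id": image_id,
        "category_id": category_id,
        "bbox": [x, y, w, h],
        "area": w * h,
        "iscrowd": 0,
    }

def to_yolo(class_id, bbox, img_w, img_h):
    """YOLO uses one line per object: class cx cy w h, normalized to [0, 1]."""
    x, y, w, h = bbox
    cx, cy = x + w / 2, y + h / 2
    return f"{class_id} {cx / img_w:.6f} {cy / img_h:.6f} {w / img_w:.6f} {h / img_h:.6f}"

box = (100, 50, 200, 100)  # x, y, w, h in pixels
coco = to_coco(1, 1, 3, box)
yolo = to_yolo(3, box, img_w=800, img_h=400)
```

Note the key difference: COCO stores absolute top-left pixel coordinates, while YOLO stores center coordinates normalized by image size, so the same box produces very different-looking records.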
AnnotateANU is a simple monorepo with two independent applications:
```text
sam3-app/                  # Simple monorepo
├── apps/
│   ├── web/               # React annotation interface
│   │   ├── src/
│   │   ├── Dockerfile
│   │   └── package.json
│   └── api-inference/     # FastAPI SAM3 backend
│       ├── src/app/
│       ├── Dockerfile
│       └── pyproject.toml
├── docker-compose.yml     # Orchestrates all services
├── Makefile               # Development commands
├── package.json           # Root config
└── README.md
```
- Docker & Docker Compose (recommended)
- Python 3.12+ and uv (for local backend development)
- Node.js 18+ and npm (for local frontend development)
- HuggingFace Account & Token (required for SAM3 model access)
SAM3 is a gated model. You must:
- Create account: https://huggingface.co/join
- Request access: https://huggingface.co/facebook/sam3
- Generate token: https://huggingface.co/settings/tokens
- Add your token to `apps/api-inference/.env`:

```shell
cp apps/api-inference/.env.example apps/api-inference/.env
# Edit apps/api-inference/.env and add:
HF_TOKEN=hf_your_token_here
```

Quick start:

```shell
# 1. Set up the environment
cp apps/api-inference/.env.example apps/api-inference/.env
# Edit apps/api-inference/.env and add your HF_TOKEN

# 2. Start all services
make docker-up

# 3. Access the application
# Frontend:    http://localhost:5173
# Backend API: http://localhost:8000
# API Docs:    http://localhost:8000/docs
```

We are constantly evolving. Here's what's shipping next for AnnotateANU:
Connect your existing custom models via API. Pre-label your images using your own weights to bootstrap the annotation process even faster.
Move beyond browser storage. We're adding native integration for MinIO and S3-compatible object storage, allowing you to pull and sync datasets directly from your cloud buckets.
We welcome contributions from the community! Whether you're fixing bugs, adding features, or improving documentation, we'd love your help.
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Make your changes
- Commit your changes (`git commit -m 'Add some amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
Want to influence what we build next? Join our community on GitHub and share your ideas!
MIT License - see LICENSE file for details.
Ready to speed up your CV pipeline?
© 2025 AnnotateANU. Built for the Computer Vision Community.
