Secure and Fast Container Runtime for AI Coding Tools on Linux and macOS
Run AI coding assistants (Claude Code, Aider, and more) in isolated, production-grade Incus containers with zero permission headaches, perfect file ownership, and true multi-session support.
Security First: Unlike Docker or bare-metal execution, your environment variables, SSH keys, and Git credentials are never exposed to AI tools. Containers run in complete isolation with no access to your host credentials unless explicitly mounted.
Think Docker for AI coding tools, but with system containers that actually work like real machines.
Currently supported:
- Claude Code (default) - Anthropic's official CLI tool
Coming soon:
- Aider - AI pair programming in your terminal
- Cursor - AI-first code editor
- And more...
The tool abstraction layer makes it easy to add support for new AI coding assistants.
Core Capabilities
- Multi-slot support - Run parallel AI coding sessions for the same workspace with full isolation
- Session resume - Resume conversations with full history and credentials restored (workspace-scoped)
- Persistent containers - Keep containers alive between sessions (installed tools preserved)
- Workspace isolation - Each session mounts your project directory
- Slot isolation - Each parallel slot has its own home directory (files don't leak between slots)
- Workspace files persist even in ephemeral mode - Only the container is deleted, your work is always saved
- Container snapshots - Create checkpoints, rollback changes, and branch experiments with full state preservation
Security & Isolation
- Automatic UID mapping - No permission hell, files owned correctly
- System containers - Full security isolation, better than Docker privileged mode
- Project separation - Complete isolation between workspaces
- Credential protection - No risk of SSH keys, `.env` files, or Git credentials being exposed to AI tools
Safe Dangerous Operations
- AI coding tools often need broad filesystem access or to bypass permission checks
- These operations are safe inside containers because the "root" is the container root, not your host system
- Containers are ephemeral - any changes are contained and don't affect your host
- This gives AI tools full capabilities while keeping your system protected
# Install
curl -fsSL https://raw.githubusercontent.com/mensfeld/code-on-incus/master/install.sh | bash
# Build image (first time only, ~5-10 minutes)
coi build
# Start coding with your preferred AI tool (defaults to Claude Code)
cd your-project
coi shell
# That's it! Your AI coding assistant is now running in an isolated container with:
# - Your project mounted at /workspace
# - Correct file permissions (no more chown!)
# - Full Docker access inside the container
# - GitHub CLI available for PR/issue management
# - All workspace changes persisted automatically
# - No access to your host SSH keys, env vars, or credentials

Incus is a modern Linux container and virtual machine manager, forked from LXD. Unlike Docker (which uses application containers), Incus provides system containers that behave like lightweight VMs with full init systems.
| Feature | code-on-incus (Incus) | Docker |
|---|---|---|
| Container Type | System containers (full OS) | Application containers |
| Init System | Full systemd/init | No init (single process) |
| UID Mapping | Automatic UID shifting | Manual mapping required |
| Security | Unprivileged by default | Often requires privileged mode |
| File Permissions | Preserved (UID shifting) | Host UID conflicts |
| Startup Time | ~1-2 seconds | ~0.5-1 second |
| Docker-in-Container | Native support | Requires DinD hacks |
No Permission Hell - Incus automatically maps container UIDs to host UIDs. Files created by AI tools in-container have correct ownership on host. No chown needed.
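To see this in practice, create a file from inside a session and inspect it on the host (the filename here is illustrative):

# Inside the container shell
touch /workspace/created-in-container.txt
# Back on the host, in your project directory
ls -l created-in-container.txt
# Owner shows your own user/group - no chown needed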
True Isolation - Full system container means AI tools can run Docker, systemd services, etc. Safer than Docker's privileged mode.
Persistent State - System containers can be stopped/started without data loss. Ideal for long-running AI coding sessions.
Resource Efficiency - Share kernel like Docker, lower overhead than VMs, better density for parallel sessions.
# One-shot install
curl -fsSL https://raw.githubusercontent.com/mensfeld/code-on-incus/master/install.sh | bash
# This will:
# - Download and install coi to /usr/local/bin
# - Check for Incus installation
# - Verify you're in incus-admin group
# - Show next steps

For users who prefer to verify each step or cannot use the automated installer:
Prerequisites:
- Linux OS - Only Linux is supported (Incus is Linux-only)
  - Supported architectures: x86_64/amd64, aarch64/arm64
- Incus installed and initialized
Ubuntu/Debian:
sudo apt update
sudo apt install -y incus
Arch/Manjaro:
sudo pacman -S incus
# Enable and start the service (not auto-started on Arch)
sudo systemctl enable --now incus.socket
# Configure idmap for unprivileged containers
echo "root:1000000:1000000000" | sudo tee -a /etc/subuid
echo "root:1000000:1000000000" | sudo tee -a /etc/subgid
sudo systemctl restart incus.service
See Incus installation guide for other distributions.
Initialize Incus (all distros):
sudo incus admin init --auto
- User in `incus-admin` group
sudo usermod -aG incus-admin $USER
# Log out and back in for group changes to take effect
Installation Steps:
- Download the binary for your platform:
# For x86_64/amd64
curl -fsSL -o coi https://github.com/mensfeld/code-on-incus/releases/latest/download/coi-linux-amd64
# For aarch64/arm64
curl -fsSL -o coi https://github.com/mensfeld/code-on-incus/releases/latest/download/coi-linux-arm64
- Verify the download (optional but recommended):

# Check file size and type
ls -lh coi
file coi
- Install the binary:

chmod +x coi
sudo mv coi /usr/local/bin/
sudo ln -sf /usr/local/bin/coi /usr/local/bin/claude-on-incus
- Verify installation:
coi --version
Alternative: Build from Source
If you prefer to build from source or need a specific version:
# Prerequisites: Go 1.24.4 or later
git clone https://github.com/mensfeld/code-on-incus.git
cd code-on-incus
make build
sudo make install

Post-Install Setup:
- Optional: Set up ZFS for instant container creation
# Install ZFS
# Ubuntu/Debian (may not be available for all kernels):
sudo apt-get install -y zfsutils-linux
# Arch/Manjaro (replace 617 with your kernel version from uname -r):
# sudo pacman -S linux617-zfs zfs-utils

# Create ZFS storage pool (50GiB)
sudo incus storage create zfs-pool zfs size=50GiB

# Configure default profile to use ZFS
incus profile device set default root pool=zfs-pool
This reduces container startup time from 5-10s to ~50ms. If ZFS is not available, containers will use default storage (slower but fully functional).
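To sanity-check the ZFS setup afterwards, the standard Incus commands can be used (output format varies by Incus version):

# Confirm the pool exists and the default profile points at it
incus storage list
incus profile show default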
- Verify group membership (must be done in a new shell/login):
groups | grep incus-admin
Troubleshooting:
- "Permission denied" errors: Ensure you're in the
incus-admingroup and have logged out/in - "incus: command not found": Install Incus following the official guide
- Cannot download binary: Check your internet connection and GitHub access, or build from source
# Build the unified coi image (5-10 minutes)
coi build
# Custom image from your own build script
coi build custom my-rust-image --script build-rust.sh
coi build custom my-image --base coi --script setup.sh

What's included in the coi image:
- Ubuntu 22.04 base
- Docker (full Docker-in-container support)
- Node.js 20 + npm
- Claude Code CLI (default AI tool)
- GitHub CLI (`gh`)
- tmux for session management
- Common build tools (git, curl, build-essential, etc.)
Custom images: Build your own specialized images using build scripts that run on top of the base coi image.
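As an illustration, a hypothetical build-rust.sh passed to coi build custom could look like this (the exact execution context inside the container - user, shell, working directory - is an assumption here):

#!/bin/bash
# Hypothetical build script layered on top of the base coi image
set -euo pipefail
apt-get update
apt-get install -y pkg-config libssl-dev
# Install the Rust toolchain non-interactively
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y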
COI can run on macOS by using Incus inside a Colima or Lima VM. These tools provide Linux VMs on macOS that can run Incus.
Automatic Environment Detection: COI automatically detects when running inside a Colima or Lima VM and adjusts its configuration accordingly. No manual configuration needed!
- Colima/Lima handle UID mapping - These VMs mount macOS directories using virtiofs and map UIDs at the VM level
- COI detects the environment - Checks for virtiofs mounts in `/proc/mounts` and the `lima` user
- UID shifting is auto-disabled - COI automatically disables Incus's `shift=true` option to avoid conflicts with VM-level mapping
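As a rough sketch, the same detection can be reproduced manually from inside the VM (plain shell checks, not COI commands):

# Any virtiofs mount suggests a Colima/Lima VM
grep -q virtiofs /proc/mounts && echo "virtiofs mounts present"
# The lima user is the other telltale sign
id lima >/dev/null 2>&1 && echo "lima user exists"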
Important: Network isolation modes (restricted, allowlist) require firewalld, which is not available in Colima/Lima VMs by default. Use --network=open for unrestricted network access:
# Inside Colima/Lima VM
coi shell --network=open

Or set it as default in your config:
# ~/.config/coi/config.toml
[network]
mode = "open"If you're using Claude via AWS Bedrock, COI will automatically validate your setup when running in Colima/Lima and prevent startup with helpful error messages if anything is misconfigured.
Issue 1: Dual .aws Directory Locations
Colima creates two .aws locations that can get out of sync:
- `/home/lima/.aws/` - Lima's VM home (where `~` expands inside Colima)
- `/Users/<yourname>/.aws/` - macOS home (mounted via virtiofs)
Solution: Pick one location and be consistent:
- Recommended: Use macOS path `/Users/<yourname>/.aws/`
- Run `aws sso login` on your Mac (not inside Colima)
- Ensure it's mounted to containers (see below)
Issue 2: Restrictive Permissions on SSO Cache
AWS SSO creates cache files with permissions -rw------- (600), which become unreadable inside containers.
Solution: After running aws sso login, fix permissions:
chmod 644 ~/.aws/sso/cache/*.json

Issue 3: .aws Not Mounted
The container needs access to your AWS credentials.
Solution: Add to your config:
# ~/.config/coi/config.toml
[[mounts.default]]
host = "~/.aws"
container = "/home/code/.aws"- Configure Bedrock in
~/.claude/settings.json:
{
"anthropic": {
"apiProvider": "bedrock",
"bedrock": {
"region": "us-west-2",
"profile": "default"
}
}
}

- Set up AWS SSO (on macOS, not in Colima):
aws configure sso
aws sso login
chmod 644 ~/.aws/sso/cache/*.json

- Configure mount in `~/.config/coi/config.toml`:
[[mounts.default]]
host = "/Users/<yourname>/.aws"
container = "/home/code/.aws"- Launch COI:
colima ssh
coi shell --network=open

COI will validate your setup and provide clear error messages if anything is missing or misconfigured.
# Install Colima (example)
brew install colima
# Start Colima VM with sufficient resources
colima start --cpu 4 --memory 8 --disk 50
# SSH into the VM
colima ssh
# Inside the VM, install Incus
sudo apt update && sudo apt install -y incus
sudo incus admin init --auto
sudo usermod -aG incus-admin $USER
newgrp incus-admin
# Install COI
curl -fsSL https://raw.githubusercontent.com/mensfeld/code-on-incus/master/install.sh | bash
# Build image and start a session
coi build
coi shell --network=open

Manual Override: In rare cases where auto-detection doesn't work, you can manually configure:
# ~/.config/coi/config.toml
[incus]
disable_shift = true

# Interactive session (defaults to Claude Code)
coi shell
# Persistent mode - keep container between sessions
coi shell --persistent
# Use specific slot for parallel sessions
coi shell --slot 2
# Resume previous session (auto-detects latest for this workspace)
coi shell --resume
# Resume specific session by ID
coi shell --resume=<session-id>
# Attach to existing session
coi attach
# List active containers and saved sessions
coi list --all
# Gracefully shutdown specific container (60s timeout)
coi shutdown coi-abc12345-1
# Shutdown with custom timeout
coi shutdown --timeout=30 coi-abc12345-1
# Shutdown all containers
coi shutdown --all
# Force kill specific container (immediate)
coi kill coi-abc12345-1
# Kill all containers
coi kill --all
# Cleanup stopped/orphaned containers
coi clean

--workspace PATH # Workspace directory to mount (default: current directory)
--slot NUMBER # Slot number for parallel sessions (0 = auto-allocate)
--persistent # Keep container between sessions
--resume [SESSION_ID] # Resume from session (omit ID to auto-detect latest for workspace)
--continue [SESSION_ID] # Alias for --resume
--profile NAME # Use named profile
--image NAME # Use custom image (default: coi)
--env KEY=VALUE # Set environment variables
--storage PATH # Mount persistent storage

# List all containers and sessions
coi list --all
# Machine-readable JSON output (for programmatic use)
coi list --format=json
coi list --all --format=json
# Output shows container mode:
# coi-abc12345-1 (ephemeral) - will be deleted on exit
# coi-abc12345-2 (persistent) - will be kept for reuse
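For scripting, the JSON output pipes cleanly into jq. The field name below is an assumption - inspect the real output of coi list --format=json for the actual schema:

# List container names only (assumes a top-level array with a "name" field)
coi list --all --format=json | jq -r '.[].name'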
# Kill specific container (stop and delete)
coi kill <container-name>
# Kill multiple containers
coi kill <container1> <container2>
# Kill all containers (with confirmation)
coi kill --all
# Kill all without confirmation
coi kill --all --force
# Clean up stopped/orphaned containers
coi clean
coi clean --force # Skip confirmation

Low-level container commands for advanced use cases:
# Launch a new container
coi container launch coi my-container
coi container launch coi my-container --ephemeral
# Start/stop/delete containers
coi container start my-container
coi container stop my-container
coi container stop my-container --force
coi container delete my-container
coi container delete my-container --force
# Execute commands in containers
coi container exec my-container -- ls -la /workspace
coi container exec my-container --user 1000 --env FOO=bar --cwd /workspace -- npm test
# Capture output in different formats
coi container exec my-container --capture -- echo "hello" # JSON output (default)
coi container exec my-container --capture --format=raw -- pwd # Raw stdout (for scripting)
# Check container status
coi container exists my-container
coi container running my-container
# Mount directories
coi container mount my-container workspace /home/user/project /workspace --shift

Transfer files and directories between host and containers:
# Push files/directories into a container
coi file push ./config.json my-container:/workspace/config.json
coi file push -r ./src my-container:/workspace/src
# Pull files/directories from a container
coi file pull my-container:/workspace/build.log ./build.log
coi file pull -r my-container:/root/.claude ./saved-sessions/session-123/

Interact with running AI coding sessions for automation workflows:
# List all active tmux sessions
coi tmux list
# Send commands/prompts to a running session
coi tmux send coi-abc12345-1 "write a hello world script"
coi tmux send coi-abc12345-1 "/exit"
# Capture current output from a session
coi tmux capture coi-abc12345-1

Note: Sessions use tmux internally, so standard tmux commands work after attaching with coi attach.
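These primitives compose into simple automation loops. A minimal sketch (the container name is an example, and the fixed sleep is a crude stand-in for polling coi tmux capture in a loop):

#!/bin/bash
# Send a prompt to a running session, wait, then inspect the output
SESSION="coi-abc12345-1"
coi tmux send "$SESSION" "add unit tests for the utils module"
sleep 60
coi tmux capture "$SESSION" > output.txt
grep -i "error" output.txt && echo "session reported errors"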
Advanced image operations:
# List images with filters
coi image list # List COI images
coi image list --all # List all local images
coi image list --prefix claudeyard- # Filter by prefix
coi image list --format json # JSON output
# Publish containers as images
coi image publish my-container my-custom-image --description "Custom build"
# Delete images
coi image delete my-custom-image
# Check if image exists
coi image exists coi
# Clean up old image versions
coi image cleanup claudeyard-node-42- --keep 3

Create container snapshots for checkpointing, rollback, and branching workflows:
# Create snapshots
coi snapshot create # Auto-named snapshot (snap-YYYYMMDD-HHMMSS)
coi snapshot create checkpoint-1 # Named snapshot
coi snapshot create --stateful live # Include process memory state
coi snapshot create -c coi-abc-1 backup # Specific container
# List snapshots
coi snapshot list # Current workspace container
coi snapshot list -c coi-abc-1 # Specific container
coi snapshot list --all # All COI containers
coi snapshot list --format json # JSON output
# Restore from snapshot (requires stopped container)
coi snapshot restore checkpoint-1 # Restore with confirmation
coi snapshot restore checkpoint-1 -f # Skip confirmation
coi snapshot restore checkpoint-1 --stateful # Restore with process state
# Delete snapshots
coi snapshot delete checkpoint-1 # Delete specific snapshot
coi snapshot delete --all # Delete all (with confirmation)
coi snapshot delete --all -f # Delete all without confirmation
# Show snapshot details
coi snapshot info checkpoint-1 # Text output
coi snapshot info checkpoint-1 --format json # JSON output

Container Resolution:
- Uses the `--container` flag if provided
- Falls back to the `COI_CONTAINER` environment variable
- Auto-resolves from the current workspace if exactly one container exists
- Errors if multiple containers are found (use `--container` to specify)
Safety Features:
- Restore requires the container to be stopped (`coi container stop <name>`)
- Destructive operations require confirmation (skip with `--force`)
- Snapshots capture complete container state including session data
- Stateful snapshots include process memory for live state preservation
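Putting these together, a typical checkpoint-and-rollback loop looks like this (the container name is an example):

# Checkpoint before letting the AI tool attempt a risky change
coi snapshot create before-refactor
# ... AI session makes changes you don't like ...
coi container stop coi-abc12345-1
coi snapshot restore before-refactor -f
coi container start coi-abc12345-1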
Session resume allows you to continue a previous AI coding session with full history and credentials restored.
Usage:
# Auto-detect and resume latest session for this workspace
coi shell --resume
# Resume specific session by ID
coi shell --resume=<session-id>
# Alias: --continue works the same
coi shell --continue
# List available sessions
coi list --all

What's Restored:
- Full conversation history from previous session
- Tool credentials and authentication (no re-authentication needed)
- User settings and preferences
- Project context and conversation state
How It Works:
- After each session, the tool state directory (e.g., `.claude`) is automatically saved to `~/.coi/sessions-<tool>/`
- On resume, session data is restored to the container before the tool starts
- Fresh credentials are injected from your host config directory
- The AI tool automatically continues from where you left off
Workspace-Scoped Sessions:
- `--resume` only looks for sessions from the current workspace directory
- Sessions from other workspaces are never considered (security feature)
- This prevents accidentally resuming a session with a different project context
- Each workspace maintains its own session history
Note: Resume works for both ephemeral and persistent containers. For ephemeral containers, the container is recreated but the conversation continues seamlessly.
By default, containers are ephemeral (deleted on exit). Your workspace files always persist regardless of mode.
Enable persistent mode to also keep the container and its installed packages:
Via CLI:
coi shell --persistent

Via config (recommended):
# ~/.config/coi/config.toml
[defaults]
persistent = true

Benefits:
- Install once, use forever - `apt install`, `npm install`, etc. persist
- Faster startup - Reuse existing container instead of rebuilding
- Build artifacts preserved - No re-compiling on each session
What persists:
- Ephemeral mode: Workspace files + session data (container deleted)
- Persistent mode: Workspace files + session data + container state + installed packages
Config file: ~/.config/coi/config.toml
[defaults]
image = "coi"
persistent = true
mount_claude_config = true
[tool]
name = "claude" # AI coding tool to use (currently supports: claude)
# binary = "claude" # Optional: override binary name
[paths]
# Note: sessions_dir is deprecated - tool-specific dirs are now used automatically
# (e.g., ~/.coi/sessions-claude/, ~/.coi/sessions-aider/)
sessions_dir = "~/.coi/sessions" # Legacy path (not used for new sessions)
storage_dir = "~/.coi/storage"
[incus]
project = "default"
group = "incus-admin"
claude_uid = 1000
[profiles.rust]
image = "coi-rust"
environment = { RUST_BACKTRACE = "1" }
persistent = true

Configuration hierarchy (highest precedence last):
- Built-in defaults
- System config (`/etc/coi/config.toml`)
- User config (`~/.config/coi/config.toml`)
- Project config (`./.coi.toml`)
- CLI flags
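For example, a project-level ./.coi.toml can override your user config for a single repository, using the same keys documented above:

# ./.coi.toml - applies only when running coi from this project
[defaults]
image = "coi-rust"
persistent = true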
Control container resource consumption and runtime with configurable limits. All limits can be set via config file or CLI flags.
Add to your ~/.config/coi/config.toml:
[limits.cpu]
count = "2" # CPU cores: "2", "0-3", "0,1,3" or "" (unlimited)
allowance = "50%" # CPU time: "50%", "25ms/100ms" or "" (unlimited)
priority = 0 # CPU priority: 0-10 (higher = more priority)
[limits.memory]
limit = "2GiB" # Memory: "512MiB", "2GiB", "50%" or "" (unlimited)
enforce = "soft" # Enforcement: "hard" or "soft"
swap = "true" # Swap: "true", "false", or size like "1GiB"
[limits.disk]
read = "10MiB/s" # Read rate: "10MiB/s", "1000iops" or "" (unlimited)
write = "5MiB/s" # Write rate: "5MiB/s", "1000iops" or "" (unlimited)
max = "" # Combined I/O limit (overrides read/write)
priority = 0 # Disk priority: 0-10
[limits.runtime]
max_duration = "2h" # Max runtime: "2h", "30m", "1h30m" or "" (unlimited)
max_processes = 0 # Max processes: 100 or 0 (unlimited)
auto_stop = true # Auto-stop when max_duration reached
stop_graceful = true # Graceful (true) vs force (false) stop

Override config file settings with flags:
# CPU limits
coi shell --limit-cpu="2" --limit-cpu-allowance="50%"
# Memory limits
coi shell --limit-memory="2GiB" --limit-memory-swap="1GiB"
# Disk limits
coi shell --limit-disk-read="10MiB/s" --limit-disk-write="5MiB/s"
# Runtime limits
coi shell --limit-duration="2h" --limit-processes=100
# Combine multiple limits
coi shell \
--limit-cpu="2" \
--limit-memory="2GiB" \
--limit-duration="1h"Define limits per profile:
[profiles.limited]
image = "coi"
persistent = false
[profiles.limited.limits.cpu]
count = "2"
allowance = "50%"
[profiles.limited.limits.memory]
limit = "2GiB"
[profiles.limited.limits.runtime]
max_duration = "2h"
auto_stop = true

Use with: `coi shell --profile limited`
When max_duration is set and auto_stop = true:
- Container automatically stops after the specified duration
- Graceful stop preserves session data
- Force stop (`stop_graceful = false`) terminates immediately
- Useful for preventing runaway sessions or managing costs
Example:
# Auto-stop after 2 hours
coi shell --limit-duration="2h"
# The container will gracefully stop after 2 hours, saving session data

Limits are applied with this precedence (highest to lowest):
- CLI flags (e.g., `--limit-cpu="2"`)
- Profile limits (if `--profile` specified)
- Config `[limits]` section
- Unlimited (Incus defaults)
Limit resources for expensive operations:
coi shell \
--limit-cpu="4" \
--limit-memory="4GiB" \
--limit-duration="30m"Prevent runaway processes:
coi shell \
--limit-processes=100 \
--limit-duration="1h" \
--limit-memory="2GiB"Development profile with limits:
[profiles.dev]
image = "coi"
persistent = true
[profiles.dev.limits.cpu]
count = "2"
[profiles.dev.limits.memory]
limit = "4GiB"
[profiles.dev.limits.runtime]
max_duration = "4h"Understanding how containers and sessions work in coi:
- Containers are always launched as non-ephemeral (persistent in Incus terms)
  - This allows saving session data even if the container is stopped from within (e.g., `sudo shutdown 0`)
  - Session data can be pulled from stopped containers, but not from deleted ones
- Inside the container: `tmux` → `bash` → `<ai-tool>`
  - When the AI tool exits, you're dropped to bash
  - From bash you can: type `exit`, press `Ctrl+b d` to detach, or run `sudo shutdown 0`
- On cleanup (when you exit/detach):
  - Session data (tool config directory) is always saved to `~/.coi/sessions-<tool>/`
  - If `--persistent` was NOT set: container is deleted after saving
  - If `--persistent` was set: container is kept for reuse
| Mode | Workspace Files | AI Tool Session | Container State |
|---|---|---|---|
| Default (ephemeral) | Always saved | Always saved | Deleted |
| `--persistent` | Always saved | Always saved | Kept |
- `--resume`: Restores the AI tool conversation in a fresh container
  - Use when you want to continue a conversation but don't need installed packages
  - Container is recreated, only tool session data is restored
  - Workspace-scoped: Only finds sessions from the current workspace directory (security feature)
- `--persistent`: Keeps the entire container with all modifications
  - Use when you've installed tools, built artifacts, or modified the environment
  - `coi attach` reconnects to the same container with everything intact
From inside the container:
- `exit` in bash → exits bash but keeps container running (use for temporary shell exit)
- `Ctrl+b d` → detaches from tmux, container stays running
- `sudo shutdown 0` or `sudo poweroff` → stops container, session is saved, then container is deleted (or kept if `--persistent`)
From outside (host):
- `coi shutdown <name>` → graceful stop with session save, then delete (60s timeout by default)
- `coi shutdown --timeout=30 <name>` → graceful stop with 30s timeout
- `coi shutdown --all` → graceful stop all containers (with confirmation)
- `coi shutdown --all --force` → graceful stop all without confirmation
- `coi kill <name>` → force stop and delete immediately
- `coi kill --all` → force stop and delete all containers (with confirmation)
- `coi kill --all --force` → force stop all without confirmation
Quick task (default mode):
coi shell # Start session with default AI tool
# ... work with AI assistant ...
sudo poweroff # Shutdown container → session saved, container deleted
coi shell --resume # Continue conversation in fresh container

Note: `exit` in bash keeps the container running - use `sudo poweroff` or `sudo shutdown 0` to properly end the session. Both require sudo but no password.
Long-running project (--persistent):
coi shell --persistent # Start persistent session
# ... install tools, build things ...
# Press Ctrl+b d to detach
coi attach # Reconnect to same container with all tools
sudo poweroff # When done, shutdown and save
coi shell --persistent --resume # Resume with all installed tools intact

Parallel sessions (multi-slot):
# Terminal 1: Start first session (auto-allocates slot 1)
coi shell
# ... working on feature A ...
# Press Ctrl+b d to detach (container stays running)
# Terminal 2: Start second session (auto-allocates slot 2)
coi shell
# ... working on feature B in parallel ...
# Both sessions share the same workspace but have isolated:
# - Home directories (~/slot1_file won't appear in slot 2)
# - Installed packages
# - Running processes
# - AI tool conversation history
# List both running sessions
coi list
# coi-abc12345-1 (ephemeral)
# coi-abc12345-2 (ephemeral)
# When done, shutdown all sessions
coi shutdown --all

COI provides network isolation to protect your host and private networks from container access.
Requirements: Network isolation (restricted/allowlist modes) requires firewalld to be installed and running. COI uses firewalld direct rules to filter container traffic in the FORWARD chain. If firewalld is not available, you'll need to use --network=open or install and configure firewalld. See Firewalld Setup below for instructions.
Restricted mode (default) - Blocks local networks, allows internet:
coi shell # Default behavior

- Blocks: RFC1918 private networks (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16)
- Blocks: Cloud metadata endpoints (169.254.0.0/16)
- Allows: All public internet (npm, pypi, GitHub, APIs, etc.)
Allowlist mode - Only specific domains allowed:
coi shell --network=allowlist

- Requires configuration with an `allowed_domains` list
- DNS resolution with automatic IP refresh every 30 minutes
- Always blocks RFC1918 private networks
- IP caching for DNS failure resilience
Open mode - No restrictions (trusted projects only):
coi shell --network=open

# ~/.config/coi/config.toml
[network]
mode = "restricted" # restricted | open | allowlist
# Allowlist mode configuration
# Supports both domain names and raw IPv4 addresses
allowed_domains = [
"8.8.8.8", # Google DNS (REQUIRED for DNS resolution)
"1.1.1.1", # Cloudflare DNS (REQUIRED for DNS resolution)
"registry.npmjs.org", # npm package registry
"api.anthropic.com", # Claude API
"platform.claude.com", # Claude Platform
]
refresh_interval_minutes = 30 # IP refresh interval (0 to disable)

Important for allowlist mode:
- Gateway IP is auto-detected - COI automatically detects and allows your network gateway IP. You don't need to add it manually. Containers must reach their gateway to route traffic.
- Public DNS servers required - `8.8.8.8` and `1.1.1.1` must be in the allowlist for DNS resolution to work.
- Supports both domain names (`github.com`) and raw IPv4 addresses (`8.8.8.8`)
- Subdomains must be listed explicitly (`github.com` ≠ `api.github.com`)
- Domains behind CDNs may have many IPs that change frequently
- DNS failures use cached IPs from previous successful resolution
Accessing services from the host (e.g., Puma web server, HTTP servers):
By default, COI allows the host machine to access services running in containers. This works by adding an allow rule for the gateway IP (which represents the host) before the RFC1918 block rules. Since firewalld evaluates rules by priority, the gateway IP is allowed while other private IPs are still blocked.
For example, if a web server runs on port 3000 in the container:
# Inside container: Puma/Rails server listening on 0.0.0.0:3000
# From host: Access via container IP
curl http://<container-ip>:3000

Allowing access from entire local network:
For development environments where you want machines on your local network to access container services (e.g., accessing containers via tmux from multiple machines), add this to your config:
[network]
allow_local_network_access = true # Allow all RFC1918, not just gateway

Warning: When `allow_local_network_access = true`, ALL RFC1918 private network traffic is allowed (no RFC1918 blocking). Use this only in trusted development environments where you need cross-machine access.
Default behavior: Only the host (gateway IP) can access container services. Other machines on your local network cannot, even if they're on the same subnet.
Note: Firewalld rules filter traffic at the FORWARD chain level. All traffic from the container to the gateway IP is permitted to allow host-to-container communication.
With standard bridge networking, containers are directly accessible from your host. If you run a web server, database, or API inside the container, you can access it from your host browser or tools using the container's IP address.
# Find container IP
coi list # Shows IPv4 for running containers
# Access service from host
curl http://<container-ip>:3000

Troubleshooting: If you see "Connection refused" when trying to access container services:
- Verify the container service is listening: `coi container exec <name> -- netstat -tlnp`
- Check the container IP: `coi list` (shows IPv4 for running containers)
- Ensure your firewall allows traffic to the bridge network
Network isolation (restricted/allowlist modes) requires firewalld. If you see the error "firewalld is not available", you have two options:
Option 1: Use open network mode (quick fix)
coi shell --network=open

This disables egress filtering but allows you to work immediately.
Option 2: Install and configure firewalld (recommended)
Firewalld provides the FORWARD chain filtering needed for network isolation. Follow these steps:
# 1. Install firewalld (Ubuntu/Debian)
sudo apt install firewalld
# 2. Enable and start firewalld
sudo systemctl enable --now firewalld
# 3. Enable masquerading for container NAT
sudo firewall-cmd --permanent --add-masquerade
sudo firewall-cmd --reload
# 4. Verify firewalld is running
sudo firewall-cmd --state
# 5. Allow COI to manage firewall rules (passwordless sudo)
echo "$USER ALL=(ALL) NOPASSWD: /usr/bin/firewall-cmd" | sudo tee /etc/sudoers.d/coi-firewalld
sudo chmod 440 /etc/sudoers.d/coi-firewalld

Key Points:
- Firewalld must be running for network isolation to work
- COI adds direct rules to the FORWARD chain to filter container traffic
- Rules are scoped by container IP address for precise filtering
- Rules are removed when containers are stopped/deleted
How it works:
- COI gets the container's IP address from Incus
- Firewalld direct rules are added with priorities (lower = evaluated first)
- Restricted mode: Allow gateway, block RFC1918, allow all else
- Allowlist mode: Allow gateway, allow specific IPs, block RFC1918, block all else
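COI manages these rules itself, but for intuition, hand-written equivalents of the restricted-mode rules might look like the following (illustrative only - the actual priorities and rule set are internal to COI):

# Allow container -> gateway first (priority 0 is evaluated before 10)
sudo firewall-cmd --direct --add-rule ipv4 filter FORWARD 0 -s <container-ip> -d <gateway-ip> -j ACCEPT
# Then reject container -> RFC1918
sudo firewall-cmd --direct --add-rule ipv4 filter FORWARD 10 -s <container-ip> -d 10.0.0.0/8 -j REJECT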
When working with AI tools in sandboxed containers, be aware that the container has write access to your .git/ directory through the mounted workspace. This creates a potential attack surface where malicious code could modify git hooks or configuration files.
The Risk:
- AI tools can modify `.git/hooks/*` (pre-commit, post-commit, pre-push hooks)
- These hooks execute arbitrary code when you run git commands
- A modified `.gitattributes` can define filters that execute code during git operations
- Git configuration (`.git/config`) could be altered to add malicious aliases
Best Practice: Disable Hooks When Committing AI-Generated Code
When committing code that was modified by AI tools, always bypass git hooks:
# Recommended: Commit with hooks disabled
git -c core.hooksPath=/dev/null commit --no-verify -m "your message"
# Or create an alias for convenience
alias gcs='git -c core.hooksPath=/dev/null commit --no-verify'

Why This Works:
- `core.hooksPath=/dev/null` tells git to look for hooks in a non-existent directory
- `--no-verify` disables pre-commit, commit-msg, and applypatch-msg hooks
- This prevents any malicious hooks from executing even if they were modified inside the container
Additional Protection:
# Also disable git attributes filters (clean/smudge filters can execute code)
git -c core.hooksPath=/dev/null -c core.attributesFile=/dev/null commit --no-verify -m "msg"
# Or make it a shell function for repeated use
safe_commit() {
git -c core.hooksPath=/dev/null -c core.attributesFile=/dev/null commit --no-verify "$@"
}

When Is This Necessary?
This protection is most important when:
- Committing code that was modified or generated by AI tools
- You commit changes outside the container (recommended practice)
- Your repository uses git hooks for automation (pre-commit, husky, etc.)
Note: COI sandboxes already protect your host environment from malicious code execution. This guidance is specifically about preventing hooks from running when you commit AI-generated changes from your host shell.
Use coi health to diagnose setup issues and verify your environment is correctly configured:
# Basic health check
coi health
# JSON output for scripting/automation
coi health --format json
# Verbose output with additional checks
coi health --verbose

Example output:
Code on Incus Health Check
==========================
SYSTEM:
[OK] Operating system Ubuntu 24.04.3 LTS (amd64)
CRITICAL:
[OK] Incus Running (version 6.20)
[OK] Permissions User in incus-admin group
[OK] Default image coi (fingerprint: 1bf24b3a67...)
[OK] Image age 2 days old
NETWORKING:
[OK] Network bridge incusbr0 (10.128.178.1/24)
[OK] IP forwarding Enabled
[OK] Firewalld Running (restricted mode available)
STORAGE:
[OK] COI directory ~/.coi (writable)
[OK] Sessions dir ~/.coi/sessions-claude (writable)
[OK] Disk space 455.0 GB available
CONFIGURATION:
[OK] Config loaded ~/.config/coi/config.toml
[OK] Network mode restricted
[OK] Tool claude
STATUS:
[OK] Containers 1 running
[OK] Saved sessions 12 session(s)
STATUS: HEALTHY
All 16 checks passed
Exit codes:
- `0` = healthy (all checks pass)
- `1` = degraded (warnings but functional)
- `2` = unhealthy (critical failures)
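These exit codes make coi health easy to use as a gate in scripts or CI; a minimal sketch:

#!/bin/bash
# Fail fast when the environment is unhealthy; tolerate warnings
coi health > /dev/null
case $? in
  0) echo "healthy" ;;
  1) echo "degraded - continuing with warnings" ;;
  *) echo "unhealthy - aborting" >&2; exit 1 ;;
esac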
What's checked:
| Category | Checks |
|---|---|
| System | OS info, Colima/Lima detection |
| Critical | Incus availability, group permissions, default image, image age |
| Networking | Network bridge, IP forwarding, firewalld (mode-aware) |
| Storage | COI directory, sessions directory, disk space (warns if <5GB) |
| Configuration | Config files, network mode, tool |
| Status | Running containers, saved sessions |
| Optional | DNS resolution, passwordless sudo (with --verbose) |
Colima/Lima detection: When running inside a Colima or Lima VM, the health check automatically detects this and shows [colima] in the OS info. If firewalld is not available, it provides Colima-specific guidance.
Symptom: coi build hangs at "Still waiting for network..." even though the container has an IP address.
Cause: On Ubuntu systems with systemd-resolved, containers may receive 127.0.0.53 as their DNS server via DHCP. This is the host's stub resolver which only works on the host, not inside containers.
Automatic Fix: COI automatically detects and fixes this issue during build by:
- Detecting if DNS resolution fails but IP connectivity works
- Injecting public DNS servers (8.8.8.8, 8.8.4.4, 1.1.1.1) into the container
- The resulting image uses static DNS configuration
Permanent Fix: Configure your Incus network to provide proper DNS to containers:
# Option 1: Enable managed DNS (recommended)
incus network set incusbr0 dns.mode managed
# Option 2: Use public DNS servers
incus network set incusbr0 raw.dnsmasq "dhcp-option=6,8.8.8.8,8.8.4.4"After applying either fix, future containers will have working DNS automatically.
Note: The automatic fix only affects the built image. Other Incus containers on your system may still experience DNS issues until you apply the permanent fix.
Why doesn't COI automatically run incus network set for me?
COI deliberately uses an in-container fix rather than modifying your Incus network configuration:
- System-level impact - Changing Incus network settings affects all containers on that bridge, not just COI containers
- Network name varies - The bridge might not be named `incusbr0` on all systems
- Permissions - Users running `coi build` might not have permission to modify Incus network settings
- Principle of least surprise - Modifying system-level Incus config without explicit consent could break other setups
The in-container approach is self-contained and only affects COI images, leaving your Incus configuration untouched.