Production-ready AI coding assistant with Claude Code-style Unified Workflow Architecture
Features • Quick Start • Documentation • Roadmap
Unlike simple code generation tools, Agentic Coder provides a complete coding workflow similar to Claude Code and GitHub Copilot Workspace.
┌────────────────────────────────────────────────────────────────────┐
│                            User Request                            │
└────────────────────────────────────────────────────────────────────┘
                                  │
                                  ▼
┌────────────────────────────────────────────────────────────────────┐
│                  Supervisor Agent (Reasoning LLM)                  │
│  Analyzes request → Determines response type → Routes to handler   │
└────────────────────────────────────────────────────────────────────┘
     │             │             │             │             │
     ▼             ▼             ▼             ▼             ▼
 ┌────────┐    ┌────────┐    ┌────────┐    ┌────────┐    ┌────────┐
 │QUICK_QA│    │PLANNING│    │CODE_GEN│    │ REVIEW │    │ DEBUG  │
 └────────┘    └────────┘    └────────┘    └────────┘    └────────┘
     │             │             │             │             │
     └─────────────┴─────────────┴─────────────┴─────────────┘
                                 │
                                 ▼
                ┌──────────────────────────────────┐
                │   Unified Response + Artifacts   │
                └──────────────────────────────────┘
What makes it special:
- Single entry point handles all request types
- Supervisor uses Reasoning LLM (DeepSeek-R1) for intelligent analysis
- Automatic routing based on request complexity
- Consistent response format across all handlers
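A minimal sketch of this routing flow, assuming a hypothetical `ResponseType` enum, handler registry, and `classify` helper; the real supervisor's interfaces may differ:

```python
from enum import Enum

class ResponseType(Enum):
    QUICK_QA = "quick_qa"
    PLANNING = "planning"
    CODE_GEN = "code_gen"
    REVIEW = "review"
    DEBUG = "debug"

async def handle_request(message: str, reasoning_llm, handlers: dict) -> dict:
    """Single entry point: classify the request, then dispatch to one handler."""
    # The reasoning LLM (e.g. DeepSeek-R1) picks the response type for this request.
    label = await reasoning_llm.classify(message, choices=[t.value for t in ResponseType])
    handler = handlers[ResponseType(label)]

    # Every handler returns the same unified shape: response text plus artifacts.
    result = await handler(message)
    return {"type": label, "response": result.text, "artifacts": result.artifacts}
```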
Tool categories: File & Git • Code Operations • Web & Search
Network Mode
Perfect for enterprise environments with strict security requirements.
| Mode | Description | Blocked Tools |
|---|---|---|
| `online` | Full functionality | None |
| `offline` | Air-gapped mode | `web_search`, `http_request` |
Security Policy:
- Block: Tools that send data externally
- Allow: Tools that only receive data (downloads)
- Allow: All local tools (file, git, code)
# Enable offline mode
NETWORK_MODE=offline
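A sketch of how the offline policy might be enforced when building the tool registry; the `OUTBOUND_TOOLS` set and `allowed_tools` helper below are illustrative assumptions, not the project's actual code:

```python
import os

# Tools that send data to external services are blocked in offline mode;
# local tools (file, git, code) and receive-only tools stay available.
OUTBOUND_TOOLS = {"web_search", "http_request"}

def allowed_tools(all_tools: dict) -> dict:
    """Filter the tool registry according to NETWORK_MODE."""
    if os.getenv("NETWORK_MODE", "online") == "offline":
        return {name: tool for name, tool in all_tools.items()
                if name not in OUTBOUND_TOOLS}
    return all_tools
```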
Execute untrusted code safely in isolated containers.

sandbox = registry.get_tool("sandbox_execute")
# Python execution
result = await sandbox.execute(
code="import os; print(os.getcwd())",
language="python",
timeout=60
)
# Shell execution
result = await sandbox.execute(
code="ls -la && whoami",
language="shell"
)

Supported Languages: Python, Node.js, TypeScript, Shell
Offline Setup:
docker pull ghcr.io/agent-infra/sandbox:latest
# Works offline after first pull

Full-featured command-line interface with:
- Command History - Persistent across sessions
- Auto-Completion - Tab completion for commands and files
- Slash Commands - `/help`, `/status`, `/preview`, `/config`
- Streaming Output - Real-time code generation display
# Start interactive mode
python -m cli
# One-shot mode
python -m cli "Create a Python REST API"
# With options
python -m cli --workspace ./project --model qwen2.5-coder:32b

| Requirement | Version |
|---|---|
| Python | 3.12+ |
| Node.js | 20+ |
| Docker | Latest (for sandbox) |
| GPU | NVIDIA recommended (for vLLM) |
# 1. Clone
git clone https://github.com/your-org/agentic-coder.git
cd agentic-coder
# 2. Environment
cp .env.example .env
# 3. Backend
cd backend
python -m venv venv
source venv/bin/activate # Windows: venv\Scripts\activate
pip install -r requirements.txt
# 4. Frontend
cd ../frontend
npm install
# 5. Sandbox (optional)
docker pull ghcr.io/agent-infra/sandbox:latest

# Terminal 1: vLLM (Reasoning)
vllm serve deepseek-ai/DeepSeek-R1 --port 8001
# Terminal 2: vLLM (Coding)
vllm serve Qwen/Qwen3-8B-Coder --port 8002
# Terminal 3: Backend
cd backend && uvicorn app.main:app --port 8000 --reload
# Terminal 4: Frontend
cd frontend && npm run dev

Access: http://localhost:5173
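Once the two vLLM servers are up, you can sanity-check them before starting the backend; this assumes the standard OpenAI-compatible `/v1/chat/completions` route that `vllm serve` exposes:

```python
import requests

# Query the reasoning server; change the port to 8002 to test the coding model.
resp = requests.post(
    "http://localhost:8001/v1/chat/completions",
    json={
        "model": "deepseek-ai/DeepSeek-R1",
        "messages": [{"role": "user", "content": "Reply with the single word: ready"}],
        "max_tokens": 16,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```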
./RUN_MOCK.sh # Linux/Mac
RUN_MOCK.bat # Windows

# .env
# LLM Endpoints
VLLM_REASONING_ENDPOINT=http://localhost:8001/v1
VLLM_CODING_ENDPOINT=http://localhost:8002/v1
REASONING_MODEL=deepseek-ai/DeepSeek-R1
CODING_MODEL=Qwen/Qwen3-8B-Coder
# Network Mode
NETWORK_MODE=online # or 'offline'
# Sandbox
SANDBOX_IMAGE=ghcr.io/agent-infra/sandbox:latest
SANDBOX_HOST=localhost
SANDBOX_PORT=8080
SANDBOX_TIMEOUT=60
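A minimal sketch of reading these values in the backend, assuming plain `os.environ` access with the same defaults; the project's actual settings module may differ:

```python
import os

VLLM_REASONING_ENDPOINT = os.getenv("VLLM_REASONING_ENDPOINT", "http://localhost:8001/v1")
VLLM_CODING_ENDPOINT = os.getenv("VLLM_CODING_ENDPOINT", "http://localhost:8002/v1")
REASONING_MODEL = os.getenv("REASONING_MODEL", "deepseek-ai/DeepSeek-R1")
CODING_MODEL = os.getenv("CODING_MODEL", "Qwen/Qwen3-8B-Coder")
NETWORK_MODE = os.getenv("NETWORK_MODE", "online")          # 'online' or 'offline'
SANDBOX_TIMEOUT = int(os.getenv("SANDBOX_TIMEOUT", "60"))   # seconds
```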
POST /chat/unified
Content-Type: application/json
{
"message": "Create a Python calculator with tests",
"session_id": "session-123",
"workspace": "/path/to/workspace"
}

POST /chat/unified/stream
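A hedged example of calling both endpoints from Python; the request fields mirror the body above, but the response shape and the line-delimited streaming format are assumptions:

```python
import requests

payload = {
    "message": "Create a Python calculator with tests",
    "session_id": "session-123",
    "workspace": "/path/to/workspace",
}

# Non-streaming: one JSON response containing the unified result and artifacts.
resp = requests.post("http://localhost:8000/chat/unified", json=payload, timeout=300)
resp.raise_for_status()
print(resp.json())

# Streaming: consume output incrementally as it is generated.
with requests.post("http://localhost:8000/chat/unified/stream",
                   json=payload, stream=True, timeout=300) as stream:
    for line in stream.iter_lines():
        if line:
            print(line.decode())
```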
cd backend
pytest app/tools/tests/ -v
# 262 passed, 8 skipped

| Document | Description |
|---|---|
| Agent Tools Guide | All 20 tools documentation |
| Architecture | System design |
| CLI Guide | Command-line interface |
| Mock Mode | Testing without GPU |
| Roadmap | Development roadmap & future plans |
- Phase 1 - Core tools (14 tools)
- Phase 2 - Network mode + Web tools
- Phase 2.5 - Code formatting tools
- Phase 3 - CLI & Performance optimization
- Phase 4 - Sandbox execution
- Phase 5 - Plan mode with approval workflow
- Phase 6 - Context window optimization
- Phase 7 - MCP (Model Context Protocol) integration
- Phase 8 - Multi-agent collaboration
See ROADMAP.md for detailed plans and feature backlog.
| Model | Type | Strengths |
|---|---|---|
| DeepSeek-R1 | Reasoning | Complex analysis, planning |
| Qwen3-Coder | Coding | Code generation, completion |
| GPT-OSS | General | Balanced performance |
- Fork the repository
- Create feature branch (`git checkout -b feature/amazing`)
- Commit changes (`git commit -m 'Add amazing feature'`)
- Push branch (`git push origin feature/amazing`)
- Open Pull Request
MIT License - see LICENSE for details.