▄▀█ █ █▀ ▀█▀ ▄▀█ █▄░█ █▀▄ ▄▀█ █▀█ █▀▄ █▀
█▀█ █ ▄█ █ █▀█ █░▀█ █▄▀ █▀█ █▀▄ █▄▀ ▄█
█▀▀ █▄░█ ▀█▀ █▀▀ █▀█ █▀█ █▀█ █ █▀ █▀▀
██▄ █░▀█ ░█░ ██▄ █▀▄ █▀▀ █▀▄ █ ▄█ ██▄
🤖 Enterprise AI Engineering Platform
Standards + CLI + Parlant Orchestrator
by Lorenzo Padovani - Padosoft
Enterprise AI Engineering Platform - A monorepo containing:
- @padosoft/ai-standards - Single Source of Truth (SSOT) for agents, guides, and quality gates
- @padosoft/ai-cli - TypeScript CLI for syncing to Copilot, Cursor, Gemini, Windsurf, Augment, etc.
- ai-orchestrator - Python Parlant-style governance engine with MCP tools
ai-enterprise/
├── package.json # Root workspace (@padosoft/ai-enterprise)
├── packages/
│ ├── cli/ # @padosoft/ai-cli - TypeScript CLI
│ │ ├── src/sync/ # CLI source code
│ │ ├── adapters/ # Templates and targets config
│ │ └── dist/ # Compiled output
│ │
│ ├── standards/ # @padosoft/ai-standards - SSOT
│ │ ├── agents/ # Claude agents (global, detective, cloudflare)
│ │ ├── docs/ # Standards documentation by stack
│ │ ├── config/ # Settings, quality gates
│ │ └── index.js # API for loading standards
│ │
│ ├── orchestrator/ # Python Parlant orchestrator
│ │ ├── src/ai_orchestrator/ # Python source
│ │ ├── migrations/ # MySQL schemas
│ │ └── pyproject.toml # Python package config
│ │
│ └── dashboard/ # React Enterprise Dashboard
│ ├── src/
│ │ ├── pages/ # 11 dashboard pages
│ │ ├── components/ # UI components (shadcn/ui style)
│ │ ├── stores/ # Zustand state management
│ │ └── api/ # API client
│ ├── package.json # React + Vite + Tailwind
│ └── vite.config.ts # Vite configuration
```bash
# Start servers
# Windows
start-servers.bat
# Linux/Mac
./start-servers.sh

# Stop servers
# Windows
stop-servers.bat
# Linux/Mac
./stop-servers.sh
```

This starts/stops both the Python Orchestrator (port 8080) and the React Dashboard (port 3000).
🎓 Junior-Friendly Guide - Follow these steps exactly in order. Each step must complete before moving to the next.
Before starting, make sure you have installed:
| Software | Minimum Version | Check Command | Download |
|---|---|---|---|
| Node.js | v18+ | `node --version` | nodejs.org |
| npm | v9+ | `npm --version` | Included with Node.js |
| Python | v3.10+ | `python --version` | python.org |
| pip | Latest | `pip --version` | Included with Python |
| MySQL | v8.0+ | `mysql --version` | mysql.com or Docker |
| Git | Any | `git --version` | git-scm.com |
```bash
git clone https://github.com/padosoft/ai-enterprise.git
cd ai-enterprise
npm install
```

This installs all packages for the CLI and Dashboard. Wait for it to complete (may take 2-3 minutes).
Note on Migrations: The project includes 7 migration files that must be applied in order:
- `mysql_001_init.sql` - Base schema (runs, steps)
- `mysql_002_enterprise.sql` - Enterprise features (guidelines, policies, webhooks, parallel execution)
- `mysql_003_dashboard.sql` - Dashboard-specific tables (alerts, settings)
- `mysql_004_test_data.sql` - Test data for development
- `mysql_005_guidelines_source.sql` - Guidelines source tracking
- `mysql_006_guideline_id_expand.sql` - Extended guideline IDs
- `mysql_007_guideline_tags.sql` - Tag system for guidelines

⚠️ Note: The file `mysql_002_parallel_steps.sql` is obsolete and should NOT be applied. It conflicts with `mysql_002_enterprise.sql`, which includes all parallel execution features.
Option A: Using Docker (Recommended)
```bash
docker run --name ai-orch-mysql \
  -e MYSQL_ROOT_PASSWORD=root \
  -e MYSQL_DATABASE=ai_orch \
  -e MYSQL_USER=ai_orch \
  -e MYSQL_PASSWORD=super-secret \
  -p 3306:3306 \
  -d mysql:8.0

# Wait 30 seconds for MySQL to initialize, then apply all migrations in order:
docker exec -i ai-orch-mysql mysql -uai_orch -psuper-secret ai_orch \
  < packages/orchestrator/migrations/mysql_001_init.sql
docker exec -i ai-orch-mysql mysql -uai_orch -psuper-secret ai_orch \
  < packages/orchestrator/migrations/mysql_002_enterprise.sql
docker exec -i ai-orch-mysql mysql -uai_orch -psuper-secret ai_orch \
  < packages/orchestrator/migrations/mysql_003_dashboard.sql
docker exec -i ai-orch-mysql mysql -uai_orch -psuper-secret ai_orch \
  < packages/orchestrator/migrations/mysql_004_test_data.sql
docker exec -i ai-orch-mysql mysql -uai_orch -psuper-secret ai_orch \
  < packages/orchestrator/migrations/mysql_005_guidelines_source.sql
docker exec -i ai-orch-mysql mysql -uai_orch -psuper-secret ai_orch \
  < packages/orchestrator/migrations/mysql_006_guideline_id_expand.sql
docker exec -i ai-orch-mysql mysql -uai_orch -psuper-secret ai_orch \
  < packages/orchestrator/migrations/mysql_007_guideline_tags.sql
```

Option B: Using Local MySQL
```bash
# Connect to MySQL as root
mysql -u root -p
```

```sql
# Create database and user
CREATE DATABASE ai_orch;
CREATE USER 'ai_orch'@'localhost' IDENTIFIED BY 'super-secret';
GRANT ALL PRIVILEGES ON ai_orch.* TO 'ai_orch'@'localhost';
FLUSH PRIVILEGES;
EXIT;
```

```bash
# Apply all migrations in order
mysql -uai_orch -psuper-secret ai_orch < packages/orchestrator/migrations/mysql_001_init.sql
mysql -uai_orch -psuper-secret ai_orch < packages/orchestrator/migrations/mysql_002_enterprise.sql
mysql -uai_orch -psuper-secret ai_orch < packages/orchestrator/migrations/mysql_003_dashboard.sql
mysql -uai_orch -psuper-secret ai_orch < packages/orchestrator/migrations/mysql_004_test_data.sql
mysql -uai_orch -psuper-secret ai_orch < packages/orchestrator/migrations/mysql_005_guidelines_source.sql
mysql -uai_orch -psuper-secret ai_orch < packages/orchestrator/migrations/mysql_006_guideline_id_expand.sql
mysql -uai_orch -psuper-secret ai_orch < packages/orchestrator/migrations/mysql_007_guideline_tags.sql
```

```bash
cd packages/orchestrator

# Copy the example environment file
cp .env.example .env
```

Edit `.env` with your settings:
```ini
# Database (use your actual values)
AI_ORCH_DB_HOST=localhost
AI_ORCH_DB_PORT=3306
AI_ORCH_DB_USER=ai_orch
AI_ORCH_DB_PASS=super-secret
AI_ORCH_DB_NAME=ai_orch

# Server
AI_ORCH_HTTP_HOST=0.0.0.0
AI_ORCH_HTTP_PORT=8080

# Paths (IMPORTANT: use absolute paths)
AI_ORCH_REPO_ROOT=/path/to/your/projects
AI_ORCH_ARTIFACTS_DIR=/path/to/.ai/artifacts

# CORS (dashboard URLs)
AI_ORCH_CORS_ORIGINS=http://localhost:5173,http://localhost:3000
```

```bash
cd packages/orchestrator

# Create virtual environment
python -m venv .venv

# Activate it
# Windows:
.venv\Scripts\activate
# Linux/Mac:
source .venv/bin/activate

# Install the orchestrator package
pip install -e .

# Go back to root
cd ../..
```

Using the startup scripts (Recommended):
```bash
# Windows
start-servers.bat
# Linux/Mac
./start-servers.sh
```

Or manually:

```bash
# Terminal 1: Start Python Orchestrator
cd packages/orchestrator
.venv\Scripts\activate  # or: source .venv/bin/activate
python run_server.py 8080

# Terminal 2: Start React Dashboard
cd packages/dashboard
npm run dev
```

- Dashboard: Open http://localhost:3000 - you should see the AI Orchestrator Dashboard
- API Health: Open http://localhost:8080/health - should return `{"status": "healthy"}`
- API Stats: Open http://localhost:8080/api/stats - should return JSON with statistics
```bash
# Start only the orchestrator
start-servers.bat --orchestrator-only
./start-servers.sh --orchestrator-only

# Start only the dashboard
start-servers.bat --dashboard-only
./start-servers.sh --dashboard-only

# Stop only the orchestrator
stop-servers.bat --orchestrator-only
./stop-servers.sh --orchestrator-only

# Stop only the dashboard
stop-servers.bat --dashboard-only
./stop-servers.sh --dashboard-only

# Show help
start-servers.bat --help
./start-servers.sh --help
stop-servers.bat --help
./stop-servers.sh --help
```

| Problem | Solution |
|---|---|
| Python not found | Install Python 3.10+ and add to PATH |
| Node not found | Install Node.js 18+ and restart terminal |
| MySQL connection refused | Check MySQL is running: docker ps or mysql -uroot -p |
| Port 8080 already in use | Kill process: `netstat -ano \| findstr :8080` then `taskkill /PID <pid> /F` |
| Port 3000 already in use | Kill process: `netstat -ano \| findstr :3000` then `taskkill /PID <pid> /F` |
| "Module not found" in Python | Activate venv: .venv\Scripts\activate and run pip install -e . |
| Dashboard shows "Offline" | Check orchestrator is running at http://localhost:8080/health |
| CORS errors in browser | Verify AI_ORCH_CORS_ORIGINS in .env includes http://localhost:3000 |
The AI Orchestrator is tool-agnostic and works with any AI CLI that supports MCP (Model Context Protocol):
| AI Tool | MCP Support | Integration |
|---|---|---|
| Claude Code | ✅ Native | claude mcp add |
| Gemini CLI | ✅ Native | Add to MCP config |
| Cursor IDE | ✅ Via config | MCP server config |
| Continue.dev | ✅ Via config | MCP server config |
| Any MCP Client | ✅ | stdio or HTTP transport |
```bash
claude mcp add ai-orchestrator-local '{
  "type": "stdio",
  "command": "bash",
  "args": ["-lc", "cd /path/to/ai-enterprise/packages/orchestrator && source .venv/bin/activate && python -m ai_orchestrator.server"]
}'
```

Add to your `~/.gemini/settings.json`:
```json
{
  "mcpServers": {
    "ai-orchestrator": {
      "command": "bash",
      "args": ["-lc", "cd /path/to/ai-enterprise/packages/orchestrator && source .venv/bin/activate && python -m ai_orchestrator.server"]
    }
  }
}
```

For tools that don't support stdio, use the HTTP server:
```bash
# Start with HTTP transport
python -m ai_orchestrator.server --transport http --port 8080

# MCP endpoint available at:
# POST http://localhost:8080/mcp/invoke
```

Any HTTP client can invoke MCP tools:
```bash
curl -X POST http://localhost:8080/mcp/invoke \
  -H "Content-Type: application/json" \
  -d '{"tool": "list_runs", "arguments": {"limit": 10}}'
```

```bash
# Clone the repository
git clone https://github.com/padosoft/ai-enterprise.git
cd ai-enterprise

# Install dependencies (builds CLI automatically)
npm install

# OR install globally from npm (when published)
npm i -g @padosoft/ai-enterprise
```

```bash
# Install agents and settings to user home directory
ai bootstrap --user

# This installs:
# - Claude agents → ~/.claude/agents/
# - Claude config → ~/.claude/config/
# - Docs → ~/.ai-standards/docs/
# - Generated files → ~/.ai-standards/dist/
```

```bash
# Generate and sync to all AI tools
ai sync --cursor-here --copilot-here --gemini-here --windsurf-here --augment-here

# With split options
ai sync --cursor-here --cursor-split      # Split by category
ai sync --windsurf-here --windsurf-split  # Split by stack
ai sync --augment-here --augment-split    # Split by stack

# Auto-detect stack and generate project templates
ai sync --project-context
```

```bash
# Core commands
ai bootstrap --user          # Install global agents and settings
ai sync [options]            # Generate and export AI tool configurations
ai harvest [options]         # Import AI bundles from dependencies
ai update                    # Update global standards from source
ai validate                  # Check if configurations are up to date
ai check-updates             # Check for package updates
ai print --target=<target>   # Print generated rules for target

# Sync options
--with-harvest      # Run harvest before sync
--cursor-here       # Write to .cursor/rules/
--cursor-split      # Split Cursor rules by category
--copilot-here      # Write to .github/copilot-instructions.md
--gemini-here       # Write to .gemini/GEMINI.md
--opencode-here     # Write to .opencode/
--warp-here         # Write to WARP.md
--windsurf-here     # Write to .windsurf/rules/
--windsurf-split    # Split Windsurf rules by stack
--augment-here      # Write to .augment-guidelines
--augment-split     # Split Augment rules by stack
--project-context   # Auto-detect stack templates

# Harvest options
--clean               # Clean existing deps before import
--dry-run             # Preview without making changes
--packages pkg1,pkg2  # Import only specific packages

# Print targets
copilot, cursor, gemini, opencode, warp, warp-global, augment
```

The system uses dynamic granularity selection:
- One complete file per stack with all major patterns
- Coherent context - DTO, Repository, Factory, Action patterns together
- Ideal for: Complete features, new implementations, onboarding
- Focused files on specific aspects (routes, validation, migrations)
- Deep details and edge cases
- Ideal for: Specific fixes, targeted modifications, troubleshooting
The task-router dynamically chooses:
```text
// Complex task → Comprehensive
"Implement order system with DTO, Repository, Actions"
→ Loads: php-laravel-coding-guidelines.md + global standards

// Specific task → Micro-Guide
"Fix this route validation"
→ Loads: validation.md + global essentials

// Hybrid task → Both
"Add payment with Laravel patterns"
→ Loads: comprehensive for context + payments.md for details
```

The Python orchestrator provides Parlant-style governance - a paradigm shift from prompt-based programming to structured, enforceable contracts.
📖 Full Architecture Documentation - Deep dive into the Parlant philosophy and implementation.
| Traditional Prompt-Based | Parlant-Style |
|---|---|
| Rules in prompt text | Structured Guideline objects |
| Hope the model follows | External enforcement & validation |
| No priority system | Explicit numerical priorities |
| No audit trail | Complete event logging |
| Retry = repeat prompt | Structured retry hints |
- Deterministic Enforcement - Rules are validated externally, not interpreted by the model
- Priority Resolution - When rules conflict, priority determines the winner
- Context-Aware - Guidelines apply conditionally based on project stack
- Full Auditability - Every step, decision, and artifact is logged
- Structured Recovery - Failed steps get actionable retry hints
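The priority-resolution principle can be pictured with a minimal sketch. This is an illustration of the rule "lower number = higher priority" described above, not the actual `ParlantEngine` implementation; the `Guideline` dataclass fields here are a simplified subset:

```python
from dataclasses import dataclass

@dataclass
class Guideline:
    """Simplified stand-in for the orchestrator's structured Guideline objects."""
    id: str
    category: str
    priority: int  # lower number = higher priority
    name: str

def resolve_conflict(candidates: list[Guideline]) -> Guideline:
    """When guidelines conflict, the one with the lowest priority number wins."""
    return min(candidates, key=lambda g: g.priority)

# Two rules that could pull a step in different directions:
secrets = Guideline("g-security-002", "security", 6, "secret_protection")
minimal = Guideline("g-behavior-002", "behavior", 20, "minimal_context")

winner = resolve_conflict([secrets, minimal])
print(winner.id)  # g-security-002
```

Because resolution is a deterministic function of the stored priorities, the outcome is auditable rather than left to the model's interpretation.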
```bash
# 1. Start MySQL (Docker)
docker run --name ai-orch-mysql \
  -e MYSQL_ROOT_PASSWORD=root \
  -e MYSQL_DATABASE=ai_orch \
  -e MYSQL_USER=ai_orch \
  -e MYSQL_PASSWORD=super-secret \
  -p 3306:3306 \
  -d mysql:8.0

# 2. Apply all migrations in order
docker exec -i ai-orch-mysql mysql -uai_orch -psuper-secret ai_orch \
  < packages/orchestrator/migrations/mysql_001_init.sql
docker exec -i ai-orch-mysql mysql -uai_orch -psuper-secret ai_orch \
  < packages/orchestrator/migrations/mysql_002_enterprise.sql
docker exec -i ai-orch-mysql mysql -uai_orch -psuper-secret ai_orch \
  < packages/orchestrator/migrations/mysql_003_dashboard.sql
docker exec -i ai-orch-mysql mysql -uai_orch -psuper-secret ai_orch \
  < packages/orchestrator/migrations/mysql_004_test_data.sql
docker exec -i ai-orch-mysql mysql -uai_orch -psuper-secret ai_orch \
  < packages/orchestrator/migrations/mysql_005_guidelines_source.sql
docker exec -i ai-orch-mysql mysql -uai_orch -psuper-secret ai_orch \
  < packages/orchestrator/migrations/mysql_006_guideline_id_expand.sql
docker exec -i ai-orch-mysql mysql -uai_orch -psuper-secret ai_orch \
  < packages/orchestrator/migrations/mysql_007_guideline_tags.sql

# 3. Install Python package
cd packages/orchestrator
python -m venv .venv
source .venv/bin/activate  # Windows: .venv\Scripts\activate
pip install -e .

# 4. Set environment variables
export AI_ORCH_DB_HOST=localhost
export AI_ORCH_DB_USER=ai_orch
export AI_ORCH_DB_PASS=super-secret
export AI_ORCH_DB_NAME=ai_orch
export AI_ORCH_REPO_ROOT=/path/to/your/project
export AI_ORCH_ARTIFACTS_DIR=/path/to/.ai/artifacts

# 5. Run MCP server
ai-orchestrator
```

```bash
# Add MCP server to Claude Code CLI
claude mcp add ai-orchestrator-local '{
  "type": "stdio",
  "command": "bash",
  "args": [
    "-lc",
    "cd /path/to/ai-enterprise/packages/orchestrator && source .venv/bin/activate && python -m ai_orchestrator.server"
  ]
}'
```

| Tool | Description |
|---|---|
| `orchestrate` | Start new orchestration run |
| `start_step` | Begin step execution |
| `commit_step` | Record step completion with artifacts |
| `finalize` | Complete run with success/failure |
| `get_run` | Get run details |
| `list_runs` | List orchestration runs |
| `get_step` | Get step details |
| `list_events` | List run events |
| `get_guidelines` | Get applicable guidelines for context |
| `get_ready_steps` | Get steps ready for parallel execution |
| `cancel_run` | Cancel active run |
| Feature | Description |
|---|---|
| Webhooks | External event notifications with HMAC signing and retry logic |
| Prometheus Metrics | Full observability with counters, gauges, histograms |
| Transactional Locking | MySQL advisory locks for concurrent access |
| Parallel Steps | Execute independent steps concurrently |
| HTTP Server | REST API for dashboard and MCP-over-HTTP |
```bash
# Install with all enterprise features
pip install -e "packages/orchestrator[enterprise]"

# Run with HTTP transport and metrics
python -m ai_orchestrator.server --transport http --port 8080 --metrics-port 9090
```

| Endpoint | Method | Description |
|---|---|---|
| `/health` | GET | Health check |
| `/metrics` | GET | Prometheus metrics |
| `/api/stats` | GET | Detailed statistics with trends |
| `/api/stats/timeseries` | GET | Time series data for charts |
| `/api/health` | GET | Detailed system health (CPU, memory, services) |
| `/api/runs` | GET | Paginated runs list |
| `/api/runs/{id}` | GET | Run details with steps |
| `/api/runs/{id}/cancel` | POST | Cancel active run |
| `/api/runs/{id}/retry` | POST | Retry failed run |
| `/api/events` | GET | Event list |
| `/api/events/stream` | GET | SSE real-time events |
| `/api/webhooks` | GET/POST | List/create webhooks |
| `/api/webhooks/{id}` | PUT/DELETE | Update/delete webhook |
| `/api/webhooks/{id}/test` | POST | Test webhook |
| `/api/alerts` | GET | List alerts |
| `/api/alerts/{id}/acknowledge` | POST | Acknowledge alert |
| `/api/settings` | GET/PUT | Dashboard settings |
| `/api/discord/test` | POST | Test Discord notification |
| `/mcp/invoke` | POST | MCP tool invocation via HTTP |
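The `/mcp/invoke` endpoint accepts the same `{"tool": ..., "arguments": ...}` payload shown in the curl example earlier. A minimal stdlib-only Python client could look like this (a sketch; it assumes the default port 8080 and a running server):

```python
import json
import urllib.request

def build_invocation(tool: str, arguments: dict) -> dict:
    """Payload shape expected by /mcp/invoke."""
    return {"tool": tool, "arguments": arguments}

def invoke_mcp_tool(tool: str, arguments: dict,
                    base_url: str = "http://localhost:8080") -> dict:
    """POST a tool invocation to a running orchestrator and decode the JSON reply."""
    data = json.dumps(build_invocation(tool, arguments)).encode()
    req = urllib.request.Request(
        f"{base_url}/mcp/invoke",
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# With the orchestrator running on port 8080:
#   invoke_mcp_tool("list_runs", {"limit": 10})
```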
The monorepo includes a full-featured React dashboard in packages/dashboard/:
```bash
# Install and run dashboard
cd packages/dashboard
npm install
npm run dev
```

Dashboard Features:
- Overview - KPI cards, runs chart, active runs, recent events
- Runs Management - List, filter, cancel, retry runs with detailed view
- Metrics - Charts (area, pie, bar), tool usage, performance stats
- Alerts - System alerts with severity levels and acknowledgment
- Events - Audit log with date grouping and filtering
- Live Feed - Real-time SSE event stream with pause/resume
- Guidelines - CRUD for behavioral guidelines with enable/disable
- Webhooks - CRUD for webhooks with test functionality
- System Health - CPU, memory, disk, database, queue stats
- Settings - Theme, retention, Discord integration, alert thresholds
Discord Integration:
- Critical alerts to designated channel
- Weekly summary reports (separate channel)
- Configurable notification triggers
Guidelines are structured behavioral rules that govern AI agent behavior. Unlike free-form prompt instructions, Guidelines are:
- Prioritized - Lower numbers = higher priority (1-100 scale)
- Categorized - `behavior`, `security`, `quality`, `custom`
- Conditional - Can apply only to specific stacks or contexts
- Externally Enforced - Validated by the orchestrator, not just suggested
Guidelines come from three sources, each with different persistence:
| Source | Icon | Persistence | Edit Location |
|---|---|---|---|
| Database | HardDrive | Persistent | Dashboard or API |
| Built-in | Package | Read-only | Code changes only |
| Standards JSON | FileJson | Session-only* | `packages/standards/config/settings.json` |

\* Important: Guidelines loaded from `settings.json` are resynced every time the server restarts. Any modifications made via the dashboard will be lost on restart. To permanently change these guidelines, edit the source JSON file.
| ID | Category | Priority | Name | Description |
|---|---|---|---|---|
| g-security-001 | security | 5 | command_safety | Never execute destructive commands (`rm -rf`, `curl \| bash`, etc.) |
| g-security-002 | security | 6 | secret_protection | Never include secrets, API keys, or credentials in output |
| g-behavior-001 | behavior | 10 | contract_compliance | Always follow step contracts strictly |
| g-behavior-002 | behavior | 20 | minimal_context | Keep each step focused on its specific goal |
| g-behavior-003 | behavior | 30 | deterministic_changes | Prefer minimal, deterministic changes |
| g-quality-001 | quality | 40 | test_verification | Run tests before marking a coding step complete |
Quality gates defined in packages/standards/config/settings.json are automatically converted to Guidelines:
```json
{
  "quality_gates": {
    "database": {
      "no_offset_over_1000": {
        "enabled": true,
        "message": "BLOCKED: Use keyset pagination instead of OFFSET > 1000"
      }
    },
    "security": {
      "no_pii_in_logs": {
        "enabled": true,
        "message": "Never log PII (emails, IPs, passwords)"
      }
    }
  }
}
```

These become Guidelines with:

- ID: `qg-{category}-{gate_name}` (e.g., `qg-database-no_offset_over_1000`)
- Source: `standards`
- Source Path: Full path to `settings.json`
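The conversion can be pictured with a short sketch of the naming scheme (an illustration only, not the orchestrator's actual loader code):

```python
def quality_gates_to_guidelines(settings: dict) -> list[dict]:
    """Convert a settings.json quality_gates section into guideline records
    using the qg-{category}-{gate_name} ID scheme."""
    guidelines = []
    for category, gates in settings.get("quality_gates", {}).items():
        for gate_name, gate in gates.items():
            if not gate.get("enabled", False):
                continue  # disabled gates produce no guideline
            guidelines.append({
                "id": f"qg-{category}-{gate_name}",
                "category": category,
                "description": gate.get("message", ""),
                "source": "standards",
            })
    return guidelines

settings = {
    "quality_gates": {
        "database": {
            "no_offset_over_1000": {
                "enabled": True,
                "message": "BLOCKED: Use keyset pagination instead of OFFSET > 1000",
            }
        }
    }
}
print([g["id"] for g in quality_gates_to_guidelines(settings)])
# ['qg-database-no_offset_over_1000']
```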
```bash
# List all guidelines
curl http://localhost:8080/api/guidelines

# Filter by category
curl http://localhost:8080/api/guidelines?category=security

# Filter by stack
curl http://localhost:8080/api/guidelines?stack=php-laravel
```

- Navigate to Guidelines in the sidebar
- View guidelines grouped by source (Database, Built-in, Standards JSON)
- Add new guidelines using the Add Guideline button (creates in Database)
- Toggle guidelines on/off (Database only)
- Edit/delete guidelines (Database only)

Note: Built-in and Standards guidelines show as "read-only" in the dashboard. To modify them permanently, edit the source code or the `settings.json` file.
```python
from ai_orchestrator import (
    ParlantEngine,
    StackDetector,
    get_standards,
    detect_stacks,
    # Enterprise features
    get_metrics,
    dispatch_webhook,
    get_lock_manager,
    create_http_app,
    run_server,
)

# Stack detection
detector = StackDetector("/path/to/project")
stacks = detector.detect()
print(f"Primary: {detector.primary_stack}")

# Standards integration
standards = get_standards()
if standards.is_available:
    guidelines = standards.guidelines
    laravel_docs = standards.load_agent_content("global", "task-router")

# Parlant engine with auto-loaded guidelines
engine = ParlantEngine()  # Loads from settings.json + defaults
context = {"stack": "php-laravel"}
applicable = engine.get_applicable_guidelines(context)

# Metrics collection
metrics = get_metrics()
metrics.run_started()
with metrics.tool_call_timer("my_tool"):
    # ... tool execution
    pass
metrics.run_completed("done", duration_seconds=10.5)

# Webhook dispatch (fire-and-forget)
dispatch_webhook("custom.event", run_id="run-123", data={"key": "value"})

# Distributed locking
lock_manager = get_lock_manager()
with lock_manager.acquire_step_lock("run-123", step_id=1):
    # Exclusive access to this step
    pass
```

Quality gates defined in `packages/standards/config/settings.json`:
- ❌ OFFSET > 1000 rows (use keyset pagination)
- ❌ Query without covered index on hot paths
- ❌ N+1 patterns (use eager loading)
- ❌ Controller without FormRequest validation
- ❌ Resource controller without Policy authorization
- ❌ Route without required middleware
- ❌ Handler without Zod schema validation
- ❌ Route without error boundary
- ❌ API route without CORS configuration
- ❌ PII in log statements
- ❌ Hardcoded secrets/credentials
- ❌ TODO without issue reference
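The first database gate forbids deep OFFSET pagination. As an illustration of the alternative it recommends (hypothetical table and column names, not code from this repo), keyset pagination replaces an ever-growing OFFSET with a WHERE clause on the last-seen key:

```python
def offset_query(page: int, page_size: int = 50) -> str:
    """Anti-pattern: the database still scans and discards every skipped row."""
    return (f"SELECT id, name FROM orders ORDER BY id "
            f"LIMIT {page_size} OFFSET {page * page_size}")

def keyset_query(last_seen_id: int, page_size: int = 50) -> str:
    """Keyset pagination: seek straight to the next page via an indexed column.
    (Real code should use bound parameters, not string interpolation.)"""
    return (f"SELECT id, name FROM orders WHERE id > {last_seen_id} "
            f"ORDER BY id LIMIT {page_size}")

print(offset_query(100))   # OFFSET 5000 - this is what the gate blocks
print(keyset_query(5000))  # constant-cost seek, regardless of depth
```

The keyset form stays fast at any depth because the `WHERE id > ?` predicate uses the index directly, which is why the gate blocks `OFFSET > 1000` outright.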
| Tool | Global Location | Project Location | Split Support |
|---|---|---|---|
| Claude Code | `~/.claude/agents/` | `.claude/agents/` | ✅ |
| GitHub Copilot | `~/.config/github-copilot/` | `.github/copilot-instructions.md` | ❌ |
| Cursor IDE | ❌ | `.cursor/rules/` | ✅ |
| Google Gemini | `~/.gemini/` | `.gemini/GEMINI.md` | ❌ |
| OpenCode AI | `~/.config/opencode/` | `.opencode/` | ✅ |
| Warp Terminal | ❌ | `WARP.md` | ❌ |
| Windsurf IDE | ❌ | `.windsurf/rules/` | ✅ |
| Augment Code | ❌ | `.augment-guidelines` | ✅ |
The Debugging Detective system provides automatic analysis and auto-fixing:
- Error Analysis - Automatic error clustering for Laravel/Hono/Elasticsearch
- Performance Doctor - API optimization, caching, memory leak detection
- SQL Surgeon - N+1 detection, index optimization, slow query analysis
- Auto-Fixing - Automatic fixes with PR/approval workflow
```powershell
# Windows PowerShell
.\detective-control.ps1 start
.\detective-control.ps1 set-mode analysis    # Read-only
.\detective-control.ps1 set-mode stage       # PR workflow
.\detective-control.ps1 set-mode production  # Auto-fix
.\detective-control.ps1 status
```

```bash
# Linux/Mac
./detective-control.sh start
./detective-control.sh set-mode analysis
./detective-control.sh status
```

Enable debug mode for complete routing visibility:
```bash
# Via script
./packages/standards/scripts/debug-control.sh enable
./packages/standards/scripts/debug-control.sh verbose
./packages/standards/scripts/debug-control.sh full

# Via prompt
ai "create controller --debug"
ai "implement auth with verbose debug"
```

```markdown
## 🔍 AI Kit Debug Report
**Stack Detected**: php-laravel (confidence: high)
**Agents Used**: @laravel-controller-builder, @test-writer
**Guides Loaded**: php-laravel-coding-guidelines.md (2.1k tokens)
**Quality Gates**: 4 passed, 1 warning
**Performance**: 850ms execution, 2.1k/200k context tokens
```

```typescript
import { loadSettings, loadAgent, loadStandard } from '@padosoft/ai-standards';

// Load quality gates
const settings = loadSettings();

// Load an agent
const taskRouter = loadAgent('global', 'task-router');

// Load a standard
const laravelRoutes = loadStandard('php-laravel', 'routes');
```

```bash
# Binary entries
ai-standards  # Full name
ai            # Short alias
# Both point to: dist/sync/cli.js
```

```python
from ai_orchestrator import (
    ParlantEngine,
    StackDetector,
    get_standards,
    detect_stacks,
    detect_stacks_detailed,
)

# Detect project stacks
stacks = detect_stacks("/path/to/project")  # ["php-laravel", "node"]

# Full detection with confidence scores
details = detect_stacks_detailed("/path/to/project")
# {"stacks": [{"name": "php-laravel", "confidence": 0.95, ...}], "primary_stack": "php-laravel"}

# Standards integration
standards = get_standards()
guidelines = standards.guidelines  # Quality gates as Parlant Guidelines

# Run the MCP server
from ai_orchestrator.server import mcp
mcp.run()
```

- Parlant Architecture - Complete guide to the orchestrator architecture and philosophy
- Debug Mode Guide - How to enable and use debug mode
- Detective System - Debugging detective documentation
- GitHub Copilot: Repository Instructions
- Cursor IDE: Rules Documentation
- Gemini CLI: Configuration
- OpenCode AI: Rules | Agents
- Claude Code: Documentation
- Parlant Project: GitHub
- MCP Protocol: Model Context Protocol
- Create folder `packages/standards/docs/standards/new-stack/`
- Add stack-specific guides
- Create agents in `packages/standards/agents/new-stack/`
- Update `task-router.md` with routing rules
- Update `packages/cli/adapters/config/targets.yml`
- Create `.claude/agents/` in your project
- Files with the same name override global agents
- Create `.claude/settings.json` for project-specific quality gates
- Fork the repository
- Create a branch for your feature (`git checkout -b feature/amazing-feature`)
- Update guides and agents as needed
- Test with `ai validate`
- Commit with a meaningful message
- Push and create Pull Request
MIT License - Copyright Padosoft 2025
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Email: helpdesk AT padosoft.com
Developed with ❤️ by Lorenzo Padovani Padosoft for accelerating enterprise development with AI tools.
