n8n-factory is a robust "Infrastructure as Code" (IaC) tool for assembling, optimizing, simulating, and publishing n8n workflows. It allows you to define complex workflows using simple, composable YAML "recipes" and reusable JSON templates.
Designed for AI agents and power users who need deterministic, scalable, and maintainable workflow generation.
- Assembly: Compile YAML recipes into valid n8n `workflow.json` files.
- Templates: Extensive library of over 80 reusable node templates covering major services, logic, and AI utilities (`ollama`, `safe_slugify`, `progress_marker`).
- Validation: Detects circular imports, orphan nodes, and potential secrets.
- Optimization: Automatically merges nodes, prunes dead code, and standardizes JSON structure.
- Hardening: Inject error triggers and debug logging automatically.
- Simulation: Dry-run workflows locally with mock data and export HTML/CSV reports.
- Operations: Manage Docker, Postgres, and Redis directly via CLI.
- Bundle & Publish: Export to ZIP or upload directly to your n8n instance API.
- AI Assistance: Optimize prompts and leverage local LLMs (Ollama).
- Adaptive Control Plane: Dynamic batch sizing, phase gating, and intelligent queuing for high-scale execution.
- Cyclic Workflows: Native support for loops via `connections_loop` (bypasses DAG checks).
- Environment Config: Load environment-specific settings with `--env`.
```bash
pip install n8n-factory
```

- Initialize:

  ```bash
  n8n-factory init
  ```

- Create Recipe: Edit `recipes/my_workflow.yaml` to define your workflow logic.

- Build:

  ```bash
  n8n-factory build recipes/my_workflow.yaml
  ```
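While editing the recipe, you can browse the bundled node templates to see what is available to compose:

```bash
# Show available templates; add --json for machine-readable output
n8n-factory list
```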
- `build`: Assemble recipe to JSON.
- `list`: Show available templates (`--json`).
- `ops`: Runtime operations (`logs`, `db`, `redis`, `exec`, `monitor`).
- `normalize`: Standardize JSON structure.
- `optimize`: Refactor and clean up workflows.
- `harden`: Inject error handling and logging.
- `simulate`: Run logic locally (export to HTML/CSV).
- `diff`: Compare recipe vs JSON.
- `ai`: AI tools (chat, list models, optimize prompts).
- `worker`: Start the workflow scheduler worker.
- `queue`: Manage the job queue (add, list, batch, gate).
- `login`: Setup environment configuration.
- `stats`: View workflow metrics.
- `creds`: Manage/scaffold credentials.

See `n8n-factory --help` for all commands.
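A typical pass over a single workflow chains several of these commands. The sketch below assumes each post-build command accepts the generated workflow JSON as its argument; the actual file names and signatures may differ (check each command's `--help`):

```bash
n8n-factory build recipes/my_workflow.yaml   # assemble the recipe
# The following invocations are assumptions about the CLI signatures:
n8n-factory optimize my_workflow.json        # merge nodes, prune dead code
n8n-factory harden my_workflow.json          # inject error handling and logging
n8n-factory simulate my_workflow.json        # dry-run locally with mock data
```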
n8n-factory integrates with Ollama to provide local AI capabilities.
Create a `.env` file:

```env
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_API_KEY=your_key_here
```

Refine your prompts for better AI responses:

```bash
n8n-factory ai optimize "Generate a workflow that connects gmail to slack"
```

Monitor active executions directly from the n8n database:
```bash
# List active executions
n8n-factory ops monitor

# Watch a specific execution live
n8n-factory ops monitor <EXECUTION_ID>
```

For complex, high-throughput environments, the factory provides:
- Adaptive Batch Sizing: Automatically adjusts throughput based on latency.
- Phase Gating: Controls workflow dependencies (e.g., Phase 2 waits for Phase 1).
- Delayed Execution: Precise scheduling and backoff strategies.
Refer to AGENTS.md for detailed protocols on using these advanced features.
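As a sketch, phases can be attached as metadata when jobs are queued (using the `queue add` command described in the next section); the `gate` invocation shown here is an assumption, so check `n8n-factory queue --help` for the actual syntax:

```bash
# Tag jobs with their phase when queueing them
n8n-factory queue add phase1_workflow_id --mode id --meta '{"phase": "1"}'
n8n-factory queue add phase2_workflow_id --mode id --meta '{"phase": "2"}'

# Assumed invocation: open the gate for Phase 2 once Phase 1 has completed
n8n-factory queue gate 2
```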
Queue workflows for execution and let the worker manage concurrency. Failed jobs are automatically requeued.
Run the Queue Consumer (Recommended):

```bash
# Run with concurrency 5, poll every 5s.
# Optionally trigger a refill command when queue drops below 5 items.
n8n-factory queue run --concurrency 5 --poll 5 --broker-port 6580 \
  --refill-cmd "python ./scripts/refill_jobs.py" --refill-threshold 5
```
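The command passed to `--refill-cmd` can be any executable that tops up the queue. A minimal sketch, written as a shell script and using the `queue add` command shown below (the workflow IDs are placeholders):

```bash
#!/usr/bin/env bash
# refill_jobs.sh (illustrative): queue the next batch of workflows whenever
# the consumer reports the queue has dropped below the refill threshold.
for wf in workflow_a_id workflow_b_id workflow_c_id; do
  n8n-factory queue add "$wf" --mode id --meta '{"phase": "1"}'
done
```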
Queue a Job:

```bash
n8n-factory queue add my_workflow_id --mode id --meta '{"phase": "1"}'
```

Manage Queue:

```bash
n8n-factory queue list --limit 20
n8n-factory queue clear
```

You can configure the behavior of n8n-factory using environment variables or a `.env` file.
| Variable | Description | Default |
|---|---|---|
| `N8N_CONTAINER_NAME` | Name of the n8n Docker container | `n8n` |
| `DB_CONTAINER_NAME` | Name of the Postgres container | `postgres` |
| `REDIS_CONTAINER_NAME` | Name of the Redis container | `n8n-redis` |
| `REDIS_PASSWORD` | Password for Redis authentication | None |
| `N8N_RUNNERS_BROKER_PORT` | Broker port for n8n runners | None |
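For example, a `.env` matching the bundled Docker stack might look like the following (the password and broker port values are illustrative and depend on your setup):

```env
N8N_CONTAINER_NAME=n8n
DB_CONTAINER_NAME=postgres
REDIS_CONTAINER_NAME=n8n-redis
REDIS_PASSWORD=changeme
N8N_RUNNERS_BROKER_PORT=6580
```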
A `docker-compose.yaml` is provided to spin up a full n8n stack with Postgres and Redis (Redis exposed on port 16552).
```bash
docker-compose up -d
```

- n8n: http://localhost:5678
- Redis: localhost:16552
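Once the stack is running, the `ops` subcommands target these containers using the names from the configuration table above; for example, an assumed invocation to tail the n8n container logs:

```bash
# Assumed invocation; check `n8n-factory ops --help` for the exact arguments
n8n-factory ops logs
```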
The system follows a linear pipeline:
- Recipe Input: The user or AI defines the intent in `recipe.yaml`.
- Assembler: The factory combines the recipe with JSON Templates.
- Optimizer: The resulting structure is cleaned and optimized.
- Output: A valid n8n Workflow JSON is produced.
See CONTRIBUTING.md for details on how to contribute to this project.
MIT