LoopForge stores configuration in ~/.loopforge/config.toml (path kept for compatibility).
Think of the file in four layers:
- `providers.*`: how to talk to each model provider
- `router.*`: which provider/model to use for planning, coding, and summary work
- `security.*`: what network, secret, and leak-guard rules apply around tool execution
- `skills.*`: how local skills are allowlisted and approved
In practice:
- `loopforge init` creates the baseline config
- `loopforge config validate` checks whether the file parses and the required structure is present
- `loopforge doctor` helps explain why a config is valid but still not usable in your environment
MCP servers are not persisted in config.toml yet. Enable them per run:
```shell
loopforge agent run \
  --workspace my-ws \
  --mcp-config mcp-servers.json \
  --prompt "…"
```

```shell
loopforge config validate
loopforge config validate --json
loopforge doctor
```

Use `config validate` for syntax and schema issues.
Use `doctor` for runtime readiness issues such as missing env vars, browser prerequisites, or security posture warnings.
```toml
[providers.ollama]
kind = "openai_compatible"
base_url = "http://127.0.0.1:11434/v1"
api_key_env = ""
default_model = "qwen3:4b"

[router.planning]
provider = "ollama"
model = "default"

[router.coding]
provider = "ollama"
model = "default"

[router.summary]
provider = "ollama"
model = "default"

[security.secrets]
mode = "env_first"

[security.leaks]
mode = "warn"

[skills]
auto_approve_readonly = true
```

This is enough for a local-first Ollama setup. You can harden it later by adding `security.egress.rules` and tightening the skills policy.
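A hardened variant of the baseline might combine the security and skills fields described in this document; a sketch, in which the `docs.rs` allowlist entry is purely illustrative:

```toml
# Hardening sketch: redact likely leaks, allowlist a single docs host,
# and require approval for non-readonly skills (host choice is illustrative)
[security.leaks]
mode = "redact"

[[security.egress.rules]]
tool = "web_fetch"
host = "docs.rs"
path_prefix = "/"
methods = ["GET"]

[skills]
require_approval = true
auto_approve_readonly = false
```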
Each provider entry defines:
- `kind`: driver kind (`openai_compatible`, `zhipu_native`, `minimax_native`, `bedrock`, etc.)
- `base_url`: API base URL
- `api_key_env`: name of the environment variable that contains the API key (empty for local providers)
- `default_model`: default model used for `model = "default"`
Example:
```toml
[providers.ollama]
kind = "openai_compatible"
base_url = "http://127.0.0.1:11434/v1"
api_key_env = ""
default_model = "qwen3:4b"
```

For AWS Bedrock, use `kind = "bedrock"` plus an `aws_bedrock` table:
```toml
[providers.bedrock]
kind = "bedrock"
base_url = ""    # unused for Bedrock
api_key_env = "" # unused for Bedrock
default_model = "anthropic.claude-3-5-sonnet-20241022-v2:0"

[providers.bedrock.aws_bedrock]
region = "us-east-1"
cross_region = "" # optional
profile = ""      # optional
```

Notes:
- Bedrock uses the AWS SDK credential chain (env vars, shared config, profiles, instance role, etc.).
- `cross_region` (optional) prefixes model ids with `<cross_region>.` when they are not already prefixed.
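Applying that prefixing rule, setting `cross_region = "us"` would turn a bare model id into a `us.`-prefixed one; a sketch:

```toml
[providers.bedrock.aws_bedrock]
region = "us-east-1"
cross_region = "us"
# With this setting, default_model "anthropic.claude-3-5-sonnet-20241022-v2:0"
# is requested as "us.anthropic.claude-3-5-sonnet-20241022-v2:0";
# an id that already carries a prefix is left as-is
```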
Each task kind selects a (provider, model) pair. This is how the runtime decides whether planning, coding, and summary turns should use the same model or different ones:
```toml
[router.planning]
provider = "ollama"
model = "default"

[router.coding]
provider = "ollama"
model = "default"

[router.summary]
provider = "ollama"
model = "default"
```

```toml
[security.secrets]
mode = "env_first"

[security.leaks]
mode = "redact"

[[security.egress.rules]]
tool = "web_fetch"
host = "docs.rs"
path_prefix = "/"
methods = ["GET"]
```

Fields:
- `security.secrets.mode`
  - `env_first`: resolve provider credentials from host environment variables
- `security.leaks.mode`
  - `off`: do nothing extra
  - `warn`: annotate likely secret leaks but keep the raw output
  - `redact`: mask detected ranges before persistence and follow-up model calls
  - `enforce`: block the tool result when likely secrets are detected
- `security.egress.rules`
  - when empty, LoopForge keeps baseline SSRF/private-network guards only
  - when non-empty, outbound requests must match an allow rule in addition to the baseline guards
Each egress rule contains:
- `tool`: tool name, for example `web_fetch`
- `host`: exact destination host
- `path_prefix`: required URL path prefix
- `methods`: allowed HTTP methods
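Since outbound requests only need to match an allow rule, rules can be stacked to allowlist several destinations; a sketch in which the second host is purely hypothetical:

```toml
# Allowlist GET requests to two documentation-style hosts
# (crates.io entry is a hypothetical example, not a recommended default)
[[security.egress.rules]]
tool = "web_fetch"
host = "docs.rs"
path_prefix = "/"
methods = ["GET"]

[[security.egress.rules]]
tool = "web_fetch"
host = "crates.io"
path_prefix = "/api/"
methods = ["GET"]
```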
Current outbound allowlist enforcement applies to web_fetch, A2A requests, and browser navigation entrypoints.
LoopForge includes common provider presets (names may evolve):
- OpenAI-compatible: `deepseek`, `kimi`, `qwen`, `glm`, `minimax`
- Provider-native: `glm_native`, `minimax_native`, `qwen_native`
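A preset behaves like any other provider entry, so router blocks can reference it by name. A sketch, assuming the `deepseek` preset exists in your build and its API key env var is set:

```toml
# Route coding turns to a preset provider; planning/summary keep their own entries
[router.coding]
provider = "deepseek"
model = "default"
```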
This section controls local skill policy. It is separate from the normal tool sandbox: a skill can still be blocked even when the underlying workspace is otherwise valid.
```toml
[skills]
allowlist = ["hello-skill", "qa-helper"]
require_approval = false
auto_approve_readonly = true
experimental = false
```

Fields:

- `allowlist`: optional global skill allowlist
- `require_approval`: force approval for non-readonly skills
- `auto_approve_readonly`: when true, readonly skills skip manual approval
- `experimental`: optional flag for rollout messaging