## Description

When using the AWF sandbox with `--enable-api-proxy`, the API proxy overrides `OPENAI_BASE_URL` inside the agent container to point at its internal proxy (`http://172.30.0.30:10000/v1`). The proxy then forwards all requests to the hardcoded `api.openai.com` (or `api.anthropic.com`), ignoring any custom `OPENAI_BASE_URL` or `ANTHROPIC_BASE_URL` set by the user.

This makes it impossible to use AWF with internal or self-hosted LLM endpoints, such as corporate LLM routers or proxies that expose an OpenAI-compatible API.
## Reproduction

1. Configure a workflow with a custom `OPENAI_BASE_URL` pointing to an internal endpoint:

   ```yaml
   engine:
     id: codex
     model: gpt-5.3-codex
     env:
       OPENAI_BASE_URL: "https://llm-router.internal.example.com/v1"
       OPENAI_API_KEY: ${{ secrets.LLM_ROUTER_KEY }}
   ```

2. Compile and run the workflow. The compiled lock file includes `--enable-api-proxy` automatically.

3. The AWF API proxy:
   - Overrides `OPENAI_BASE_URL` to `http://172.30.0.30:10000/v1` inside the agent container
   - Strips `OPENAI_API_KEY` from the agent container (credential isolation)
   - Forwards all requests to `api.openai.com` instead of the custom endpoint

4. Result: `401 Unauthorized`, because the internal API key is sent to `api.openai.com`, which does not recognize it.
## Evidence from logs

```text
[health-check] ✓ OpenAI/Codex credentials NOT in agent environment (correct)
[health-check] Testing connectivity to OpenAI API proxy at http://172.30.0.30:10000/v1...
...
Request completed method=POST url=http://172.30.0.30:10000/v1/responses status=401 Unauthorized
headers={... "Domain=api.openai.com" ...}
Incorrect API key provided: sk-****
```

The `Domain=api.openai.com` in the response cookies confirms the proxy is forwarding to OpenAI's API, not the internal endpoint.
## Expected behavior

The API proxy should respect the user-configured `OPENAI_BASE_URL` / `ANTHROPIC_BASE_URL` and forward requests to that endpoint instead of the hardcoded defaults. This would allow AWF's credential isolation and firewall features to work with:

- Corporate/internal LLM routers
- Azure OpenAI endpoints
- Self-hosted OpenAI-compatible APIs (e.g., vLLM, TGI)
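A rough sketch of the desired resolution logic (illustrative only: `resolve_upstream` and the `DEFAULTS` table are hypothetical, not AWF's actual code) — fall back to the hardcoded public host only when no base URL is configured:

```python
# Hypothetical sketch of base-URL-aware upstream selection; names and
# defaults are illustrative, not AWF's real implementation.
DEFAULTS = {
    "openai": "https://api.openai.com/v1",
    "anthropic": "https://api.anthropic.com/v1",
}

def resolve_upstream(provider: str, env: dict) -> str:
    """Prefer the user-configured *_BASE_URL; fall back to the public API."""
    override = env.get(f"{provider.upper()}_BASE_URL")
    return override or DEFAULTS[provider]

# With a custom router configured, requests go to the internal endpoint:
env = {"OPENAI_BASE_URL": "https://llm-router.internal.example.com/v1"}
print(resolve_upstream("openai", env))
# Without one, the existing default behavior is preserved:
print(resolve_upstream("anthropic", {}))
```

This keeps today's behavior as the default, so the change would be backward compatible for workflows that never set a base URL.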
## Suggested implementation

Add a `--openai-api-target <url>` flag (analogous to the existing `--copilot-api-target <host>` flag) that configures the API proxy's upstream for OpenAI requests, and a corresponding `--anthropic-api-target <url>` flag for Anthropic.

The gh-aw compiler could set these flags automatically when it detects a custom `OPENAI_BASE_URL` or `ANTHROPIC_BASE_URL` in `engine.env`.
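The compiler-side detection could look roughly like this (a sketch only: `proxy_flags` is a hypothetical helper, and the two `--*-api-target` flags are the ones proposed above, not flags that exist in AWF today):

```python
def proxy_flags(engine_env: dict) -> list[str]:
    """Derive the proposed --*-api-target flags from a workflow's engine.env.

    Hypothetical helper: the flag names come from the suggestion above
    and are not part of the current AWF CLI.
    """
    flags = []
    if url := engine_env.get("OPENAI_BASE_URL"):
        flags += ["--openai-api-target", url]
    if url := engine_env.get("ANTHROPIC_BASE_URL"):
        flags += ["--anthropic-api-target", url]
    return flags

print(proxy_flags({"OPENAI_BASE_URL": "https://llm-router.internal.example.com/v1"}))
```

Workflows without a custom base URL would produce no extra flags, leaving the generated lock file unchanged.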
## Current workaround

The only workaround is to disable the AWF sandbox entirely (`sandbox.agent: false` plus `strict: false` and `threat-detection: false`), which removes all firewall protection and credential isolation. This is not ideal for production use.
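In workflow frontmatter, that workaround looks roughly like this (assuming the gh-aw frontmatter schema implied by the dotted keys above; exact key placement may differ):

```yaml
# Disables the AWF sandbox entirely -- removes firewall and credential isolation.
sandbox:
  agent: false
strict: false
threat-detection: false
```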
## Documentation reference

The API proxy limitations are documented in `docs/api-proxy-sidecar.md`:

- Only supports OpenAI and Anthropic APIs
- No support for Azure OpenAI endpoints
## Environment

- gh-aw CLI: v0.53.6
- AWF: v0.23.0
- Engine: codex (OpenAI Codex CLI)
- Runner: self-hosted (Amazon Linux 2023)