A format conversion proxy server that enables using the GLM-4.5-Air model with the Claude Code CLI, including full tool-calling support.
This proxy acts as a format-converting bridge between Claude Code and the GLM-4.5-Air model:
- ✅ Receives requests from Claude Code in Anthropic's Messages API format
- ✅ Converts to OpenAI Chat Completions format
- ✅ Forwards to GLM-4.5-Air API
- ✅ Converts responses back to Anthropic format
- ✅ Full tool calling support (Anthropic `tool_use` ↔ OpenAI `tool_calls`)
- ✅ Handles both streaming and non-streaming responses
- ✅ Comprehensive logging for debugging
Result: use Claude Code with the FREE GLM-4.5-Air model, with full functionality!
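To make the request-direction conversion concrete, here is an illustrative TypeScript sketch, not the proxy's actual source code. Field names follow the public Anthropic Messages and OpenAI Chat Completions schemas; the function name `toOpenAI` and the inlined default model are assumptions for the example:

```typescript
// Sketch only: maps an Anthropic Messages request onto the
// OpenAI Chat Completions shape the GLM-4.5-Air API expects.

interface AnthropicTool {
  name: string;
  description?: string;
  input_schema: object;
}

interface AnthropicRequest {
  model: string;
  system?: string;
  max_tokens: number;
  messages: { role: "user" | "assistant"; content: string }[];
  tools?: AnthropicTool[];
}

function toOpenAI(req: AnthropicRequest, glmModel = "zai-org/GLM-4.5-Air") {
  const messages: { role: string; content: string }[] = [];
  // Anthropic carries the system prompt as a top-level field;
  // OpenAI expects it as the first chat message.
  if (req.system) messages.push({ role: "system", content: req.system });
  messages.push(...req.messages);

  return {
    model: glmModel, // Claude model names are remapped to the GLM backend
    messages,
    max_tokens: req.max_tokens,
    // Anthropic `input_schema` becomes OpenAI `function.parameters`.
    tools: req.tools?.map((t) => ({
      type: "function",
      function: {
        name: t.name,
        description: t.description,
        parameters: t.input_schema,
      },
    })),
  };
}
```

The same mapping runs in reverse on responses, so Claude Code never sees the OpenAI format.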
3 simple steps to get started:
```bash
npm install -g claude-code-proxy
```

```bash
claude-code-proxy              # Normal mode
claude-code-proxy --debug      # Debug mode (verbose logging)
claude-code-proxy --port 4000  # Custom port
claude-code-proxy --help       # Show all options
```

In a new terminal:

```bash
export ANTHROPIC_AUTH_TOKEN="dummy"
export ANTHROPIC_BASE_URL="http://localhost:3333"
claude
```

That's it! Start using Claude Code with free Chutes GLM models.
- Install dependencies:

  ```bash
  npm install
  ```

- Configure environment variables:

  ```bash
  cp .env.example .env
  # Edit .env with your GLM-4.5-Air API credentials
  ```

- Start the proxy server:

  Normal mode (minimal logging):

  ```bash
  npm run proxy
  ```

  Debug mode (verbose logging):

  ```bash
  npm run proxy:debug
  ```
To use the proxy with the Claude Code CLI:

```bash
export ANTHROPIC_AUTH_TOKEN="dummy"
export ANTHROPIC_BASE_URL="http://localhost:3333"
claude
```

Or set the same values in your Claude Code settings file:

```json
{
  "env": {
    "ANTHROPIC_AUTH_TOKEN": "dummy",
    "ANTHROPIC_BASE_URL": "http://localhost:3333"
  }
}
```
- Start the proxy server (in one terminal):

  ```bash
  npm run proxy
  # or for debug mode with full logs:
  npm run proxy:debug
  ```

- Use Claude Code normally (in another terminal):

  ```bash
  export ANTHROPIC_AUTH_TOKEN="dummy"
  export ANTHROPIC_BASE_URL="http://localhost:3333"
  claude
  ```

  Claude Code will now use the GLM-4.5-Air model with full tool-calling support!

- To switch back to regular Claude:

  ```bash
  unset ANTHROPIC_AUTH_TOKEN ANTHROPIC_BASE_URL
  claude
  ```
The proxy uses the following environment variables (in .env):
- `GLM_API_TOKEN`: Your GLM-4.5-Air API token (required)
- `GLM_API_URL`: GLM-4.5-Air API endpoint (default: `https://llm.chutes.ai/v1/chat/completions`)
- `GLM_MODEL`: Model to use (default: `zai-org/GLM-4.5-Air`)
- `PORT`: Proxy server port (default: `3333`)
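Putting the variables above together, a minimal `.env` might look like this (the token value is a placeholder; the other values are the documented defaults):

```
GLM_API_TOKEN=your-chutes-api-token-here
GLM_API_URL=https://llm.chutes.ai/v1/chat/completions
GLM_MODEL=zai-org/GLM-4.5-Air
PORT=3333
```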
✅ Full tool calling support - Anthropic tool_use ↔ OpenAI tool_calls conversion
✅ Streaming responses - Real-time token-by-token output
✅ Comprehensive logging - Color-coded debug logs for troubleshooting
✅ Format conversion - Seamless Anthropic ↔ OpenAI translation
✅ Free GLM-4.5-Air API - Use GLM-4.5-Air at no cost
✅ Multiple models - Maps to Haiku, Sonnet, and Opus tiers
✅ TypeScript support - Full type definitions and IntelliSense support
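The response-direction half of the tool-calling conversion can be sketched as follows. This is an assumption-laden illustration, not the proxy's actual code: the function name `toAnthropicContent` is hypothetical, and the field names follow the public OpenAI and Anthropic schemas.

```typescript
// Sketch only: an OpenAI `tool_calls` entry becomes an Anthropic
// `tool_use` content block in the converted response.

interface OpenAIToolCall {
  id: string;
  type: "function";
  function: { name: string; arguments: string }; // arguments is a JSON string
}

function toAnthropicContent(
  text: string | null,
  toolCalls: OpenAIToolCall[] = []
): any[] {
  const blocks: any[] = [];
  // Plain assistant text maps to an Anthropic `text` block.
  if (text) blocks.push({ type: "text", text });
  for (const call of toolCalls) {
    blocks.push({
      type: "tool_use",
      id: call.id,
      name: call.function.name,
      // OpenAI serializes arguments as a JSON string;
      // Anthropic expects a parsed object in `input`.
      input: JSON.parse(call.function.arguments),
    });
  }
  return blocks;
}
```

The `arguments`-string vs. `input`-object mismatch is the main pitfall in this direction; everything else is a field rename.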
The proxy provides color-coded logs for easy debugging:
- 📥 Blue - Incoming requests from Claude Code
- 🔄 Magenta - Format conversion operations
- 📤 Cyan - Requests forwarded to Chutes API
- ✅ Green - Successful operations
- ❌ Red - Errors and failures
- 🌊 Cyan - Streaming responses
- Normal mode: clean, minimal logs
- Debug mode: full request/response bodies and conversion details
- RESEARCH.md - Complete research findings and solution approaches
- TEST-RESULTS.md - GLM-4.5-Air API test results
- TESTING-GUIDE.md - Comprehensive guide for testing and debugging the proxy
- TYPESCRIPT.md - TypeScript usage guide and type definitions
For issues or inquiries, contact: saradhi8142385201@gmail.com
MIT