Use Anthropic clients (like Claude Code) with Gemini or OpenAI backends. 🤝
A proxy server that lets you use Anthropic clients with Gemini or OpenAI models via LiteLLM. 🌉
- OpenAI API key 🔑
- Google AI Studio (Gemini) API key (if using Google provider) 🔑
- `uv` installed.
- Clone this repository:

  ```bash
  git clone https://github.com/1rgs/claude-code-openai.git
  cd claude-code-openai
  ```
- Install uv (if you haven't already):

  ```bash
  curl -LsSf https://astral.sh/uv/install.sh | sh
  ```

  (`uv` will handle dependencies based on `pyproject.toml` when you run the server.)
- Configure Environment Variables: Copy the example environment file:

  ```bash
  cp .env.example .env
  ```

  Edit `.env` and fill in your API keys and model configuration:

  - `ANTHROPIC_API_KEY`: (Optional) Needed only if proxying to Anthropic models.
  - `OPENAI_API_KEY`: Your OpenAI API key (required if using the default OpenAI preference or as a fallback).
  - `GEMINI_API_KEY`: Your Google AI Studio (Gemini) API key (required if `PREFERRED_PROVIDER=google`).
  - `PREFERRED_PROVIDER` (Optional): Set to `openai` (default) or `google`. This determines the primary backend for mapping `haiku`/`sonnet`.
  - `BIG_MODEL` (Optional): The model to map `sonnet` requests to. Defaults to `gpt-4.1` (if `PREFERRED_PROVIDER=openai`) or `gemini-2.5-pro-preview-03-25`.
  - `SMALL_MODEL` (Optional): The model to map `haiku` requests to. Defaults to `gpt-4.1-mini` (if `PREFERRED_PROVIDER=openai`) or `gemini-2.0-flash`.
  **Mapping Logic:**

  - If `PREFERRED_PROVIDER=openai` (default), `haiku`/`sonnet` map to `SMALL_MODEL`/`BIG_MODEL` prefixed with `openai/`.
  - If `PREFERRED_PROVIDER=google`, `haiku`/`sonnet` map to `SMALL_MODEL`/`BIG_MODEL` prefixed with `gemini/` if those models are in the server's known `GEMINI_MODELS` list (otherwise the mapping falls back to OpenAI).
- Run the server:

  ```bash
  uv run uvicorn server:app --host 0.0.0.0 --port 8082 --reload
  ```

  (`--reload` is optional, for development.)
- Install Claude Code (if you haven't already):

  ```bash
  npm install -g @anthropic-ai/claude-code
  ```
- Connect to your proxy:

  ```bash
  ANTHROPIC_BASE_URL=http://localhost:8082 claude
  ```
- That's it! Your Claude Code client will now use the configured backend models (OpenAI by default) through the proxy. 🎯 A quick verification sketch follows this list.
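If you want to verify the whole chain from Python, here is a minimal sketch, assuming the proxy is running on `localhost:8082`; the client-side `api_key` is a placeholder on the assumption that the proxy ignores it (the real provider keys come from the proxy's `.env`). Requires `pip install anthropic`.

```python
from anthropic import Anthropic

# Point the official Anthropic SDK at the proxy instead of api.anthropic.com.
client = Anthropic(
    base_url="http://localhost:8082",
    api_key="placeholder",  # assumption: the proxy ignores the client key
)

# "haiku" in the model name triggers the SMALL_MODEL mapping described below.
response = client.messages.create(
    model="claude-3-haiku-20240307",
    max_tokens=128,
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.content[0].text)
```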
The proxy automatically maps Claude model aliases to either OpenAI or Gemini models based on your configuration:

| Claude Model | Default Mapping | When `BIG_MODEL`/`SMALL_MODEL` is a Gemini model |
|---|---|---|
| haiku | `openai/gpt-4.1-mini` | `gemini/[model-name]` |
| sonnet | `openai/gpt-4.1` | `gemini/[model-name]` |
The following OpenAI models are supported with automatic `openai/` prefix handling:
- o3-mini
- o1
- o1-mini
- o1-pro
- gpt-4.5-preview
- gpt-4o
- gpt-4o-audio-preview
- chatgpt-4o-latest
- gpt-4o-mini
- gpt-4o-mini-audio-preview
- gpt-4.1
- gpt-4.1-mini
The following Gemini models are supported with automatic `gemini/` prefix handling:
- gemini-2.5-pro-preview-03-25
- gemini-2.0-flash
The proxy automatically adds the appropriate prefix to model names:

- OpenAI models get the `openai/` prefix
- Gemini models get the `gemini/` prefix
- `BIG_MODEL` and `SMALL_MODEL` get the appropriate prefix based on whether they appear in the OpenAI or Gemini model lists

For example:

- `gpt-4o` becomes `openai/gpt-4o`
- `gemini-2.5-pro-preview-03-25` becomes `gemini/gemini-2.5-pro-preview-03-25`
- When `BIG_MODEL` is set to a Gemini model, Claude Sonnet requests map to `gemini/[model-name]`
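For reference, the mapping and prefixing rules above can be summarized in a short Python sketch. This is illustrative only, not the server's actual code; the `map_model` helper and the exact contents of the model set are assumptions:

```python
import os

# Known Gemini models, per the list above (illustrative subset).
GEMINI_MODELS = {"gemini-2.5-pro-preview-03-25", "gemini-2.0-flash"}

def map_model(claude_model: str) -> str:
    """Map a Claude alias (haiku/sonnet) to a prefixed backend model."""
    provider = os.getenv("PREFERRED_PROVIDER", "openai")
    if "sonnet" in claude_model:
        default = "gpt-4.1" if provider == "openai" else "gemini-2.5-pro-preview-03-25"
        target = os.getenv("BIG_MODEL", default)
    elif "haiku" in claude_model:
        default = "gpt-4.1-mini" if provider == "openai" else "gemini-2.0-flash"
        target = os.getenv("SMALL_MODEL", default)
    else:
        return claude_model  # other model names pass through unchanged

    # Prefix by list membership; a target not in GEMINI_MODELS falls back
    # to the OpenAI mapping, as described above.
    return f"gemini/{target}" if target in GEMINI_MODELS else f"openai/{target}"

print(map_model("claude-3-5-sonnet-20241022"))  # openai/gpt-4.1 with defaults
```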
Control the mapping using environment variables in your `.env` file or set them directly in your shell:
**Example 1: Default (Use OpenAI)**

No changes needed in `.env` beyond API keys, or ensure:

```dotenv
OPENAI_API_KEY="your-openai-key"
GEMINI_API_KEY="your-google-key" # Needed if PREFERRED_PROVIDER=google
# PREFERRED_PROVIDER="openai" # Optional, it's the default
# BIG_MODEL="gpt-4.1" # Optional, it's the default
# SMALL_MODEL="gpt-4.1-mini" # Optional, it's the default
```
**Example 2: Prefer Google**

```dotenv
GEMINI_API_KEY="your-google-key"
OPENAI_API_KEY="your-openai-key" # Needed for fallback
PREFERRED_PROVIDER="google"
# BIG_MODEL="gemini-2.5-pro-preview-03-25" # Optional, it's the default for Google pref
# SMALL_MODEL="gemini-2.0-flash" # Optional, it's the default for Google pref
```
**Example 3: Use Specific OpenAI Models**

```dotenv
OPENAI_API_KEY="your-openai-key"
GEMINI_API_KEY="your-google-key"
PREFERRED_PROVIDER="openai"
BIG_MODEL="gpt-4o" # Example specific model
SMALL_MODEL="gpt-4o-mini" # Example specific model
```
This proxy works by:

- Receiving requests in Anthropic's API format 📥
- Translating the requests to OpenAI format via LiteLLM 🔄
- Sending the translated request to the configured backend (OpenAI or Gemini) 📤
- Converting the response back to Anthropic format 🔄
- Returning the formatted response to the client ✅
The proxy handles both streaming and non-streaming responses, maintaining compatibility with all Claude clients. 🌊
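As a rough illustration of the translation steps, here is a minimal non-streaming sketch built on LiteLLM. The real proxy handles far more (system prompts, tool use, streaming chunks), and the shape of `anthropic_response` below is a simplified assumption rather than the proxy's exact output:

```python
import litellm  # pip install litellm; reads OPENAI_API_KEY from the environment

# 1. An Anthropic-format request body (simplified) as the proxy might receive it.
anthropic_request = {
    "model": "claude-3-haiku-20240307",
    "max_tokens": 128,
    "messages": [{"role": "user", "content": "Hello!"}],
}

# 2./3. Translate to an OpenAI-style call; the model name has already been
# remapped and prefixed (here: SMALL_MODEL under the default OpenAI provider).
response = litellm.completion(
    model="openai/gpt-4.1-mini",
    messages=anthropic_request["messages"],
    max_tokens=anthropic_request["max_tokens"],
)

# 4. Convert the OpenAI-style response back into Anthropic's shape.
anthropic_response = {
    "type": "message",
    "role": "assistant",
    "content": [{"type": "text", "text": response.choices[0].message.content}],
    "stop_reason": "end_turn",
}
print(anthropic_response)
```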
Contributions are welcome! Please feel free to submit a Pull Request. 🎁