Self-hosted OpenAI-compatible API for Claude.ai using Playwright automation. Perfect for OpenClaw, custom AI assistants, and any tool expecting OpenAI's API format.
🦞 Built with OpenClaw in mind — Use Claude's intelligence without API costs!
No API key required • Free & Open Source • Streaming Support • File Uploads
A local server that bridges Claude's web interface (claude.ai) into an OpenAI-compatible REST API. Send requests to http://127.0.0.1:3000/v1/chat/completions and get responses back using your existing browser login session.
Works with OpenClaw, custom scripts, IDE plugins, and any tool that speaks the OpenAI chat completions format.
Perfect for OpenClaw users! Use Claude's intelligence with all your OpenClaw skills and automations - no API costs required.
```yaml
# In your OpenClaw config:
llm:
  provider: openai
  api_base: "http://127.0.0.1:3000/v1"
  api_key: "dummy-key-not-used"
  model: "claude-sonnet-4.5"
  streaming: true
```

📖 Complete OpenClaw Integration Guide
- OpenClaw - Personal AI assistant with 100+ skills
- Continue.dev - AI code assistant for VS Code
- Custom scripts - Python, JavaScript, shell scripts
- IDE plugins - Cursor and other OpenAI-compatible tools
Using OpenAdapter in your project? Open a PR to add it here!
```
Your App / Client       OpenAdapter                       claude.ai
─────────────────       ──────────────────────────        ───────────────
                        Express server (:3000)
POST /v1/chat/     ──>  Receives OpenAI-format req   ──>  Types prompt
completions             Manages Playwright browser        into chat UI
                        Polls DOM for response
OpenAI-format      <──  Extracts & converts HTML     <──  Claude generates
JSON / SSE stream       to Markdown                       response
```
- Launches Chromium with a persistent profile (session stays logged in across restarts)
- Receives OpenAI-format chat completion requests over HTTP
- Extracts the user prompt and any file attachments (images, documents)
- Types the prompt into Claude's web UI and submits it
- Polls the DOM for the response, streaming chunks via SSE if requested
- Converts Claude's HTML response to clean Markdown
- Returns an OpenAI-compatible response (or SSE stream)
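The prompt-extraction step above can be sketched as a pure function that flattens an OpenAI-style `messages` array into one prompt string plus a list of attachments. This is an illustrative sketch, not the actual `lib/extractPayload.js` API — the function name and return shape are assumptions:

```javascript
// Sketch: flatten an OpenAI-style messages array into a single prompt
// string plus a list of attachments. Illustrative only — the real
// lib/extractPayload.js may differ.
function extractPayload(messages) {
  const textParts = [];
  const attachments = [];
  for (const msg of messages) {
    if (typeof msg.content === "string") {
      textParts.push(`${msg.role}: ${msg.content}`);
      continue;
    }
    // Multimodal content: an array of {type, ...} parts
    for (const part of msg.content) {
      if (part.type === "text") {
        textParts.push(`${msg.role}: ${part.text}`);
      } else if (part.type === "image_url") {
        attachments.push({ kind: "image", url: part.image_url.url });
      }
    }
  }
  return { prompt: textParts.join("\n\n"), attachments };
}

const { prompt, attachments } = extractPayload([
  { role: "system", content: "Be brief." },
  { role: "user", content: [
    { type: "text", text: "Describe this." },
    { type: "image_url", image_url: { url: "data:image/png;base64,AAAA" } },
  ]},
]);
console.log(prompt);
console.log(attachments.length);
```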
- Node.js v18+
- npm
- A graphical desktop (the browser runs headful — no headless mode due to Cloudflare)
```bash
git clone <repo-url> open-adapter
cd open-adapter
npm install
npx playwright install chromium
```

Playwright downloads its own bundled Chromium — you don't need Chrome installed.
```powershell
git clone <repo-url> open-adapter
cd open-adapter
npm install
npx playwright install chromium
```

Use PowerShell or Windows Terminal. Command Prompt (`cmd`) works too, but PowerShell is recommended.
```bash
git clone <repo-url> open-adapter
cd open-adapter
npm install
npx playwright install chromium

# Playwright may prompt you to install system dependencies:
npx playwright install-deps chromium
```

On Linux, Playwright needs certain system libraries (libnss3, libatk-bridge, etc.). The `install-deps` command installs them automatically (requires sudo).
On the first run, the browser opens to claude.ai. You must log in manually once:

```bash
node server.js
```

- The browser opens to claude.ai
- Log in with your credentials
- Your session is saved in `.browser-profile/` and persists across restarts
- The server is now ready at `http://127.0.0.1:3000`
```bash
npm start     # Runs unit tests first, then starts the server (recommended)
npm run dev   # Starts the server directly, skipping tests
```

The server listens on http://127.0.0.1:3000.
```bash
curl http://127.0.0.1:3000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "What is 2+2?"}]
  }'
```

```bash
curl http://127.0.0.1:3000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "Explain recursion"}],
    "stream": true
  }'
```

The adapter supports OpenAI's multimodal message format:
```bash
curl http://127.0.0.1:3000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{
      "role": "user",
      "content": [
        {"type": "text", "text": "What is in this image?"},
        {"type": "image_url", "image_url": {"url": "data:image/png;base64,..."}}
      ]
    }]
  }'
```

The original CLI tool is also available for one-off queries without running the server:

```bash
node adapter.js "What is 2+2?"
```

Output goes to stdout. Status messages go to stderr, so you can pipe cleanly:
```bash
node adapter.js "List 5 prime numbers" | grep -i prime
```

```
open-adapter/
├── server.js             # Express API server (OpenAI-compatible endpoint)
├── adapter.js            # Standalone CLI tool (single-prompt, exits after)
├── lib/
│   ├── sessionManager.js # Browser lifecycle & multi-tier session recovery
│   ├── extractPayload.js # OpenAI message parser & file attachment handler
│   ├── htmlToMd.js       # HTML-to-Markdown converter (runs in-browser)
│   └── rateLimiter.js    # Detects Claude rate limits from DOM & response text
├── tests/
│   ├── unit/             # Unit tests (extractPayload, htmlToMd, rateLimiter)
│   └── integration/      # Integration tests (HTTP endpoint validation)
├── .browser-profile/     # Persistent Chromium session (created on first run)
├── temp_uploads/         # Temporary directory for file attachments
├── logs.txt              # Request/response logs (created at runtime)
├── package.json
└── README.md
```
The main entry point. An Express server that:

- Accepts `POST /v1/chat/completions` in OpenAI chat format
- Supports both regular JSON responses and SSE streaming (`"stream": true`)
- Handles multimodal content: text, base64 images (`image_url`), and file attachments (`file_url`)
- Deduplicates system context across requests (hashes system messages, only re-uploads when changed)
- Converts large prompts (>15k chars) to file attachments automatically
- Logs all requests and responses to `logs.txt`
- Returns OpenAI-shaped responses with estimated token counts
Manages the Playwright browser with multi-tier recovery:

| Level | Strategy | Description |
|---|---|---|
| L0 | `isPageAlive()` | Quick JS eval liveness probe |
| L1 | `reloadPage()` | Reload the current page |
| L2 | `newChat()` | Navigate to claude.ai/new (fresh conversation) |
| L3 | `restartBrowser()` | Close browser context + relaunch Playwright |
| L4 | Fatal | Return null, server responds with 503 |
Sessions auto-timeout after 1 hour of inactivity, starting a fresh conversation.
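The escalation logic behind tiered recovery can be sketched as a loop over increasingly aggressive strategies. The strategies below are injected stubs for illustration; the real `sessionManager.js` drives Playwright instead:

```javascript
// Sketch: try each recovery strategy in order; return true on the first
// success, or null when every tier fails (the fatal L4 case).
async function recoverSession(strategies) {
  for (const strategy of strategies) {
    try {
      if (await strategy()) return true; // recovered at this tier
    } catch {
      // fall through to the next, more aggressive tier
    }
  }
  return null; // fatal — caller responds with 503
}

// Example with stubbed tiers: the first two fail, the third succeeds.
const tiers = [
  async () => false,                                  // liveness probe fails
  async () => { throw new Error("reload failed"); },  // reload throws
  async () => true,                                   // fresh chat succeeds
];
recoverSession(tiers).then((ok) => console.log("recovered:", ok));
```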
A self-contained DOM-to-Markdown converter that runs inside the browser via `page.evaluate()`. Handles headings, bold/italic, code blocks (with language detection), tables, lists, checkboxes, links, images, blockquotes, and horizontal rules.
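For intuition, a toy string-based version of the same idea is sketched below. The real `lib/htmlToMd.js` walks the live DOM inside `page.evaluate()`; this regex version only covers a few inline tags and is purely illustrative:

```javascript
// Toy HTML→Markdown sketch covering a handful of tags. Not the real
// converter — just an illustration of the transformation.
function htmlToMdSketch(html) {
  return html
    .replace(/<strong>(.*?)<\/strong>/g, "**$1**")
    .replace(/<em>(.*?)<\/em>/g, "*$1*")
    .replace(/<code>(.*?)<\/code>/g, "`$1`")
    .replace(/<h([1-6])>(.*?)<\/h\1>/g, (_, n, text) => `${"#".repeat(+n)} ${text}\n`)
    .replace(/<li>(.*?)<\/li>/g, "- $1\n")
    .replace(/<\/?(p|ul|ol)>/g, "\n")   // block tags become line breaks
    .replace(/\n{3,}/g, "\n\n")          // collapse excess blank lines
    .trim();
}

console.log(htmlToMdSketch("<h2>Hi</h2><p>Use <code>npm start</code>.</p>"));
```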
Detects Claude's rate limiting by:
- Scanning the DOM for error/alert elements
- Pattern-matching the response text against known rate-limit phrases
- Parsing retry-after durations from the message
Returns OpenAI-format 429 responses with Retry-After headers.
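Parsing a retry-after duration out of a rate-limit message could look like the sketch below. The phrase pattern is an illustrative guess, not the exact one `lib/rateLimiter.js` matches:

```javascript
// Sketch: pull a retry-after duration (in seconds) out of a rate-limit
// message, e.g. "Try again in 5 minutes". Illustrative pattern only.
function parseRetryAfterSeconds(text) {
  const m = text.match(/(\d+)\s*(second|minute|hour)s?/i);
  if (!m) return null; // no recognizable duration in the message
  const n = parseInt(m[1], 10);
  const unit = m[2].toLowerCase();
  return n * { second: 1, minute: 60, hour: 3600 }[unit];
}

console.log(parseRetryAfterSeconds("Try again in 5 minutes")); // 300
```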
The original proof-of-concept. A standalone script that launches its own browser, sends a single prompt, captures the response, and exits. Useful for quick scripting without running the server.
| Variable | Default | Description |
|---|---|---|
| `PORT` | `3000` | Server listen port |
| `MAX_TIMEOUT_MS` | `180000` (3 min) | Hard timeout waiting for Claude's response |
| `STABLE_INTERVAL_MS` | `30000` (30 sec) | Content-unchanged = done threshold |
| `POLL_MS` | `500` | DOM polling interval |
| `SESSION_TIMEOUT_MS` | `3600000` (1 hr) | Inactivity before starting a new conversation |
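Wiring these server-side defaults up from the environment could look like the sketch below (illustrative; `server.js` may read its config differently):

```javascript
// Sketch: read an integer env var with a fallback default. The config
// object mirrors the table above; the helper name is hypothetical.
function intFromEnv(env, name, fallback) {
  const n = parseInt(env[name], 10);
  return Number.isNaN(n) ? fallback : n;
}

const config = {
  port: intFromEnv(process.env, "PORT", 3000),
  maxTimeoutMs: intFromEnv(process.env, "MAX_TIMEOUT_MS", 180000),
  stableIntervalMs: intFromEnv(process.env, "STABLE_INTERVAL_MS", 30000),
  pollMs: intFromEnv(process.env, "POLL_MS", 500),
  sessionTimeoutMs: intFromEnv(process.env, "SESSION_TIMEOUT_MS", 3600000),
};
console.log(config);
```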
| Variable | Default | Description |
|---|---|---|
| `CLAUDE_URL` | `https://claude.ai/new` | Starting URL |
| `USER_DATA_DIR` | `.browser-profile/` | Persistent browser session directory |
| `MAX_TIMEOUT_MS` | `120000` (2 min) | Hard timeout for response |
| `STABLE_INTERVAL_MS` | `3000` (3 sec) | Content-unchanged = done threshold |
| `POLL_MS` | `500` | Polling interval |
Both the server and CLI use fallback selector chains to find UI elements. If Claude updates their UI, edit `SELECTOR_CHAINS`:

```
promptInput:    div[contenteditable="true"] → div[role="textbox"] → ...
sendButton:     button[aria-label*="Send"]  → button[data-testid="send-button"]
stopButton:     button[aria-label*="Stop"]  → button[data-testid="stop-button"]
responseBlocks: div[data-testid*="message"] → div.font-claude-response → ...
fileInput:      input[type="file"]
```
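The fallback behavior can be sketched as a loop that tries each selector until one matches. The `query` function is injected (in Playwright it might be something like `(sel) => page.$(sel)`); here it is stubbed so the sketch is self-contained:

```javascript
// Sketch: resolve the first selector in a chain that matches. `query`
// is any async function returning a truthy element or null — in the
// real code it would query the live page.
async function firstMatch(chain, query) {
  for (const selector of chain) {
    const el = await query(selector);
    if (el) return { selector, el };
  }
  return null; // no selector in the chain matched
}

// Example with a stubbed DOM lookup:
const fakeDom = { 'div[role="textbox"]': "<element>" };
firstMatch(
  ['div[contenteditable="true"]', 'div[role="textbox"]'],
  async (sel) => fakeDom[sel] ?? null
).then((hit) => console.log(hit && hit.selector));
```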
To find the current selectors, open claude.ai in Chrome, launch DevTools (F12), and inspect the relevant elements.
```bash
npm test                  # Run all tests (unit + integration)
npm run test:unit         # Unit tests only (no server needed)
npm run test:integration  # Integration tests (requires running server)
```

Unit tests cover the core modules (extractPayload, htmlToMd, rateLimiter) and run automatically before the server starts when using `npm start`.
Integration tests validate the HTTP endpoint (request validation, CORS, response shape) against a live server.
- Headful only — a visible browser window is required (Cloudflare blocks headless)
- Single request at a time — concurrent requests return 429 (busy)
- No login automation — you log in manually once; session persists in `.browser-profile/`
- Selectors may break — Claude UI updates can change the DOM structure
- No conversation memory — each server session timeout starts a fresh chat
- Token counts are estimates — calculated from character length, not actual tokenization
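Since token counts are character-length estimates, the usage block could be built roughly as below. The ~4 characters per token ratio is a common rough heuristic and an assumption here, not the project's documented formula:

```javascript
// Sketch: estimate token counts from character length. The 4 chars/token
// ratio is an assumed heuristic, not real tokenization.
function estimateTokens(text) {
  return Math.max(1, Math.ceil(text.length / 4));
}

// OpenAI-shaped usage object built from the estimates:
function usageFor(prompt, completion) {
  const promptTokens = estimateTokens(prompt);
  const completionTokens = estimateTokens(completion);
  return {
    prompt_tokens: promptTokens,
    completion_tokens: completionTokens,
    total_tokens: promptTokens + completionTokens,
  };
}

console.log(usageFor("What is 2+2?", "4"));
```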
| Problem | Fix |
|---|---|
| "Prompt input element not found" | Claude's UI changed. Inspect the page and update `SELECTOR_CHAINS` |
| Cloudflare challenge page | Must run headful (default). Don't set `headless: true` |
| Login not persisting | Ensure `.browser-profile/` exists and isn't being deleted |
| Timeout with no response | Increase `MAX_TIMEOUT_MS` or check if Claude is down |
| Browser doesn't open | Run `npx playwright install chromium` |
| 503 session recovery failed | All recovery tiers failed. Restart `node server.js` |
| 429 rate limit | Claude's free/pro message limit hit. Wait for the retry-after period |
| Linux: missing shared libraries | Run `npx playwright install-deps chromium` to install system dependencies |
| Linux: no display / `DISPLAY` not set | You need a graphical desktop. For headless servers, use Xvfb: `xvfb-run node server.js` |
| Windows: `npx` not recognized | Ensure Node.js is in your PATH. Reinstall Node.js using the official installer and check "Add to PATH" |
| macOS: Chromium blocked by Gatekeeper | Go to System Settings > Privacy & Security and click "Allow Anyway" for the Chromium binary |
We welcome contributions! OpenAdapter is built with the OpenClaw community in mind. 🦞
Check out issues labeled good first issue - these are perfect for newcomers!
- 🎯 Tool Calling Support (v1.2) - #1 requested feature for OpenClaw integration
- Docker support
- Better streaming (MutationObserver)
- Configuration file
See CONTRIBUTING.md for the full guide and roadmap for planned features.
- 💬 Discussions - Ask questions, share workflows
- 🐛 Issues - Bug reports and feature requests
- 🦞 OpenClaw Integration - OpenClaw-specific help
ISC