A minimal, whitelabeled WebSocket bridge for OpenAI Codex MCP (Model Context Protocol). This provides a clean, standalone interface for AI-powered chat interactions using either OpenAI models or local Ollama models.
- WebSocket-based real-time chat with streaming responses
- MCP (Model Context Protocol) integration with Codex CLI
- Support for multiple model providers (OpenAI, Ollama)
- Clean, minimal React UI with dark/light theme support
- Docker-based deployment for easy setup
- Session persistence for conversation continuity
```
┌──────────────────┐      WebSocket      ┌──────────────────┐
│                  │◀───────────────────▶│                  │
│  React Frontend  │                     │ FastAPI Backend  │
│   (Vite + TS)    │                     │  (Python 3.12)   │
│                  │                     │                  │
└──────────────────┘                     └──────────────────┘
         ▲                                        │
         │                                        │ stdio
         │ HTTP/nginx                             ▼
                                         ┌──────────────────┐
                                         │    Codex CLI     │
                                         │  (MCP Protocol)  │
                                         └──────────────────┘
```
- Docker and Docker Compose
- OpenAI API key (for OpenAI models) OR
- Ollama running locally (for local models)
- Clone the repository (or copy the files to your project):

  ```bash
  cd /Users/scott/dev/codex-mcp-bridge
  ```

- Configure environment variables:

  ```bash
  cp .env.example .env
  # Edit .env and add your API key or configure for Ollama
  ```

- Start the services:

  ```bash
  docker-compose up --build
  ```

- Access the interface: Open your browser to http://localhost:3000
To use OpenAI models, edit your .env file:

```bash
CODEX_PROVIDER=openai
CODEX_MODEL=gpt-5-codex
OPENAI_API_KEY=your_api_key_here
```

To use local Ollama models instead, first ensure Ollama is running locally:
```bash
ollama serve
```

Then edit your .env file:

```bash
CODEX_PROVIDER=local
CODEX_MODEL=gpt-oss:20b
OLLAMA_HOST=http://host.docker.internal:11434
```

To run the backend locally without Docker:

```bash
cd backend
pip install -r requirements.txt
python main.py
```

The backend will be available at http://localhost:8000
To run the frontend dev server locally:

```bash
cd frontend
npm install
npm run dev
```

The frontend dev server will run at http://localhost:3000
This project includes comprehensive test coverage with both unit and integration tests.
Quick Test Run:

```bash
./run-tests.sh
```

Backend Tests Only:

```bash
docker-compose -f docker-compose.test.yml run --rm backend-test
```

Frontend Tests Only:

```bash
docker-compose -f docker-compose.test.yml run --rm frontend-test
```

Test coverage:

- Backend: 32 tests, 90.97% code coverage
- Frontend: React component tests with Vitest
- Integration Tests: Real WebSocket connections to running services
- End-to-End Tests: Full conversation flow with actual Codex process
All tests run against real services when available - no mocking of running services. The test_full_conversation_flow test validates the complete pipeline from WebSocket connection through Codex MCP protocol to response delivery.
Tests run in Docker containers against real services:
- Unit tests with mocked dependencies
- Integration tests against actual running backend
- WebSocket tests with real message exchange (see the sketch after this list)
- Full Codex MCP protocol testing
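To illustrate what such a WebSocket test looks like, here is a hedged Vitest sketch. It is not one of the existing suites; it assumes the backend from `docker-compose up` is reachable at ws://localhost:8000 and that the test runtime provides a global WebSocket (for example, Node 22+).

```typescript
// Hypothetical integration-test sketch (not part of the existing test suites).
// Assumes a running backend at ws://localhost:8000 and a global WebSocket.
import { describe, expect, it } from "vitest";

describe("WS /ws/chat", () => {
  it("streams tokens and ends with a done message", async () => {
    const ws = new WebSocket("ws://localhost:8000/ws/chat");
    const types: string[] = [];

    await new Promise<void>((resolve, reject) => {
      ws.onopen = () =>
        ws.send(JSON.stringify({ type: "message", content: "Say hello" }));
      ws.onmessage = (event) => {
        const msg = JSON.parse(event.data as string);
        types.push(msg.type);
        if (msg.type === "done" || msg.type === "error") resolve();
      };
      ws.onerror = () => reject(new Error("WebSocket connection failed"));
    });
    ws.close();

    expect(types).toContain("session_started");
    expect(types.at(-1)).toBe("done");
  }, 120_000); // generous timeout: the first request to a large local model is slow
});
```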
Coverage reports are generated in:
- Backend: backend/htmlcov/index.html
- Frontend: frontend/coverage/index.html
- `GET /` - Service information
- `GET /health` - Health check endpoint
- `WS /ws/chat` - WebSocket endpoint for chat
Send chat messages to the WebSocket as JSON:

```json
{
  "type": "message",
  "content": "Your message here"
}
```

The server responds with the following message types:

```
// Session started
{
  "type": "session_started",
  "session_id": "session-123"
}

// Status update
{
  "type": "status",
  "message": "Processing..."
}

// Streaming token
{
  "type": "token",
  "content": "Response text..."
}

// Response complete
{
  "type": "done"
}

// Error
{
  "type": "error",
  "message": "Error description"
}
```
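As an illustration of the message flow above, here is a minimal TypeScript client sketch. It is not part of the project code; it assumes the default port from docker-compose.yml and a runtime with a global WebSocket (browser or Node 22+).

```typescript
// Minimal chat client sketch for WS /ws/chat (assumed URL: ws://localhost:8000).
const ws = new WebSocket("ws://localhost:8000/ws/chat");
let reply = "";

ws.onopen = () => {
  // Client -> server: a single chat message
  ws.send(JSON.stringify({ type: "message", content: "Hello, Codex!" }));
};

ws.onmessage = (event) => {
  const msg = JSON.parse(event.data as string);
  switch (msg.type) {
    case "session_started":
      console.log("Session started:", msg.session_id);
      break;
    case "status":
      console.log("Status:", msg.message);
      break;
    case "token":
      reply += msg.content; // accumulate streamed tokens
      break;
    case "done":
      console.log("Assistant:", reply);
      ws.close();
      break;
    case "error":
      console.error("Error:", msg.message);
      ws.close();
      break;
  }
};
```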
Project structure:

```
codex-mcp-bridge/
├── docker-compose.yml            # Main orchestration
├── docker-compose.test.yml       # Test orchestration
├── run-tests.sh                  # Test runner script
├── .env.example                  # Environment template
├── README.md                     # This file
│
├── backend/
│   ├── Dockerfile                # Backend container
│   ├── Dockerfile.test           # Test container with Codex
│   ├── requirements.txt          # Python dependencies
│   ├── requirements-test.txt     # Test dependencies
│   ├── main.py                   # FastAPI application
│   ├── pytest.ini                # Pytest configuration
│   └── tests/
│       ├── test_mcp_bridge.py         # Unit tests (19 tests)
│       ├── test_api_endpoints.py      # API & E2E tests (9 tests)
│       └── test_integration_real.py   # Integration tests (5 tests)
│
└── frontend/
    ├── Dockerfile                # Frontend container
    ├── Dockerfile.test           # Test container
    ├── package.json              # Node dependencies
    ├── vite.config.ts            # Vite configuration
    ├── vitest.config.ts          # Test configuration
    ├── nginx.conf                # Nginx configuration
    └── src/
        ├── App.tsx               # Main application
        ├── components/
        │   └── Chat.tsx          # Chat interface
        ├── services/
        │   └── websocket.ts      # WebSocket client
        └── tests/
            └── setup.ts          # Test setup
```
Edit frontend/src/index.css to customize the color scheme and theme variables.
Modify the Codex configuration in backend/Dockerfile to adjust model parameters, context size, and behavior.
Update frontend/src/services/websocket.ts to customize reconnection logic, message handling, or add new message types.
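As one possible shape for such a customization, here is a hedged sketch of a reconnecting wrapper with exponential backoff. The names (ChatSocket, MAX_BACKOFF_MS) are hypothetical and are not taken from the existing websocket.ts.

```typescript
// Illustrative reconnection sketch; not the project's actual WebSocket client.
const MAX_BACKOFF_MS = 30_000;

class ChatSocket {
  private ws?: WebSocket;
  private attempts = 0;

  constructor(
    private url: string,
    private onMessage: (msg: unknown) => void,
  ) {}

  connect(): void {
    this.ws = new WebSocket(this.url);
    this.ws.onopen = () => {
      this.attempts = 0; // reset backoff after a successful connection
    };
    this.ws.onmessage = (event) =>
      this.onMessage(JSON.parse(event.data as string));
    this.ws.onclose = () => {
      // Reconnect on any close; a real implementation would skip intentional closes.
      const delay = Math.min(1000 * 2 ** this.attempts, MAX_BACKOFF_MS);
      this.attempts += 1;
      setTimeout(() => this.connect(), delay);
    };
  }

  send(message: { type: string; content: string }): void {
    this.ws?.send(JSON.stringify(message));
  }
}

// Usage: new ChatSocket("ws://localhost:8000/ws/chat", console.log).connect();
```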
- Check that all services are running:

  ```bash
  docker-compose ps
  ```

- View logs:

  ```bash
  docker-compose logs -f backend
  docker-compose logs -f frontend
  ```

The first request may take 30-60 seconds if using large local models. The UI will show progress indicators during loading.
If ports 3000 or 8000 are in use, modify the port mappings in docker-compose.yml.
- This is an MVP without authentication - add auth before production use
- The backend uses `danger-full-access` sandbox mode for development
- CORS is configured to allow all origins - restrict for production
- Always use environment variables for API keys
MIT - Feel free to use and modify as needed.
This is a minimal MVP implementation. Feel free to extend with:
- Authentication and user management
- Multiple conversation support
- File upload/download capabilities
- Code execution features
- Advanced model configuration UI
- Conversation history persistence
Built as a whitelabeled extraction from the CrewWork project.