
Codex MCP Bridge

A minimal, white-label WebSocket bridge for OpenAI Codex MCP (Model Context Protocol). It provides a clean, standalone interface for AI-powered chat using either OpenAI models or local Ollama models.

Features

  • WebSocket-based real-time chat with streaming responses
  • MCP (Model Context Protocol) integration with Codex CLI
  • Support for multiple model providers (OpenAI, Ollama)
  • Clean, minimal React UI with dark/light theme support
  • Docker-based deployment for easy setup
  • Session persistence for conversation continuity

Architecture

┌─────────────────┐      WebSocket      ┌─────────────────┐
│                 │◄───────────────────►│                 │
│  React Frontend │                     │ FastAPI Backend │
│   (Vite + TS)   │                     │  (Python 3.12)  │
│                 │                     │                 │
└─────────────────┘                     └─────────────────┘
         ▲                                       │
         │ HTTP/nginx                            │ stdio
                                                 ▼
                                        ┌─────────────────┐
                                        │   Codex CLI     │
                                        │ (MCP Protocol)  │
                                        └─────────────────┘
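
At its core, the backend is a relay: it accepts a WebSocket connection, forwards each client message to a Codex CLI child process over stdio, and streams the process output back. The sketch below illustrates that pattern in a few lines of Python. It is not the actual main.py; the "codex mcp" command line and the line-delimited framing are stand-ins for the real MCP (JSON-RPC) exchange.

import asyncio

from fastapi import FastAPI, WebSocket

app = FastAPI()

@app.websocket("/ws/chat")
async def ws_chat(ws: WebSocket) -> None:
    await ws.accept()
    # Spawn the child process the bridge talks to. "codex mcp" is a
    # placeholder; the real backend launches the Codex CLI and speaks
    # MCP (JSON-RPC) over its stdin/stdout.
    proc = await asyncio.create_subprocess_exec(
        "codex", "mcp",
        stdin=asyncio.subprocess.PIPE,
        stdout=asyncio.subprocess.PIPE,
    )
    try:
        while True:
            text = await ws.receive_text()            # client -> bridge
            proc.stdin.write(text.encode() + b"\n")   # bridge -> process
            await proc.stdin.drain()
            line = await proc.stdout.readline()       # process -> bridge
            await ws.send_text(line.decode())         # bridge -> client
    finally:
        proc.terminate()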

Quick Start

Prerequisites

  • Docker and Docker Compose
  • OpenAI API key (for OpenAI models) OR
  • Ollama running locally (for local models)

Installation

  1. Clone the repository (or copy the files into your project):
git clone https://github.com/scottweiss/codex-mcp-bridge.git
cd codex-mcp-bridge
  2. Configure environment variables:
cp .env.example .env
# Edit .env and add your API key or configure for Ollama
  3. Start the services:
docker-compose up --build
  4. Access the interface: open your browser to http://localhost:3000

Configuration

Using OpenAI Models

Edit your .env file:

CODEX_PROVIDER=openai
CODEX_MODEL=gpt-5-codex
OPENAI_API_KEY=your_api_key_here

Using Local Ollama Models

First, ensure Ollama is running locally:

ollama serve

Then edit your .env file:

CODEX_PROVIDER=local
CODEX_MODEL=gpt-oss:20b
OLLAMA_HOST=http://host.docker.internal:11434

host.docker.internal lets the backend container reach the Ollama server running on your host machine; if you run the backend outside Docker, use http://localhost:11434 instead.

Development

Backend Development

cd backend
pip install -r requirements.txt
python main.py

The backend will be available at http://localhost:8000

Frontend Development

cd frontend
npm install
npm run dev

The frontend dev server will run at http://localhost:3000

Testing

This project includes comprehensive test coverage with both unit and integration tests.

Running Tests

Quick Test Run:

./run-tests.sh

Backend Tests Only:

docker-compose -f docker-compose.test.yml run --rm backend-test

Frontend Tests Only:

docker-compose -f docker-compose.test.yml run --rm frontend-test

Test Coverage

  • Backend: 32 tests, 90.97% code coverage
  • Frontend: React component tests with Vitest
  • Integration Tests: Real WebSocket connections to running services
  • End-to-End Tests: Full conversation flow with actual Codex process

Integration and end-to-end tests run against real services when available; running services are never mocked. The test_full_conversation_flow test validates the complete pipeline, from the WebSocket connection through the Codex MCP protocol to response delivery.
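
As a rough illustration (not the repository's actual test code), an end-to-end check in the spirit of test_full_conversation_flow might look like the following, assuming the backend is up at its default address and that pytest-asyncio and the websockets package are installed:

import asyncio
import json

import pytest
import websockets

@pytest.mark.asyncio  # requires the pytest-asyncio plugin
async def test_conversation_round_trip():
    async with websockets.connect("ws://localhost:8000/ws/chat") as ws:
        await ws.send(json.dumps({"type": "message", "content": "ping"}))
        seen = []
        while True:
            msg = json.loads(await asyncio.wait_for(ws.recv(), timeout=60))
            seen.append(msg["type"])
            if msg["type"] in ("done", "error"):
                break
        assert "token" in seen     # at least one streamed token arrived
        assert seen[-1] == "done"  # the conversation completed cleanly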

Test Infrastructure

Tests run in Docker containers, layered from isolated units up to the real protocol:

  • Unit tests with mocked dependencies
  • Integration tests against actual running backend
  • WebSocket tests with real message exchange
  • Full Codex MCP protocol testing

Coverage reports are generated in:

  • Backend: backend/htmlcov/index.html
  • Frontend: frontend/coverage/index.html

API Endpoints

  • GET / - Service information
  • GET /health - Health check endpoint
  • WS /ws/chat - WebSocket endpoint for chat
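
The two HTTP endpoints can be smoke-tested with a short script like the sketch below, assuming the default port mapping and JSON responses (the exact payload shapes may differ):

import requests

BASE = "http://localhost:8000"

info = requests.get(f"{BASE}/", timeout=5)
print(info.status_code, info.json())      # service information

health = requests.get(f"{BASE}/health", timeout=5)
print(health.status_code, health.json())  # health check payload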

WebSocket Protocol

Client → Server

{
  "type": "message",
  "content": "Your message here"
}

Server → Client

// Session started
{
  "type": "session_started",
  "session_id": "session-123"
}

// Status update
{
  "type": "status",
  "message": "Processing..."
}

// Streaming token
{
  "type": "token",
  "content": "Response text..."
}

// Response complete
{
  "type": "done"
}

// Error
{
  "type": "error",
  "message": "Error description"
}

Project Structure

codex-mcp-bridge/
├── docker-compose.yml        # Main orchestration
├── docker-compose.test.yml   # Test orchestration
├── run-tests.sh              # Test runner script
├── .env.example              # Environment template
├── README.md                 # This file
│
├── backend/
│   ├── Dockerfile            # Backend container
│   ├── Dockerfile.test       # Test container with Codex
│   ├── requirements.txt      # Python dependencies
│   ├── requirements-test.txt # Test dependencies
│   ├── main.py               # FastAPI application
│   ├── pytest.ini            # Pytest configuration
│   └── tests/
│       ├── test_mcp_bridge.py       # Unit tests (19 tests)
│       ├── test_api_endpoints.py    # API & E2E tests (9 tests)
│       └── test_integration_real.py # Integration tests (5 tests)
│
└── frontend/
    ├── Dockerfile            # Frontend container
    ├── Dockerfile.test       # Test container
    ├── package.json          # Node dependencies
    ├── vite.config.ts        # Vite configuration
    ├── vitest.config.ts      # Test configuration
    ├── nginx.conf            # Nginx configuration
    └── src/
        ├── App.tsx           # Main application
        ├── components/
        │   └── Chat.tsx      # Chat interface
        ├── services/
        │   └── websocket.ts  # WebSocket client
        └── tests/
            └── setup.ts      # Test setup

Customization

Styling

Edit frontend/src/index.css to customize the color scheme and theme variables.

Model Configuration

Modify the Codex configuration in backend/Dockerfile to adjust model parameters, context size, and behavior.

WebSocket Behavior

Update frontend/src/services/websocket.ts to customize reconnection logic, message handling, or add new message types.

Troubleshooting

Connection Issues

  1. Check that all services are running:
docker-compose ps
  2. View the logs:
docker-compose logs -f backend
docker-compose logs -f frontend

Slow Model Loading

The first request may take 30-60 seconds when using large local models. The UI shows progress indicators during loading.

Port Conflicts

If ports 3000 or 8000 are in use, modify the port mappings in docker-compose.yml.

Security Notes

  • This is an MVP without authentication; add authentication before production use
  • The backend uses danger-full-access sandbox mode for development
  • CORS is configured to allow all origins; restrict it for production
  • Always use environment variables for API keys

License

MIT. Feel free to use and modify as needed.

Contributing

This is a minimal MVP implementation; feel free to extend it with:

  • Authentication and user management
  • Multiple conversation support
  • File upload/download capabilities
  • Code execution features
  • Advanced model configuration UI
  • Conversation history persistence

Built as a white-label extraction from the CrewWork project.
