vibes-mcp-cli is a comprehensive Go-based Multi-Provider CLI and HTTP server for working with Large Language Model (LLM) providers, featuring enterprise-grade session management and interactive terminal UI. It provides:
- Type-safe clients for completions, chat, and embeddings
- Support for OpenAI, Anthropic Claude, and other providers
- Unified CLI interface with consistent command structure
- Interactive Session Control: Real-time communication with Claude CLI processes
- Session Persistence: Complete conversation history with backup and restoration
- Advanced Search & Filtering: Find sessions by content, date, status, or metadata
- Metadata Tracking: Detailed tracking of tokens, response times, and resource usage
- Full-Screen TUI: Comprehensive terminal interface with multiple pages
- Session Logs Viewer: Browse, search, and manage session history
- Telemetry Dashboard: Real-time system monitoring with ASCII charts
- File Explorer: Browse and interact with project files
- Memory Leak Protection: Robust resource management and cleanup
- TTY Detection: Works in containers, CI/CD, and headless environments
- Error Boundaries: Comprehensive error handling with graceful recovery
- Timeout Protection: All operations have configurable timeouts
- HTTP proxy server (`serve`) to expose an MCP-compatible API
- Telemetry integration with vibes-agent-backend
- Authentication support with JWT tokens
- Built-in support for environment variables, dotenv (`.env`), and config files
- Install Go 1.20+ (https://golang.org/dl)
- Clone this repo:

  ```sh
  git clone <repo-url>
  cd openai-cli
  ```
- Copy the environment example and edit your keys:

  ```sh
  cp .env_example .env
  # Edit .env: set OPENAI_CLI_API_KEY, PROMPT_MODE_PASSWORD, etc.
  ```

- (Optional) Create a config file at `$HOME/.openai-cli.yaml` or `./.openai-cli.yaml`:

  ```yaml
  api_key: "your-openai-api-key"
  base_url: "https://api.openai.com"
  provider: "openai"
  log_level: "info"
  # Optional: JWT auth token for the Vibes Agent backend to persist login
  auth_token: "your-agent-auth-token"
  templates:
    - "Hey, what's up!"
    - "How's the weather in Brasilia - DF right now?"
  ```
- Build:

  ```sh
  go mod tidy
  go build -o openai-cli
  ```
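As a quick sanity check, you can point the freshly built binary at the optional config file from the previous step (the `--config` flag is documented under global flags below):

```sh
./openai-cli --config ./.openai-cli.yaml models
```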
This project includes a Makefile to simplify common tasks:
```sh
make init          # Initialize environment: copy .env, install deps
make build         # Build the CLI binary
make test          # Run all tests (client, service, cmd)
make lint          # Format code (go fmt) and run vet
make docker-build  # Build the Docker image
make docker-up     # Start the server via docker-compose
make release       # Cross-compile binaries for multiple platforms into dist/
make clean         # Remove built binaries
```

| Variable | Description |
|---|---|
| OPENAI_CLI_API_KEY | Default API key for the selected provider |
| OPENAI_CLI_BASE_URL | Base URL for API requests |
| OPENAI_CLI_PROVIDER | Default provider (openai, anthropic, etc.) |
| OPENAI_CLI_LOG_LEVEL | Logging level (debug, info, warn, error) |
| OPENAI_CLI_AGENT_URL | Vibes Agent backend URL (default: http://localhost:8000) |
| OPENAI_CLI_AUTH_TOKEN | JWT token for backend authentication |
| OPENAI_CLI_TELEMETRY_ENABLED | Enable telemetry data collection (true/false) |
| OPENAI_CLI_TELEMETRY_API_KEY | API key for telemetry service |
| PROMPT_MODE_PASSWORD | Password to unlock interactive REPL (chat) |
Environment variables can be set in a `.env` file (loaded via `github.com/joho/godotenv`) or directly in your shell.
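For reference, a minimal `.env` might look like this (placeholder values; pick the variables you need from the table above):

```sh
OPENAI_CLI_API_KEY=sk-your-key-here
OPENAI_CLI_PROVIDER=openai
OPENAI_CLI_LOG_LEVEL=info
PROMPT_MODE_PASSWORD=change-me
```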
Run `./openai-cli --help` for global flags and available commands.

- `--config string`: path to config file (default `$HOME/.openai-cli.yaml`)
- `--provider string`: provider to use (overrides config / env)
- `--api-key string`: API key (overrides config / env)
- `--base-url string`: API base URL (overrides config / env)
- `--server-url string`: MCP server URL to proxy CLI calls
- `--print-curl`: print the equivalent `curl` command and exit
- `--log-level string`: set log level (`debug`, `info`, `warn`, `error`)
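Flags can be combined per invocation; for example, a one-off chat against a different provider without touching your config (`$ANTHROPIC_API_KEY` is just a placeholder shell variable):

```sh
./openai-cli chat \
  --message "Hello" \
  --provider anthropic \
  --api-key "$ANTHROPIC_API_KEY" \
  --log-level debug
```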
Generate a one-shot text completion:
```sh
./openai-cli completion \
  --prompt "Once upon a time" \
  --model text-davinci-003
```

Send a single chat message:
```sh
./openai-cli chat \
  --message "Hello, how are you?" \
  --model gpt-3.5-turbo
```

Keep context across messages:
```sh
export PROMPT_MODE_PASSWORD=your-password
./openai-cli chat --prompt-mode
```

Type your message at the `>>>` prompt. Enter `exit` or `quit` to end.
Launch an interactive terminal UI for chat and Postman collections:
```sh
./openai-cli ui [--model MODEL] [--collection PATH]
```

Use `F1` to switch to Chat mode and `F2` to switch to Postman mode. In Postman mode, navigate and select a `.json` collection, then press `Ctrl+S` to send a request.
Compute embeddings for one or more inputs:
```sh
./openai-cli embed \
  --input "The quick brown fox" \
  --input "jumps over the lazy dog" \
  --model text-embedding-ada-002
```

List available models you can use with the `--model` flag:
```sh
./openai-cli models
```

Output:

```
o4-mini
gpt-3.5-turbo
codex-cli
```

To see the raw `curl` you can run:

```sh
./openai-cli completion --prompt "Hello" --print-curl
```

To proxy commands through a running MCP server:
```sh
./openai-cli completion \
  --prompt "Hello" \
  --server-url http://localhost:8080
```

Launch the comprehensive terminal user interface:

```sh
./vibes-mcp-cli ui
```

- Chat: Interactive chat interface with conversation context
- Session Logs: Browse and manage Claude CLI session history
- Telemetry: Real-time system monitoring and performance metrics
- File Explorer: Browse project files with MCP integration
- Settings: Configuration and tenant management
- Create/Terminate Sessions: Full lifecycle management
- Search & Filter: Find sessions by name, content, date, or status
- Conversation History: View complete session interactions
- Real-time Updates: Live session status and monitoring
- System Health: CPU, memory, disk usage with progress bars
- API Metrics: Request counts, success rates, response times
- ASCII Charts: Visual trends and performance graphs
- Log Viewer: Real-time log streaming with severity filtering
- `F1`: Home menu
- `F2`: Main navigation menu
- `F3`: File explorer
- `G`: Session logs viewer
- `T`: Telemetry dashboard
- `Q`: Quit application
- `/`: Search functionality
- `Esc`: Return to previous page
The UI automatically detects your environment and provides appropriate alternatives:
- With a TTY: full TUI functionality with all features enabled
- Without a TTY (containers, CI/CD, headless environments), the CLI suggests fallbacks:
```sh
# Automatic fallback suggestions
./vibes-mcp-cli ui --fallback-server   # Auto-start server mode
./vibes-mcp-cli serve --port 8080      # Manual server mode
./vibes-mcp-cli chat "message"         # CLI mode
```

- Graceful degradation with helpful error messages
- Alternative command suggestions
- No hanging or freezing issues
```sh
# Create a new Claude CLI session
./vibes-mcp-cli ui   # Use the session management UI

# Sessions are automatically persisted in ./claude-sessions/
# Session history includes:
# - Complete conversation logs
# - Metadata (tokens, response times, resource usage)
# - Search indexes for fast filtering
# - Backup files with retention policies
```

- Interactive Communication: Real-time streaming with Claude CLI
- Persistent History: All conversations saved with metadata
- Advanced Search: Text search, regex support, multi-criteria filtering
- Resource Monitoring: Track memory, CPU, and token usage
- Backup & Restore: Automated backups with configurable retention
- Concurrent Management: Handle multiple sessions safely
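Since sessions are persisted under `./claude-sessions/` (see the example above), you can also inspect them with ordinary shell tools. A minimal sketch; the file layout inside that directory is an assumption:

```sh
# List persisted session files
ls ./claude-sessions/

# Crude full-text search across saved conversation logs
grep -ril "hello" ./claude-sessions/
```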
Start the built-in HTTP proxy:
```sh
./openai-cli serve --host 0.0.0.0 --port 8080
```

Available endpoints:

- `POST /v1/completions`
- `POST /v1/chat/completions`
- `POST /v1/embeddings`
Use the `X-Provider` header to switch providers per request:

```sh
curl -X POST http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "X-Provider: anthropic" \
  -d '{"model":"claude-v1","messages":[{"role":"user","content":"Hi"}]}'
```

Config files (`.openai-cli.yaml`, JSON, TOML) are supported via Viper in your home or working directory.
- Add new subcommands under `cmd/`
- Update models in `internal/client/types.go`
- Implement additional providers under `internal/providers/`
Build the Docker image locally:
```sh
docker build -t openai-cli:latest .
```

Run the server in a container (using `.env` for config):
```sh
docker run --rm -it \
  --env-file .env \
  -p 8080:8080 \
  openai-cli:latest serve --host 0.0.0.0 --port 8080
```

Alternatively, use Docker Compose:

```sh
docker-compose up --build
```

Now your MCP server is listening on http://localhost:8080.
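As a quick smoke test against the containerized server (endpoint and model names taken from the sections above):

```sh
curl -s -X POST http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model":"gpt-3.5-turbo","messages":[{"role":"user","content":"ping"}]}'
```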
README generated by the openai-cli scaffolding agent.
- Integrate remaining API endpoints into the UI client (per API_ENDPOINTS.md):
  - Role management and user enable/disable endpoints under `/auth`
  - Tenant, role, and permission management under `/user`
  - WebSocket streaming via `/agent/ws`
  - JSON-RPC tool proxy endpoint `/mcp`
  - Any other endpoints outlined in API_ENDPOINTS.md not yet supported in the TUI