OpenRAG is a comprehensive Retrieval-Augmented Generation platform that enables intelligent document search and AI-powered conversations. Users can upload, process, and query documents through a chat interface backed by large language models and semantic search. The system uses Langflow for document ingestion, retrieval workflows, and intelligent nudges, providing a seamless RAG experience. OpenRAG is built with Starlette and Next.js, and powered by OpenSearch, Langflow, and Docling.
Use the OpenRAG Terminal User Interface (TUI) to manage your OpenRAG installation without complex command-line operations.
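Both the TUI and the Docker workflow on this page use `uv` to install dependencies and run project scripts. If `uv` isn't installed yet, one option is Astral's standalone installer, sketched below as an assumption about your environment; any supported installation method, such as `pip install uv`, works just as well.

```bash
# Install uv with Astral's standalone installer script.
curl -LsSf https://astral.sh/uv/install.sh | sh
```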
To launch OpenRAG with the TUI, do the following:
1. Clone the OpenRAG repository:

   ```bash
   git clone https://github.com/langflow-ai/openrag.git
   cd openrag
   ```

2. To start the TUI, from the repository root, run:

   ```bash
   # Install dependencies first
   uv sync

   # Launch the TUI
   uv run openrag
   ```
The TUI opens and guides you through OpenRAG setup.
For the full TUI installation guide, see TUI.
If you prefer to use Docker to run OpenRAG, the repository includes two Docker Compose .yml files.
They deploy the same applications and containers locally, but target different environments.

- `docker-compose.yml` is an OpenRAG deployment for environments with GPU support. GPU support requires an NVIDIA GPU with CUDA support and compatible NVIDIA drivers installed on the OpenRAG host machine (a quick driver check follows this list).
- `docker-compose-cpu.yml` is a CPU-only version of OpenRAG for systems without GPU support. Use this Compose file for environments where GPU drivers aren't available.
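If you're unsure whether the host meets the GPU requirements, the check below is a minimal sketch: it only confirms that an NVIDIA driver and GPU are visible to the host operating system, not that Docker itself can reach the GPU (that additionally requires the NVIDIA Container Toolkit).

```bash
# Lists visible NVIDIA GPUs, the installed driver version, and the highest supported CUDA version.
# If this command is missing or fails, use docker-compose-cpu.yml instead.
nvidia-smi
```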
Both Docker deployments require `docling serve` to be running on port 5001 on the host machine, which enables Mac MLX support for document processing. Installing OpenRAG with the TUI starts `docling serve` automatically, but for a Docker deployment you must start the `docling serve` process manually.
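At any point, you can check whether `docling serve` is reachable on the expected port. The snippet below is a minimal sketch that requests the `/docs` page reported by the status command later in this guide; adjust the host or port if your setup differs.

```bash
# Probe the docling serve API docs page on the default port (5001).
# Success means the service is reachable; failure means it isn't running yet.
curl -sf http://127.0.0.1:5001/docs > /dev/null \
  && echo "docling serve is reachable" \
  || echo "docling serve is not reachable on port 5001"
```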
To install OpenRAG with Docker:
1. Clone the OpenRAG repository:

   ```bash
   git clone https://github.com/langflow-ai/openrag.git
   cd openrag
   ```

2. Install dependencies:

   ```bash
   uv sync
   ```

3. Start `docling serve` on the host machine:

   ```bash
   uv run python scripts/docling_ctl.py start --port 5001
   ```

4. Confirm `docling serve` is running:

   ```bash
   uv run python scripts/docling_ctl.py status
   ```

   Successful result:

   ```
   Status: running
   Endpoint: http://127.0.0.1:5001
   Docs: http://127.0.0.1:5001/docs
   PID: 27746
   ```

5. Build and start all services.

   For the GPU-accelerated deployment, run:

   ```bash
   docker compose build
   docker compose up -d
   ```

   For environments without GPU support, run:

   ```bash
   docker compose -f docker-compose-cpu.yml up -d
   ```

   The OpenRAG Docker Compose file starts five containers:

   | Container Name | Default Address | Purpose |
   |---|---|---|
   | OpenRAG Backend | http://localhost:8000 | FastAPI server and core functionality. |
   | OpenRAG Frontend | http://localhost:3000 | React web interface for users. |
   | Langflow | http://localhost:7860 | AI workflow engine and flow management. |
   | OpenSearch | http://localhost:9200 | Vector database for document storage. |
   | OpenSearch Dashboards | http://localhost:5601 | Database administration interface. |

   A verification sketch for these services follows this procedure.

6. Access the OpenRAG application at http://localhost:3000 and continue with the Quickstart.

   To stop `docling serve`, run:

   ```bash
   uv run python scripts/docling_ctl.py stop
   ```
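Once the containers are up, you can spot-check the deployment from the host. The snippet below is a minimal sketch that uses the default addresses from the table above; `docker compose ps` is the source of truth for container state, and the `curl` probes only confirm that each port answers. A service that requires HTTPS or authentication (OpenSearch often does) may show as unreachable here even though its container is healthy. For the CPU deployment, add `-f docker-compose-cpu.yml` to the `docker compose` commands.

```bash
# List the containers started by the Compose file and their current state.
docker compose ps

# Probe each service's default address from the table above.
# Any HTTP response counts as "up"; a connection failure counts as "down".
for url in \
  http://localhost:8000 \
  http://localhost:3000 \
  http://localhost:7860 \
  http://localhost:9200 \
  http://localhost:5601; do
  if curl -s --head --max-time 5 "$url" > /dev/null; then
    echo "up    $url"
  else
    echo "down  $url"
  fi
done
```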
For more information, see Install with Docker.
For common issues and fixes, see Troubleshoot.
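When a container fails to start, its logs are usually the fastest place to look. The commands below are a minimal sketch, assuming you run them from the repository root; add `-f docker-compose-cpu.yml` for the CPU deployment.

```bash
# Follow logs for all services started by the Compose file.
docker compose logs -f

# Follow a single service instead; list service names with `docker compose ps --services`.
# docker compose logs -f <service-name>

# Stop and remove the OpenRAG containers when you're finished.
docker compose down
```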
For developers wanting to contribute to OpenRAG or set up a development environment, see CONTRIBUTING.md.