Rapida is an open-source platform for designing, building, and deploying voice agents at scale.
It's built around three core principles:

- Reliable: designed for production workloads, real-time audio, and fault-tolerant execution
- Observable: deep visibility into calls, latency, metrics, and tool usage
- Customizable: flexible architecture that adapts to any LLM, workflow, or enterprise stack

Rapida provides both a platform and a framework for building real-world voice agents, from low-latency audio streaming to orchestration, monitoring, and integrations.
Rapida is written in Go and uses gRPC for fast, efficient, bidirectional streaming communication.
- **Real-time Voice Orchestration**: stream and process audio with low latency over gRPC.
- **LLM-Agnostic Architecture**: bring your own model (OpenAI, Anthropic, open-source models, or custom inference).
- **Production-grade Reliability**: built-in retries, error handling, call lifecycle management, and health checks.
- **Full Observability**: call logs, streaming events, tool traces, latency breakdowns, metrics, and dashboards.
- **Flexible Tooling System**: build custom tools and actions for your agents, or integrate with any backend.
- **Developer-friendly**: clear APIs, modular components, and simple configuration.
- **Enterprise-ready**: scalable design, efficient protocol, and predictable performance.
```
+----------------------------------------------------------------+
|                            CHANNELS                            |
|         Phone • Web • WhatsApp • SIP • WebRTC • Others         |
+-------------------------------+--------------------------------+
                                |
                                v
+----------------------------------------------------------------+
|                      RAPIDA ORCHESTRATOR                       |
|     Routing • State • Parallelism • Tools • Observability      |
+---------------+-------------------------------+----------------+
                |                               |
                v                               v
     +----------------------+          +------------------------+
     |   Audio Preprocess   |          |          STT           |
     |  • VAD               | <------> |     Speech-to-Text     |
     |  • Noise Reduction   |          |      (ASR Engine)      |
     |  • End-of-Speech     |          +------------+-----------+
     +----------+-----------+                       |
                |                                   v
                |                     +------------------------+
                |                     |          LLM           |
                |                     |  Reasoning • Tools •   |
                |                     |  Memory • Policies     |
                |                     +------------+-----------+
                |                                   |
                |                                   v
                |                     +------------------------+
                +-------------------> |          TTS           |
                                      |     Text-to-Speech     |
                                      +------------+-----------+
                                                   |
                                                   v
                     +------------------------------------+
                     |            USER OUTPUT             |
                     |        Audio Stream Response       |
                     +------------------------------------+
```
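The stage order in the pipeline can be illustrated as a toy shell composition (a sketch only; the functions below are stand-ins for the preprocessing, STT, LLM, and TTS stages, not Rapida's actual interfaces):

```shell
# Toy illustration of the call pipeline: each stage is a stub that
# passes data to the next, mirroring the diagram's flow.
vad() { cat; }                      # pass speech frames through unchanged
stt() { echo "hello"; }             # pretend transcription of the audio
llm() { echo "reply to: $(cat)"; }  # pretend reasoning over the transcript
tts() { cat; }                      # pretend synthesis back to audio

echo "<audio frames>" | vad | stt | llm | tts
# prints: reply to: hello
```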
| Service | Description | Port |
|---|---|---|
| PostgreSQL | Database for persistent storage | 5432 |
| Redis | In-memory caching | 6379 |
| OpenSearch | Search engine for document indexing | 9200, 9600 |
| Web API | Backend service | 9001 |
| Assistant API | Intelligence and assistance API | 9007 |
| Integration API | Third-party integrations API | 9004 |
| Endpoint API | Endpoint management API | 9005 |
| Document API | Document handling API | 9010 |
| UI | React front-end | 3000 |
| NGINX | Reverse proxy and static server | 8080 |
- Docker: Install Docker.
- Docker Compose: Ensure Docker Compose is included with your Docker installation.
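Before cloning, you can confirm both prerequisites are available (a minimal sketch; the `docker compose` plugin syntax assumes a recent Docker installation, while older setups may ship the standalone `docker-compose` binary):

```shell
# Check that Docker and Compose are installed before building.
check_prereqs() {
  if command -v docker >/dev/null 2>&1; then
    echo "docker: ok"
  else
    echo "docker: missing"
  fi
  if docker compose version >/dev/null 2>&1 || command -v docker-compose >/dev/null 2>&1; then
    echo "compose: ok"
  else
    echo "compose: missing"
  fi
}

check_prereqs
```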
```shell
git clone https://github.com/rapidaai/voice-ai.git
cd voice-ai
```

Ensure the following directories exist for containerized services to mount their data:

```shell
mkdir -p ${HOME}/rapida-data/
```

For more about how the data is structured for services, see https://doc.rapida.ai.

Grant the docker group access to the created directories to ensure proper mounting:

```shell
sudo setfacl -m g:docker:rwx ${HOME}/rapida-data/
```

Build all images:

```shell
make build-all
```

Start all services:

```shell
make up-all
```

Alternatively, start specific services (e.g., just PostgreSQL):

```shell
make up-db
```

Stop all running services:

```shell
make down-all
```

Stop specific services (e.g., the Web API):

```shell
make down-web
```

| Service | URL |
|---|---|
| UI | http://localhost:3000 |
| Web-API | http://localhost:9001 |
| Assistant-API | http://localhost:9007 |
| Integration-API | http://localhost:9004 |
| Endpoint-API | http://localhost:9005 |
| Document-API | http://localhost:9010 |
| OpenSearch | http://localhost:9200 |
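Once the services are up, a quick way to confirm each one is listening is to probe its port (a sketch using bash's built-in `/dev/tcp` redirection; it only verifies the port accepts connections, not that the service behind it is healthy):

```shell
# Probe each published port and report whether it accepts connections.
check_port() {
  local name="$1" port="$2"
  if (exec 3<>"/dev/tcp/localhost/${port}") 2>/dev/null; then
    echo "${name} (${port}): open"
  else
    echo "${name} (${port}): closed"
  fi
}

check_port "UI" 3000
check_port "Web-API" 9001
check_port "Assistant-API" 9007
check_port "Integration-API" 9004
check_port "Endpoint-API" 9005
check_port "Document-API" 9010
check_port "OpenSearch" 9200
```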
The Makefile simplifies operations using Docker Compose:

- Build all images: `make build-all`
- Start all services: `make up-all`
- Stop all services: `make down-all`
- Check service logs (e.g., Web API): `make logs-web`
- Restart specific services (e.g., Redis): `make restart-redis`

Run `make help` to see a full list of available Makefile commands.
- Create the necessary directories (`rapida-data/assets/...`) and apply permissions before starting the services.
- Custom configurations for NGINX and other services are mounted; adjust them to your requirements.
Client SDKs enable your frontend to include interactive, multi-user experiences.
| Language | Repo | Docs |
|---|---|---|
| Web (React) | rapida-react | docs |
| Web Widget (React) | react-widget | |
Server SDKs enable your backend to build and manage agents.
| Language | Repo | Docs |
|---|---|---|
| Go | rapida-go | docs |
| Python | rapida-python | docs |
For those who'd like to contribute code, see our Contribution Guide. At the same time, please consider supporting RapidaAi by sharing it on social media and at events and conferences.
To protect your privacy, please avoid posting security issues on GitHub. Instead, report them to contact@rapida.ai, and our team will respond with a detailed answer.
Rapida is open-source under the GPL-2.0 license, with additional conditions:
- Open-source users must keep the Rapida logo visible in UI components.
- Future license terms may change; this does not affect released versions.
A commercial license is available for enterprise use, which allows:
- Removal of branding
- Closed-source usage
- Private modifications

Contact sales@rapida.ai for details.