What's new in v0.6:
- Microsoft Teams support (alongside Google Meet)
- WebSocket transcript streaming for sub-second delivery
- Numerous reliability and joining improvements from real-world usage of our hosted service
Vexa drops a bot into your online meeting and streams transcripts to your apps in real time.
- Platforms: Google Meet and Microsoft Teams
- Transport: REST or WebSocket (sub-second)
- Run it your way: Open source & self-hostable, or use the hosted API.
- Hosted (start in 5 minutes): https://vexa.ai
- Self-host guide: DEPLOYMENT.md
See full release notes: https://github.com/Vexa-ai/vexa/releases
- Hosted (fastest): Get your API key at https://vexa.ai/dashboard/api-keys
- Self-host the entire stack with Docker Compose:
git clone https://github.com/Vexa-ai/vexa.git
cd vexa
make all                # CPU by default (Whisper tiny), good for development
# For GPU:
# make all TARGET=gpu   # Whisper medium, recommended for production quality
- Full guide: DEPLOYMENT.md
- For a self-hosted API key, follow vexa/nbs/0_basic_test.ipynb
- `API_HOST` for the hosted version: https://api.cloud.vexa.ai
- `API_HOST` for the self-hosted version (default): http://localhost:18056
Create a bot for a Microsoft Teams meeting:
curl -X POST https://<API_HOST>/bots \
-H "Content-Type: application/json" \
-H "X-API-Key: <API_KEY>" \
-d '{
"platform": "teams",
"native_meeting_id": "<NUMERIC_MEETING_ID>",
"passcode": "<MEETING_PASSCODE>"
}'
Create a bot for a Google Meet meeting:
curl -X POST https://<API_HOST>/bots \
-H "Content-Type: application/json" \
-H "X-API-Key: <API_KEY>" \
-d '{
"platform": "google_meet",
"native_meeting_id": "<MEET_CODE_XXX-XXXX-XXX>"
}'
Get the transcript:
curl -H "X-API-Key: <API_KEY>" \
"https://<API_HOST>/transcripts/<platform>/<native_meeting_id>"
For real-time streaming (sub-second), see the WebSocket guide. For full REST details, see the User API Guide.
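For a rough feel of the streaming path, here is a Python sketch using the `websockets` package. The endpoint URL below is a made-up placeholder, not Vexa's actual WebSocket API; the real endpoint, authentication, and message schema are defined in the WebSocket guide:

```python
# Illustrative sketch only: the real endpoint, auth, and message format
# are defined in the WebSocket guide. The URL below is a placeholder.
import asyncio
import websockets

async def stream_transcripts(ws_url: str) -> None:
    async with websockets.connect(ws_url) as ws:
        async for message in ws:  # print each incoming frame as-is
            print(message)

# Fill in the real host and path from the WebSocket guide before running
asyncio.run(stream_transcripts("wss://<API_HOST>/ws/<placeholder-path>"))
```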
Note: Meeting IDs are user-provided (a Google Meet code like xxx-xxxx-xxx, or a Teams numeric ID and passcode). Vexa does not generate meeting IDs.
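Because the meeting ID is user-provided, you typically extract it from the invite link yourself. A small illustrative helper (the function name and regex are ours, not part of the Vexa API) might look like:

```python
import re

def extract_meet_code(url: str) -> str | None:
    # Google Meet codes follow the xxx-xxxx-xxx pattern noted above
    match = re.search(r"\b([a-z]{3}-[a-z]{4}-[a-z]{3})\b", url)
    return match.group(1) if match else None

print(extract_meet_code("https://meet.google.com/abc-defg-hij"))  # abc-defg-hij
```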
- Enterprises (self-host): Data sovereignty and control on your infra
- Teams using hosted API: Fastest path from meeting to transcript
- n8n/indie builders: Low-code automations powered by real-time transcripts
- Coming next: Zoom support (public preview)
For issues and progress, join our Discord.
Build powerful meeting assistants (like Otter.ai, Fireflies.ai, Fathom) for your startup, internal use, or custom integrations.
The Vexa API provides powerful abstractions and a clear separation of concerns, so you can build sophisticated applications on top of it with a safe and enjoyable coding experience.
- api-gateway: Routes API requests to appropriate services
- mcp: Provides MCP-capable agents with Vexa as a toolkit
- bot-manager: Handles bot lifecycle management
- vexa-bot: The bot that joins meetings and captures audio
- WhisperLive: Real-time audio transcription service
- transcription-collector: Processes and stores transcription segments
- Database models: Data structures for storing meeting information
If you're building with Vexa, we'd love your support! Star our repo to help us reach 1,500 stars.
- Real-time multilingual transcription supporting 100 languages with Whisper
- Real-time translation across all 100 supported languages
- Public API: Fully available with self-service API keys at www.vexa.ai
- Google Meet Bot: Fully operational bot for joining Google Meet calls
- Teams Bot: Supported in v0.6
- Real-time Transcription: Low-latency, multilingual transcription service is live
- Real-time Translation: Instant translation between 100 supported languages
- WebSocket Streaming: Sub-second transcript delivery via WebSocket API
- Pending: Speaker identification is under development
- Zoom Bot: Integration for automated meeting attendance (July 2025)
- Direct Streaming: Ability to stream audio directly from web/mobile apps
For security-minded companies, Vexa offers complete self-deployment options.
To run Vexa locally on your own infrastructure, the primary command after cloning the repository is `make all`. It sets up the environment (CPU by default, or GPU if specified), builds all necessary Docker images, and starts the services.
Follow these steps to deploy Vexa on your own infrastructure:
- Docker (version 20.10+) and Docker Compose (version 2.0+)
- Git for cloning the repository
- At least 4GB RAM for CPU version (8GB+ recommended for GPU)
- 50GB disk space for models and logs
1. Clone the repository:
git clone https://github.com/Vexa-ai/vexa.git
cd vexa
2. Choose your deployment mode:
Option A: Quick Start with Make (Recommended)
CPU Mode (Development/Testing):
# Set up environment and download models
make all TARGET=cpu
# This will automatically:
# - Copy environment configuration from env-example.cpu
# - Download Whisper models (tiny by default for CPU)
# - Build vexa-bot image
# - Build all Docker images
# - Start all services
GPU Mode (Production - requires NVIDIA GPU):
# Set up environment and download models
make all TARGET=gpu
# This will automatically:
# - Copy environment configuration from env-example.gpu
# - Download Whisper models (medium by default for GPU)
# - Build vexa-bot image with GPU support
# - Build all Docker images with GPU support
# - Start all services
Option B: Manual Step-by-Step with Docker Compose
If you prefer manual control or need to customize the build process:
CPU Mode:
# 1. Copy environment configuration
cp env-example.cpu .env
# 2. Download Whisper models (optional, will download on first use)
python download_model.py
# 3. Build the vexa-bot image (REQUIRED before docker-compose)
make build-bot-image
# 4. Build and start all services with docker-compose
docker-compose --profile cpu build
docker-compose --profile cpu up -d
# 5. Initialize database
make migrate-or-init
GPU Mode:
# 1. Copy environment configuration
cp env-example.gpu .env
# 2. Download Whisper models
python download_model.py
# 3. Build the vexa-bot image (REQUIRED before docker-compose)
make build-bot-image
# 4. Build and start all services with docker-compose
docker-compose --profile gpu build
docker-compose --profile gpu up -d
# 5. Initialize database
make migrate-or-init
Important: The `vexa-bot` image must be built separately using `make build-bot-image` before running `docker-compose up`, as bot-manager dynamically creates bot containers from this pre-built image.
3. Verify services are running:
docker-compose ps
# You should see these services running:
# - api-gateway (port 8056, mapped to host port 18056)
# - admin-api (port 8057, mapped to host port 18057)
# - bot-manager
# - transcription-collector (port 8123, mapped to host port 18123)
# - whisperlive-cpu (CPU mode) or whisperlive (GPU mode)
# - redis
# - postgres
# - mcp (port 18888)
4. Check service health:
# Check API Gateway
curl http://localhost:18056/health
# Check Admin API
curl http://localhost:18057/health
# Check Transcription Collector
curl http://localhost:18123/health
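To script this check, a small Python helper against the same three health endpoints (host ports as above; adjust if you changed the mapping) might look like:

```python
import requests

# Host ports from the docker-compose mapping above
SERVICES = {
    "api-gateway": "http://localhost:18056/health",
    "admin-api": "http://localhost:18057/health",
    "transcription-collector": "http://localhost:18123/health",
}

for name, url in SERVICES.items():
    try:
        status = requests.get(url, timeout=5).status_code
        print(f"{name}: HTTP {status}")
    except requests.RequestException as exc:
        print(f"{name}: not reachable ({exc})")
```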
5. Create your first API key:
Follow the notebook at nbs/0_basic_test.ipynb, or use the Admin API directly:
# Create a user and get API key
# See nbs/0_basic_test.ipynb for detailed examples
6. Test the deployment:
# Send a bot to a Google Meet
curl -X POST http://localhost:18056/bots \
-H "Content-Type: application/json" \
-H "X-API-Key: YOUR_API_KEY" \
-d '{
"platform": "google_meet",
"native_meeting_id": "xxx-xxxx-xxx"
}'
# Get transcripts
curl -H "X-API-Key: YOUR_API_KEY" \
"http://localhost:18056/transcripts/google_meet/xxx-xxxx-xxx"
For Production Deployment:
- Update `docker-compose.yml` for production:
  - Set an appropriate `WHISPER_MODEL_SIZE` (small, medium, large)
  - Configure proper logging levels (`LOG_LEVEL=INFO` or `WARNING`)
- Security considerations:
  - Change default database passwords
  - Use secure API tokens
  - Configure firewall rules
  - Set up HTTPS/TLS for external access
- Resource allocation:
  - CPU: 4+ cores recommended
  - RAM: 8GB minimum, 16GB+ for production
  - GPU: NVIDIA GPU with 8GB+ VRAM for optimal performance
Services not starting:
# Check logs
docker-compose logs -f <service-name>
# Common issues:
# - Port conflicts: Change ports in docker-compose.yml
# - Insufficient resources: Check Docker resource limits
# - Missing models: Run `python download_model.py`
Transcription not working through bot:
# Verify WhisperLive is accessible
docker-compose exec bot-manager curl -I http://whisperlive-cpu:9091
# Check bot logs
docker-compose logs -f | grep -i whisper
# Ensure WHISPER_LIVE_URL includes port :9090
Database connection issues:
# Wait for database to be ready
docker-compose logs postgres | grep "ready to accept connections"
# Run migrations if needed
make migrate-or-init
For comprehensive deployment instructions, monitoring setup, and advanced configuration, see the Local Deployment and Testing Guide.
When you make changes to any service, you need to rebuild and restart the affected containers:
# Rebuild and restart a single service
docker-compose up --build -d <service-name>
# Rebuild all services
make build TARGET=cpu # or TARGET=gpu
# Rebuild the bot image specifically
make build-bot-image
| Changed Service | Command |
|---|---|
| vexa-bot (bot code) | `make build-bot-image` |
| transcription-collector | `docker-compose up --build -d transcription-collector` |
| bot-manager | `docker-compose up --build -d bot-manager` |
| WhisperLive (CPU) | `docker-compose --profile cpu up --build -d whisperlive-cpu` |
| WhisperLive (GPU) | `docker-compose --profile gpu up --build -d whisperlive` |
| api-gateway | `docker-compose up --build -d api-gateway` |
| admin-api | `docker-compose up --build -d admin-api` |
Quick iteration during development:
# After making changes
docker-compose up --build -d <service-name>
# View logs
docker-compose logs -f <service-name>
Full rebuild (when dependencies change):
# Stop everything
make down
# Rebuild all services
make build TARGET=cpu
# Start services
make up TARGET=cpu
# Run migrations if needed
make migrate-or-init
Force rebuild without cache:
# Use when Docker cache causes issues
docker-compose build --no-cache <service-name>
docker-compose up -d <service-name>
- vexa-bot containers are created dynamically for each meeting and auto-remove when done
- After rebuilding the `vexa-bot` image, new bot containers will automatically use the updated image
- Changes to Python code (without dependency updates) can sometimes use `docker-compose restart <service>` instead of a rebuild
- Always rebuild when you modify `requirements.txt`, `package.json`, or a Dockerfile
Contributors are welcome! Join our community and help shape Vexa's future. Here's how to get involved:
- Understand Our Direction: check the project roadmap.
- Engage on Discord (Discord Community):
  - Introduce Yourself: Start by saying hello in the introductions channel.
  - Stay Informed: Check the Discord channel for known issues, feature requests, and ongoing discussions. Issues under active discussion often have dedicated channels.
  - Discuss Ideas: Share feature requests, report bugs, and join conversations about any issue you're interested in delivering.
  - Get Assigned: When you're ready to contribute, discuss the issue you'd like to work on and ask to be assigned on Discord.
- Development Process:
  - Browse available tasks (often linked from Discord discussions or the roadmap).
  - Request task assignment through Discord if not already assigned.
  - Submit pull requests for review.
- Critical Tasks & Bounties:
  - Selected high-priority tasks may be marked with bounties.
  - Bounties are sponsored by the Vexa core team.
  - Check task descriptions (on the roadmap or Discord) for bounty details and requirements.
We look forward to your contributions!
We ❤️ contributions. Join our Discord and open issues/PRs. Licensed under Apache-2.0; see LICENSE.
- Vexa Website
- LinkedIn
- X (@grankin_d)
- Discord Community
The Vexa name and logo are trademarks of Vexa.ai Inc.