# VidContext

Open-source, MCP-first YouTube grounding service with time-coded citations.

VidContext lets AI agents "search YouTube like the web," returning grounded, time-coded citations from video transcripts. Built with a BYOK (Bring Your Own Keys) architecture for OpenAI and YouTube APIs.
## Quick Start

- Sign up at vidcontext.dev
- Get your `SERVICE_API_KEY` from the dashboard
- Add your OpenAI and YouTube API keys
```bash
curl -X POST https://api.vidcontext.dev/v1/rag/ask \
  -H "Authorization: Bearer YOUR_SERVICE_API_KEY" \
  -H "x-openai-api-key: YOUR_OPENAI_KEY" \
  -H "x-youtube-api-key: YOUR_YOUTUBE_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "query": "Compare Drizzle vs Prisma for Next.js 15",
    "topK": 12
  }'
```

## Self-Hosting

```bash
git clone https://github.com/vidcontext/vidcontext.git
cd vidcontext
cp .env.example .env.local
# Edit .env.local with your database and API keys
docker-compose up -d
```

## Features

- MCP Integration: Native support for Claude Desktop, Augment, and other MCP clients
- REST API: Standard HTTP endpoints for any application
- BYOK Architecture: Your OpenAI and YouTube keys, your costs, your control
- Time-coded Citations: Every answer includes `[Video Title mm:ss]` citations with deep links
- Multi-language: Supports official captions and Whisper transcription
- Vector Search: pgvector-powered semantic search across video transcripts
- Usage Quotas: Fair usage limits with transparent pricing
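As a sketch of how the time-coded citations above could be turned into deep links, the helper below parses a `[Video Title mm:ss]` marker into a YouTube timestamp URL. The citation text format follows the feature list; the `videoId` argument and the exact response shape are assumptions, not part of the documented API.

```typescript
// Parse a "[Video Title mm:ss]" citation into a title, an offset in
// seconds, and a YouTube deep link. The videoId must come from the
// API response (assumed field here, for illustration only).
interface Citation {
  title: string;
  seconds: number;
  url: string;
}

function parseCitation(marker: string, videoId: string): Citation | null {
  // Match "[Some Title 12:34]" — the title is everything before the final mm:ss.
  const m = marker.match(/^\[(.+)\s(\d{1,2}):(\d{2})\]$/);
  if (!m) return null;
  const seconds = Number(m[2]) * 60 + Number(m[3]);
  return {
    title: m[1],
    seconds,
    // "t=<n>s" seeks the YouTube player to that offset.
    url: `https://www.youtube.com/watch?v=${videoId}&t=${seconds}s`,
  };
}

// Example: a citation 2:03 into a hypothetical video.
const c = parseCitation("[Drizzle vs Prisma Deep Dive 2:03]", "dQw4w9WgXcQ");
console.log(c?.url); // https://www.youtube.com/watch?v=dQw4w9WgXcQ&t=123s
```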
## Tech Stack

- Backend: Node.js, TypeScript, Fastify
- Database: PostgreSQL with pgvector extension
- Frontend: Next.js 15, Tailwind CSS
- Auth: Supabase Auth with RLS
- AI: OpenAI (Whisper, Embeddings, GPT-4)
- Search: YouTube Data API v3
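The pgvector-backed search in the stack above ranks transcript chunks by vector distance. As a minimal sketch of what pgvector's `<=>` operator computes (cosine distance), with an example of the kind of query the service might issue — the `transcript_chunks` table and `embedding` column names are hypothetical:

```typescript
// Cosine distance between a query embedding and a stored transcript-chunk
// embedding: 0 = same direction, 1 = orthogonal, 2 = opposite.
// This is the quantity pgvector's <=> operator returns.
function cosineDistance(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return 1 - dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// The SQL a semantic-search endpoint might run (assumed schema):
// rank chunks by distance to the query embedding, keep the top K.
const sql = `
  SELECT video_id, start_seconds, text
  FROM transcript_chunks
  ORDER BY embedding <=> $1
  LIMIT $2;
`;

console.log(cosineDistance([1, 0], [1, 0])); // 0
```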
## Contributing

We welcome contributions! See CONTRIBUTING.md for guidelines.
## License

Apache 2.0 - see LICENSE for details.