An intelligent platform that generates interactive quizzes from video content using AI-powered transcription and question generation.
- Video Processing: Upload and process video content
- AI-Powered Transcription: Convert speech to text with high accuracy
- Smart Quiz Generation: Automatically create multiple-choice questions from video content
- Interactive Quizzes: Engage users with automatically generated quizzes
- Multi-language Support: Support for multiple languages in transcription and question generation
- Scalable Architecture: Microservices-based architecture for reliability and scalability
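To illustrate the quiz-generation idea, here is a minimal sketch of prompting a local language model for multiple-choice questions and parsing its reply. The prompt template, JSON shape, and parsing logic are illustrative assumptions, not the project's actual implementation (which lives in the transcription service):

```python
# Illustrative sketch only: the real prompt, model, and parsing logic
# in services/transcription-service may differ.
import json


def build_mcq_prompt(transcript_chunk: str, num_questions: int = 3) -> str:
    """Build a prompt asking an LLM for multiple-choice questions as JSON."""
    return (
        f"Generate {num_questions} multiple-choice questions from the "
        "transcript below.\n"
        'Respond with a JSON list of objects with keys "question", '
        '"choices" (4 strings),\n'
        'and "answer" (index of the correct choice).\n\n'
        f"Transcript:\n{transcript_chunk}\n"
    )


def parse_mcqs(raw_response: str) -> list[dict]:
    """Extract the JSON list from a model reply, tolerating surrounding text."""
    start, end = raw_response.find("["), raw_response.rfind("]") + 1
    return json.loads(raw_response[start:end])


# Example: parsing a (mocked) model reply
reply = '[{"question": "What is discussed?", "choices": ["A", "B", "C", "D"], "answer": 0}]'
print(parse_mcqs(reply)[0]["question"])  # → What is discussed?
```

In practice the reply would come from Ollama rather than a hard-coded string, and the parsed questions would be stored and served to the frontend.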
```
vidQuizify/
├── frontend/                  # React-based web interface
├── server/                    # Main backend service (NestJS)
└── services/
    └── transcription-service/ # Audio processing and question generation
```
- Node.js 16+
- Python 3.8+
- Docker (optional, for containerized deployment)
- FFmpeg (for audio processing)
- Ollama (for question generation)
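Before installing, it can help to confirm the prerequisite tools are on your `PATH`. This small check is a convenience sketch, not part of the project; the binary names assumed below (`node`, `python3`, `ffmpeg`, `ollama`) may differ on your system:

```python
# Quick sanity check that the prerequisite binaries are on PATH.
# Tool names are assumptions; adjust them for your system.
import shutil

REQUIRED = ["node", "python3", "ffmpeg", "ollama"]


def check_prerequisites(tools=REQUIRED) -> dict[str, bool]:
    """Map each tool name to whether it is found on PATH."""
    return {tool: shutil.which(tool) is not None for tool in tools}


for tool, found in check_prerequisites().items():
    print(f"{tool}: {'ok' if found else 'MISSING'}")
```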
```bash
git clone https://github.com/teja-dev-tech/vidQuizify.git
cd vidQuizify
```

Frontend:

```bash
cd frontend
npm install
npm run dev
```

Backend:

```bash
cd server
npm install
npm run start:dev
```

Transcription service:

```bash
cd services/transcription-service
python -m venv venv
source venv/bin/activate  # On Windows: .\venv\Scripts\activate
pip install -r requirements.txt
uvicorn app.main:app --reload
```

Create `.env` files in each service directory with the appropriate configuration. See the individual service READMEs for details.
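As a hypothetical illustration of what such files might contain (the variable names below are placeholders, not the project's actual keys — check each service's README for the real ones):

```env
# server/.env (hypothetical example)
PORT=3000
TRANSCRIPTION_SERVICE_URL=http://localhost:8000

# services/transcription-service/.env (hypothetical example)
OLLAMA_HOST=http://localhost:11434
WHISPER_MODEL=base
```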
Once services are running, access the API documentation at:
- Backend API: http://localhost:3000/api
- Transcription Service: http://localhost:8000/docs
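Since the transcription service is FastAPI-based (it is served with `uvicorn`), its machine-readable schema is also available at the framework's default `/openapi.json` route. A small client sketch to list the service's endpoints, assuming the default port from above:

```python
# List a running FastAPI service's endpoints from its OpenAPI schema.
# Assumes the default /openapi.json route and the port shown above.
import json
from urllib.request import urlopen

HTTP_METHODS = {"get", "post", "put", "delete", "patch", "head", "options"}


def endpoints_from_schema(schema: dict) -> list[str]:
    """Flatten an OpenAPI "paths" object into "METHOD /route" strings."""
    return [
        f"{method.upper()} {path}"
        for path, methods in schema.get("paths", {}).items()
        for method in methods
        if method in HTTP_METHODS
    ]


def list_endpoints(base_url: str = "http://localhost:8000") -> list[str]:
    """Fetch the running service's schema and list its routes."""
    with urlopen(f"{base_url}/openapi.json") as resp:
        return endpoints_from_schema(json.load(resp))
```

Calling `list_endpoints()` while the service is up prints one line per route, which is a quick way to see what the transcription service actually exposes.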
```bash
# Build and start all services
docker-compose up --build

# Start in detached mode
docker-compose up -d

# View logs
docker-compose logs -f
```

- Fork the repository
- Create your feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- Whisper for speech recognition
- Ollama for language model capabilities
- All open-source libraries and frameworks used in this project



