
Video Quizify

An intelligent platform that generates interactive quizzes from video content using AI-powered transcription and question generation.

🚀 Features

  • Video Processing: Upload and process video content
  • AI-Powered Transcription: Convert speech to text with high accuracy
  • Smart Quiz Generation: Automatically create multiple-choice questions from video content
  • Interactive Quizzes: Take the generated quizzes directly in the web interface
  • Multi-language Support: Support for multiple languages in transcription and question generation
  • Scalable Architecture: Microservices-based design for reliability and independent scaling of each component

🏗️ Project Structure

vidQuizify/
├── frontend/           # React-based web interface
├── server/             # Main backend service (NestJS)
└── services/
    └── transcription-service/  # Audio processing and question generation

Demo

(Demo screenshots are included in the repository.)

🛠️ Prerequisites

  • Node.js 16+
  • Python 3.8+
  • Docker (optional, for containerized deployment)
  • FFmpeg (for audio processing)
  • Ollama (for question generation)
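
To confirm these tools are installed and on your PATH before continuing, the quick version checks below can help (any reasonably recent versions should work):

# Verify prerequisites
node --version      # expect v16 or newer
python3 --version   # expect 3.8 or newer
ffmpeg -version
ollama --version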

🚀 Quick Start

1. Clone the Repository

git clone https://github.com/teja-dev-tech/vidQuizify.git
cd vidQuizify

2. Set Up Services

Run each service in its own terminal, starting from the repository root.

Frontend

cd frontend
npm install
npm run dev

Backend Server

cd server
npm install
npm run start:dev

Transcription Service

cd services/transcription-service
python -m venv venv
source venv/bin/activate  # On Windows: .\venv\Scripts\activate
pip install -r requirements.txt
uvicorn app.main:app --reload
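
The question-generation step expects a local Ollama instance with a model available. As an illustration only (the model name here is an assumption, not the project's documented choice), pulling a model looks like this:

# Pull a model for Ollama to use (substitute the model this project is configured for)
ollama pull llama3

# Start the Ollama server if it is not already running in the background
ollama serve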

3. Configure Environment

Create a .env file in each service directory with the appropriate configuration. See the individual service READMEs for the required variables.
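
As a rough illustration of the kind of values these files hold (the variable names, ports, and model below are hypothetical placeholders, not the project's actual keys), they might look like:

# server/.env (hypothetical example)
PORT=3000
TRANSCRIPTION_SERVICE_URL=http://localhost:8000

# services/transcription-service/.env (hypothetical example)
OLLAMA_HOST=http://localhost:11434
WHISPER_MODEL=base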

🌐 API Documentation

Once the services are running, each service exposes its own interactive API documentation; see the individual service READMEs for the exact URLs and ports.

🐳 Docker Deployment

# Build and start all services
docker-compose up --build

# Start in detached mode
docker-compose up -d

# View logs
docker-compose logs -f
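
A few more docker-compose commands that are handy during development (the service name below is assumed to match the folder name used in the compose file):

# Rebuild and restart a single service after changes
docker-compose build server
docker-compose up -d server

# Stop and remove all containers
docker-compose down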

🤝 Contributing

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/AmazingFeature)
  3. Commit your changes (git commit -m 'Add some AmazingFeature')
  4. Push to the branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

🙏 Acknowledgments

  • Whisper for speech recognition
  • Ollama for language model capabilities
  • All open-source libraries and frameworks used in this project
