# CV Evaluator

AI-powered CV and project report evaluation system with a Next.js frontend and FastAPI backend.
## Project Structure

```
cv-evaluator/
├── backend/              # FastAPI backend
│   ├── app/
│   │   ├── main.py
│   │   ├── routers/
│   │   ├── services/
│   │   └── tasks/
│   ├── requirements.txt
│   └── Dockerfile
├── app/                  # Next.js frontend
├── components/
└── scripts/              # Database scripts
```
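For orientation, `app/main.py` presumably wires the routers together roughly like this. A sketch only; the router module names below are assumptions, not confirmed by this repo:

```python
# backend/app/main.py -- hypothetical wiring; the upload/evaluate/result
# module names under app/routers/ are assumptions for illustration.
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

from app.routers import upload, evaluate, result  # assumed module names

app = FastAPI(title="CV Evaluator API")

# Allow the Next.js dev server to call the API during development
app.add_middleware(
    CORSMiddleware,
    allow_origins=["http://localhost:3000"],
    allow_methods=["*"],
    allow_headers=["*"],
)

app.include_router(upload.router, prefix="/api")
app.include_router(evaluate.router, prefix="/api")
app.include_router(result.router, prefix="/api")
```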
## Features

- 📄 Document Upload: Upload CV and project reports (PDF)
- 🤖 AI Evaluation: LLM-powered evaluation with RAG context retrieval using Groq
- 🔒 Local Embeddings: Privacy-focused local embeddings with Sentence Transformers
- ⚡ Async Processing: Celery-based background job processing (see the task sketch after this list)
- ⏱ Real-time Status: Poll for evaluation status and results
- 📝 Detailed Feedback: Comprehensive scoring and feedback
- 💰 Cost-Effective: Uses Groq's free tier for fast LLM inference
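As referenced above, evaluation runs as a Celery background job. A minimal sketch of what such a task might look like, assuming the worker entry point in `app/tasks/evaluation_tasks.py`; the task body and retry policy are illustrative, not the repo's actual implementation:

```python
# Hypothetical Celery task; the real pipeline logic presumably lives
# in app/services/ and is only summarized in comments here.
from celery import Celery

celery_app = Celery(
    "evaluation_tasks",
    broker="redis://localhost:6379/0",
    backend="redis://localhost:6379/0",
)

@celery_app.task(bind=True, max_retries=3)
def evaluate_documents(self, evaluation_id: str) -> dict:
    """Run the full CV + project-report evaluation in the background."""
    try:
        # 1. Load the uploaded documents, 2. retrieve RAG context,
        # 3. call the LLM, 4. persist scores and feedback.
        return {"id": evaluation_id, "status": "completed"}
    except Exception as exc:
        # Retry transient failures (e.g., LLM rate limits) after a delay
        raise self.retry(exc=exc, countdown=10)
```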
## Tech Stack

### Frontend

- Next.js 15 (App Router)
- TypeScript
- Tailwind CSS v4
- shadcn/ui components
### Backend

- FastAPI
- PostgreSQL (Supabase) with pgvector
- ChromaDB (vector database; see the retrieval sketch after this list)
- Groq API (llama-3.3-70b-versatile) - Fast & free LLM inference
- Sentence Transformers (all-MiniLM-L6-v2) - Local embeddings
- Celery + Redis
- Python 3.11+
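The RAG step stores reference material in ChromaDB and pulls the most relevant chunks into the LLM prompt at evaluation time. A sketch of that flow; the collection name and documents are illustrative, and conveniently ChromaDB's built-in default embedding model is also all-MiniLM-L6-v2:

```python
# Illustrative ChromaDB retrieval; names and documents are not from the repo.
import chromadb

client = chromadb.PersistentClient(path="./chroma_db")
collection = client.get_or_create_collection(name="scoring_rubrics")

# Index reference material once; ChromaDB embeds it locally by default
collection.add(
    ids=["rubric-1", "rubric-2"],
    documents=[
        "Strong CVs quantify impact with concrete metrics.",
        "Project reports should explain architecture trade-offs.",
    ],
)

# At evaluation time, fetch the most relevant context for the LLM prompt
results = collection.query(
    query_texts=["How should backend projects be scored?"],
    n_results=2,
)
print(results["documents"])
```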
## Prerequisites

- Node.js 18+
- Python 3.11+
- PostgreSQL with pgvector
- Redis
- Groq API key (free tier available)
## Setup

### Backend

```bash
# 1. Navigate to backend directory
cd backend
# 2. Create virtual environment
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
# 3. Install dependencies
pip install -r requirements.txt
# 4. Configure environment variables
cp .env.example .env
# Edit .env with your credentials
# 5. Run database migrations
# Execute SQL scripts in scripts/ folder
# 6. Start Redis
redis-server
# 7. Start Celery worker
celery -A app.tasks.evaluation_tasks worker --loglevel=info
# 8. Start FastAPI server
uvicorn app.main:app --reload
```

### Frontend

```bash
# 1. Install dependencies
npm install
# 2. Configure environment variables
cp .env.local.example .env.local
# Edit .env.local with API URL
# 3. Start development server
npm run dev
# 4. Open in browser
http://localhost:3000
```

## API Endpoints

- `POST /api/upload` → Upload CV and project report
- `POST /api/evaluate` → Trigger evaluation pipeline
- `GET /api/result/{id}` → Get evaluation results
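An end-to-end usage sketch with Python `requests`. The request/response field names (`cv`, `project_report`, `upload_id`, `status`, and so on) are assumptions about this API, not documented contracts:

```python
# Hypothetical client flow: upload -> evaluate -> poll for the result.
import time
import requests

BASE = "http://localhost:8000/api"

# 1. Upload the documents
with open("cv.pdf", "rb") as cv, open("report.pdf", "rb") as report:
    files = {"cv": cv, "project_report": report}
    upload = requests.post(f"{BASE}/upload", files=files).json()

# 2. Trigger the evaluation pipeline
job = requests.post(f"{BASE}/evaluate", json={"upload_id": upload["id"]}).json()

# 3. Poll for the result (the frontend polls every NEXT_PUBLIC_POLL_INTERVAL ms)
while True:
    result = requests.get(f"{BASE}/result/{job['id']}").json()
    if result.get("status") in ("completed", "failed"):
        break
    time.sleep(3)

print(result)
```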
## Environment Variables

### Backend (`.env`)

```bash
# Database
DATABASE_URL=postgresql://...
SUPABASE_URL=https://...
SUPABASE_KEY=...
SUPABASE_SERVICE_KEY=...
# Groq API (free tier available)
GROQ_API_KEY=gsk_...
GROQ_MODEL=llama-3.3-70b-versatile
# Embeddings (local, no API key needed)
EMBEDDING_MODEL=all-MiniLM-L6-v2
# Redis
REDIS_URL=redis://localhost:6379/0
# Application
SECRET_KEY=your-secret-key
APP_ENV=development
```
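One common way to consume these variables in a FastAPI app is `pydantic-settings`; a sketch assuming the backend follows that pattern (field set abbreviated, Supabase keys omitted):

```python
# Hypothetical settings loader; the backend may organize config differently.
from pydantic_settings import BaseSettings, SettingsConfigDict

class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_file=".env")

    # Field names map case-insensitively to the env vars listed above
    database_url: str
    groq_api_key: str
    groq_model: str = "llama-3.3-70b-versatile"
    embedding_model: str = "all-MiniLM-L6-v2"
    redis_url: str = "redis://localhost:6379/0"
    secret_key: str
    app_env: str = "development"

settings = Settings()  # reads .env and the process environment
```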
### Frontend (`.env.local`)

```bash
NEXT_PUBLIC_API_URL=http://localhost:8000/api
NEXT_PUBLIC_MAX_FILE_SIZE=10485760
NEXT_PUBLIC_POLL_INTERVAL=3000
```

## Why Groq?

- 10-100x faster than traditional LLM providers
- Free tier with generous rate limits
- High quality models (Llama 3.3, Mixtral, etc.)
- Low latency for real-time applications
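For reference, a minimal Groq call with the model configured above. The prompt is illustrative; the `groq` Python SDK mirrors the OpenAI client interface:

```python
# Minimal Groq chat completion, assuming GROQ_API_KEY is set in the environment.
import os
from groq import Groq

client = Groq(api_key=os.environ["GROQ_API_KEY"])

response = client.chat.completions.create(
    model="llama-3.3-70b-versatile",
    messages=[
        {"role": "system", "content": "You are a strict CV reviewer."},
        {"role": "user", "content": "Score this CV section: ..."},
    ],
    temperature=0.2,
)
print(response.choices[0].message.content)
```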
## Why Local Embeddings?

- 100% free - no API costs
- Privacy-focused - embeddings generated locally
- Fast - no network latency
- High quality - state-of-the-art embedding models
- Offline capable - works without internet after initial download
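A minimal local-embedding sketch with the model named above; the documents and query are illustrative. No API key is required, and the model weights (roughly 80 MB) are downloaded once:

```python
# Local embeddings with Sentence Transformers; runs fully offline after download.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "Built a FastAPI service handling 10k requests/day.",
    "Wrote unit tests for a React component library.",
]
query = "backend engineering experience"

doc_vecs = model.encode(docs)    # shape: (2, 384)
query_vec = model.encode(query)

# Cosine similarity ranks the documents against the query
scores = util.cos_sim(query_vec, doc_vecs)
print(scores)
```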
## Performance

- LLM Inference: ~500-1000 tokens/sec with Groq (vs ~50-100 with OpenAI)
- Embeddings: ~1000 documents/sec locally (vs ~10-50 with API calls)
- Total Evaluation Time: ~30-60 seconds for full CV + project analysis
## Cost Comparison

| Component | This Setup | Traditional Setup |
|---|---|---|
| LLM Inference | Free (Groq) | $0.01-0.03/1K tokens |
| Embeddings | Free (Local) | $0.0001/1K tokens |
| Vector DB | Free (ChromaDB) | $0-50/month |
| Total/month | $0 | $50-200 |
## License

MIT