This project contains a multi-service application with FastAPI backends, Next.js frontend, Qdrant vector database, and optional Ollama integration.
DEMO: https://youtu.be/mkJ2wJv1fFA
An intelligent system that recommends roles and training programs based on employee skills, leveraging AI for personalized career development and project matching.
This comprehensive AI system analyzes employee skills and provides intelligent recommendations for:
- Project Assignments: Match employees with suitable projects based on their skill profiles
- Training Programs: Recommend courses and learning paths to bridge skill gaps
- Career Development: Generate personalized analysis for professional growth
- Job Readiness Assessment: AI-powered analysis of employment prospects and skill gaps
- Generative AI Insights: Deep analysis of the skills an employee lacks for target positions
The system follows a microservices architecture with the following components:
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ Frontend │ │ Inference API │ │ CRUD API │
│ (Next.js) │────│ (FastAPI) │────│ (FastAPI) │
│ Port: 3000 │ │ Port: 8001 │ │ Port: 8002 │
└─────────────────┘ └─────────────────┘ └─────────────────┘
│ │ │
└───────────────────────┼───────────────────────┘
│
┌─────────────────┐ ┌─────────────────┐
│ Qdrant │ │ Ollama │
│ (Vector DB) │ │ (Optional) │
│ Port: 6333 │ │ Port: 11434 │
└─────────────────┘ └─────────────────┘
- Next.js 14: React framework with server-side rendering
- TypeScript: Type-safe development experience
- Tailwind CSS: Utility-first CSS framework
- Shadcn UI: Modern component library built on Radix UI
- TanStack Table: Powerful data table management
- FastAPI: High-performance API framework
- Python: Core programming language
- Qdrant: Vector database for semantic search
- SQLite: Lightweight database for CRUD operations
- Ollama: Optional LLM integration
- Docker: Containerization platform
- Docker Compose: Multi-container orchestration
- Kubernetes: Production deployment (optional)
Ensure you have the following installed:
- Docker & Docker Compose
- Make (optional, for convenient commands)
- Node.js 18+ (for local development)
- Python 3.9+ (for local development)
OPENAI_API_KEY="YOUR_OPENAI_API_KEY"
OPENAI_EMBEDDING_MODEL="YOUR_OPENAI_EMBEDDING_MODEL"
OPENAI_MODEL="YOUR_OPENAI_MODEL"
QDRANT_URL="http://qdrant:6333"
QDRANT_API_KEY="YOUR_QDRANT_API_KEY"
# Ollama's OpenAI-compatible endpoint (if used): http://localhost:11434/v1/
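The backend services read these values at startup. As a sketch (the exact wiring in `inference_app.py` may differ), the OpenAI and Qdrant clients can be built from the same environment:

```python
import os

from openai import OpenAI
from qdrant_client import QdrantClient

# Build both clients from the .env values above.
openai_client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
qdrant = QdrantClient(
    url=os.environ.get("QDRANT_URL", "http://localhost:6333"),
    api_key=os.environ.get("QDRANT_API_KEY"),  # optional for a local instance
)

embedding_model = os.environ["OPENAI_EMBEDDING_MODEL"]
chat_model = os.environ["OPENAI_MODEL"]
```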
# Clone the repository
git clone https://github.com/harshspotted/JTP_PROJECT jtp-test
cd jtp-test
# Start all services
docker compose -f docker-compose.yml up
# Or using the Makefile
make up

Note: if run without meeting the minimum requirements, the system may not function properly.
Note: after you click "Generate Analysis", Ollama first loads the model, so the initial request can stall; click the button again to get the result.
# Start all services including Ollama for enhanced AI capabilities
docker compose -f docker-compose.yml -f docker-compose.override.yml up -d
# Or using the Makefile
make up-ollama
├── backend/
│ ├── inference_app.py # AI inference service
│ ├── crud_app.py # Data management service
│ └── requirements.txt # Python dependencies
├── frontend/
│ ├── components/ # React components
│ ├── pages/ # Next.js pages
│ ├── package.json # Node.js dependencies
│ └── next.config.js # Next.js configuration
├── docker/
│ ├── Dockerfile.inference # Inference service container
│ ├── Dockerfile.crud # CRUD service container
│ └── Dockerfile.frontend # Frontend container
├── docker-compose.yml # Main compose configuration
├── docker-compose.override.yml # Ollama integration
├── Makefile # Development commands
└── README.md # This file
# Start services
make up # Start without Ollama
make up-ollama # Start with Ollama
make build-up # Build and start without Ollama
make build-up-ollama # Build and start with Ollama
# Stop services
make down # Stop without Ollama
make down-ollama # Stop with Ollama
# View logs
make logs # All services
make logs-inference # Inference service only
make logs-crud # CRUD service only
make logs-frontend # Frontend service only
make logs-qdrant # Qdrant service only
make logs-ollama # Ollama service only
# Development
make dev-inference # Start dependencies + inference service
make dev-crud # Start dependencies + CRUD service
make dev-frontend # Start backends + frontend service
# Utilities
make status # Show service status
make clean           # Clean up containers and volumes

# Shell access
make shell-inference # Access inference container
make shell-crud # Access CRUD container
make shell-frontend # Access frontend container
make shell-qdrant # Access Qdrant container
make shell-ollama    # Access Ollama container

- Frontend: http://localhost:3000
- Inference API: http://localhost:8001
- CRUD API: http://localhost:8002
- Qdrant Dashboard: http://localhost:6333
- Ollama (if enabled): http://localhost:11434
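A quick way to verify that every service answers on its published port is a small probe script like the following (illustrative, not part of the repository):

```python
import urllib.request

SERVICES = {
    "Frontend": "http://localhost:3000",
    "Inference API": "http://localhost:8001",
    "CRUD API": "http://localhost:8002",
    "Qdrant": "http://localhost:6333",
    "Ollama (optional)": "http://localhost:11434",
}

for name, url in SERVICES.items():
    try:
        # Any HTTP response means the service is listening.
        with urllib.request.urlopen(url, timeout=5) as resp:
            print(f"{name}: HTTP {resp.status}")
    except OSError as exc:
        print(f"{name}: unreachable ({exc})")
```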
- TanStack Table integration for efficient skills data presentation
- Interactive form interface for adding and managing employee skills
- Real-time skill proficiency tracking with experience levels
- Professional skill level categorization (Basic, Professional, CollegeResearch)
- Project Matching: Intelligent project recommendations based on skill profiles
- Semantic Search: Vector-based similarity matching using Qdrant
- Personalized Analysis: Comprehensive employee-project fit evaluation
- Training Recommendations: AI-generated course suggestions for skill development
- Comprehensive skill portfolio tracking
- Experience level documentation (months of experience)
- Skill description and proficiency management
- Career progression visualization
- Gap Analysis: Identifies skill deficiencies for target projects
- Course Recommendations: Suggests relevant training programs
- Personalized Learning Paths: Tailored development roadmaps
- Progress Tracking: Monitors skill development over time
GET /

Returns system health status.
POST /predict/
Content-Type: application/json
{
"skills": [
{
"skill_name": "Python",
"level": "Professional",
"months": 24
}
],
"description": "Backend developer with API experience",
"top_k": 5
}

Response:
[
{
"rank": 1,
"project_id": "project_22",
"score": 55707.93,
"description": "A backend project using Docker and Python...",
"required_skills": [
{ "skill_name": "Python", "level": "Professional", "months": 12 }
]
}
]
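As a usage sketch, the client code below checks the inference service's health endpoint and then requests project matches. The payload follows the documented schema; the sample skill values are illustrative.

```python
import requests

BASE = "http://localhost:8001"

# Health check against the inference service root endpoint.
health = requests.get(f"{BASE}/", timeout=10)
print(health.status_code, health.text)

# Request the top 5 project matches for a sample skill profile.
payload = {
    "skills": [
        {"skill_name": "Python", "level": "Professional", "months": 24}
    ],
    "description": "Backend developer with API experience",
    "top_k": 5,
}
resp = requests.post(f"{BASE}/predict/", json=payload, timeout=30)
resp.raise_for_status()
for match in resp.json():
    print(match["rank"], match["project_id"], match["score"])
```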
POST /analysis/
Content-Type: application/json
{
"employee_skills": [...],
"employee_description": "Senior backend developer",
"project_skills": [...],
"project_description": "Containerize microservices platform",
"score": 0.75
}

Response:
{
"fitness_evaluation": "Medium - Excellent Python background, but Docker skills need deepening.",
"recommended_courses": "Enroll in an advanced Docker course and complete two hands-on container projects."
}
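A matching client sketch for /analysis/ follows. The employee and project payloads are illustrative placeholders that follow the schema above, and the long timeout allows for Ollama's model load on the first call:

```python
import requests

# Feed one recommended project back into /analysis/ for a skill gap analysis.
payload = {
    "employee_skills": [
        {"skill_name": "Python", "level": "Professional", "months": 24}
    ],
    "employee_description": "Senior backend developer",
    "project_skills": [
        {"skill_name": "Docker", "level": "Professional", "months": 12}
    ],
    "project_description": "Containerize microservices platform",
    "score": 0.75,
}
resp = requests.post("http://localhost:8001/analysis/", json=payload, timeout=120)
resp.raise_for_status()
analysis = resp.json()
print(analysis["fitness_evaluation"])
print(analysis["recommended_courses"])
```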
- Purpose: Lightweight database for metadata and structured data
- Location: Within the crud-app container or a mounted volume
- Usage: Employee profiles, project definitions, skill catalogues
- Purpose: High-dimensional embedding storage and semantic search
- Features: Nearest-neighbor search, similarity matching
- Use Cases:
- Skill similarity analysis
- Project-employee matching
- Contextual understanding for recommendations
- Input Processing: Frontend/API accepts new employee or project data
- Preprocessing: Inference service generates embeddings from text descriptions
- Dual Storage:
- Metadata → SQLite via crud-app
- Vector embeddings → Qdrant for similarity search
- Retrieval: Qdrant provides contextually similar data for AI recommendations (see the sketch below)
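The snippet below sketches the Preprocessing, Dual Storage, and Retrieval steps with qdrant-client. The `projects` collection name and the `embed()` parameter are hypothetical stand-ins for the service's actual embedding call:

```python
from qdrant_client import QdrantClient
from qdrant_client.models import PointStruct

qdrant = QdrantClient(url="http://localhost:6333")

def store_project(project_id: int, description: str, embed) -> None:
    """Store the vector half of the dual storage; metadata goes to SQLite via crud-app."""
    vector = embed(description)  # Preprocessing: text -> embedding
    qdrant.upsert(               # Dual Storage: vector side
        collection_name="projects",
        points=[PointStruct(id=project_id, vector=vector,
                            payload={"description": description})],
    )

def similar_projects(query: str, embed, top_k: int = 5):
    """Retrieval: fetch contextually similar projects for recommendations."""
    return qdrant.search(
        collection_name="projects",
        query_vector=embed(query),
        limit=top_k,
    )
```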
The system supports scalable Kubernetes deployment with dedicated pods:
| Pod Name | Description | Database |
|---|---|---|
| `inference-app` | FastAPI inference operations | - |
| `crud-app` | CRUD operations with SQLite | SQLite |
| `fe-app` | Next.js frontend application | - |
| `qdrant-pod` | Vector database service | Qdrant |
QDRANT_HOST=qdrant
QDRANT_PORT=6333
OLLAMA_HOST=ollama
OLLAMA_PORT=11434

NEXT_PUBLIC_INFERENCE_API_URL=http://localhost:8001
NEXT_PUBLIC_CRUD_API_URL=http://localhost:8002

fastapi>=0.100.0
uvicorn[standard]>=0.23.0
qdrant-client>=1.6.0
python-multipart>=0.0.6
pydantic>=2.0.0
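Together, these dependencies support a service of roughly the following shape. This is a minimal illustrative skeleton built around the documented /predict/ schema, not the actual inference_app.py:

```python
from typing import List

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Skill(BaseModel):
    skill_name: str
    level: str
    months: int

class PredictRequest(BaseModel):
    skills: List[Skill]
    description: str
    top_k: int = 5

@app.get("/")
def health():
    # Mirrors the documented health endpoint.
    return {"status": "ok"}

# Run with: uvicorn <module>:app --port 8001
```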
"dependencies": {
"next": "^14.0.0",
"react": "^18.0.0",
"react-dom": "^18.0.0",
"@types/react": "^18.0.0",
"typescript": "^5.0.0",
"tailwindcss": "^3.3.0",
"@tanstack/react-table": "^8.10.0"
}
}

- Navigate to the Profile page
- Click "Add Skills" button
- Fill in skill details (name, level, experience)
- Submit to update skill portfolio
- Ensure employee profile is complete
- Click "Generate Recommendations"
- Review AI-suggested projects with match scores
- Select project for detailed analysis
- Select a target project from recommendations
- Click "Generate Analysis"
- Review skill gap analysis
- Follow suggested learning path
For enhanced AI performance with Ollama, enable GPU support:
# In docker-compose.override.yml, under the ollama service
services:
  ollama:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]

Port Conflicts
# Check if ports are available
netstat -tulpn | grep :3000
netstat -tulpn | grep :8001

Build Issues
# Clean and rebuild
make clean
make build-up

Network Connectivity
# Verify service communication
docker network ls
docker network inspect jtp-test_app-network

Volume Issues
# Clean unused volumes
docker volume prune