Trust your hiring process again.
Unmask is an AI-powered hiring verification platform that checks candidate authenticity through comprehensive analysis of CVs, LinkedIn profiles, GitHub accounts, and automated reference calls. Built for RAISE YOUR HACK 2025 • Vultr Track.
Live Demo: unmask.click
- CV Processing: Extracts and analyzes professional experience, education, skills, and credentials
- LinkedIn Integration: Cross-references LinkedIn data with CV information for consistency
- GitHub Analysis: Evaluates coding activity, repository quality, and technical skills
- Credibility Scoring: AI-powered authenticity assessment with detailed flags and recommendations
- AI-Powered Calls: Automatically calls references using ElevenLabs Conversational AI
- Natural Conversations: Professional, human-like interactions with references
- Transcript Analysis: Real-time transcription and AI-powered summarization
- Reference Validation: Cross-checks reference feedback with candidate claims
- Live Feedback: Get real-time prompts during candidate interviews
- Inconsistency Detection: Flags discrepancies between sources on-the-fly
- Suggested Questions: AI-generated follow-up questions based on analysis
- Interview Transcripts: Live transcription with highlighted concerns
- Candidate Profiles: Unified view of all candidate information
- Processing Pipeline: Real-time status tracking from upload to analysis
- Flag Management: Visual indicators for potential concerns
- Export Reports: Detailed hiring decision support documents
- Next.js 15 - React framework with App Router
- TypeScript - Type-safe development
- Tailwind CSS - Modern styling framework
- Radix UI - Accessible component primitives
- Framer Motion - Smooth animations
- Supabase - PostgreSQL database with real-time capabilities
- Supabase Storage - Secure file storage for CVs and documents
- Ashby ATS Integration - Seamless candidate import and sync
- Groq API - Fast AI inference for document analysis
- OpenAI GPT-4 - Advanced reasoning and summarization
- ElevenLabs - Natural voice AI for reference calls
- PDF Processing - Automated document parsing and extraction
- Docker - Containerized deployment
- Vultr - Cloud hosting platform
- Real-time Processing - Async job processing
- Node.js 18+
- pnpm (the frontend uses pnpm for dependency management)
- Supabase CLI (for running Supabase locally)
- Docker (for production deployment)
- API keys for external services (Supabase, Groq, OpenAI, ElevenLabs, Twilio, Ashby)
- Clone the repository:

```bash
git clone https://github.com/le-commit/unmask.git
cd unmask
```

- Install dependencies:

```bash
cd frontend
pnpm install
```

- Configure environment variables:

```bash
cp .env.example .env.local
```

Required environment variables:

```bash
# Supabase
NEXT_PUBLIC_SUPABASE_URL=your_supabase_url
NEXT_PUBLIC_SUPABASE_ANON_KEY=your_anon_key
SUPABASE_SERVICE_ROLE_KEY=your_service_role_key

# AI Services
GROQ_API_KEY=your_groq_api_key
OPENAI_API_KEY=your_openai_api_key

# Reference Calling (ElevenLabs)
ELEVENLABS_API_KEY=your_elevenlabs_api_key
ELEVENLABS_AGENT_ID=your_agent_id
ELEVENLABS_AGENT_PHONE_ID=your_phone_id

# Twilio (via ElevenLabs)
TWILIO_ACCOUNT_SID=your_twilio_sid
TWILIO_AUTH_TOKEN=your_twilio_token
TWILIO_PHONE_NUMBER=your_twilio_number

# Ashby ATS Integration
ASHBY_API_KEY=your_ashby_api_key
```

- Start local Supabase:

```bash
supabase start
supabase db reset --local
```

- Start the development server:

```bash
pnpm dev
```

- Open your browser at http://localhost:3000
The application uses pg_cron for automated webhook queue processing in both local development and production environments.
- Local Development: the pg_cron extension is installed automatically
- Production: pg_cron is pre-available in Supabase Cloud
- Queue Processing: the queue is processed automatically every 2 minutes in both environments

```bash
# Just start development - queue processing works automatically!
pnpm dev
```
How it works:
- Database migrations auto-install the pg_cron extension
- A cron job is created automatically and processes the queue every 2 minutes
- Local development targets `host.docker.internal:3000`
- Production targets the configured webhook base URL
Optional: Configure production webhook URL for external deployments:
```sql
-- Set production webhook base URL (optional)
ALTER DATABASE your_production_db_name
SET app.webhook_base_url = 'https://your-domain.com';
```
Monitor queue status and cron job health:
```sql
-- View pending webhooks
SELECT webhook_type, status, priority, created_at, payload->'applicantId' as applicant_id
FROM webhook_queue
WHERE status IN ('pending', 'failed')
ORDER BY priority DESC, created_at ASC;

-- Check pg_cron job status
SELECT jobid, schedule, active, jobname FROM cron.job
WHERE jobname = 'process-webhook-queue';

-- Use the helper function for detailed status
SELECT * FROM check_webhook_queue_cron_status();
```
For debugging or immediate processing:
```bash
# Local development
curl -X POST "http://localhost:3000/api/webhooks/process-queue" \
  -H "Authorization: Bearer webhook-secret-dev" \
  -H "Content-Type: application/json"

# Production
curl -X POST "https://your-domain.com/api/webhooks/process-queue" \
  -H "Authorization: Bearer your-webhook-secret" \
  -H "Content-Type: application/json"
```
| Webhook Type | Priority | Purpose |
|---|---|---|
| `score_push` | Based on AI score (1-100) | Push updated credibility scores to Ashby ATS |
| `note_push` | 90 (high priority) | Push analysis notes and red flags to Ashby |

Priority Processing: higher-priority webhooks are processed first, and `score_push` uses the AI score as its priority (a score of 85 gives priority 85).
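The queue is drained by the `POST /api/webhooks/process-queue` endpoint that the cron job (and the manual curl above) calls. As a rough illustration of how such a handler could look, here is a minimal Next.js route sketch, assuming the `webhook_queue` columns shown in the monitoring queries and a hypothetical `WEBHOOK_SECRET` environment variable; the actual handler in the repository may differ.

```typescript
// app/api/webhooks/process-queue/route.ts (illustrative sketch, not the shipped implementation)
import { NextRequest, NextResponse } from 'next/server';
import { createClient } from '@supabase/supabase-js';

const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
);

export async function POST(req: NextRequest) {
  // The cron job and the manual curl authenticate with a bearer token (WEBHOOK_SECRET is an assumed name).
  const auth = req.headers.get('authorization');
  if (auth !== `Bearer ${process.env.WEBHOOK_SECRET}`) {
    return NextResponse.json({ error: 'Unauthorized' }, { status: 401 });
  }

  // Pull pending work in priority order, highest first (score_push uses the AI score as its priority).
  const { data: jobs, error } = await supabase
    .from('webhook_queue')
    .select('id, webhook_type, priority, payload')
    .in('status', ['pending', 'failed'])
    .order('priority', { ascending: false })
    .order('created_at', { ascending: true })
    .limit(10);

  if (error) {
    return NextResponse.json({ error: error.message }, { status: 500 });
  }

  for (const job of jobs ?? []) {
    // Dispatch by webhook type, e.g.:
    // if (job.webhook_type === 'score_push') await pushScoreToAshby(job.payload); // hypothetical helper

    // Mark the row as processed; status values beyond 'pending'/'failed' are assumptions here.
    await supabase
      .from('webhook_queue')
      .update({ status: 'completed' })
      .eq('id', job.id);
  }

  return NextResponse.json({ processed: jobs?.length ?? 0 });
}
```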
We provide automated deployment scripts for seamless production deployment:
- Make the scripts executable:

```bash
chmod +x deploy.sh rollback.sh check-status.sh
```

- Deploy to production:

```bash
./deploy.sh
```

- Check deployment status:

```bash
./check-status.sh
```

- Emergency rollback (if needed):

```bash
./rollback.sh
```
- Build the Docker image:

```bash
docker build -t unmask:latest .
```

- Configure the webhook base URL for production:

```sql
-- Required: set the webhook base URL to your production domain
ALTER DATABASE your_production_db_name
SET app.webhook_base_url = 'https://your-domain.com';
```

- Run the container:

```bash
docker run -d \
  --name unmask-app \
  -p 3000:3000 \
  --env-file .env.local \
  unmask:latest
```
- Build the application:

```bash
cd frontend
npm run build
```

- Configure the webhook base URL for production:

```sql
-- Required: set the webhook base URL to your production domain
ALTER DATABASE your_production_db_name
SET app.webhook_base_url = 'https://your-domain.com';
```

- Start the production server:

```bash
npm start
```
- `GET /api/applicants` - List all applicants
- `POST /api/applicants` - Create a new applicant with CV/LinkedIn/GitHub
- `GET /api/applicants/[id]` - Get a specific applicant
- `PUT /api/applicants/[id]` - Update applicant information
- `DELETE /api/applicants/[id]` - Delete an applicant

- `GET /api/ashby/candidates` - List cached candidates from the database with auto-sync
- `POST /api/ashby/candidates` - Force-refresh candidates from the Ashby API
- `POST /api/ashby/files` - Download and store a CV in Supabase Storage (webhook endpoint)
- `POST /api/ashby/push-score` - Send the AI analysis score to an Ashby custom field

- `GET /api/files/[fileId]` - Get a signed URL for file download from storage

- `POST /api/reference-call` - Initiate an automated reference call
- `GET /api/get-transcript?conversationId=` - Retrieve the call transcript
- `POST /api/summarize-transcript` - AI analysis of a reference call transcript
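For a quick smoke test against a local dev server, the endpoints can be exercised with plain `fetch` calls, as in the sketch below. The request body schemas are not documented in this README, so the field names here are assumptions based on the inputs described in the usage section.

```typescript
// Illustrative client calls against a local dev server; body field names are assumptions.
const BASE_URL = 'http://localhost:3000';

async function main() {
  // Create an applicant (the real endpoint also accepts CV/LinkedIn uploads; this only sketches the call).
  const createRes = await fetch(`${BASE_URL}/api/applicants`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      name: 'Jane Doe',
      githubUrl: 'https://github.com/janedoe', // optional, per the usage section
    }),
  });
  console.log('Created applicant:', await createRes.json());

  // Kick off an automated reference call.
  const callRes = await fetch(`${BASE_URL}/api/reference-call`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      phoneNumber: '+15551234567', // with country code
      candidateName: 'Jane Doe',
      referenceName: 'John Smith',
      company: 'Acme Corp',        // optional context
    }),
  });
  console.log('Reference call:', await callRes.json());
}

main().catch(console.error);
```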
- File upload → CV/LinkedIn parsing → GitHub analysis → AI credibility assessment → Reference verification
- Authentication Approach - Comprehensive auth architecture, middleware patterns, and security best practices
- System Architecture - Overall system design and data flow
- Setup Guides - Historical setup and integration documentation
- Navigate to the dashboard: `/board`
- Click "Add New Applicant"
- Upload required documents:
- CV (PDF, DOC, DOCX) - Required
- LinkedIn Profile (PDF, HTML, TXT) - Optional
- GitHub Profile URL - Optional
- Submit and wait for processing
- Open the reference call interface: `/call`
- Fill in reference details:
- Phone number (with country code)
- Candidate name
- Reference name
- Company context (optional)
- Role and duration (optional)
- Initiate the call
- Review transcript and AI summary
- Navigate to candidate profile
- Click "Start Interview"
- Use real-time suggestions during the call
- Review flagged inconsistencies
- Primary Analysis: Groq Llama models for speed
- Summarization: GPT-4o-mini for cost efficiency
- Voice AI: ElevenLabs for natural conversations
- GitHub Repositories: 50 per analysis
- Content Analysis: 3 repositories max
- File Size: 10MB per document
- Concurrent Processing: 3 applicants
- Environment variable validation
- File type restrictions
- Input sanitization
- Rate limiting on API endpoints
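As a concrete illustration of how the file-type and size restrictions above might be enforced in an upload route, here is a minimal validation sketch (illustrative only; the repository's actual checks may differ):

```typescript
// Illustrative upload validation: 10MB cap and allow-listed CV document types (PDF, DOC, DOCX).
const MAX_FILE_SIZE = 10 * 1024 * 1024; // 10MB per document

const ALLOWED_CV_TYPES = new Set([
  'application/pdf',
  'application/msword',
  'application/vnd.openxmlformats-officedocument.wordprocessingml.document', // DOCX
]);

export function validateCvUpload(file: File): { ok: true } | { ok: false; reason: string } {
  if (!ALLOWED_CV_TYPES.has(file.type)) {
    return { ok: false, reason: `Unsupported file type: ${file.type}` };
  }
  if (file.size > MAX_FILE_SIZE) {
    return { ok: false, reason: 'File exceeds the 10MB limit' };
  }
  return { ok: true };
}

// Example usage inside a route handler that receives multipart form data:
// const form = await req.formData();
// const cv = form.get('cv') as File;
// const check = validateCvUpload(cv);
// if (!check.ok) return NextResponse.json({ error: check.reason }, { status: 400 });
```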
"Permission denied" when running deployment scripts
```bash
chmod +x deploy.sh rollback.sh check-status.sh
```
CV processing fails
- Ensure PDF is not password protected
- Check file size is under 10MB
- Verify GROQ_API_KEY is set correctly
Reference calls not working
- Verify ElevenLabs agent is configured
- Check Twilio phone number permissions
- Ensure all environment variables are set
Webhook queue not processing
```sql
-- Check if the pg_cron job exists
SELECT * FROM cron.job WHERE jobname = 'process-webhook-queue';

-- Check pending webhooks
SELECT COUNT(*) as pending_count FROM webhook_queue WHERE status = 'pending';

-- Manual queue processing
SELECT net.http_post(
  url => 'https://your-domain.com/api/webhooks/process-queue',
  headers => '{"Authorization": "Bearer your-webhook-secret", "Content-Type": "application/json"}'::jsonb
);
```
Docker deployment issues
```bash
# Check logs
docker logs unmask-app

# Restart container
docker restart unmask-app

# Check environment variables
docker exec unmask-app env | sort
```
We welcome contributions! Please see our development guide for:
- Code style guidelines
- Testing procedures
- Feature request process
- Bug reporting
- Deployment Scripts Guide
- Supabase Storage Setup
- Reference Calling Setup
- Vultr Deployment Guide
- API Documentation
This project was built as a submission for the RAISE YOUR HACK 2025 hackathon.

Event: RAISE YOUR HACK 2025
Track: Vultr Infrastructure Challenge
Team: le-commit
Live Demo: unmask.click