Watch the Video Demo: https://vimeo.com/1158085304?fl=ip&fe=ec
Prepare Smarter. Get Hired Faster.
An intelligent interview preparation platform that combines job search, resume parsing, automated resource curation, and AI-powered coaching to help candidates prepare effectively for their dream jobs.
Nexa is an end-to-end interview preparation platform that revolutionizes how candidates prepare for job interviews. By leveraging cutting-edge AI and automation, Nexa:
- **Searches for Jobs** - Finds relevant job opportunities based on your resume
- **Analyzes Job Requirements** - Extracts missing skills and requirements using AI
- **Auto-Curates Learning Resources** - Automatically searches and scrapes relevant learning materials
- **Builds a RAG Knowledge Base** - Creates a personalized RAG (Retrieval-Augmented Generation) system from curated resources
- **Provides an AI Interview Coach** - Chat with an AI coach that has context about your target role and company
- **Resume Parser** - AI-powered resume parsing with Groq LLM
- **Smart Job Search** - Intelligent job matching with SerpAPI
- **Automated Resource Discovery** - Uses Tavily AI to find the best learning materials
- **RAG-Powered Chat** - Context-aware interview coaching using embedded knowledge
- **Chat History** - Persistent chat sessions for each job preparation
- **Dashboard** - Real-time analytics and activity tracking
- Framework: Next.js 15.5.9 (App Router)
- Language: TypeScript
- Styling: Tailwind CSS
- UI Components: shadcn/ui, kokonutui
- State Management: React Hooks
- Animations: Framer Motion
- Markdown Rendering: react-markdown
- Runtime: Node.js
- Package Manager: pnpm
- Authentication: Supabase Auth (Google OAuth, GitHub OAuth, Email OTP)
- File Processing: pdf-parse, mammoth
| Service | Model/API | Purpose |
|---|---|---|
| Groq | Llama 3.3 70B Versatile | Resume parsing, skill extraction |
| GitHub Models | GPT-4o | Resource summarization, interview prep responses |
| Hugging Face | sentence-transformers/all-mpnet-base-v2 | Text embeddings for RAG |
| Tavily AI | Search API | Learning resource discovery |
- SerpAPI - Job search and aggregation
- Supabase - Authentication & user management
- Vercel - Hosting and deployment
RAG (Retrieval-Augmented Generation) Pipeline:
- Web Scraping - Fetches content from discovered resources
- Summarization - GPT-4o generates interview-focused summaries
- Chunking - Splits content into semantic chunks
- Embedding - Converts chunks to vector embeddings
- Retrieval - Cosine similarity search for relevant context
- Generation - GPT-4o generates responses with retrieved context
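The chunking step of this pipeline can be sketched as a fixed-size splitter with overlap, so each piece fits the embedding model's input budget while neighboring chunks share context. This is an illustrative sketch, not the project's actual `chunking.ts`; `chunkText` and its parameters are assumed names:

```typescript
// Split text into overlapping chunks so each embedding stays within the
// model's input budget while preserving context across chunk boundaries.
export function chunkText(
  text: string,
  chunkSize = 500, // target characters per chunk (illustrative default)
  overlap = 50     // characters shared between consecutive chunks
): string[] {
  const chunks: string[] = [];
  let start = 0;
  while (start < text.length) {
    const end = Math.min(start + chunkSize, text.length);
    chunks.push(text.slice(start, end));
    if (end === text.length) break;
    start = end - overlap; // step back to create the overlap
  }
  return chunks;
}
```

A production chunker would usually split on sentence or paragraph boundaries ("semantic chunks") rather than raw character offsets; the fixed-size version above just shows the shape of the step.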
```
├── app/
│   ├── api/
│   │   ├── auto-prepare-interview/   # Automated prep workflow
│   │   ├── build-vector-db/          # RAG embeddings creation
│   │   ├── find-sources/             # Resource discovery
│   │   ├── get-response/             # Chat completions
│   │   ├── parse-resume/             # Resume parsing endpoint
│   │   ├── search-jobs/              # Job search API
│   │   └── save-selected-sources/    # Resource persistence
│   ├── command-center/               # Analytics dashboard
│   ├── dashboard/                    # Main dashboard layout
│   ├── interview-preperation/        # Chat interface
│   │   ├── ChatInterface.tsx         # Main chat UI
│   │   └── history.ts                # Chat session management
│   ├── jobsFound/                    # Job listings
│   └── userProfile/                  # Resume upload & profile
│
├── backend/
│   ├── jobSearcher/                  # Job search & matching
│   │   ├── jobSearcher.ts            # Main job search logic
│   │   ├── matcher.ts                # AI-powered job matching
│   │   └── serpApiClient.ts          # SerpAPI integration
│   ├── prep/                         # Resource search
│   │   └── searchprepsources.ts      # Tavily API integration
│   ├── rag/
│   │   ├── chat/                     # Response generation
│   │   │   ├── responsegeneration.ts # GPT-4o chat
│   │   │   └── retrieval.ts          # Vector search
│   │   └── ingestion/                # RAG pipeline
│   │       ├── chunking.ts           # Text chunking
│   │       ├── embedding.ts          # HF embeddings
│   │       ├── summarization.ts      # Content summarization
│   │       └── webscraper.ts         # Web content extraction
│   └── resume-parser/                # AI resume parser
│       ├── aiParser.ts               # Groq-powered parsing
│       ├── pdfExtractor.ts           # PDF text extraction
│       └── types.ts                  # Type definitions
│
├── components/                       # Reusable UI components
│   ├── ui/                           # shadcn/ui components
│   └── auth-page.tsx                 # Authentication UI
│
└── lib/                              # Utilities & helpers
    ├── supabase.ts                   # Supabase client
    └── utils.ts                      # Helper functions
```
- Node.js 18+
- pnpm (recommended) or npm
- API keys for required services
- Clone the repository

```bash
git clone https://github.com/shilok09/v0-cyberpunk-dashboard-design.git
cd v0-cyberpunk-dashboard-design
```

- Install dependencies

```bash
pnpm install
```

- Set up environment variables

  Create a `.env.local` file in the root directory:

```bash
# AI Services
GITHUB_TOKEN=your_github_token_here
GROQ_API_KEY=your_groq_api_key_here
HUGGINGFACE_API_TOKEN=your_huggingface_token_here

# Search APIs
SERPAPI_KEY=your_serpapi_key_here
TAVILY_API_KEY=your_tavily_api_key_here

# Authentication (Supabase)
NEXT_PUBLIC_SUPABASE_URL=your_supabase_project_url
NEXT_PUBLIC_SUPABASE_ANON_KEY=your_supabase_anon_key
```

- Run the development server

```bash
pnpm dev
```

- Open your browser and navigate to http://localhost:3000
**GitHub Token** (for GPT-4o via GitHub Models)
- Visit GitHub Settings → Tokens
- Generate a personal access token
- Add to `.env.local` as `GITHUB_TOKEN`
**Groq API Key**
- Sign up at console.groq.com
- Create an API key in the dashboard
- Add to `.env.local` as `GROQ_API_KEY`
**Hugging Face Token**
- Create an account at huggingface.co
- Generate a token at Settings → Access Tokens
- Add to `.env.local` as `HUGGINGFACE_API_TOKEN`
**SerpAPI Key**
- Sign up at serpapi.com
- Get an API key from the dashboard
- Add to `.env.local` as `SERPAPI_KEY`
**Tavily API Key**
- Sign up at tavily.com
- Get an API key from the dashboard
- Add to `.env.local` as `TAVILY_API_KEY`
**Supabase**
- Create a project at supabase.com
- Enable the Google and GitHub OAuth providers
- Copy the project URL and anon key
- Add to `.env.local` as `NEXT_PUBLIC_SUPABASE_URL` and `NEXT_PUBLIC_SUPABASE_ANON_KEY`
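With that many required keys, it helps to fail fast at startup if one is missing. The helper below is a hedged sketch and not part of the repo; `checkEnv` is an assumed name, and the variable list mirrors the `.env.local` template above:

```typescript
// All API keys the app expects, mirroring the .env.local template above.
const REQUIRED_ENV_VARS = [
  "GITHUB_TOKEN",
  "GROQ_API_KEY",
  "HUGGINGFACE_API_TOKEN",
  "SERPAPI_KEY",
  "TAVILY_API_KEY",
  "NEXT_PUBLIC_SUPABASE_URL",
  "NEXT_PUBLIC_SUPABASE_ANON_KEY",
] as const;

// Returns the names of any unset variables (empty array = all configured).
export function checkEnv(env: Record<string, string | undefined>): string[] {
  return REQUIRED_ENV_VARS.filter((name) => !env[name]);
}
```

At startup you could call `checkEnv(process.env)` and log or throw if the result is non-empty, instead of discovering a missing key mid-request.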
```
1. User uploads resume
        ↓
2. AI parses resume (Groq Llama 3.3)
        ↓
3. User searches for jobs
        ↓
4. SerpAPI finds matching jobs
        ↓
5. AI scores job relevance
        ↓
6. User clicks "Start Preparing"
        ↓
7. AUTOMATED FLOW BEGINS:
   ├─ Extract missing skills (GPT-4o)
   ├─ Search for resources (Tavily)
   ├─ Scrape web content
   ├─ Summarize with AI (GPT-4o)
   ├─ Chunk text semantically
   ├─ Generate embeddings (HuggingFace)
   └─ Build vector database
        ↓
8. Chat interface launches with RAG
        ↓
9. User chats with AI coach
   ├─ Retrieves relevant chunks
   ├─ Generates contextual responses (GPT-4o)
   └─ Saves chat history
```
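The automated flow in step 7 is essentially a sequential pipeline with progress reporting, since each step consumes the previous step's output. A minimal sketch with illustrative stub types (the real stage implementations live under `backend/`):

```typescript
// One stage of the automated prep flow. Stages run strictly in order:
// scrape -> summarize -> chunk -> embed, because each consumes the
// previous stage's output.
interface Stage {
  name: string;
  run: () => Promise<void>;
}

// Runs each stage sequentially, reporting progress after each one
// (this is what drives real-time progress feedback in the UI).
export async function autoPrepare(
  stages: Stage[],
  onProgress: (stageName: string, done: number, total: number) => void
): Promise<void> {
  for (let i = 0; i < stages.length; i++) {
    await stages[i].run(); // sequential, not parallel
    onProgress(stages[i].name, i + 1, stages.length);
  }
}
```

Keeping the progress callback separate from the stage logic lets the same pipeline drive both the activity log and any future UI without changing the stages themselves.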
```
Query → Embedding → Vector Search → Top-K Chunks → Context + Query → GPT-4o → Response
```
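The vector-search step of that flow reduces to cosine similarity between the query embedding and each stored chunk embedding. A self-contained sketch (names like `retrieveTopK` are illustrative, not the actual `retrieval.ts` API):

```typescript
// Cosine similarity between two embedding vectors of equal length.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

interface Chunk {
  text: string;
  embedding: number[];
}

// Score every chunk against the query embedding and keep the top k,
// which become the context passed to GPT-4o alongside the query.
export function retrieveTopK(queryEmbedding: number[], chunks: Chunk[], k = 3): Chunk[] {
  return chunks
    .map((chunk) => ({ chunk, score: cosineSimilarity(queryEmbedding, chunk.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map((x) => x.chunk);
}
```

A brute-force scan like this is fine for a per-user knowledge base of a few hundred chunks; a dedicated vector index only pays off at much larger scale.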
**Cyberpunk Aesthetic**
- Dark theme with orange (#f97316) accents
- Monospace fonts for technical data
- Animated status indicators
- Real-time activity logs

**User Experience**
- Fully automated workflow (no manual steps)
- Real-time progress feedback
- Chat history for context
- Persistent sessions per job
```bash
pnpm install
pnpm build
```

- Push code to GitHub
- Import the project in Vercel
- Add environment variables
- Deploy!

Or use the CLI:

```bash
vercel --prod
```

Ensure all API keys from `.env.local` are added to your Vercel project settings.
```bash
pnpm dev    # Start development server (localhost:3000)
pnpm build  # Build for production
pnpm start  # Start production server
pnpm lint   # Run ESLint
```

- App Router - Next.js 15 with server/client components
- API Routes - RESTful endpoints with TypeScript
- Real-time Updates - useState + useEffect hooks
- File Storage - Server-side storage for RAG artifacts
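In the App Router, each API route is a `route.ts` file exporting HTTP-method handlers that take a standard `Request` and return a `Response`. A hedged sketch of what one endpoint might look like (the path and request/response shapes are illustrative, not the project's actual API):

```typescript
// Sketch of an App Router route handler, e.g. app/api/search-jobs/route.ts.
// Exporting a function named after an HTTP method (GET, POST, ...) makes
// Next.js route matching requests of that method to this file.
export async function GET(req: Request): Promise<Response> {
  const { searchParams } = new URL(req.url);
  const query = searchParams.get("query");
  if (!query) {
    return new Response(JSON.stringify({ error: "query is required" }), {
      status: 400,
      headers: { "content-type": "application/json" },
    });
  }
  // ...in the real endpoint, the backend job searcher would run here...
  return new Response(JSON.stringify({ ok: true, query }), {
    status: 200,
    headers: { "content-type": "application/json" },
  });
}
```

Because handlers use the web-standard `Request`/`Response` types, they can be exercised directly in tests by constructing a `Request` and asserting on the returned `Response`.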
Shilok Kumar - Developer
Fatima Tu Zahra - Developer
Ramalah Amir - Developer
Built with Love For The World