An AI-powered resume builder with modular LangGraph orchestration, typed state schemas, and real-time WebSocket streaming.
```
┌─────────────────────────────────────────────────────────────┐
│ Transport Layer (WebSocket / REST)                          │
├─────────────────────────────────────────────────────────────┤
│ Session Manager (conversation state, history management)    │
├─────────────────────────────────────────────────────────────┤
│ Orchestrator (intent → routing → execution → response)      │
├───────────────┬─────────────────────┬───────────────────────┤
│ Intent        │ Action Router       │ Response Composer     │
│ Classifier    │ (declarative)       │ (context-aware)       │
├───────────────┴─────────────────────┴───────────────────────┤
│ Handlers (chat, edit, analyze, add_course, format, ...)     │
├─────────────────────────────────────────────────────────────┤
│ Domain Services (LaTeX compilation, course parsing)         │
└─────────────────────────────────────────────────────────────┘
```
| Concern | Approach |
|---|---|
| State Management | Immutable Pydantic schemas with schema versioning |
| Intent Classification | LLM-powered with confidence scoring and entity extraction |
| Handler Routing | Declarative routing table, zero conditional logic |
| Event Streaming | Typed events over WebSocket for real-time UI feedback |
| Error Handling | Typed error categories (retryable, user-fixable, critical) |
| Guardrails | Input validation, prompt injection detection, output sanitization |
| Context Window | Token budget management with sliding window history |
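The "declarative routing table, zero conditional logic" approach above can be sketched as a plain mapping from intent to handler, so dispatch is a single lookup. All names below are illustrative, not the project's actual API:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical intent labels, mirroring the handler table in this README.
GENERAL_CHAT = "GENERAL_CHAT"
ADD_COURSE = "ADD_COURSE"

@dataclass
class Result:
    reply: str

def handle_chat(message: str) -> Result:
    return Result(reply=f"chat: {message}")

def handle_add_course(message: str) -> Result:
    return Result(reply=f"parsing course from: {message}")

def handle_fallback(message: str) -> Result:
    return Result(reply="Sorry, I didn't understand that.")

# The declarative routing table: adding a handler is one new entry,
# and dispatch needs no if/elif chain.
ROUTES: dict[str, Callable[[str], Result]] = {
    GENERAL_CHAT: handle_chat,
    ADD_COURSE: handle_add_course,
}

def dispatch(intent: str, message: str) -> Result:
    # Unrecognized intents fall through to the fallback handler.
    return ROUTES.get(intent, handle_fallback)(message)

print(dispatch(ADD_COURSE, "CS 4780").reply)
print(dispatch("???", "hello").reply)
```

The payoff is that the router never grows conditionals: new capabilities register themselves as table entries.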
```
vitaeai/
├── backend/
│   └── app/
│       ├── assistant/            # Core AI orchestration
│       │   ├── orchestrator.py   # Main coordination logic
│       │   ├── state/            # Pydantic state schemas
│       │   ├── nodes/            # Graph nodes (classifier, router, composer)
│       │   ├── handlers/         # Action handlers (chat, edit, analyze, etc.)
│       │   ├── events/           # Event emitter implementations
│       │   ├── guardrails.py     # Input/output validation
│       │   ├── context.py        # Token budget management
│       │   ├── health.py         # Component health checks
│       │   └── logging.py        # Structured logging with correlation IDs
│       ├── api/                  # FastAPI routers
│       ├── graphs/               # LangGraph workflows (compile, parse)
│       └── services/             # Domain services (LLM, session, compile)
├── frontend/
│   ├── src/
│   │   ├── app/                  # Next.js App Router
│   │   ├── components/           # React components
│   │   ├── hooks/                # useAssistant, useResumeEditor
│   │   └── lib/                  # WebSocket client, API utilities
│   └── package.json
└── compiled_pdfs/                # Generated PDF output
```
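The token-budget idea behind `context.py` can be sketched as a sliding window over conversation history: walk backwards from the newest message and keep turns until the budget is spent. The 4-characters-per-token estimate below is a stand-in, not the project's actual tokenizer:

```python
def trim_history(messages: list[dict], budget: int) -> list[dict]:
    """Keep the most recent messages whose combined token estimate fits the budget."""
    kept: list[dict] = []
    used = 0
    for msg in reversed(messages):  # newest first
        # Rough heuristic: ~4 characters per token.
        cost = max(1, len(msg["content"]) // 4)
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))  # restore chronological order

history = [
    {"role": "user", "content": "a" * 400},       # ~100 tokens
    {"role": "assistant", "content": "b" * 400},  # ~100 tokens
    {"role": "user", "content": "c" * 40},        # ~10 tokens
]
print(len(trim_history(history, budget=120)))  # 2 -- oldest message dropped
```

Dropping whole messages from the oldest end keeps each surviving turn intact, at the cost of losing early context once the budget is exceeded.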
| Handler | Intent | Description |
|---|---|---|
| ChatHandler | GENERAL_CHAT | Conversational responses with resume context |
| AddCourseHandler | ADD_COURSE | Parse course → inject → compile pipeline |
| EditSectionHandler | EDIT_SECTION | LLM-powered section modifications |
| AnalyzeHandler | ANALYZE_RESUME | Structured resume feedback with suggestions |
| FormatHandler | FORMAT_RESUME | Visual styling and layout changes |
| ClarifyHandler | CLARIFY_NEEDED | Request additional details from user |
| FallbackHandler | UNKNOWN | Graceful handling of unrecognized intents |
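Handlers like those above can share one small interface so the router treats them uniformly. This is a hedged sketch with hypothetical names, not the project's real base class or signatures:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass, field

@dataclass
class ActionResult:
    reply: str
    artifacts: dict = field(default_factory=dict)

class Handler(ABC):
    """Common contract every action handler implements."""

    intent: str  # the intent label this handler serves

    @abstractmethod
    def handle(self, message: str, source: str) -> ActionResult:
        """Produce a reply (and optional artifacts) for one user turn."""

class ClarifyHandler(Handler):
    intent = "CLARIFY_NEEDED"

    def handle(self, message: str, source: str) -> ActionResult:
        # Ask the user for the missing detail instead of guessing.
        return ActionResult(reply="Which section should I change?")

result = ClarifyHandler().handle("edit it", source="\\documentclass{article}")
print(result.reply)
```

Because every handler returns the same result shape, the response composer can format replies without knowing which handler ran.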
- Docker Desktop (for LaTeX compilation)
- Python 3.10+
- Node.js 20+
- Google Gemini API key
```bash
cd backend
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate
pip install -r requirements.txt
export GOOGLE_API_KEY=your_key_here
uvicorn app.main:app --reload
```

```bash
cd frontend
npm install
npm run dev
```

Open http://localhost:3000.
`ws://localhost:8000/ws/assistant`
Client → Server:

```json
{
  "type": "user_message",
  "content": "Improve my summary section",
  "source": "<latex source>"
}
```

Server → Client (streaming events):

```json
{"type": "stream_event", "event": {"type": "progress", "data": {"message": "Analyzing..."}}}
```

Server → Client (final response):
```json
{
  "type": "assistant_response",
  "reply": "I've updated your summary...",
  "artifacts": {"updated_source": "...", "compile_success": true, "pdf_url": "..."}
}
```

| Endpoint | Purpose |
|---|---|
| `GET /api/v1/health` | Fast health check (no LLM) |
| `GET /api/v1/health/deep` | Full component verification |
| `GET /api/v1/health/ready` | Kubernetes readiness probe |
| `GET /api/v1/health/live` | Kubernetes liveness probe |
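The WebSocket frames shown above can be validated with small typed models on either end of the socket. In the project these would be Pydantic schemas; the dependency-free sketch below uses stdlib dataclasses, and names like `encode` and `decode_response` are illustrative:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class UserMessage:
    # Field names mirror the client -> server JSON above.
    content: str
    source: str
    type: str = "user_message"

def encode(msg: UserMessage) -> str:
    """Serialize an outgoing frame for the socket."""
    return json.dumps(asdict(msg))

def decode_response(raw: str) -> dict:
    """Parse an incoming frame, rejecting unexpected message types."""
    data = json.loads(raw)
    if data.get("type") != "assistant_response":
        raise ValueError(f"unexpected frame type: {data.get('type')}")
    return data

wire = encode(UserMessage(content="Improve my summary section", source="<latex source>"))
reply = decode_response('{"type": "assistant_response", "reply": "Done.", "artifacts": {}}')
print(reply["reply"])
```

Validating the `type` discriminator before trusting a frame keeps protocol errors at the boundary instead of deep inside handler code.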
All state is managed through immutable Pydantic models:
- `SessionState` – Conversation history, metadata
- `RequestState` – Single-turn context and hints
- `IntentState` – Classification result with entities
- `ActionState` – Handler execution lifecycle and artifacts
- `ResponseState` – Final output with events and metadata
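The immutability contract can be illustrated with a frozen model: "updates" create new values rather than mutating state in place. The project uses Pydantic for this; the sketch below shows the same idea with a stdlib frozen dataclass, and the field names are illustrative:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class IntentState:
    """Immutable classification result; revisions produce new instances."""
    schema_version: int
    intent: str
    confidence: float

state = IntentState(schema_version=1, intent="EDIT_SECTION", confidence=0.62)

# In-place mutation is rejected...
try:
    state.confidence = 0.99
except Exception as exc:
    print(type(exc).__name__)

# ...so a revised classification is a new value; the original is untouched.
revised = replace(state, confidence=0.91)
print(state.confidence, revised.confidence)
```

Carrying a `schema_version` on each model lets persisted sessions from older releases be detected and migrated instead of silently misread.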
- Structured Logging – JSON format with correlation IDs
- Request Tracing – Context managers for timing critical paths
- Health Checks – Component-level status (healthy/degraded/unhealthy)
- Event Capture – All streaming events collected for debugging
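The structured-logging pattern above can be sketched with a JSON formatter that pulls a per-request correlation ID from a `contextvars` variable, so every log line in one request carries the same ID without threading it through call signatures. The field names are illustrative, not the project's actual log schema:

```python
import json
import logging
import uuid
from contextvars import ContextVar

# One correlation ID per request, carried implicitly through the call stack.
correlation_id: ContextVar[str] = ContextVar("correlation_id", default="-")

class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        # Emit one JSON object per log line, tagged with the current request ID.
        return json.dumps({
            "level": record.levelname,
            "message": record.getMessage(),
            "correlation_id": correlation_id.get(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger("assistant")
log.addHandler(handler)
log.setLevel(logging.INFO)

# At the start of each request, bind a fresh ID; every log call after
# this point in the same context is tagged with it automatically.
correlation_id.set(uuid.uuid4().hex)
log.info("intent classified")
```

Because `ContextVar` values are scoped per async task, concurrent WebSocket sessions each see their own correlation ID.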
Backend: FastAPI, LangGraph, LangChain, Google Gemini, Pydantic, Docker/TeXLive
Frontend: Next.js 16, TypeScript, Tailwind CSS, Monaco Editor, Framer Motion, react-markdown
- Backend README – Detailed architecture and handler documentation
- OpenAPI docs at `/docs` when the server is running
Proprietary – VitaeAI Project