BLUEST FLAME is an AI-powered Resume Analyzer and Job Application Assistant that helps users improve their resumes for Applicant Tracking Systems (ATS) and job applications.
This project consists of two main components: a FastAPI backend and a Next.js frontend.
The application helps job seekers by:
- Analyzing resumes against job descriptions
- Identifying matching and missing skills
- Evaluating resume formatting and grammar
- Generating interview questions based on the resume and job
- Providing resume improvement suggestions
- Finding relevant job listings that match the user's skills
- Resume Analysis: Upload your resume and get detailed feedback
- Skill Matching: See which skills in your resume match or don't match the job description
- Grammar Check: Receive a grammar score and highlighted issues
- Format Analysis: Get formatting suggestions and a score based on resume best practices
- Role Match: View a percentage match between your resume and the job description
- Resume Suggestions: Receive AI-generated suggestions to improve your resume
- Job Recommendations: Browse related job listings based on your resume
- Interview Preparation: Get AI-generated interview questions to help prepare
- FastAPI: High-performance Python web framework
- spaCy NER: Custom-trained Named Entity Recognition model for skill extraction
- PDF Mining: Extract and analyze content from PDF resumes
- Web Scraping: Find relevant job listings
- Sentence Transformers: For semantic matching of skills
- LanguageTool: For grammar checking
- Next.js: React framework for web applications
- TailwindCSS: Utility-first CSS framework
- Framer Motion: Animation library for smooth transitions
- React Components: Custom components for visualizations and UI
- Node.js (v16 or higher)
- Python (v3.8 or higher)
- npm or yarn
- Navigate to the backend directory:

```bash
cd fastapi-backend
```

- Install Python dependencies:

```bash
pip install -r requirements.txt
```

- Important: train the NER model. This repository doesn't include the trained model due to its large size (~700 MB), so you'll need to train it yourself:

```bash
# First, create the output directory
mkdir -p Models/model-best

# Train the model using the provided configuration and data
python -m spacy train Configs/config.cfg --output Models/model-best \
  --paths.train Data/train_data.spacy --paths.dev Data/valid_data.spacy
```

The training uses the NER data in Data/ner.json, which contains annotated job descriptions with labeled skills.

- Run the FastAPI server:

```bash
uvicorn app:app --reload --port 8000
```

- Navigate to the frontend directory:

```bash
cd nextjs-frontend
```

- Install Node.js dependencies:

```bash
npm install
# or
yarn install
```

- Run the development server:

```bash
npm run dev
# or
yarn dev
```

- Open http://localhost:3000 in your browser
- User uploads a resume PDF and enters a job description on the home page
- The frontend sends these to the FastAPI backend
- The backend processes the resume and compares it with the job description
- Results are stored in the browser's localStorage
- User is redirected to the results page
The extract_skills.py module uses the custom-trained spaCy NER model to extract skills from both the resume and the job description; the matcher.py module then compares them to find matches and gaps.
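The core of the comparison step can be sketched as set arithmetic over the extracted skill names. This is a minimal illustration only: the function name `match_skills` is hypothetical, and the real matcher.py uses Sentence Transformers for semantic matching rather than exact string comparison.

```python
def match_skills(resume_skills, jd_skills):
    """Compare two skill sets (case-insensitively) and report matches and gaps.

    Illustrative only; matcher.py's real logic uses semantic similarity,
    so e.g. "PostgreSQL" could match "Postgres" there but not here.
    """
    resume = {s.lower() for s in resume_skills}
    jd = {s.lower() for s in jd_skills}
    matched = resume & jd
    return {
        "matched": sorted(matched),
        "missing": sorted(jd - resume),  # skills the job wants but the resume lacks
        "match_pct": round(100 * len(matched) / len(jd), 1) if jd else 0.0,
    }

result = match_skills({"Python", "SQL", "Docker"}, {"python", "sql", "Kubernetes"})
```

Here `result["match_pct"]` is 66.7: two of the three skills the job description asks for appear in the resume.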
The grammar.py module analyzes the resume text for grammatical issues and provides a score.
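One simple way to turn LanguageTool's issue list into a score is to penalize issue density per 100 words. The formula below is an illustrative assumption, not the one grammar.py actually uses, and `grammar_score` is a hypothetical name.

```python
def grammar_score(word_count, issue_count, penalty_per_issue=5, floor=0):
    """Illustrative density-based scoring: start at 100 and subtract a penalty
    for each issue per 100 words. grammar.py's real formula may differ."""
    if word_count == 0:
        return floor
    issues_per_100_words = issue_count * 100 / word_count
    return max(floor, round(100 - penalty_per_issue * issues_per_100_words))

score = grammar_score(word_count=400, issue_count=4)  # 1 issue per 100 words -> 95
```

Normalizing by length matters: four issues in a 400-word resume is far better writing than four issues in a 100-word one, and a raw issue count would score them the same.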
The formatchecker.py module evaluates resume formatting against best practices:
- Section headers
- Bullet points
- Font consistency
- Contact information
- Date formats
- Action verbs
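Most of these criteria can be checked with simple text heuristics; the sketch below shows the idea with a few hypothetical regex rules. It is not formatchecker.py's actual rule set, and font consistency is omitted because it requires PDF-level metadata rather than plain text.

```python
import re

# Hypothetical heuristics; formatchecker.py's real rules may differ.
CHECKS = {
    "section_headers": lambda t: bool(re.search(r"(?im)^(experience|education|skills)\b", t)),
    "bullet_points":   lambda t: bool(re.search(r"(?m)^\s*[-•*]", t)),
    "contact_info":    lambda t: bool(re.search(r"[\w.+-]+@[\w-]+\.\w+", t)),
    "date_formats":    lambda t: bool(re.search(r"\b(19|20)\d{2}\b", t)),
    "action_verbs":    lambda t: bool(re.search(r"(?i)\b(built|led|designed|improved)\b", t)),
}

def format_score(text):
    """Run each heuristic and score the resume as the fraction of checks passed."""
    passed = {name: check(text) for name, check in CHECKS.items()}
    return passed, round(100 * sum(passed.values()) / len(passed))

resume = """Education
- Built a resume analyzer in Python (2023)
Contact: jane@example.com"""
passed, score = format_score(resume)
```

This toy resume passes all five checks, so `score` is 100; a real checker would weight the criteria rather than treating them equally.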
The suggestions.py module generates AI-powered suggestions for improving the resume.
The scrapper.py module finds relevant job listings based on the resume content.
The generateq.py module creates personalized interview questions based on the resume and job description.
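generateq.py is AI-driven, but a useful mental model (and a cheap fallback when the model is unavailable) is filling question templates with the extracted skills. Everything below, including the `template_questions` name, is an illustrative assumption rather than the module's actual implementation.

```python
TEMPLATES = [
    "Can you describe a project where you used {skill}?",
    "How do you keep your {skill} knowledge up to date?",
]

def template_questions(skills, per_skill=1):
    """Template-based fallback generator; the real generateq.py uses an AI model
    to tailor questions to the full resume and job description."""
    return [tpl.format(skill=s) for s in skills for tpl in TEMPLATES[:per_skill]]

questions = template_questions(["Python", "SQL"])
```

With `per_skill=1` this yields one question per skill, in the order the skills were given.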
bluest-flame/
├── fastapi-backend/ # Python backend
│ ├── app.py # Main FastAPI application
│ ├── extract_skills.py # Skill extraction module using Spacy NER
│ ├── formatchecker.py # Resume format analysis
│ ├── generateq.py # Interview question generator
│ ├── grammar.py # Grammar checker
│ ├── matcher.py # Skill matching algorithm
│ ├── requirements.txt # Python dependencies
│ ├── resumeparser.py # Resume parsing utilities
│ ├── scrapper.py # Job listing scraper
│ ├── skill2vec.model # Word vector model for skills
│ ├── suggestions.py # Resume suggestion generator
│ ├── Configs/ # Spacy configuration files
│ │ ├── base_config.cfg # Base configuration template
│ │ └── config.cfg # Complete configuration for training
│ ├── Data/ # Training data
│ │ ├── ner.json # NER annotations for skills
│ │ ├── train_data.spacy # Processed training data
│ │ └── valid_data.spacy # Processed validation data
│ └── Models/ # Directory for trained NER model
│ └── model-best/ # Will contain trained model files after setup
│
└── nextjs-frontend/ # React frontend
├── src/ # Source files
├── public/ # Static assets
├── .next/ # Next.js build output
├── next.config.mjs # Next.js configuration
├── postcss.config.mjs # PostCSS configuration
└── package.json # Node dependencies
The project uses a custom spaCy NER model trained to identify skills in job descriptions and resumes. The model configuration in Configs/config.cfg defines:
- A Tok2Vec component with MultiHashEmbed for embedding tokens
- A named entity recognition (NER) component to extract skills
- Training parameters for batch size, dropout, learning rate, etc.
The annotated data in Data/ner.json contains thousands of job descriptions with skills labeled as "SKILLS" entities.
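The exact schema of Data/ner.json isn't shown here, but spaCy NER annotations are commonly stored as (text, entity-offsets) pairs, where each entity is a character-offset span plus a label. The example below is illustrative only, not taken from the dataset:

```python
# Illustrative (text, annotations) pair in the shape commonly used for
# spaCy NER training data; the actual schema of Data/ner.json may differ.
example = (
    "Experience with Python and SQL",
    {"entities": [(16, 22, "SKILLS"), (27, 30, "SKILLS")]},
)

text, ann = example
for start, end, label in ann["entities"]:
    # Offsets are character indices into the text, end-exclusive.
    print(text[start:end], label)
```

Pairs in this shape are converted into the binary train_data.spacy / valid_data.spacy files (e.g. via spaCy's DocBin) before training.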
This project is licensed under the MIT License - see the LICENSE file for details.