Operation CV is a privacy-first, local AI-powered tool that helps you rewrite and optimize your CV for any job description—entirely on your machine. Built with Python, Streamlit, and local LLMs via LM Studio, this app parses your CV and job descriptions, scores relevance, tailors your content, and estimates your interview probability—all without sending a single byte to the cloud.
- Upload your CV and job descriptions in PDF, DOCX, or TXT formats
- Robust section extraction from real-world, messy CVs and JDs
- Smart section normalization for consistent template mapping
- Automatic skill extraction and matching
- Interview Probability Score: Get a clear percentage of how well your CV matches the job
- Target Score Setting: Set your desired interview probability and get tailored suggestions
- Component Analysis:
- Content Match (50%): Semantic alignment with job requirements
- Skill Coverage (30%): Required skills found in your CV
- Keyword Density (20%): Effective use of relevant keywords
- Gap Analysis: See exactly how far you are from your target score
- Personalized improvement recommendations based on your scores
- Missing skills identification and integration suggestions
- Content alignment tips for better semantic matching
- Keyword optimization guidance
- Customizable DOCX templates for consistent CV formatting
- Default template included with professional layout
- Variables for all common CV sections:
  - {{ summary }} - Professional summary/profile
  - {{ experience }} - Work experience
  - {{ education }} - Education
  - {{ skills }} - Skills & competencies
  - And more! See /template/example_template.md
- Run your favorite models (Mistral, Llama 3, etc.) locally via LM Studio
- Industry-specific prompting for targeted optimization
- Language selection (English UK/US, French, Spanish, Italian)
- Save applications to a SQLite database for future reference
- Export as DOCX or PDF using your templates
- View and re-export previous applications
- Custom prompts per industry
- Industry-specific scoring adjustments
- Tailored suggestions based on sector
- Clean, responsive Streamlit interface
- Interactive probability scoring dashboard
- Visual component score analysis
- Progress tracking for long operations
- Detailed error messages and recovery
- Install Python 3.10 or 3.11

  ```bash
  # Using pyenv (recommended)
  pyenv install 3.11
  pyenv local 3.11
  # Or use your OS package manager
  ```

- Clone the Repository

  ```bash
  git clone https://github.com/yourusername/OperationCV.git
  cd OperationCV
  ```

- Set Up Virtual Environment

  ```bash
  python -m venv .venv
  source .venv/bin/activate  # On Windows: .venv\Scripts\activate
  ```

- Install Requirements

  ```bash
  pip install --upgrade pip
  pip install -r requirements.txt
  ```

- Install & Run LM Studio

  - Download LM Studio
  - Run a local LLM (Mistral, Llama 3, etc.) on port 1234
  - Model must support OpenAI-compatible chat API

- Run the App

  ```bash
  streamlit run app/streamlit_app.py
  ```
The overall score is calculated using three weighted components (see the scoring sketch after this list):
- Content Match (50%): Semantic similarity between your CV and the job description
- Skill Coverage (30%): Percentage of required skills found in your CV
- Keyword Density (20%): How effectively you've used relevant keywords
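
Putting those weights together, a minimal sketch of the overall calculation might look like this (the function name is illustrative, not the app's actual API):

```python
def overall_score(content_match: float, skill_coverage: float, keyword_density: float) -> float:
    """Combine the three component scores (each 0-100) into a weighted total."""
    # Weights mirror the breakdown above: 50% content, 30% skills, 20% keywords.
    return 0.5 * content_match + 0.3 * skill_coverage + 0.2 * keyword_density

# Example: strong content match, partial skill coverage, weak keyword use.
print(overall_score(82, 60, 45))  # -> 68.0
```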
- Set your desired interview probability (default 80%)
- The system will:
- Calculate the gap between current and target scores
- Provide specific suggestions to reach your target
- Highlight areas needing immediate improvement
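
As a rough sketch of the gap calculation against the default 80% target (function and variable names are illustrative):

```python
def score_gap(current: float, target: float = 80.0) -> float:
    """Return how many points separate the current overall score from the target."""
    return max(0.0, target - current)

gap = score_gap(current=68.0)  # using the example overall score from the previous sketch
print(f"You are {gap:.1f} points below your target.")  # -> 12.0 points below
```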
Each component is scored from 0-100%:
- Content Match: Uses semantic analysis to measure how well your content aligns with the job description (see the sketch after this list)
- Skill Coverage: Compares required skills with those in your CV
- Keyword Density: Analyzes the effective use of relevant keywords
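
Since the project credits Sentence Transformers for semantic analysis, the Content Match component can be pictured roughly like this (the specific model name is an assumption, not necessarily the one the app uses):

```python
from sentence_transformers import SentenceTransformer, util

# Any sentence-transformers model works here; this one is small and fast.
model = SentenceTransformer("all-MiniLM-L6-v2")

cv_text = "Led a team of five engineers building data pipelines in Python and SQL."
jd_text = "Hiring a data engineer with Python, SQL, and team leadership experience."

cv_emb, jd_emb = model.encode([cv_text, jd_text], convert_to_tensor=True)
similarity = util.cos_sim(cv_emb, jd_emb).item()  # cosine similarity in [-1, 1]
content_match = max(0.0, similarity) * 100        # map to the 0-100 component score
print(f"Content Match: {content_match:.0f}%")
```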
- Check /template/example_template.md for available variables
- Use the default template in /template/cv_template.docx
- Or create your own DOCX template using the variables
- Upload your template in the app's sidebar
- Your template will be used for all exports (DOCX and PDF); see the rendering sketch below
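
The {{ variable }} placeholders follow Jinja-style DOCX templating, so filling a template could be sketched with the docxtpl package (this is an assumption about the tooling; file paths are illustrative):

```python
from docxtpl import DocxTemplate

# Load a DOCX template containing {{ summary }}, {{ experience }}, and so on.
tpl = DocxTemplate("template/cv_template.docx")

tpl.render({
    "summary": "Data engineer with six years of experience...",
    "experience": "Acme Corp - Senior Data Engineer (2020-2024)...",
    "education": "BSc Computer Science, University of Example",
    "skills": "Python, SQL, Airflow, dbt",
})

tpl.save("tailored_cv.docx")
```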
The CV structure is defined in core/cv_schema.json and includes:
- Required sections (summary, experience, education, skills)
- Optional sections (projects, publications, languages)
- Validation rules for each section
- Format requirements and constraints
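
As a rough illustration of how such a schema can be applied, here is a sketch using the jsonschema package (an assumption; the example CV fields are invented and not the actual contents of core/cv_schema.json):

```python
import json
from jsonschema import ValidationError, validate

with open("core/cv_schema.json", encoding="utf-8") as f:
    schema = json.load(f)

# Invented example data; the real schema defines its own required sections and rules.
parsed_cv = {
    "summary": "Data engineer with six years of experience...",
    "experience": "Acme Corp - Senior Data Engineer (2020-2024)...",
    "education": "BSc Computer Science",
    "skills": "Python, SQL, Airflow",
}

try:
    validate(instance=parsed_cv, schema=schema)
    print("CV sections satisfy the schema.")
except ValidationError as err:
    print(f"Schema violation: {err.message}")
```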
- Default port: 1234 (configurable)
- Supported models: Any OpenAI-compatible chat model (see the client sketch below)
- Recommended: Mistral-7B, Llama 3, or similar
- Min 8GB VRAM recommended
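
For reference, a minimal call against LM Studio's OpenAI-compatible endpoint on the default port could look like this sketch (the openai client and the model identifier are placeholders, not necessarily what the app uses internally):

```python
from openai import OpenAI

# LM Studio exposes an OpenAI-compatible API on localhost:1234 by default.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.chat.completions.create(
    model="mistral-7b-instruct",  # placeholder: whichever model LM Studio is serving
    messages=[
        {"role": "system", "content": "You rewrite CV sections to match a job description."},
        {"role": "user", "content": "Tailor this summary to the job description below..."},
    ],
)
print(response.choices[0].message.content)
```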
- CV Preparation
  - Use clear section headings
  - Include quantifiable achievements
  - Keep formatting simple
  - Use standard section names

- Job Description Analysis
  - Include the full job posting
  - Ensure the requirements section is included
  - More detail = better matching

- Template Usage
  - Test templates with sample data first
  - Keep styling minimal
  - Use all required variables
  - Follow spacing guidelines

- Optimal Results
  - Set realistic target scores
  - Review and implement all suggestions
  - Focus on gap analysis recommendations
  - Update skills section comprehensively
- Fixed JSON schema and parsing for CV suggestions
- Improved error handling and logging in LLM client
- Enhanced suggestion display in the UI
- Updated CV schema to better match LLM response format
- Added better validation for JSON responses
- Fixed percentage calculation issues
- Added comprehensive scoring documentation
The system expects LLM responses in this JSON format:
```json
{
  "sections": {
    "summary": "Improved summary content...",
    "education": "Improved education content...",
    "experience": "Improved experience content...",
    "skills": "Improved skills content..."
  }
}
```

Each section contains the improved content as a string, which can include:
- Line breaks (\n) for formatting
- Bullet points for better readability
- Original content structure preserved
- Enhanced wording and phrasing
- Better keyword placement
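
A minimal sketch of consuming a response in that shape (variable names are illustrative; the app's real error handling may differ):

```python
import json

raw_response = '{"sections": {"summary": "Improved summary...", "skills": "Python, SQL, Airflow"}}'

try:
    sections = json.loads(raw_response)["sections"]
except (json.JSONDecodeError, KeyError) as err:
    raise ValueError(f"LLM response is not in the expected format: {err}") from err

for name, content in sections.items():
    print(f"== {name} ==\n{content}\n")
```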
- Advanced template gallery
- Custom industry instruction editor
- Batch processing of multiple CVs
- Enhanced skill extraction
- More language support
- Docker deployment
- Template sharing system
- Enhanced PDF formatting
- AI-powered template recommendations
- Historical score tracking
- Comparative analysis features
- Fork the repository
- Create your feature branch
- Commit your changes
- Push to the branch
- Create a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- Built with Streamlit
- LLM support via LM Studio
- Semantic analysis using Sentence Transformers