Fine-tune LLMs on your laptop's GPU—no code, no PhD, no hassle.
ModelForge v2.0 is a complete architectural overhaul bringing 2x faster training, modular providers, advanced strategies, and production-ready code quality.
- 🚀 2x Faster Training with Unsloth provider
- 🧩 Multiple Providers: HuggingFace, Unsloth (more coming!)
- 🎯 Advanced Strategies: SFT, QLoRA, RLHF, DPO
- 📊 Built-in Evaluation with task-specific metrics
- 🏗️ Modular Architecture for easy extensibility
- 🔒 Production-Ready with proper error handling and logging
- GPU-Powered Fine-Tuning: Optimized for NVIDIA GPUs (even 4GB VRAM)
- One-Click Workflow: Upload data → Configure → Train → Test
- Hardware-Aware: Auto-detects GPU and recommends optimal models
- No-Code UI: Beautiful React interface, no CLI or notebooks
- Multiple Providers: HuggingFace (standard) or Unsloth (2x faster)
- Advanced Strategies: SFT, QLoRA, RLHF, DPO support
- Automatic Evaluation: Built-in metrics for all tasks
- Text Generation: Chatbots, instruction following, code generation, creative writing
- Summarization: Document condensing, article summarization, meeting notes
- Question Answering: RAG systems, document search, FAQ bots
- Python 3.11.x (Python 3.12 not yet supported)
- NVIDIA GPU with 4GB+ VRAM (6GB+ recommended)
- CUDA installed and configured
- HuggingFace Account with access token (Get one here)
- Linux or Windows operating system
⚠️ macOS is NOT supported. ModelForge requires NVIDIA CUDA, which is not available on macOS. Use Linux or Windows with an NVIDIA GPU.

Windows Users: See the Windows Installation Guide for platform-specific instructions, especially for Unsloth support.
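You can sanity-check these requirements before installing with a short script. This is a hedged sketch, not part of ModelForge itself; it only uses standard Python and (if installed) PyTorch:

```python
import sys

def preflight() -> list[str]:
    """Return a list of requirement problems (an empty list means good to go)."""
    problems = []
    # ModelForge requires Python 3.11.x (3.12 is not yet supported)
    if sys.version_info[:2] != (3, 11):
        problems.append(
            f"Python 3.11.x required, found {sys.version_info.major}.{sys.version_info.minor}"
        )
    if sys.platform == "darwin":
        problems.append("macOS is not supported (no NVIDIA CUDA)")
    try:
        import torch
        if not torch.cuda.is_available():
            problems.append("PyTorch cannot see a CUDA GPU (check drivers and CUDA setup)")
        elif torch.cuda.get_device_properties(0).total_memory < 4 * 1024**3:
            problems.append("GPU has less than the required 4GB VRAM")
    except ImportError:
        problems.append("PyTorch is not installed")
    return problems

for problem in preflight():
    print("⚠️", problem)
```

If the script prints nothing, your machine meets the baseline requirements listed above.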
```bash
# Install ModelForge
pip install modelforge-finetuning

# Install PyTorch with CUDA support
# Visit https://pytorch.org/get-started/locally/ for your CUDA version
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu126
```

Linux:

```bash
export HUGGINGFACE_TOKEN=your_token_here
```

Windows PowerShell:

```powershell
$env:HUGGINGFACE_TOKEN="your_token_here"
```

Or use a .env file:

```bash
echo "HUGGINGFACE_TOKEN=your_token_here" > .env
```

Then start the server:

```bash
modelforge run
```

Open your browser to http://localhost:8000 and start training!
- Quick Start Guide - Get up and running in 5 minutes
- What's New in v2.0 - Major features and improvements
- Windows Installation - Complete Windows setup (including WSL and Docker)
- Linux Installation - Linux setup guide
- Post-Installation - Initial configuration
- Configuration Guide - All configuration options
- Dataset Formats - Preparing your training data
- Training Tasks - Understanding different tasks
- Hardware Profiles - Optimizing for your GPU
- Provider Overview - Understanding providers
- HuggingFace Provider - Standard HuggingFace models
- Unsloth Provider - 2x faster training
- Strategy Overview - Understanding strategies
- SFT Strategy - Standard supervised fine-tuning
- QLoRA Strategy - Memory-efficient training
- RLHF Strategy - Reinforcement learning
- DPO Strategy - Direct preference optimization
- REST API - Complete API documentation
- Training Config Schema - Configuration options
- Common Issues - Frequently encountered problems
- Windows Issues - Windows-specific troubleshooting
- FAQ - Frequently asked questions
- Contributing Guide - How to contribute
- Architecture - Understanding the codebase
- Model Configurations - Adding model recommendations
| Platform | HuggingFace Provider | Unsloth Provider | Notes |
|---|---|---|---|
| Linux | ✅ Full support | ✅ Full support | Recommended |
| Windows (Native) | ✅ Full support | ❌ Not supported | Use WSL or Docker for Unsloth |
| WSL 2 | ✅ Full support | ✅ Full support | Recommended for Windows users |
| Docker | ✅ Full support | ✅ Full support | With NVIDIA runtime |
Platform-Specific Installation Guides →
The Unsloth provider is NOT supported on native Windows. For 2x faster training with Unsloth:
- Option 1: WSL (Recommended) - WSL Installation Guide
- Option 2: Docker - Docker Installation Guide
The HuggingFace provider works perfectly on native Windows.
When using the Unsloth provider, you MUST specify a fixed `max_seq_length`:

```json
{
  "provider": "unsloth",
  "max_seq_length": 2048
}
```

Auto-inference (`max_seq_length: -1`) is NOT supported with Unsloth.
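A quick defensive check for this constraint can catch the mistake before a training run starts. This is a sketch; the field names follow the config example above, and ModelForge may perform its own validation:

```python
def check_unsloth_config(config: dict) -> None:
    """Raise early if the Unsloth provider is paired with auto-inferred sequence length."""
    if config.get("provider") == "unsloth" and config.get("max_seq_length", -1) == -1:
        raise ValueError(
            "Unsloth requires a fixed max_seq_length (e.g. 2048); -1 (auto) is not supported"
        )

# Passes: fixed length with the Unsloth provider
check_unsloth_config({"provider": "unsloth", "max_seq_length": 2048})
```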
ModelForge uses JSONL format. Each task has specific fields:
Text Generation:

```jsonl
{"input": "What is AI?", "output": "AI stands for Artificial Intelligence..."}
{"input": "Explain ML", "output": "Machine Learning is a subset of AI..."}
```

Summarization:

```jsonl
{"input": "Long article text...", "output": "Short summary."}
```

Question Answering:

```jsonl
{"context": "Document text...", "question": "What is X?", "answer": "X is..."}
```

Complete Dataset Format Guide →
We welcome contributions! ModelForge v2.0's modular architecture makes it easy to:
- Add new providers - Just 2 files needed
- Add new strategies - Just 2 files needed
- Add model recommendations - Simple JSON configs
- Improve documentation
- Fix bugs and add features
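As an illustration of how small a new provider can be, here is a hedged sketch of the pattern. The class and method names below are hypothetical, chosen for illustration; the real base class and its signatures live in the ModelForge codebase:

```python
from abc import ABC, abstractmethod

class BaseProvider(ABC):
    """Hypothetical provider interface; the real one is defined in ModelForge."""

    @abstractmethod
    def load_model(self, model_id: str, config: dict):
        """Load the base model (and tokenizer) for fine-tuning."""

    @abstractmethod
    def train(self, model, dataset, config: dict) -> dict:
        """Run training and return a result summary."""

class EchoProvider(BaseProvider):
    """Toy provider showing the shape of an implementation."""

    def load_model(self, model_id: str, config: dict):
        return {"model_id": model_id, "loaded": True}

    def train(self, model, dataset, config: dict) -> dict:
        return {"status": "ok", "examples_seen": len(dataset)}
```

A real provider would wrap an actual training backend, then register itself so the UI can offer it; that registration step is the second of the "2 files" mentioned above.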
ModelForge uses modular configuration files for model recommendations. See the Model Configuration Guide for instructions on adding new recommended models.
- Backend: Python, FastAPI, SQLAlchemy
- Frontend: React.js
- ML: PyTorch, Transformers, PEFT, TRL
- Training: LoRA, QLoRA, bitsandbytes
- Providers: HuggingFace Hub, Unsloth
Results on NVIDIA RTX 3090. Your results may vary.
BSD License - see LICENSE file for details.
- HuggingFace for Transformers and model hub
- Unsloth AI for optimized training kernels
- The open-source ML community
- Documentation: https://github.com/ForgeOpus/ModelForge/tree/main/docs/
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- PyPI: modelforge-finetuning
ModelForge v2.0 - Making LLM fine-tuning accessible to everyone 🚀
