Run any LLM, anywhere, offline.
Pack any LLM into an offline-runnable Docker image with one command.
# Install
pip install -e .
# Pack a model
ezrunner pack qwen/Qwen-7B-Chat -o model.tar
# Run on offline machine
ezrunner run model.tar
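Once the container is up, you talk to the model over HTTP. The snippet below is a minimal sketch assuming the packed image exposes an OpenAI-compatible chat endpoint on `localhost:8000`; the port, path, and payload shape are assumptions for illustration, not part of ezrunner's documented interface.

```python
import requests  # third-party: pip install requests

# Assumption: `ezrunner run model.tar` starts a container serving an
# OpenAI-compatible API on localhost:8000. Adjust host/port to your setup.
resp = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "qwen/Qwen-7B-Chat",
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```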
Documentation:

- User Guide - Features and usage
- Architecture - Design and implementation
- Development - Setup dev environment
- Code Style - Coding standards
- Testing - Testing strategy
- Contributing - How to contribute
To set up a development environment:

# Clone repository
git clone https://github.com/yourusername/ezrunner.git
cd ezrunner
# Install dev dependencies
pip install -e ".[dev]"
# Run tests
pytest
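pytest discovers files named `test_*.py` by default. If you are adding a test, keep it self-contained; the sketch below only shows the shape, and `slugify_model_id` is a hypothetical helper invented for illustration, not part of ezrunner's actual API.

```python
# tests/test_example.py - illustrative only.
# `slugify_model_id` is a hypothetical helper, not ezrunner's real API.
def slugify_model_id(model_id: str) -> str:
    """Turn a HuggingFace-style model id into a filesystem-safe slug."""
    return model_id.replace("/", "--").lower()


def test_slugify_model_id() -> None:
    assert slugify_model_id("qwen/Qwen-7B-Chat") == "qwen--qwen-7b-chat"
```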
# Check code quality
black src/ tests/
ruff check src/ tests/
mypy src/

License: Apache-2.0