A modular MLOps workflow for training, inference, experiment tracking, model registry, and deployment. Built with FastAPI, MLflow, MinIO, and PostgreSQL for scalable machine learning operations.
- Supports concurrent (non-blocking) model training and inference requests via FastAPI
- MLflow for experiment tracking
- MinIO for artifact storage
- PostgreSQL for metadata and experiment storage
- Hydra for modular config
- Pydantic for config validation
- Docker Compose for easy deployment
- Pre-commit hooks for code quality
- Unit and integration tests
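Since the stack pairs Hydra configs with Pydantic validation, a minimal sketch of that pattern looks like the following. The field names (`lr`, `epochs`, `batch_size`) are illustrative only, not the project's actual schema:

```python
# Minimal sketch: validating a Hydra-style config dict with Pydantic.
# Field names here are assumptions, not the project's real schema.
from pydantic import BaseModel, ValidationError


class TrainConfig(BaseModel):
    lr: float
    epochs: int
    batch_size: int


def validate_config(raw: dict) -> TrainConfig:
    """Raise ValidationError if the config is malformed, else return a typed object."""
    return TrainConfig(**raw)


cfg = validate_config({"lr": 0.001, "epochs": 3, "batch_size": 32})
```

Validation failures surface at startup rather than mid-training, which is the main reason to pair the two libraries.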
- Docker
- NVIDIA Container Toolkit (optional, for GPU)
- If you don't want to use a GPU, update your `.env` file:

  ```
  NVIDIA_VISIBLE_DEVICES=
  NVIDIA_RUNTIME=
  ```

- If you want to use a GPU:

  ```
  NVIDIA_VISIBLE_DEVICES=all
  NVIDIA_RUNTIME=nvidia
  ```
```
docker compose build
docker compose up
```

- User: minioadmin
- Password: minioadmin (defined in `.env`)
- Create and copy the access keys
- Update `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` in `.env`
```
docker compose up
```

```
curl -X POST -F "config_file=@backend/conf/config.yaml" \
  http://localhost:8000/api/v1/run_train
```

`<run_id>` refers to the MLflow run ID.
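The same training request can be sent from Python. This is a sketch assuming the `requests` package; the `run_id` response field is an assumption, not a documented part of the API:

```python
# Sketch: submit a training job from Python instead of curl.
# The "run_id" field in the JSON response is a hypothetical example,
# not a documented contract of the API.
import requests

API_ROOT = "http://localhost:8000/api/v1"


def submit_training(config_path: str) -> str:
    """POST a Hydra config file to /run_train and return the MLflow run ID."""
    with open(config_path, "rb") as f:
        resp = requests.post(f"{API_ROOT}/run_train", files={"config_file": f})
    resp.raise_for_status()
    return resp.json()["run_id"]  # assumed response field
```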
```
curl -X POST -H "Content-Type: application/json" -d '{
  "image_data": "'"$(base64 -w 0 backend/tests/assets/3.png)"'"
}' http://localhost:8000/api/v1/run_inference/run_id/<run_id>
```

Here are the main web interfaces and endpoints you can access:
- FastAPI API & Docs
  - API root: http://localhost:8000/api/v1/
  - Swagger UI: http://localhost:8000/docs
  - ReDoc: http://localhost:8000/redoc
- MLflow Tracking UI
- MinIO Console
- PostgreSQL
  - Accessible via database clients at localhost:5432 (no web UI)
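The inference request shown earlier (JSON body with a base64-encoded image) can also be assembled in Python. A sketch using only the standard library; the endpoint path mirrors the curl example above:

```python
# Sketch: build the JSON body and URL for the /run_inference endpoint,
# mirroring the curl + base64 command shown earlier.
import base64
import json


def inference_payload(image_bytes: bytes) -> str:
    """Encode raw image bytes as base64 inside the JSON body the API expects."""
    return json.dumps(
        {"image_data": base64.b64encode(image_bytes).decode("ascii")}
    )


def inference_url(run_id: str) -> str:
    """Build the inference endpoint URL for a given MLflow run ID."""
    return f"http://localhost:8000/api/v1/run_inference/run_id/{run_id}"
```

Pair the payload with any HTTP client (curl, `requests`, httpx) to call the endpoint.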
- For GPU support, ensure the NVIDIA Container Toolkit is installed and configured.
- This project uses uv as the Python dependency manager inside Docker containers.
This repository is intended as a minimal, educational template or starter kit for machine learning workflows. The training logic and architecture are kept simple for clarity and ease of use. For production or research use, you are encouraged to extend and customize the code to fit your requirements.
MIT
