
⚙ MLOps Workflow

Supports Python 3.10, 3.11, and 3.12 · MIT License

A modular MLOps workflow for training, inference, experiment tracking, model registry, and deployment. Built with FastAPI, MLflow, MinIO, and PostgreSQL for scalable machine learning operations.


Features

  • Supports concurrent (non-blocking) model training and inference requests via FastAPI
  • MLflow for experiment tracking
  • MinIO for artifact storage
  • PostgreSQL for metadata and experiment storage
  • Hydra for modular config
  • Pydantic for config validation
  • Docker Compose for easy deployment
  • Pre-commit hooks for code quality
  • Unit and integration tests
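
The concurrent, non-blocking training mentioned above generally works by submitting the job to a background worker and returning a handle immediately, rather than holding the request open for the whole training run. A minimal sketch of that pattern in plain Python (all names here are hypothetical, not the repository's actual code):

```python
import uuid
from concurrent.futures import Future, ThreadPoolExecutor

# Hypothetical sketch: jobs run in a worker pool so the submitting call
# returns immediately instead of blocking until training finishes.
executor = ThreadPoolExecutor(max_workers=2)
jobs: dict[str, Future] = {}  # job_id -> Future

def train_model(config: dict) -> dict:
    # Placeholder for a real training loop.
    return {"accuracy": 0.99, "config": config}

def run_train(config: dict) -> str:
    """Submit training and return a job id without blocking."""
    job_id = str(uuid.uuid4())
    jobs[job_id] = executor.submit(train_model, config)
    return job_id

def job_status(job_id: str):
    """Return the result if the job finished, else a 'running' marker."""
    future = jobs[job_id]
    return future.result() if future.done() else "running"

job_id = run_train({"epochs": 3})
executor.shutdown(wait=True)  # block here only so the sketch can show a finished result
result = job_status(job_id)
```

In the actual service the same idea is exposed through HTTP endpoints, with FastAPI handling the concurrency.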

Prerequisites

  • Docker
  • Nvidia container toolkit (optional, for GPU)

GPU/CPU Configuration

  • If you don’t want to use a GPU, set the following in your .env file:

    NVIDIA_VISIBLE_DEVICES=
    NVIDIA_RUNTIME=
  • If you want to use a GPU:

    NVIDIA_VISIBLE_DEVICES=all
    NVIDIA_RUNTIME=nvidia
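
These variables are typically wired into the Compose file roughly as follows; this is a sketch of the common pattern, and the service name and layout here are assumptions, not the repository's actual docker-compose.yml:

```yaml
# Sketch only: how NVIDIA_* variables from .env are commonly consumed.
services:
  backend:
    runtime: ${NVIDIA_RUNTIME:-runc}   # "nvidia" for GPU, default runtime otherwise
    environment:
      - NVIDIA_VISIBLE_DEVICES=${NVIDIA_VISIBLE_DEVICES}
```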

Installation & Usage

Docker Compose

1. Build and start containers

docker compose build
docker compose up

2. Log in to MinIO

http://localhost:9001/login

  • User: minioadmin
  • Password: minioadmin (defined in .env)

3. Create and copy access keys

  • In the MinIO console, create an access key pair and copy the keys
  • Update AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in .env

4. Restart containers so they pick up the new keys

docker compose up

5. Run training (from the host)

curl -X POST -F "config_file=@backend/conf/config.yaml" \
  http://localhost:8000/api/v1/run_train
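
The `curl -F` call above is a multipart file upload. For reference, the same upload can be built with only the Python standard library; the endpoint URL comes from the curl example, while the function names here are illustrative:

```python
import urllib.request
import uuid

def build_multipart(field: str, filename: str, content: bytes) -> tuple[bytes, str]:
    """Build a multipart/form-data body equivalent to curl's -F upload."""
    boundary = uuid.uuid4().hex
    body = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field}"; filename="{filename}"\r\n'
        f"Content-Type: application/octet-stream\r\n\r\n"
    ).encode() + content + f"\r\n--{boundary}--\r\n".encode()
    return body, f"multipart/form-data; boundary={boundary}"

def post_config(path: str, url: str = "http://localhost:8000/api/v1/run_train"):
    """POST a config file to the training endpoint (requires the stack to be up)."""
    with open(path, "rb") as f:
        body, ctype = build_multipart("config_file", path.split("/")[-1], f.read())
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": ctype}, method="POST"
    )
    return urllib.request.urlopen(req)

body, ctype = build_multipart("config_file", "config.yaml", b"model:\n  lr: 0.01\n")
```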

6. Run inference

<run_id> refers to the MLflow run ID

curl -X POST -H "Content-Type: application/json" -d '{
  "image_data": "'"$(base64 -w 0 backend/tests/assets/3.png)"'"
}' http://localhost:8000/api/v1/run_inference/run_id/<run_id>
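
The inference call simply POSTs a base64-encoded image as JSON. A stdlib-only Python equivalent might look like this; the URL shape and `image_data` field come from the curl example, everything else is a sketch:

```python
import base64
import json
import urllib.request

def build_payload(image_path: str) -> bytes:
    """Base64-encode an image file into the JSON body the endpoint expects."""
    with open(image_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return json.dumps({"image_data": encoded}).encode("utf-8")

def run_inference(run_id: str, image_path: str,
                  base_url: str = "http://localhost:8000") -> dict:
    """POST the image to the inference endpoint (requires the stack to be up)."""
    req = urllib.request.Request(
        f"{base_url}/api/v1/run_inference/run_id/{run_id}",
        data=build_payload(image_path),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

`build_payload` can be checked on its own; `run_inference` needs the containers running.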

Service URLs

Here are the main web interfaces and endpoints you can access:

  • API (FastAPI): http://localhost:8000
  • MinIO Console: http://localhost:9001

Notes

  • For GPU support, ensure the NVIDIA Container Toolkit is installed and configured.
  • This project uses uv as the Python dependency manager inside the Docker containers.

Disclaimer

This repository is intended as a minimal, educational template or starter kit for machine learning workflows. The training logic and architecture are kept simple for clarity and ease of use. For production or research use, you are encouraged to extend and customize the code to fit your requirements.

License

MIT
