An MLOps workflow for training, inference, experiment tracking, model registry, and deployment.
[TPDS 2025] EdgeAIBus: AI-driven Joint Container Management and Model Selection Framework for Heterogeneous Edge Computing
A comprehensive .NET MAUI plugin for ML inference with ONNX Runtime, CoreML, and platform-native acceleration support
Kickstart your MLOps initiative with a flexible, robust, and productive Python package.
A lightweight, framework-agnostic middleware that dynamically batches inference requests in real time to maximize GPU/TPU utilization.
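The idea behind such a batching middleware is to hold each incoming request briefly, coalesce concurrent requests into one batch, and run a single model call for the whole batch. A minimal sketch of that pattern (all names here, such as `DynamicBatcher` and `infer_fn`, are hypothetical and not from the repository):

```python
import queue
import threading
import time


class DynamicBatcher:
    """Collects inference requests until either max_batch_size is reached
    or max_wait_ms elapses, then runs them together in one model call."""

    def __init__(self, infer_fn, max_batch_size=8, max_wait_ms=5):
        self.infer_fn = infer_fn                # takes a list of inputs, returns a list of results
        self.max_batch_size = max_batch_size
        self.max_wait = max_wait_ms / 1000.0
        self.requests = queue.Queue()
        threading.Thread(target=self._loop, daemon=True).start()

    def submit(self, item):
        """Blocking call: enqueue one request and wait for its result."""
        done = threading.Event()
        holder = {}
        self.requests.put((item, done, holder))
        done.wait()
        return holder["result"]

    def _loop(self):
        while True:
            first = self.requests.get()         # block until a request arrives
            batch = [first]
            deadline = time.monotonic() + self.max_wait
            # Keep collecting until the batch is full or the deadline passes.
            while len(batch) < self.max_batch_size:
                remaining = deadline - time.monotonic()
                if remaining <= 0:
                    break
                try:
                    batch.append(self.requests.get(timeout=remaining))
                except queue.Empty:
                    break
            inputs = [item for item, _, _ in batch]
            results = self.infer_fn(inputs)     # one batched "model" call
            for (_, done, holder), res in zip(batch, results):
                holder["result"] = res
                done.set()
```

Callers simply call `submit()` from their own threads; requests arriving within the same wait window share one model invocation, which is what drives up GPU/TPU utilization under concurrent load.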
Microservice to digitize a chess scoresheet
Project submission
Enterprise Data Warehouse & ML Platform: a high-performance platform processing 24B records with <60 s latency and 100K records/sec throughput, featuring 32 fact tables, 128 dimensions, and automated ML pipelines achieving 91.2% accuracy, plus real-time ML inference serving 300K+ predictions/hour with ensemble models.
A simple gRPC server for Machine Learning (ML) Model Inference in Rust.
Scripts for benchmarking vLLM with a Llama 8B model on an NVIDIA RTX 4090 GPU