An inference server for your machine learning models, including support for multiple frameworks, multi-model serving and more
Updated Feb 1, 2026 - Python
MLOps tutorial using Python, Docker and Kubernetes.
Deploy A/B testing infrastructure in a containerized microservice architecture for Machine Learning applications.
Tool to take your ML model from local to production with one line of code.
CartPole solved with reinforcement learning: a journey from training to inference.
Serve containerized machine learning models in a microservice architecture with Seldon Core or TensorFlow Serving.
This repo shows how to build and deploy a simple pipeline using Kubernetes, Kubeflow Pipelines and Seldon Core.
MLOps platform for intelligent document processing and validation. Includes OCR, data pipelines, model training, MLflow tracking, Airflow orchestration, and model serving via Seldon Core. Designed for scalable document recognition and classification in enterprise environments.
A deployment using Seldon's open source MLServer