An enterprise-ready and vendor-agnostic federated learning platform.
In this repository, we explore model compression for transformer architectures via quantization. We specifically apply quantization-aware training to the linear layers and report performance for 8-bit, 4-bit, 2-bit, and 1-bit (binary) quantization.
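The core of quantization-aware training is simulating low-bit weights in the forward pass while keeping a full-precision copy for gradient updates. A minimal sketch of that fake-quantization step, assuming NumPy and symmetric per-tensor scaling (the function name `fake_quantize` and the exact scaling scheme are illustrative, not taken from this repository):

```python
import numpy as np

def fake_quantize(w, bits):
    """Simulate k-bit symmetric quantization (quantize then dequantize).

    During quantization-aware training the forward pass uses these
    discretized weights, while gradients flow to the full-precision
    copy via the straight-through estimator.
    """
    if bits == 1:
        # Binary case: sign of each weight, scaled by the mean magnitude.
        scale = np.abs(w).mean()
        return np.sign(w) * scale
    qmax = 2 ** (bits - 1) - 1           # e.g. 127 for 8 bits
    scale = np.abs(w).max() / qmax       # symmetric per-tensor scale
    q = np.clip(np.round(w / scale), -qmax, qmax)
    return q * scale                     # dequantized, on the k-bit grid

w = np.array([0.9, -0.45, 0.02, -0.8])
for bits in (8, 4, 2, 1):
    print(bits, fake_quantize(w, bits))
```

At 8 bits the dequantized weights stay close to the originals; at 2 bits they collapse onto at most three levels, which is why low-bit settings need training-time simulation rather than post-hoc rounding.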
A lightweight, resource-efficient MLOps monitoring solution for machine learning models deployed on edge devices. Features system health tracking, model I/O logging, drift detection, and cloud telemetry.
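One common way to implement the drift detection mentioned above is to compare the distribution of a live input feature against a training-time reference, for example with the Population Stability Index. A minimal sketch, assuming NumPy; the function name `psi` and the binning choices are illustrative and not taken from this project:

```python
import numpy as np

def psi(expected, observed, bins=10):
    """Population Stability Index between a reference (training) sample
    and a live (edge) sample of one model input feature.

    Common rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift,
    > 0.25 significant drift.
    """
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    o_pct = np.histogram(observed, bins=edges)[0] / len(observed)
    eps = 1e-6                          # avoid log(0) for empty bins
    e_pct = np.clip(e_pct, eps, None)
    o_pct = np.clip(o_pct, eps, None)
    return float(np.sum((o_pct - e_pct) * np.log(o_pct / e_pct)))

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 1000)   # training-time feature sample
live = rng.normal(1.0, 1.0, 1000)        # shifted production sample
print(psi(reference, live))              # large value signals drift
```

A histogram-based index like this suits edge deployment because the reference can be stored as a handful of bin edges and counts rather than the raw training data.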