Making large AI models cheaper, faster and more accessible
Updated Nov 18, 2024 - Python
Accelerated deep learning R&D
Training and serving large-scale neural networks with auto parallelization.
Bare-bones examples of machine learning in TensorFlow
A modular, primitive-first, Python-first PyTorch library for Reinforcement Learning.
A unified interface for distributed computing. Fugue executes SQL, Python, Pandas, and Polars code on Spark, Dask and Ray without any rewrites.
A distributed task scheduler for Dask
Distributed deep learning with Keras & Spark
Python-based research interface for blackbox and hyperparameter optimization, based on the internal Google Vizier Service.
37 traditional FL (tFL) or personalized FL (pFL) algorithms, 3 scenarios, and 20 datasets.
Implementation of Communication-Efficient Learning of Deep Networks from Decentralized Data
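The core aggregation step described in that paper (FedAvg) can be sketched in a few lines: the server averages client parameter vectors, weighting each client by its local sample count. This is an illustrative sketch, not the repository's code; the flat list of floats stands in for real network weights, and the function name `fedavg` is chosen here for clarity.

```python
# Minimal sketch of the FedAvg aggregation step (McMahan et al.):
# the server computes a weighted average of client models, where each
# client's weight is proportional to its number of local training samples.

def fedavg(client_weights, client_sizes):
    """Weighted average of client parameter vectors.

    client_weights: list of equal-length lists of floats (one per client)
    client_sizes:   number of local training samples per client
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    avg = [0.0] * dim
    for weights, n in zip(client_weights, client_sizes):
        # Each client contributes in proportion to its data size n / total.
        for i, w in enumerate(weights):
            avg[i] += w * (n / total)
    return avg

# A client with 3x the data pulls the average 3x as hard:
# fedavg([[1.0, 1.0], [3.0, 3.0]], [1, 3]) -> [2.5, 2.5]
```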
PySpark + Scikit-learn = Sparkit-learn
A crowdsourced distributed cluster for AI art and text generation
Distributed Computing for AI Made Simple
Unified Interface for Constructing and Managing Workflows on different workflow engines, such as Argo Workflows, Tekton Pipelines, and Apache Airflow.
Bagua Speeds up PyTorch
Unleash the power of cloud
Auto Tune Models - A multi-tenant, multi-data system for automated machine learning (model selection and tuning).
Backend.AI is a streamlined, container-based computing cluster platform that hosts popular computing/ML frameworks and diverse programming languages, with pluggable heterogeneous accelerator support including CUDA GPU, ROCm GPU, TPU, IPU and other NPUs.
t-Digest data structure in Python. Useful for percentiles and quantiles, including distributed environments like PySpark
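The idea behind t-digest is to compress a stream of values into a bounded set of (mean, count) centroids and estimate quantiles from them, which is why the summaries are small enough to merge across Spark partitions. The toy class below (all names hypothetical, not the library's API) shows the principle only: real t-digests use a scale function to keep tail centroids small, while this sketch simply merges the closest adjacent pair when over capacity.

```python
# Toy illustration of the t-digest principle: bound memory by keeping at most
# max_centroids (mean, count) pairs, merging the closest adjacent pair when
# the cap is exceeded, and reading quantiles off the compressed summary.

class TinyDigest:
    def __init__(self, max_centroids=50):
        self.max_centroids = max_centroids
        self.centroids = []  # sorted list of [mean, count]

    def update(self, x):
        self.centroids.append([float(x), 1])
        self.centroids.sort(key=lambda c: c[0])
        while len(self.centroids) > self.max_centroids:
            # Merge the adjacent pair with the smallest gap between means.
            i = min(range(len(self.centroids) - 1),
                    key=lambda j: self.centroids[j + 1][0] - self.centroids[j][0])
            (m1, c1), (m2, c2) = self.centroids[i], self.centroids[i + 1]
            merged = [(m1 * c1 + m2 * c2) / (c1 + c2), c1 + c2]
            self.centroids[i:i + 2] = [merged]

    def percentile(self, p):
        # Walk the sorted centroids until the cumulative count covers
        # the requested rank, then return that centroid's mean.
        total = sum(c for _, c in self.centroids)
        target = p / 100 * total
        seen = 0
        for mean, count in self.centroids:
            seen += count
            if seen >= target:
                return mean
        return self.centroids[-1][0]
```

Because centroids carry counts, two digests built on different partitions can be combined by feeding one's centroids into the other, which is what makes the structure a good fit for distributed aggregation.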