CircleCI | Go Report Card

Armada

Armada is a multi-Kubernetes cluster batch job scheduler.

Armada is designed to address the following issues:

  1. A single Kubernetes cluster cannot be scaled indefinitely, and managing very large Kubernetes clusters is challenging. Hence, Armada is a multi-cluster scheduler built on top of several Kubernetes clusters.
  2. Achieving very high throughput with the in-cluster storage backend, etcd, is challenging. Hence, queueing and scheduling are performed partly out of cluster, using a specialized storage layer.

Armada is designed primarily for machine learning, AI, and data analytics workloads. It aims to:

  • Manage compute clusters composed of tens of thousands of nodes in total.
  • Schedule a thousand or more pods per second, on average.
  • Enqueue tens of thousands of jobs over a few seconds.
  • Divide resources fairly between users.
  • Provide visibility for users and admins.
  • Ensure near-constant uptime.

Armada is a CNCF Sandbox project used in production at G-Research.

For an overview of Armada, see these videos:

Armada adheres to the CNCF Code of Conduct.

Documentation

For an overview of the architecture and design of Armada, and instructions for submitting jobs, see:
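
As a quick taste only (the linked documentation is the authoritative reference), a minimal job specification, saved as, say, jobs.yaml, might look like the sketch below. The queue name, job set ID, container image, and resource values are invented for illustration, and the exact YAML fields can differ between Armada versions. Each Armada job wraps a standard Kubernetes pod spec:

    queue: example-queue        # queue the jobs are submitted to (illustrative name)
    jobSetId: example-job-set   # groups related jobs for watching and cancellation
    jobs:
      - priority: 0             # job priority within the queue
        podSpec:                # a standard Kubernetes pod spec
          terminationGracePeriodSeconds: 0
          restartPolicy: Never
          containers:
            - name: sleep
              image: busybox:latest
              command: ["sh", "-c", "sleep 60"]
              resources:
                requests:
                  cpu: 150m
                  memory: 64Mi
                limits:
                  cpu: 150m
                  memory: 64Mi

Assuming a running Armada server and a configured armadactl, the file could then be submitted and watched along these lines (command names are believed current but may change between releases):

    armadactl create queue example-queue            # register a queue to submit into
    armadactl submit jobs.yaml                      # enqueue the job set defined above
    armadactl watch example-queue example-job-set   # follow job state transitions

Because each job embeds a pod spec, existing pod definitions translate directly; Armada decides which member cluster the pod actually runs on.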

For a full developer guide, see:

For API reference, see:

We expect readers of the documentation to have a basic understanding of Docker and Kubernetes; see, e.g., the following links:

Contributions

Thank you for considering contributing to Armada! We want everyone to feel that they can contribute to the Armada Project. Your contributions are valuable, whether you are fixing a bug, implementing a new feature, improving documentation, or suggesting enhancements, and we appreciate the time and effort you put into making this project better for everyone. For more information about contributing to Armada, see CONTRIBUTING.md; before contributing, please read CODE_OF_CONDUCT.md.

Discussion

If you are interested in discussing Armada, you can find us on Slack.
