Armada is a system built on top of Kubernetes for running batch workloads. With Armada as middleware for batch, Kubernetes can be a common substrate for batch and service workloads. Armada is used in production and can run millions of jobs per day across tens of thousands of nodes.
Armada addresses the following limitations of Kubernetes:
- Scaling a single Kubernetes cluster beyond a certain size is challenging. Hence, Armada is designed to schedule jobs effectively across many Kubernetes clusters, allowing it to manage many thousands of nodes.
- Achieving very high throughput using the in-cluster storage backend, etcd, is challenging. Hence, Armada performs queueing and scheduling out-of-cluster using a specialized storage layer. This allows Armada to maintain queues composed of millions of jobs.
- The default kube-scheduler is not suitable for batch. Instead, Armada includes a novel multi-Kubernetes-cluster scheduler with support for important batch scheduling features, such as:
  - Fair queuing and scheduling across multiple users, based on dominant resource fairness (see the sketch after this list).
  - Resource and job scheduling rate limits.
  - Gang scheduling, i.e., atomically scheduling sets of related jobs.
  - Job preemption, both to run urgent jobs in a timely fashion and to balance resource allocation between users.
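
Dominant resource fairness ranks each user's queue by that user's largest share of any single resource (their dominant share) and gives the next scheduling opportunity to the queue with the smallest such share. The snippet below is a minimal, illustrative sketch of that idea; the type and function names are ours, not Armada's API, and the real scheduler layers rate limits, gang constraints and preemption on top of this.

```go
package main

import "fmt"

// resources maps a resource name (e.g. "cpu", "memory") to a quantity.
type resources map[string]float64

// dominantShare returns the user's largest share of any single resource,
// i.e. the maximum over r of allocated[r] / capacity[r].
func dominantShare(allocated, capacity resources) float64 {
	share := 0.0
	for r, c := range capacity {
		if c > 0 && allocated[r]/c > share {
			share = allocated[r] / c
		}
	}
	return share
}

func main() {
	capacity := resources{"cpu": 1000, "memory": 4000} // total capacity, arbitrary units
	alice := resources{"cpu": 300, "memory": 200}      // CPU-heavy user
	bob := resources{"cpu": 100, "memory": 1600}       // memory-heavy user

	// Under dominant resource fairness, the next scheduling opportunity goes
	// to whichever queue currently has the smallest dominant share.
	fmt.Println("alice:", dominantShare(alice, capacity)) // 0.30 (cpu-dominant)
	fmt.Println("bob:  ", dominantShare(bob, capacity))   // 0.40 (memory-dominant)
	// alice has the smaller dominant share, so her queue is considered first.
}
```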
Armada also provides features to help manage large compute clusters effectively, including:
- Detailed analytics exposed via Prometheus showing how the system behaves and how resources are allocated.
- Automatically removing nodes exhibiting high failure rates from consideration for scheduling.
- A mechanism to earmark nodes for a particular set of jobs, while still allowing other jobs to use them when they are not needed for that purpose.
Armada is designed with the enterprise in mind; all components are secure and highly available.
Armada is a CNCF Sandbox project and is used in production at G-Research.
For an overview of Armada, see the following videos:
- Armada - high-throughput batch scheduling
- Building Armada - Running Batch Jobs at Massive Scale on Kubernetes
The Armada project adheres to the CNCF Code of Conduct.
For documentation, see the following:
- System overview
- Scheduler
- User guide
- Quickstart
- Development guide
- Release notes/Version history
- API Documentation
We expect readers of the documentation to have a basic understanding of Docker and Kubernetes; see their respective documentation for an introduction.
Thank you for considering contributing to Armada! We want everyone to feel they can contribute to the Armada project. Your contributions are valuable, whether you are fixing a bug, implementing a new feature, improving documentation, or suggesting enhancements, and we appreciate the time and effort you put into making this project better for everyone. For more information about contributing to Armada, see CONTRIBUTING.md, and please read CODE_OF_CONDUCT.md before contributing.
If you are interested in discussing Armada, you can find us on