# Kubernetes-based Event Driven Autoscaling
KEDA allows for fine-grained autoscaling (including to/from zero) for event-driven Kubernetes workloads. KEDA serves as a Kubernetes Metrics Server and allows users to define autoscaling rules using a dedicated Kubernetes custom resource definition.

KEDA can run on both the cloud and the edge, integrates natively with Kubernetes components such as the Horizontal Pod Autoscaler, and has no external dependencies.
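For illustration, autoscaling rules are declared through KEDA's `ScaledObject` custom resource. A minimal sketch is shown below; the Deployment name, Kafka addresses, topic, and threshold are placeholder assumptions, not values from this document:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: orders-consumer-scaler      # hypothetical name
spec:
  scaleTargetRef:
    name: orders-consumer           # hypothetical Deployment to scale
  minReplicaCount: 0                # allows scaling to zero when idle
  maxReplicaCount: 10
  triggers:
    - type: kafka
      metadata:
        bootstrapServers: kafka.svc:9092   # hypothetical broker address
        consumerGroup: orders-group        # hypothetical consumer group
        topic: orders                      # hypothetical topic
        lagThreshold: "50"                 # target lag per replica
```

Applying a resource like this (e.g. with `kubectl apply -f scaledobject.yaml`) lets KEDA drive the Horizontal Pod Autoscaler from the event source's backlog instead of CPU or memory.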
We are a Cloud Native Computing Foundation (CNCF) incubation project.
## Table of contents
- Getting started
- Documentation
- Community
- Adopters - Become a listed KEDA user!
- Governance & Policies
- Roadmap
- Releases
- Contributing
## Getting started

- QuickStart - RabbitMQ and Go
- QuickStart - Azure Functions and Queues
- QuickStart - Azure Functions and Kafka on OpenShift 4
- QuickStart - Azure Storage Queue with ScaledJob
You can find several samples for various event sources here.
There are many ways to deploy KEDA, including Helm, Operator Hub, and YAML files.
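As one example, the Helm-based deployment can be sketched as follows; the chart repository and commands match those published in the KEDA documentation, while the release name and namespace are conventional choices, not requirements:

```shell
# Add the KEDA chart repository and refresh the local index.
helm repo add kedacore https://kedacore.github.io/charts
helm repo update

# Install KEDA into its own namespace (created if it does not exist).
helm install keda kedacore/keda --namespace keda --create-namespace
```

Running KEDA in a dedicated namespace keeps the operator and metrics server separate from the workloads it scales.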
## Documentation

Interested to learn more? Head over to keda.sh.
## Community

If you are interested in contributing to KEDA or helping shape its direction, you can join our community meetings.

- Meeting time: Bi-weekly Tues 15:00 UTC (follows US daylight saving time). (Subscribe to Google Agenda | Convert to your timezone)
- Zoom link: https://zoom.us/j/96655859927?pwd=cGxaWWpHOVZSMEZDY3NuWWVIMERtdz09 (Password: keda)
- Meeting agenda: Google Docs

Just want to learn or chat about KEDA? Feel free to join the conversation in #KEDA on the Kubernetes Slack!
## Adopters - Become a listed KEDA user!

We are always happy to list users who run KEDA in production; learn more about it here.
## Governance & Policies

You can learn about the governance of KEDA here.
## Roadmap

We use GitHub issues to build our backlog; a complete overview of all open items and our planning is available here.
## Releases

You can find the latest releases here.
## Contributing

You can find the contributing guide here.

Learn how to build & deploy KEDA locally here.