
Added a paragraph on how Zapier uses KEDA.
rtnpro committed Mar 9, 2022
1 parent 124c678 commit db7739e
Showing 1 changed file with 5 additions and 1 deletion.
@@ -1,6 +1,6 @@
+++
title = "How Zapier uses KEDA"
-date = 2022-02-14
+date = 2022-03-10
author = "Ratnadeep Debnath (Zapier)"
aliases = [
"/blog/how-zapier-uses-keda"
@@ -15,4 +15,8 @@ We do a lot of [blocking I/O](https://medium.com/coderscorner/tale-of-client-ser

Ideally, we would like to scale our workers on both CPU and our backlog of ready messages in RabbitMQ. Unfortunately, Kubernetes’ native HPA does not support scaling based on RabbitMQ queue length out of the box. A potential workaround is to collect RabbitMQ metrics in Prometheus, run a custom metrics server, and configure HPA to use those metrics. However, that is a lot of work, and why reinvent the wheel when there’s KEDA?

We have installed KEDA in our Kubernetes clusters and have begun opting services into it for autoscaling. Our goal is to autoscale our workers not just based on CPU usage, but also based on the number of ready messages in the RabbitMQ queues they consume from.

At Zapier, we use KEDA to autoscale our workers based not just on CPU usage, but also on the number of ready messages in the RabbitMQ queues they consume from. We monitor our KEDA setup in Grafana using Prometheus metrics, and use Prometheus rules to alert on errors. Autoscaling our workers with KEDA has significantly reduced delays in Zap processing caused by blocking I/O calls. We are gradually updating apps at Zapier to use KEDA.
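A minimal `ScaledObject` along these lines might combine a CPU trigger with a RabbitMQ queue-length trigger. This is an illustrative sketch, not Zapier's actual configuration: the resource names (`zap-worker`, `zap-tasks`, `rabbitmq-auth`) and the replica and threshold values are assumptions.

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: zap-worker-scaler
spec:
  scaleTargetRef:
    name: zap-worker            # hypothetical Deployment running the workers
  minReplicaCount: 2
  maxReplicaCount: 30
  triggers:
    # Scale on CPU utilization, as a native HPA would
    - type: cpu
      metricType: Utilization
      metadata:
        value: "75"
    # Scale on the number of ready messages in the queue
    - type: rabbitmq
      metadata:
        protocol: amqp
        queueName: zap-tasks
        mode: QueueLength
        value: "20"             # target ready messages per replica
      authenticationRef:
        name: rabbitmq-auth     # TriggerAuthentication holding the RabbitMQ host URI
```

KEDA reconciles both triggers and scales the Deployment to the higher of the two computed replica counts, so a CPU spike or a growing queue backlog can each drive a scale-up on its own.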

**Read the rest of this [post on how Zapier uses KEDA](https://www.cncf.io/blog/2022/01/21/keda-at-zapier/) on the Cloud Native Computing Foundation blog.**
