docs/versioned/eventing/configuration/keda-configuration.md
## Configure KEDA scaling globally
To configure the KEDA scaling behaviour globally for all resources, edit the `config-kafka-autoscaler` ConfigMap in the `knative-eventing` namespace.
Note that the `min-scale` and `max-scale` parameters both refer to the number of Kafka consumers, not the number of dispatcher pods. The number of dispatcher pods in the StatefulSet is determined by `scale / POD_CAPACITY`, where `POD_CAPACITY` is an environment variable you can configure on the `kafka-controller` deployment, and defaults to `20`.
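For illustration, raising `POD_CAPACITY` could look like the following excerpt of the `kafka-controller` Deployment spec. The container name shown here is an assumption; verify the actual Deployment in your installation:

```yaml
# Hypothetical excerpt of the kafka-controller Deployment
# (container name is an assumption; verify in your cluster).
spec:
  template:
    spec:
      containers:
        - name: controller
          env:
            - name: POD_CAPACITY
              value: "40"   # each dispatcher pod now hosts up to 40 consumers
```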
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-kafka-autoscaler
  namespace: knative-eventing
data:
  # What autoscaling class should be used. Can either be keda.autoscaling.knative.dev or disabled.
  class: keda.autoscaling.knative.dev
  # The period in seconds the autoscaler waits before scaling down
  cooldown-period: "30"
  # The lag that is used for scaling (1<->N)
  lag-threshold: "100"
  # The maximum number of replicas to scale up to
  max-scale: "50"
  # The minimum number of replicas to scale down to
  min-scale: "0"
  # The interval in seconds the autoscaler uses to poll metrics in order to inform its scaling decisions
  polling-interval: "10"
```
## Understanding KEDA Scaling Patterns
KEDA uses different lag thresholds to determine when to scale your Kafka components:
### Scaling from 0 to 1 (Activation)
- **Trigger**: When consumer lag exceeds the `activation-lag-threshold` (default: 0)
- **Behavior**: KEDA activates the first replica when any messages are waiting in the Kafka queue
- **Use case**: Quick response to incoming events when the system is idle
### Scaling from 1 to N (Scale Up)
- **Trigger**: When consumer lag exceeds the `lag-threshold` (default: 100)
- **Behavior**: KEDA adds more consumer replicas based on the lag-to-threshold ratio
### Scaling Down

- **Trigger**: When consumer lag falls below thresholds
- **Behavior**: KEDA waits for the `cooldown-period` (default: 30 seconds) before reducing replicas
- **Scale to zero**: Occurs when lag is below `activation-lag-threshold` for the duration of the cooldown period
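Taken together, the steady-state scale KEDA requests is roughly the current lag divided by `lag-threshold`, rounded up and clamped between `min-scale` and `max-scale`. The shell sketch below illustrates that arithmetic with the default thresholds and a hypothetical lag value; it is an approximation for intuition, not KEDA's actual implementation:

```shell
# Illustrative only: approximate the consumer count KEDA would request.
# desired = ceil(lag / lag_threshold), clamped to [min_scale, max_scale].
lag=450             # hypothetical total consumer lag
lag_threshold=100   # default lag-threshold
min_scale=0
max_scale=50

desired=$(( (lag + lag_threshold - 1) / lag_threshold ))   # ceiling division
if [ "$desired" -lt "$min_scale" ]; then desired=$min_scale; fi
if [ "$desired" -gt "$max_scale" ]; then desired=$max_scale; fi
echo "desired consumers: $desired"
```

With `lag=450` this prints `desired consumers: 5`; as lag falls below the threshold, the requested count drops toward `min-scale`.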
### StatefulSet Pod Calculation
Each resource type (KafkaSources, Triggers, etc.) has its own dispatcher StatefulSet. Within each StatefulSet, the number of dispatcher pods is calculated as `ceil(scale / POD_CAPACITY)`, where `POD_CAPACITY` defaults to 20, meaning each dispatcher pod can handle up to 20 Kafka consumers.
**Important**: Each resource (KafkaSource, Trigger, Subscription) creates its own consumer group, and KEDA scales the number of consumers within that consumer group. All consumers for a given resource type are distributed across the dispatcher pods in that type's StatefulSet.
To maintain a **fixed minimum number of StatefulSet pods** for a resource type, calculate the total number of consumers needed. For example:

- 2 Triggers each with `min-scale: "40"` = 80 total consumers
- With default `POD_CAPACITY: 20`, this creates `ceil(80/20) = 4` dispatcher pods for the Trigger StatefulSet
- All 80 consumers are distributed across these 4 pods
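The bullet arithmetic above is plain ceiling division; as a quick illustrative check in shell:

```shell
# pods = ceil(total_consumers / POD_CAPACITY) via integer arithmetic.
total_consumers=80   # 2 Triggers x min-scale "40"
pod_capacity=20      # default POD_CAPACITY

pods=$(( (total_consumers + pod_capacity - 1) / pod_capacity ))
echo "dispatcher pods: $pods"
```

This prints `dispatcher pods: 4`, matching the example above.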
## Configure Autoscaling for a Resource
If you want to customize how KEDA scales a KafkaSource, trigger, or subscription, you can set annotations on the resource.
Note that the `min-scale` and `max-scale` parameters both refer to the number of Kafka consumers, not the number of dispatcher pods. The number of dispatcher pods in the StatefulSet is determined by `scale / POD_CAPACITY`, where `POD_CAPACITY` is an environment variable you can configure on the `kafka-controller` deployment, and defaults to `20`.
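As a sketch of what such annotations can look like on a Trigger. The annotation keys below mirror the `config-kafka-autoscaler` setting names but are assumptions for illustration; verify the exact keys against your Knative Kafka broker release:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: my-trigger   # illustrative name
  annotations:
    # Assumed annotation keys; check your release's documentation.
    autoscaling.eventing.knative.dev/class: keda.autoscaling.knative.dev
    autoscaling.eventing.knative.dev/min-scale: "5"
    autoscaling.eventing.knative.dev/max-scale: "50"
```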
When KEDA is disabled (either globally or for specific resources), you can manually control the number of StatefulSet pods by scaling the consumer group resources directly.
Each Kafka resource creates a corresponding `ConsumerGroup` resource in the same namespace that controls the number of consumers. You can manually scale these to achieve your desired pod count:
```bash
# List all consumer groups in your workload namespace
kubectl get consumergroups -n <your-namespace>

# Scale a specific consumer group to the desired consumer count
kubectl scale consumergroup <consumergroup-name> -n <your-namespace> --replicas=<count>
```

**Note**: Manual scaling persists until you re-enable KEDA autoscaling or change the scale again. The StatefulSet automatically adjusts its pod count based on the consumer group's replica count.