The replica count of target pods fluctuates when fallback is triggered in scaling-modifier #5666
Comments
Definitively it shouldn't happen and fallback should always be applied. Could you take a look?
@SpiritZhou good catch, I also believe this is the root of the problem.
There is another bug in the doFallback(). The […]
Yes! We should use the […]
If the user sets a […]
Could you please elaborate?
Hmm, probably some glitch in the logic. The scaler (no matter whether it's composite or just one) should, in case of errors, report the errors to the HPA normally, and once the failure threshold is reached it should report the fallback number.
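A minimal sketch of the behavior described above, assuming hypothetical names (decideReplicas, failures, threshold, fallbackValue); this is not KEDA's actual code, just an illustration of the intended reporting logic:

```go
package fallback

import "fmt"

// decideReplicas picks the value reported to the HPA for one reconciliation.
// Below the failure threshold, the scaler error is propagated to the HPA as
// usual; at or above the threshold, the fallback value is reported on every
// cycle, so the replica count stays pinned instead of fluctuating.
func decideReplicas(metricValue float64, metricErr error, failures, threshold int, fallbackValue float64) (float64, error) {
	if metricErr == nil {
		// Healthy path: report the real metric value.
		return metricValue, nil
	}
	if failures < threshold {
		// Still under the threshold: surface the error normally.
		return 0, fmt.Errorf("scaler failed (%d/%d consecutive failures): %w", failures, threshold, metricErr)
	}
	// Threshold reached: keep returning the fallback value consistently.
	return fallbackValue, nil
}
```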
What errors should be reported to the HPA when one of the composite scalers encounters an error? Currently it reports the normal one.
I think it should report a new error stating that we weren't able to calculate the composite metric, and then attach the failure from the specific scaler.
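A minimal illustration of that suggestion using standard Go error wrapping; the error text and the errScaler value are hypothetical stand-ins, not KEDA's actual messages:

```go
package main

import (
	"errors"
	"fmt"
)

// errScaler stands in for whatever error the individual scaler returned.
var errScaler = errors.New("prometheus scaler: query timed out")

func main() {
	// Report that the composite metric could not be calculated, while
	// attaching the specific scaler's failure as the wrapped cause.
	err := fmt.Errorf("unable to calculate composite metric from the scalingModifiers formula: %w", errScaler)

	fmt.Println(err)
	fmt.Println(errors.Is(err, errScaler)) // true: the original failure is still inspectable
}
```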
Report
If one of the scalers encounters an error while scaling-modifier is in use, the replica count does not remain stable at the fallback value. Instead, it fluctuates between 1 and the fallback value.
Expected Behavior
The replica count of the target pods stays at the fallback value when a scaler encounters an error.
Actual Behavior
The replica count of the target pods keeps fluctuating between 1 and the fallback value.
Steps to Reproduce the Problem
Logs from KEDA operator
keda-keda-operator-5789f449c4-dprm4-1712480741312929239.log
KEDA Version
2.13.1
Kubernetes Version
1.27
Platform
Other
Scaler Details
No response
Anything else?
No response