Datadog Scaler: Support Multi-Query Results #3423
Comments
I started having a little look, but I'm a Golang noob. I have this so far: https://github.com/dgibbard-cisco/keda/pull/1/files
Hey @dgibbard-cisco, thanks for the suggestion and the patch! I will have a look at that PR in the following days.
@dalgibbard Before starting to review the code, I want to clarify a bit more what you want to do here, from a logic point of view. Can you give an example of how you would use two queries (and an aggregator) to scale an object, and why? Thanks!
@arapulido Yeah, sure thing. The short version is that it allows you to condense multiple API requests into a single API query, which eases strain on Datadog rate limiting. This was one of the main drivers, and stemmed from a recent discussion with Datadog support about how to aggregate multiple queries in our own proprietary autoscaling software, which uses Datadog on the backend for some on-prem things. The longer version is that it also allows you to make multiple queries and then decide how best to use the results; eg, take the average of multiple queries for scaling. You can also add arithmetic operations to support weighting and other weird but powerful things. Essentially, I was expecting to be able to do something like I explained (with some examples) in the possible readme change too: https://github.com/dgibbard-cisco/keda-docs/pull/2 A slightly more real-world example of that would be something like:
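(Purely illustrative; the metric and service names below are just placeholders.)

```
sum:http.requests{service:myservice-a},sum:http.requests{service:myservice-b},sum:http.backlog{service:myservice-a},sum:http.backlog{service:myservice-b}
```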
This would return an API response containing 4 Series elements, each with its own Pointlist, which we sift through as before to collect the latest timestamp/value from each, before passing the values through an avg() or max() operation. NOTE: I haven't tested the code yet, so probably don't burn too much time on a deep review. I mainly just wanted to know that I was on the right track, and that my code isn't complete garbage :) (particularly as I'm new to Golang).
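For the sifting step, a simplified sketch of the idea (not the real datadog-api-client-go types; the shapes and helper names here are made up for illustration) might look like:

```go
package main

import (
	"errors"
	"fmt"
)

// point mimics a Datadog pointlist entry: [timestamp, value].
// Simplified stand-in for the generated client types.
type point [2]float64

// series is a simplified stand-in for a single query's result in the response.
type series struct {
	Pointlist []point
}

// latestValues collects the newest value from each series, skipping any
// series that came back empty.
func latestValues(all []series) ([]float64, error) {
	values := make([]float64, 0, len(all))
	for _, s := range all {
		if len(s.Pointlist) == 0 {
			continue
		}
		last := s.Pointlist[len(s.Pointlist)-1]
		values = append(values, last[1])
	}
	if len(values) == 0 {
		return nil, errors.New("no data points returned for any query")
	}
	return values, nil
}

func main() {
	// Pretend the multi-query response returned two series.
	resp := []series{
		{Pointlist: []point{{1658000000, 10}, {1658000060, 12}}},
		{Pointlist: []point{{1658000000, 7}, {1658000060, 9}}},
	}
	vals, err := latestValues(resp)
	if err != nil {
		panic(err)
	}
	fmt.Println(vals) // [12 9]
}
```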
Proposal
In the Datadog scaler code currently, we check that the number of `Series` returned is exactly 1, and then grab the last (newest) point from the `Pointlist`: https://github.com/kedacore/keda/blob/main/pkg/scalers/datadog_scaler.go#L273

It would be very handy to support multi-query calls (defined in the form of comma-separated queries) in a single call; eg:
sum:http.requests{service:myservice},avg:http.backlog{service:myservice}
This results in the Datadog API returning multiple `Series` objects, each with its own `Pointlist`, so we'd need to iterate through all `Series` items, grab the latest point from each (same as we currently do for one, but for multiple), and then run an aggregation on those; eg: min/max/median/sum/avg -- probably defaulting to "max"?
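A rough sketch of what that aggregation step could look like, treating `aggregator` as a hypothetical new trigger parameter that falls back to "max" when unset or unrecognised:

```go
package main

import (
	"fmt"
	"sort"
)

// aggregate reduces the latest values from each series to a single metric,
// according to a hypothetical "aggregator" trigger parameter.
func aggregate(values []float64, aggregator string) float64 {
	if len(values) == 0 {
		return 0
	}
	switch aggregator {
	case "min":
		min := values[0]
		for _, v := range values[1:] {
			if v < min {
				min = v
			}
		}
		return min
	case "sum", "avg":
		sum := 0.0
		for _, v := range values {
			sum += v
		}
		if aggregator == "avg" {
			return sum / float64(len(values))
		}
		return sum
	case "median":
		sorted := append([]float64(nil), values...)
		sort.Float64s(sorted)
		mid := len(sorted) / 2
		if len(sorted)%2 == 0 {
			return (sorted[mid-1] + sorted[mid]) / 2
		}
		return sorted[mid]
	default: // "max", or anything unrecognised
		max := values[0]
		for _, v := range values[1:] {
			if v > max {
				max = v
			}
		}
		return max
	}
}

func main() {
	latest := []float64{12, 9, 15, 11}
	fmt.Println(aggregate(latest, "avg")) // 11.75
	fmt.Println(aggregate(latest, ""))    // 15 (defaults to max)
}
```

Defaulting to "max" would mean scaling follows the busiest query when no aggregator is specified.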
Use-Case
In places where we currently define multiple Datadog queries for autoscaling a single service, this change would massively reduce the number of API calls (since a multi-query call is only a single API request), and the aggregation in the Datadog scaler would give users good control over how the results are combined (which the Metrics API itself doesn't handle well).
Anything else?
@arapulido 👋
I can probably take a stab at this, but I'm not sure if you had/have plans for this already.