[Feature Request] [RFC] Grouping similar Top N Queries by Latency and Resource Usage #13357
Comments
@deshsidd Thanks for creating the RFC! This feature will be a super useful extension to the current top n queries feature!
How would you feel about renaming the title/scope of the RFC to something like “Grouping similar top queries together in the top n queries feature”? Because, based on the RFC, that is what we are really trying to do. Also, the grouping will be about more than just “resource usage”; it can cover other metric types as well (like latency and volume) - arguably, “latency” shouldn’t be considered a type of “resource”.
It would be great if you can also provide the original query for this example shape, so that we can compare them side by side.
As part of the normalization and shape calculation, how would you feel about using hashing instead of the magic
Seems like you are suggesting we create a separate Priority Queue to keep track of the shapes along with the existing top n queries? IMO that would be a duplicated effort and will potentially have a negative impact on memory usage. What do you think about “extending the current processor and top n features to add a grouping mechanism” instead of making a whole new processor? In that case the user can either use the “vanilla” top n queries, or they can opt in to grouping by certain attributes - and shape can simply be one of the attributes.
We don’t need to propose new APIs IMO. The user just needs to make certain configuration changes. For example, if they want to use “top n queries with grouping”, they can do something like PUT _cluster/settings -H 'Content-Type: application/json' -d'
{
"persistent" : {
"search.insights.top_queries.latency.enabled" : "true",
"search.insights.top_queries.latency.window_size" : "1m",
"search.insights.top_queries.latency.top_n_size" : 20,
"search.insights.top_queries.latency.group_similar_queries" : true,
// We only have "group by shape" now
"search.insights.top_queries.latency.group_by" : "shape",
// OR this if we don't want to expose the internal "shape" concept
// "search.insights.top_queries.latency.group_granularity" : "low",
}
}'
Thanks @deshsidd for writing this proposal. Most of it looks good, and storing hashes in memory is really nifty. Should prevent unnecessary memory consumption. Some considerations below:
We need to be careful while doing the normalization, as the execution/results might be different depending on the sort order. Probably we can compare the execution latency/resource consumption of the current query with the average metrics for the shape so far (count_of_queries, total_latency_across_all_queries). Ideally it should not be too far off if it is a variation of the same shape?
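A tiny sketch of that running-average check (illustrative only; the class name, method names, and tolerance factor are my own assumptions, not part of the proposal):

```java
// Illustrative only: decide whether a query's latency is "not too far off"
// the running average for its shape before treating it as a variation of that shape.
public class ShapeAverageCheck {
    private long count;
    private long totalLatencyMillis;

    boolean looksLikeSameShape(long latencyMillis, double toleranceFactor) {
        if (count == 0) {
            return true; // first sample observed for this shape
        }
        double average = (double) totalLatencyMillis / count;
        return latencyMillis <= average * toleranceFactor;
    }

    void record(long latencyMillis) {
        count++;
        totalLatencyMillis += latencyMillis;
    }

    public static void main(String[] args) {
        ShapeAverageCheck stats = new ShapeAverageCheck();
        stats.record(120);
        stats.record(130);
        System.out.println(stats.looksLikeSameShape(140, 2.0));   // true: close to the ~125 ms average
        System.out.println(stats.looksLikeSameShape(5_000, 2.0)); // false: probably not a variation of this shape
    }
}
```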
I will probably call these parameters as
Definitely +1 to that
When it comes to settings/configurations, I like the principle of “less is more”. We should cut the number of settings to at least half (from 6 -> 3) in my opinion.
I'd like to recommend that grouping similar queries should be the default behavior. Otherwise the TopN feature is likely to capture "the same" query for all N, since most search requests are programmatically generated. If there's one dashboard visualization powered by an expensive query (with a relative time range), some N refreshes of that visualization will dominate the top N. Maybe we can have a setting to turn it off, but it's almost always going to be a good thing.

It's probably also worth noting that "query shape" in this context really just refers to deduplicating with a (slightly) relaxed definition of equality. If we come up with the right definition (one that reasonably matches what a human would consider "equivalent queries"), we might not even need a setting. I suppose the counterexample would be cases where a normally fast query is slow because of a stop-the-world garbage collection, which would probably only show up in the non-deduped output. As a counter-counterargument, though, capturing that query doesn't really give us any insight (since it's not a slow query in general -- it's just an unlucky victim of garbage collection and timing).
I really love this idea. We could make deduplication default to true in top n queries, and start with a setting to turn it off until we have a reasonable behavior defined for equivalent queries. @deshsidd what do you think?
Thanks for the feedback @peternied @ansjcy @jainankitk @msfroh
Agreed. Provided the query and the subsequent query shape in the edit above.
These are not hashed/obfuscated fields but are sample field names. Changed the query and query shape to real field names above to avoid any confusion. There will be no hashing or obfuscating of field names as part of the above proposal.
Looks like there are 2 different approaches we could potentially take here based on the above discussions.

1. Compute either Top N queries or Top N query groups, but not both. Have a configuration such as:

Pros:

Cons:

2. Compute both the Top N queries and Top N query groups and provide the option in the API to view both.

Pros:

Cons:

Based on the above discussions it looks like we are leaning towards option 1. If the memory and resource usage overhead of option 2 is not much higher, it will provide more flexibility to the user (might need to perform a POC to verify).
In that case it might be beneficial to skip the normalization for multiple sorts and leave them as-is in the original query, since the order of execution matters.
I want to call out that we should not use the word "shape". It's a word that I used when trying to address the problem of collecting anonymous statistics about queries (where we don't care about the specific fields/values being queried, but do care about the types of queries and the complex query structure). In this case, we have the queries themselves -- we're just trying to avoid the situation where the Top N queries are all the "same" query.
Agreed. Keeping the above in mind, I am proposing the following. We can have a dynamic configuration for the grouping of Top N queries:

The type of the setting is an enum and the values could be

We do not need any changes in the Top N queries APIs:

The output of these APIs will vary based on the grouping setting.
Let me know your thoughts!
Made good progress working on implementing the above feature. However, during the implementation I came up with some issues that need to be thought through further.

1. Algorithm to group Top queries does not work as expected

In the algorithm that we used to obtain the Top N query groups, we missed taking into account the fact that we need to update the average latency for a group that was already encountered and exists in the Priority Queue. If the average latency for this group decreases, it might not make it into the Top N query groups, which can leave us with incorrect results.

Solutions:
b. Get Top N groups at the end of the window from the Hashmap
c. Get Top N groups on a best-effort basis (Proposed approach)

2. Current implementation of Top N does not support updating Top N records in the Priority Queue

Currently, we have a separate Priority Queue for tracking Top N requests by latency, CPU and memory. If Top N groups is enabled, we will need to change the current approach and add ALL the records to the LinkedBlockingQueue, processing the Top N groups while updating the respective Priority Queues. This will enable us to update the average latency for a previously encountered group that already exists in the Top N query groups priority queue. The new approach might cause additional overhead when grouping is enabled, since we are now processing ALL the records for each MetricType (latency, CPU, memory). Previously, we were filtering out most of the records and retaining only the top records as described above.

3. Grouping by individual metric type might not be possible

In the RFC, we proposed the following configurations that could be used to group Top N queries:

However, based on the new approach proposed in 2, we will have to enable group_by for all metric types with the current implementation. This is because if group_by is enabled, we are proposing to add all query records to the LinkedBlockingQueue, not only the records that will make it to the Priority Queue. We need to do this for ALL metric types and cannot do it for individual metric types with the current implementation.

Another aspect to consider here is how we will display these groups on the Query Insights dashboard that is currently being worked on. We need to get consensus on the above before we continue with the implementation, and carefully think through some of these aspects. @msfroh @jainankitk @getsaurabh02 @ansjcy
@deshsidd can you please explain what the issue here is, specific to grouping? I am assuming the comparison logic for the priority queue should follow the same logic as Top N, assuming the mapping of groups to individual queries is kept outside in a separate map.
The challenging part for grouping is that the key we use to sort items in the priority queue can change constantly, whereas in the normal top n queries the keys always remain the same. I did some thinking around this and I believe those challenges can still be solved with good time complexity. We need to tweak the data structures a little bit to add another priority queue and a hashmap (so we have 1 min heap, 1 max heap and 1 hashmap).
Refer to the following flow diagram for when / how to update each data structure.

Also, if we continue using a normal priority queue, updating / deleting can take O(n) in a worst case scenario, which is not acceptable. But an indexed priority queue can support O(log n) update and delete. We should use that for this purpose. Please refer to https://algs4.cs.princeton.edu/24pq/IndexMinPQ.java.html.
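As a rough illustration of the layout described above (one hashmap for group lookup, one min heap for the current top N, one max heap for the remaining groups) - this is my own sketch, not code from the project; the class and method names are hypothetical, and a real implementation would use an indexed priority queue so the remove/update steps are O(log n) instead of the O(n) of java.util.PriorityQueue:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.PriorityQueue;

// Illustrative only: tracking top N query groups when a group's sort key
// (its average latency) changes as new records arrive.
public class TwoHeapGroupTracker {
    record GroupStats(long count, long totalLatency) {
        double average() { return (double) totalLatency / count; }
    }

    private final int topN;
    private final Map<Long, GroupStats> statsByHash = new HashMap<>(); // O(1) group lookup
    private final PriorityQueue<Long> topGroups;   // min heap: root = smallest average inside the top N
    private final PriorityQueue<Long> otherGroups; // max heap: root = best candidate for promotion

    TwoHeapGroupTracker(int topN) {
        this.topN = topN;
        this.topGroups = new PriorityQueue<>((a, b) -> Double.compare(avg(a), avg(b)));
        this.otherGroups = new PriorityQueue<>((a, b) -> Double.compare(avg(b), avg(a)));
    }

    private double avg(long hash) { return statsByHash.get(hash).average(); }

    void onRecord(long shapeHash, long latencyMillis) {
        // Take the group out of whichever heap currently holds it before mutating its stats.
        // Plain java.util.PriorityQueue makes this O(n); an indexed PQ makes it O(log n).
        if (!topGroups.remove(shapeHash)) {
            otherGroups.remove(shapeHash);
        }

        GroupStats old = statsByHash.getOrDefault(shapeHash, new GroupStats(0, 0));
        statsByHash.put(shapeHash, new GroupStats(old.count() + 1, old.totalLatency() + latencyMillis));

        // Re-insert into the top N heap and demote its smallest entry if over capacity.
        topGroups.offer(shapeHash);
        while (topGroups.size() > topN) {
            otherGroups.offer(topGroups.poll());
        }
        // Restore the invariant min(topGroups) >= max(otherGroups) after the key change.
        while (!otherGroups.isEmpty() && !topGroups.isEmpty()
                && avg(otherGroups.peek()) > avg(topGroups.peek())) {
            long demoted = topGroups.poll();
            topGroups.offer(otherGroups.poll());
            otherGroups.offer(demoted);
        }
    }

    public static void main(String[] args) {
        TwoHeapGroupTracker tracker = new TwoHeapGroupTracker(2);
        tracker.onRecord(1L, 100);
        tracker.onRecord(2L, 300);
        tracker.onRecord(3L, 80);  // average 80: stays outside the top 2
        tracker.onRecord(1L, 10);  // group 1's average drops to 55, so group 3 is promoted
        System.out.println(tracker.topGroups); // prints [3, 2] (heap order, not sorted)
    }
}
```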
Thanks for the responses. Based on my initial comment please note the following:
My proposal would be to go with the approach of bypassing adding the records to the buffer if grouping is enabled, and do some benchmarking to figure out the performance and storage impact.
@ansjcy, regarding the approach you mentioned, it seems overly complex, might not be required, and might still be less performant. What do you think of the approach mentioned above in 1, where we update the average latency of the record in the priority queue (the PQ will have at most 100 elements, so this should be quick)? If the average latency of a query group currently in the Top N (Priority Queue) decreases, check whether it is lower than the lowest entry in the PQ; if so, delete the group from the Top N groups, else keep it in the Priority Queue. We can also consider using an indexed PQ for better time complexity here.
In terms of time complexity, I don't think the proposed algorithm will be less performant, since you have a hashmap anyway to keep all the group info. I'm also okay with the "best effort" approach, but correctness will be a big concern. Is it possible to test how accurate this method will be? If it is 90%+ it should be good.
Right, it looks like the time complexity will be pretty much the same in both approaches (the min and max heap approach and the best-effort approach). The storage impact will be higher with the 2 heap approach; however, we are already storing all the record group objects in the hashmap - storing them in the PQ would essentially be pointers to the original objects and not take up much more space. I was also thinking about the correctness of the best-effort approach: it might be hard to measure, and we can definitely get an incorrect Top N with this approach. Example: consider a scenario where we have N = 5 and 5 query groups currently in the priority queue (with average latencies 5, 4, 3, 2, 1). A new query record comes in which is part of one of the 5 groups. We update the average latency, and since the new average latency is less than the lowest entry in the PQ we remove the group (the PQ is now 5, 4, 3, 1) and the evicted value was 0.9. We now get a new query group with 0.001 average latency, which makes it into the PQ since one slot is available (the PQ is now 5, 4, 3, 1, 0.001). Keeping the above in mind, and the fact that the time and space complexity are very similar in both approaches, let's go with the 2 heap approach and do some benchmarking to figure out the performance and storage impact.
Inspired by the algorithm proposed here, here is the algorithm used in the PR:

Data Structures
Algorithm

=> New group added to the grouping service:
=> Existing group being updated to the grouping service
We can do some benchmarking to check the performance here, and update this to an indexed PQ or put limits on the number of groups to improve the performance if required.
Resolved in the following: opensearch-project/query-insights#66
Is your feature request related to a problem? Please describe
Background
As part of the search query categorization initiative we added support to compute the query shape of incoming search queries and logged (at debug level) the basic query shape. We also added instrumentation on the search path to collect the types of queries, types of aggregations, sort order, etc.
As part of Top N queries by latency, we implemented a priority queue-based in-memory data store, with configurable window size, on the coordinator node, designed to efficiently store the top N queries.
Problem
For Top N queries by latency, we can encounter scenarios where some (or most) of the Top N queries contain duplicate queries. Say the same dashboard query is triggered continuously and happens to be the most expensive query in terms of latency - in this scenario all the Top N queries by latency will likely be spammed by the same query. To overcome such scenarios and to get a more detailed view of the Top N query patterns we are proposing to implement Top N query shapes by resource usage (latency, memory, disk, etc).
Describe the solution you'd like
Design
Query Shape
For every incoming query, we will compute the query shape. The query shape is essentially a lossy transformation of the actual query to help de-duplicate the top N queries. Consider the sample query:

The proposed query shape for the above query will look something like:
We are capturing the shape of the query, sort, aggregations and pipeline aggregations. We capture the sort order, field names, field values and range width (the difference between the upper bound and lower bound of the range). The range width is captured to help us de-duplicate queries with the same range size but a different window - for example, the same dashboard query that is run continuously but for different time ranges.
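Since the sample query and shape from the original post are not reproduced above, here is a hedged, illustrative sketch of the idea (my own toy example, not the RFC's code; the field names, the Clause record, and the buildShape helper are all hypothetical), covering a subset of what is described: query types, field names, sort order and range width:

```java
import java.util.List;
import java.util.StringJoiner;

// Illustrative only: a toy "query shape" builder that keeps query types, field
// names, sort order and range width for a flat bool query.
public class QueryShapeSketch {

    // Hypothetical minimal representation of a leaf clause.
    record Clause(String type, String field, Double rangeWidth) {
        static Clause term(String field) { return new Clause("term", field, null); }
        static Clause range(String field, double width) { return new Clause("range", field, width); }
    }

    static String buildShape(List<Clause> mustClauses, String sortField, String sortOrder) {
        StringJoiner shape = new StringJoiner("\n");
        shape.add("bool");
        for (Clause c : mustClauses) {
            // The range width (upper bound minus lower bound) is kept so the "same"
            // dashboard query over a sliding time window maps to one shape.
            String detail = (c.rangeWidth() == null)
                ? c.type() + " [" + c.field() + "]"
                : c.type() + " [" + c.field() + ", width=" + c.rangeWidth() + "]";
            shape.add("  must: " + detail);
        }
        shape.add("sort: " + sortField + " " + sortOrder);
        return shape.toString();
    }

    public static void main(String[] args) {
        // Two requests that differ only in their absolute time range produce the
        // same shape, because only the width of the range is retained.
        String morning = buildShape(
            List.of(Clause.term("status"), Clause.range("@timestamp", 900_000)), "@timestamp", "desc");
        String evening = buildShape(
            List.of(Clause.term("status"), Clause.range("@timestamp", 900_000)), "@timestamp", "desc");
        System.out.println(morning);
        System.out.println("Same shape: " + morning.equals(evening)); // true
    }
}
```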
Normalize Query Shape
We need to normalize the query shape to make sure that two identical queries always map to the exact same query shape, including the ordering of the clauses, the ordering of the queries within the clauses, the ordering of the aggregations, etc. The following normalizations will be required:
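The concrete list of normalizations from the original post is not reproduced above; as one hedged illustration (my own example, not the RFC's list), the clause order inside a bool section could be normalized by sorting the per-clause shape strings:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Illustrative only: normalize clause order so that logically identical bool
// queries written with clauses in a different order map to the same shape.
public class ShapeNormalizationSketch {
    static List<String> normalizeClauseShapes(List<String> clauseShapes) {
        List<String> sorted = new ArrayList<>(clauseShapes);
        Collections.sort(sorted); // deterministic ordering, independent of how the user wrote the query
        return sorted;
    }

    public static void main(String[] args) {
        List<String> a = List.of("term [status]", "range [@timestamp, width=900000]");
        List<String> b = List.of("range [@timestamp, width=900000]", "term [status]");
        System.out.println(normalizeClauseShapes(a).equals(normalizeClauseShapes(b))); // true
    }
}
```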
Top N Query Shapes

We will create a new processor in the Query Insights framework to keep track of the Top N query shapes in the current window, similar to what we do for Top N Queries by latency. For every window, we will keep track of a mapping from the query shape hash to (count_of_queries, total_latency_across_all_queries). This will enable us to calculate the average latency for the queries. We will also have a priority queue to keep track of the Top N queries for the window.
The algorithm will be as follows:
After every window, we will clear the hash-to-(count, latency) mapping and the Priority Queue. We will export the Top N query shape data for the previous window to a local index, similar to what we are aiming to do for the Top N queries by latency.
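The numbered algorithm steps from the original post are not reproduced above; as a minimal sketch of the per-window bookkeeping just described (shape hash -> (count, total latency), with the top N computed and exported when the window closes) - my own illustration with hypothetical names, which for simplicity selects the top N from the hashmap at window close rather than maintaining the priority queue incrementally (the incremental-update complications are discussed in the comments above):

```java
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative only: per-window bookkeeping of shape hash -> (count, total latency),
// with the top N shapes by average latency computed when the window closes.
public class TopShapesWindow {
    record ShapeStats(long count, long totalLatencyMillis) {
        double averageLatency() { return (double) totalLatencyMillis / count; }
    }

    private final Map<Long, ShapeStats> statsByShapeHash = new HashMap<>();

    void onQueryCompleted(long shapeHash, long latencyMillis) {
        statsByShapeHash.merge(
            shapeHash,
            new ShapeStats(1, latencyMillis),
            (a, b) -> new ShapeStats(a.count() + b.count(), a.totalLatencyMillis() + b.totalLatencyMillis()));
    }

    List<Map.Entry<Long, ShapeStats>> closeWindow(int topN) {
        List<Map.Entry<Long, ShapeStats>> top = statsByShapeHash.entrySet().stream()
            .sorted(Comparator.comparingDouble(
                (Map.Entry<Long, ShapeStats> e) -> e.getValue().averageLatency()).reversed())
            .limit(topN)
            .map(e -> Map.entry(e.getKey(), e.getValue()))
            .toList();
        // The top shapes for the previous window would be exported to a local index here,
        // similar to the existing top N queries by latency export (not shown).
        statsByShapeHash.clear();
        return top;
    }

    public static void main(String[] args) {
        TopShapesWindow window = new TopShapesWindow();
        window.onQueryCompleted(42L, 120);
        window.onQueryCompleted(42L, 180);
        window.onQueryCompleted(7L, 900);
        window.closeWindow(1).forEach(e ->
            System.out.println("shape=" + e.getKey() + " avg=" + e.getValue().averageLatency())); // shape=7 avg=900.0
    }
}
```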
APIs
We will be extending the Top N queries by resource usage APIs as part of query shape insights.

Existing APIs:
Additions:
Related component
Search
Describe alternatives you've considered
This approach might end up grouping queries with varying latency characteristics into the same group. An example is the same query on 2 different fields exhibiting different latency due to the cardinality differences between the 2 fields (one high cardinality, the other low cardinality).
Please let me know your thoughts on the above proposal!