Description
- `breaker.search.cancel_request` - This should cancel the search request across all nodes if a circuit breaker trips due to a search or aggregation. Currently, if one node trips a circuit breaker, the request continues to run on the other nodes. If a coordinating-only node is in use, it will still continue to receive responses from the other nodes. By sending a cancel request to all other nodes involved in the request, the node could be saved in the event that many small responses would push it over the limit. For example: 75 nodes, each sending a response in the neighborhood of 1.9 GB; the circuit breaker trips on one request at 2.1 GB, but another 74 nodes are still sending 1.9 GB each, which could be cancelled, thereby saving the node from going unresponsive until all requests finish.
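To put the scenario above in rough numbers, here is a back-of-the-envelope sketch. The 75-node and 1.9 GB figures are the hypothetical ones from the example, not real measurements:

```python
# Back-of-the-envelope sketch of the cancellation scenario described above.
# All figures are hypothetical, taken from the example in this issue.

nodes = 75           # data nodes responding to the search
response_gb = 1.9    # approximate size of each node's response
tripped_on = 1       # the breaker trips while handling one node's response

# Without cross-node cancellation, the remaining nodes keep sending anyway.
still_inbound_gb = (nodes - tripped_on) * response_gb
print(f"{still_inbound_gb:.1f} GB still headed for the coordinating node")
```

Cancelling those 74 in-flight responses is what would keep the coordinating node from going unresponsive.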
- `breaker.search.aggregation_memory.limit` - A limit on aggregation memory used. This should work better than `max_buckets`, as it would be more dynamic.
- `breaker.search.calculation.memory_used` - A limit on calculation memory used (e.g. unique counts). This would add the ability to limit memory on a per-request basis, but only for calculations being performed by that request. Queries that perform unique counts on buckets of buckets over large data sets can be node killers.
- `breaker.search.request_size_limit` - A limit on the total size of the response to a search request (this should apply at the coordinating node, and should also cancel the rest of the request). See "Query DSL: Terms Filter" in #1 above.
- `breaker.search.request.nested_aggs_limit` - A circuit breaker for nested aggregations. Instead of allowing a user to nest aggregations 10+ levels deep, this could be tunable and shut the request down before it ever hits a shard. Admins could set a limit on the number of nested aggregation levels.
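Taken together, the proposal might look something like the following `elasticsearch.yml` fragment. To be clear, every setting name and value here is hypothetical (none of these exist today); this only sketches how the knobs described above could be exposed:

```yaml
# Hypothetical settings only -- sketching the proposals in this issue.
breaker.search.cancel_request: true            # cancel on all nodes when any node trips
breaker.search.aggregation_memory.limit: 30%   # per-request aggregation memory cap
breaker.search.calculation.memory_used: 10%    # per-request calculation (unique counts) cap
breaker.search.request_size_limit: 2gb         # total response size at the coordinating node
breaker.search.request.nested_aggs_limit: 5    # max depth of nested aggregations
```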
I know this issue contains a list of items, but I believe the list should be kept together: much of the work is too closely related, and splitting it up would cause more trouble than it would save.