[Concurrent Segment Search] shard_min_doc_count and shard_size should not be evaluated at the slice level #8860
High Level Overview:

High Level Solutions: I have a few high level solutions in mind. The big problem is that the slice-level evaluation affects both parameters, so essentially there are a few different ways we can address both `shard_min_doc_count` and `shard_size`.

Currently my recommendation is [3].
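A minimal, self-contained sketch of the underlying problem (illustrative Java, not OpenSearch source; the `topN` helper and the per-slice counts are invented for the example): when each slice is cut to `shard_size` before the shard-level merge, a term that is frequent shard-wide but never locally top-ranked can be lost, whereas a single shard-level cut would keep it.

```java
import java.util.*;
import java.util.stream.*;

// Standalone sketch (not OpenSearch source): shows how applying a
// slice-level cutoff can lose a term that a shard-level cutoff would keep.
public class SliceLevelCutoffSketch {

    // Keep the `limit` highest-count terms, mimicking the priority queue
    // that today is sized by shard_size at the slice level.
    static Map<String, Long> topN(Map<String, Long> counts, int limit) {
        return counts.entrySet().stream()
            .sorted(Map.Entry.<String, Long>comparingByValue().reversed())
            .limit(limit)
            .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
    }

    public static void main(String[] args) {
        int shardSize = 2; // deliberately tiny to make the effect visible

        // Per-slice term counts: "x" is never locally top-2, but it is the
        // most frequent term shard-wide (3 occurrences).
        List<Map<String, Long>> slices = List.of(
            Map.of("a", 2L, "b", 2L, "x", 1L),
            Map.of("c", 2L, "d", 2L, "x", 1L),
            Map.of("e", 2L, "f", 2L, "x", 1L));

        // Concurrent path today: cut each slice to shardSize, then merge.
        Map<String, Long> concurrentMerged = new HashMap<>();
        for (Map<String, Long> slice : slices) {
            topN(slice, shardSize).forEach((t, c) -> concurrentMerged.merge(t, c, Long::sum));
        }

        // Sequential path: merge everything first, cut once at the shard level.
        Map<String, Long> shardCounts = new HashMap<>();
        slices.forEach(s -> s.forEach((t, c) -> shardCounts.merge(t, c, Long::sum)));

        System.out.println("slice-level cut, then merge: " + topN(concurrentMerged, shardSize));
        System.out.println("merge, then shard-level cut: " + topN(shardCounts, shardSize));
        // "x" (count 3) survives only the second path.
    }
}
```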
@jed326 Thanks for the analysis.
I didn't understand what you mean by "it could grow unbounded" — for the sequential case as well, the same shard-level bound applies. I think for both of the shard-level parameters, if we ignore them at the slice level and then apply them as part of the reduce, it should work as expected. What are the challenges?
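A rough sketch of what that proposal could look like (assumed names and shapes, not the actual OpenSearch reduce path): slices collect with no cutoff, and `shard_min_doc_count` / `shard_size` are applied exactly once during the shard-level reduce, before results go back to the coordinator.

```java
import java.util.*;
import java.util.stream.*;

// Sketch of the proposal above (illustrative, not the OpenSearch
// implementation): slices collect without shard_size /
// shard_min_doc_count, and both are applied only in the shard-level reduce.
final class ShardLevelReduceSketch {

    record Bucket(String term, long docCount) {}

    static List<Bucket> reduceSlices(List<List<Bucket>> sliceResults,
                                     long shardMinDocCount,
                                     int shardSize) {
        // 1. Merge slice buckets by term, summing doc counts.
        Map<String, Long> merged = new HashMap<>();
        for (List<Bucket> slice : sliceResults) {
            for (Bucket b : slice) {
                merged.merge(b.term(), b.docCount(), Long::sum);
            }
        }
        // 2. Only now apply the shard-level parameters.
        return merged.entrySet().stream()
            .filter(e -> e.getValue() >= shardMinDocCount)
            .map(e -> new Bucket(e.getKey(), e.getValue()))
            .sorted(Comparator.comparingLong(Bucket::docCount).reversed())
            .limit(shardSize)
            .toList();
    }
}
```

The tradeoff in this shape is that the per-slice collectors no longer have any cutoff, which is exactly the growth concern raised in the next comment.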
You are right that at the shard level the same bounds would still be applied, so from the coordinator's perspective concurrent and non-concurrent search would look the same. However, the unbounded growth I am concerned about here is the size of the priority queue at the slice level, which is also the number of buckets being returned at the slice level. Today this is bounded by the `shard_size` parameter.
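Some back-of-the-envelope arithmetic for that concern (all concrete numbers below are assumed for illustration): with the slice-level cut, the slice queues hold at most roughly `slices * shard_size` buckets in total, while without it each queue can grow to the number of distinct terms its slice sees, which for a high-cardinality field is orders of magnitude larger.

```java
// Back-of-the-envelope bound (assumed numbers, for illustration only):
// with the slice-level cut, memory is O(slices * shard_size); without it,
// each slice queue can grow to the number of distinct terms it sees.
public class QueueBoundSketch {
    public static void main(String[] args) {
        int size = 10;
        int shardSize = (int) (size * 1.5 + 10); // 25; mirrors the documented default shard_size = (size * 1.5 + 10)
        int slices = 8;
        long distinctTermsPerSlice = 5_000_000L; // hypothetical high-cardinality field

        System.out.println("bounded (today):      " + (long) slices * shardSize + " buckets max");
        System.out.println("unbounded (proposed): up to " + slices * distinctTermsPerSlice + " buckets");
    }
}
```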
Subtask for #7357 to focus on the test failures related to the `shard_size` parameter.