Filter out empty top docs results before merging #126385


Merged: 8 commits merged into elastic:main on Apr 10, 2025

Conversation

@javanna (Member) commented Apr 7, 2025

`Lucene.EMPTY_TOP_DOCS` is used to identify empty top docs results. These were previously null results, but did not need to be sent over transport, as incremental reduction was performed only on the data node.

Now it can happen that the coordinating node receives a merge result with empty top docs, which holds nothing of interest for merging, but which can lead to an exception because the type of the empty array does not match the type of the other shards' results, for instance when the query is sorted by field. To resolve this, we filter out empty top docs results before merging.

Closes #126118

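To make the type mismatch concrete, here is a minimal, self-contained Java sketch. The classes below are simplified stand-ins, not the real Lucene `TopDocs`/`TopFieldDocs` types: a shared empty sentinel typed as the base class cannot be stored into an array of the field-sorted subtype, so filtering it out by reference identity before the typed `toArray` call (the same pattern as the fix) avoids the failure.

```java
import java.util.List;

class TopDocsSketch {
    // Stand-in for Lucene's TopDocs base class (hypothetical, simplified).
    static class TopDocs {}

    // Stand-in for TopFieldDocs, the subtype produced by field-sorted queries.
    static class TopFieldDocs extends TopDocs {}

    // Stand-in for Lucene.EMPTY_TOP_DOCS: a shared sentinel of the BASE type.
    static final TopDocs EMPTY_TOP_DOCS = new TopDocs();

    // Filtering by reference identity drops the sentinel before the typed
    // toArray call; without the filter, storing the plain TopDocs sentinel
    // into a TopFieldDocs[] would throw ArrayStoreException at runtime.
    static TopFieldDocs[] filterForMerge(List<TopDocs> results) {
        return results.stream()
                .filter(td -> td != EMPTY_TOP_DOCS)
                .toArray(TopFieldDocs[]::new);
    }

    public static void main(String[] args) {
        List<TopDocs> results = List.of(new TopFieldDocs(), EMPTY_TOP_DOCS, new TopFieldDocs());
        TopFieldDocs[] filtered = filterForMerge(results);
        if (filtered.length != 2) {
            throw new AssertionError("expected 2, got " + filtered.length);
        }
        System.out.println("filtered length = " + filtered.length);
    }
}
```

Note that the filter relies on reference identity (`!=`), which only catches the shared sentinel instance itself; an empty result that went through serialization would be a different object, which is exactly the follow-up issue addressed later in this thread.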
@elasticsearchmachine added the Team:Search Foundations label (Meta label for the Search Foundations team in Elasticsearch) on Apr 7, 2025
@elasticsearchmachine (Collaborator)

Pinging @elastic/es-search-foundations (Team:Search Foundations)

@elasticsearchmachine (Collaborator)

Hi @javanna, I've created a changelog YAML for you.

        mergedTopDocs = TopFieldGroups.merge(sort, from, topN, shardTopDocs, false);
    } else if (topDocs instanceof TopFieldDocs firstTopDocs) {
        final Sort sort = checkSameSortTypes(results, firstTopDocs.fields);
-       final TopFieldDocs[] shardTopDocs = results.toArray(new TopFieldDocs[0]);
+       final TopFieldDocs[] shardTopDocs = results.stream().filter(td -> td != Lucene.EMPTY_TOP_DOCS).toArray(TopFieldDocs[]::new);
javanna (Member Author):

I feel a bit uneasy that this was almost inadvertently raised by a geo distance related integration test. This seems to indicate that we lack some proper unit testing of the merging logic.

Contributor:

++ indeed, we should have caught this deterministically for sure

javanna (Member Author):

shall we track this in a follow-up issue, to add the missing unit tests for the merging of incremental results?

@drempapis (Contributor) left a comment:

LGTM, that's clean and concise


@@ -150,11 +150,11 @@ static TopDocs mergeTopDocs(List<TopDocs> results, int topN, int from) {
         return topDocs;
     } else if (topDocs instanceof TopFieldGroups firstTopDocs) {
         final Sort sort = new Sort(firstTopDocs.fields);
-        final TopFieldGroups[] shardTopDocs = results.toArray(new TopFieldGroups[0]);
+        final TopFieldGroups[] shardTopDocs = results.stream().filter(td -> td != Lucene.EMPTY_TOP_DOCS).toArray(TopFieldGroups[]::new);
Contributor:

It's obviously hard to justify this here in isolation, but shall we stay away from that stream magic a little? It really adds a lot of unexpected warmup overhead and is generally somewhat unpredictable.
That said, this looks like code we could optimize/design away mostly anyway, not important now :)

javanna (Member Author):

I am not sure what you'd prefer here, a loop? What do you mean by optimize/design away? I tried different approaches and this is the only one that worked, sadly. Curious to know how we could do things differently so as not to require this filtering.

Contributor:

I think a loop would be cheaper, but actually we should just filter this stuff out right off the bat in the QueryPhaseResultConsumer for one and also exploit our knowledge of the array type there directly and not type check here (this need not be clever, we simply know this once we have the first result).
Now that I look at this again, I'm sorry :) I think this only looks the way it does now because I was lazy when it came to the serialization of partial merge results.

But that said, I also had a kinda cool optimization in mind here. Merging top docs is super cheap actually. We could do it on literally every result and then only register search contexts with the search service if a shard's hits are needed as well as releasing those that go out of the top-hits window directly. That would save the needless complicated logic that deals with this when sending the partial result now and saves heap and such :)
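The loop-based alternative suggested in this comment could look roughly like the following sketch (again with simplified stand-in classes rather than the real Lucene types, and a hypothetical `filterForMerge` helper name). It does the same reference-identity filtering without a stream pipeline:

```java
import java.util.ArrayList;
import java.util.List;

class LoopFilterSketch {
    // Simplified stand-ins for Lucene's TopDocs / TopFieldDocs (not the real types).
    static class TopDocs {}
    static class TopFieldDocs extends TopDocs {}

    // Stand-in for the shared Lucene.EMPTY_TOP_DOCS sentinel.
    static final TopDocs EMPTY_TOP_DOCS = new TopDocs();

    // Same filtering as the stream version, written as a plain loop:
    // collect the non-sentinel results, then copy them into a typed array.
    static TopFieldDocs[] filterForMerge(List<TopDocs> results) {
        List<TopFieldDocs> kept = new ArrayList<>(results.size());
        for (TopDocs td : results) {
            if (td != EMPTY_TOP_DOCS) {
                // Safe in this sketch: everything except the sentinel is typed.
                kept.add((TopFieldDocs) td);
            }
        }
        return kept.toArray(new TopFieldDocs[0]);
    }

    public static void main(String[] args) {
        List<TopDocs> results = List.of(new TopFieldDocs(), EMPTY_TOP_DOCS);
        System.out.println("kept = " + filterForMerge(results).length);
    }
}
```

Whether the loop is measurably cheaper than the stream here depends on how hot this path is; the trade-off discussed above is warmup/allocation overhead of the stream machinery versus slightly more verbose code.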

javanna (Member Author):

I tried fixing the problem at the root and serializing things differently, but I did not get very far. In short, at https://github.com/elastic/elasticsearch/blob/main/server/src/main/java/org/elasticsearch/action/search/QueryPhaseResultConsumer.java#L361 we have a null topDocsList, and I was not sure how we could determine what type it actually was. I will merge this as a remediation for the test failure. If there's a better way to fix this with a more extensive change, let's do it as a follow-up.

@javanna javanna merged commit 0c95d1a into elastic:main Apr 10, 2025
17 checks passed
@javanna javanna deleted the fix/merge_non_empty_results branch April 10, 2025 08:03
@javanna (Member Author) commented Apr 10, 2025

@original-brownbear this needs to be backported once batched execution is backported.

original-brownbear pushed a commit to original-brownbear/elasticsearch that referenced this pull request Apr 10, 2025
original-brownbear added a commit that referenced this pull request Apr 10, 2025
…#126563)

* Introduce batched query execution and data-node side reduce (#121885)

This change moves the query phase to a single roundtrip per node, just like can_match or field_caps already work.
By executing multiple shard queries from a single request, we can also partially reduce each node's query results on the data node side before responding to the coordinating node.

As a result, this change significantly reduces the impact of network latencies on end-to-end query performance, and cuts the amount of work done (memory and CPU) on the coordinating node as well as the network traffic, by factors of up to the number of shards per data node.

Benchmarking shows up to orders of magnitude improvements in heap and network traffic dimensions in querying across a larger number of shards.

* Filter out empty top docs results before merging (#126385)


---------

Co-authored-by: Luca Cavanna <javanna@apache.org>
javanna added a commit to javanna/elasticsearch that referenced this pull request Apr 14, 2025
We addressed the empty top docs issue with elastic#126385 specifically for scenarios where
empty top docs don't go over the wire. Yet they may be serialized from the data node
back to the coordinating node, in which case they will no longer be reference-equal to Lucene#EMPTY_TOP_DOCS.

This commit expands the existing filtering of empty top docs to also include those
that went through serialization.

Closes elastic#126742
javanna added a commit that referenced this pull request Apr 17, 2025
elasticsearchmachine pushed a commit that referenced this pull request Apr 17, 2025
Labels
>bug · :Search Foundations/Search (Catch all for Search Foundations) · Team:Search Foundations (Meta label for the Search Foundations team in Elasticsearch) · v8.19.0 · v9.1.0
Projects: None yet

Development — successfully merging this pull request may close these issues:
[CI] GeoDistanceIT testDistanceSortingWithUnmappedField failing
4 participants