[SPARK-50873][SQL] Prune columns for ExistsSubquery in DSV2 #50916
What changes were proposed in this pull request?
This PR adds an optimizer rule to SparkOptimizer that prunes unnecessary columns for DataSourceV2 (DSV2) relations after RewriteSubquery.
Spark 3 uses the V2ScanRelationPushDown rule to prune columns for DSV2. However, if the query contains subqueries, the RewriteSubquery rule generates new predicates that could be used to prune columns, but it runs after V2ScanRelationPushDown has already executed, and Spark does not prune columns again, which degrades performance.
See the issue for a more detailed description: SPARK-50873
A similar issue: SPARK-51831
This PR's solution:
This PR rewrites "SELECT *" as "SELECT 1" in WHERE EXISTS subqueries, because "SELECT *" provides no projection information when it reaches the V2ScanRelationPushDown#pruneColumns function, whereas the projection of "SELECT 1" provides the necessary column information.
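The rewrite is safe because an EXISTS predicate only tests for row existence. A quick sanity check of that equivalence, using Python's built-in sqlite3 purely as an illustration (this is not Spark code, and the toy tables are made up for the example):

```python
import sqlite3

# Toy fact/dimension tables, invented for this illustration only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders(id INTEGER, customer_id INTEGER);
    CREATE TABLE customers(id INTEGER, name TEXT);
    INSERT INTO orders VALUES (1, 10), (2, 20), (3, 99);
    INSERT INTO customers VALUES (10, 'a'), (20, 'b');
""")

star = conn.execute("""
    SELECT id FROM orders o
    WHERE EXISTS (SELECT * FROM customers c WHERE c.id = o.customer_id)
    ORDER BY id
""").fetchall()

one = conn.execute("""
    SELECT id FROM orders o
    WHERE EXISTS (SELECT 1 FROM customers c WHERE c.id = o.customer_id)
    ORDER BY id
""").fetchall()

# EXISTS only checks whether any row matches, so both forms agree.
print(star == one, star)  # True [(1,), (2,)]
```

The same equivalence is what makes it legal for the optimizer to ignore the subquery's select list entirely when pruning.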
A more general solution:
Since EXISTS only returns true or false, it does not matter what is selected. Therefore, an optimization rule can be added that checks whether the EXISTS subquery already projects a literal (Project 1); if not, it adds a Project 1 on top of the subquery. This approach also solves the current problem and enables a wider range of optimizations. See SPARK-52186.
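A minimal sketch of that general rule, written with hypothetical Python stand-ins for Catalyst plan nodes rather than Spark's Scala API (the Exists, Project, Scan, and Literal classes below are invented for illustration):

```python
from dataclasses import dataclass

# Hypothetical stand-ins for Catalyst logical-plan nodes (illustration only).
@dataclass
class Literal:
    value: int

@dataclass
class Project:
    exprs: list      # projected expressions
    child: object    # child plan node

@dataclass
class Scan:
    columns: list    # columns the scan would read

@dataclass
class Exists:
    plan: object     # the subquery's logical plan

def add_project_one(exists: Exists) -> Exists:
    """If the EXISTS subquery does not already project a single literal,
    wrap its plan in Project([Literal(1)]) so that downstream column
    pruning can drop every output column of the subquery's scan."""
    p = exists.plan
    if isinstance(p, Project) and p.exprs == [Literal(1)]:
        return exists  # already in the pruning-friendly shape
    return Exists(Project([Literal(1)], p))

# An EXISTS subquery that projects real columns gets wrapped ...
sub = Exists(Project(["c.id", "c.name"], Scan(["id", "name"])))
rewritten = add_project_one(sub)
print(rewritten.plan.exprs)  # [Literal(value=1)]
# ... and the rule is idempotent on an already-rewritten plan.
print(add_project_one(rewritten) is rewritten)  # True
```

In Spark itself this would be a Rule[LogicalPlan] matching Exists expressions, but the shape of the transformation is the same: check for the literal projection, and insert it only when absent.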
Why are the changes needed?
Better performance for Spark DSV2.
For example, in a 10 TB TPC-DS test on my cluster, query 16's execution time was reduced by about 50%, from 2.5 min to 1.3 min.
Does this PR introduce any user-facing change?
NO
How was this patch tested?
GitHub Actions.

Currently, only TPCDSV1_4_PlanStabilitySuite and TPCDSV1_4_PlanStabilityWithStatsSuite fail. The plans are correct, but after this rule the positions of some columns changed, like this:
Was this patch authored or co-authored using generative AI tooling?
No