[SPARK-53928][SQL] Enhance DSV2 partition filtering using catalyst expression #52628
+382 −79
What changes were proposed in this pull request?
Add new interfaces, HasPartitionKeys and KeyedPartitioning, to DSV2 so that data sources can report partition values. These are supersets of the existing HasPartitionKey and KeyGroupedPartitioning (which require the data source to group its InputPartitions by partition value and exist mainly for storage-partitioned joins, SPJ). Spark then uses the reported partition values for additional partition-column filtering.
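A rough sketch of the relationship between the existing mixin and the proposed one, assuming the shapes described above (the interface names come from this PR, but the method signatures here are illustrative guesses, and String stands in for Spark's InternalRow partition value):

```java
import java.util.List;

// Stand-in for org.apache.spark.sql.connector.read.InputPartition.
interface InputPartition {}

// Existing SPJ-oriented mixin: exactly one partition value per
// InputPartition, and partitions must be grouped by that value.
interface HasPartitionKey extends InputPartition {
  String partitionKey(); // real API returns InternalRow; String is a stand-in
}

// Proposed superset (per this PR, signature assumed): an InputPartition may
// cover several partition values, and no grouping by value is required.
interface HasPartitionKeys extends InputPartition {
  List<String> partitionKeys(); // again, String stands in for InternalRow
}

// A hypothetical source partition reporting the partition values it covers.
class ExamplePartition implements HasPartitionKeys {
  private final List<String> keys;
  ExamplePartition(List<String> keys) { this.keys = keys; }
  public List<String> partitionKeys() { return keys; }
}
```

Because HasPartitionKeys does not impose grouping, sources that cannot (or do not want to) satisfy the KeyGroupedPartitioning contract can still expose partition values for pruning.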
Why are the changes needed?
Currently, Spark converts a Catalyst Expression to either a Filter or a Predicate and pushes it to DSV2 via the SupportsPushDownFilters and SupportsPushDownV2Filters APIs.
However, some filters do not convert cleanly; for example, trim(part_col) = 'a'. In such cases, a DSV2 source can instead report the exact partition value(s) for each of its InputPartitions, and Spark can evaluate the original Catalyst expression against those values to prune partitions.
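The pruning idea above can be sketched as follows. This is a minimal illustration, not the PR's implementation: PartitionPruner and prune are hypothetical names, and String stands in for the InternalRow partition value that a real source would report.

```java
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

class PartitionPruner {
  // Keep only the partitions whose reported partition value satisfies the
  // predicate. In Spark, the predicate would be the original Catalyst
  // expression evaluated on the driver, not a pushed-down source Filter.
  static List<String> prune(List<String> partitionValues, Predicate<String> pred) {
    return partitionValues.stream().filter(pred).collect(Collectors.toList());
  }

  public static void main(String[] args) {
    // trim(part_col) = 'a' has no clean Filter/Predicate translation, but it
    // is trivial to evaluate against reported partition values.
    List<String> kept = prune(List.of(" a ", "b", "a"), v -> v.trim().equals("a"));
    System.out.println(kept); // prints "[ a , a]"
  }
}
```

The key point is that the expression never needs to cross the DSV2 boundary: the source only reports values, and Spark keeps full Catalyst expressiveness when deciding which InputPartitions to read.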
Does this PR introduce any user-facing change?
No
How was this patch tested?
Unit test
Was this patch authored or co-authored using generative AI tooling?
No