[SPARK-52186][SQL] Rewrite EXISTS to add Scalar Project #50917
What changes were proposed in this pull request?
This PR adds an optimizer rule that rewrites EXISTS subqueries by putting a scalar Project on top of the subquery plan.
Since EXISTS only returns true or false, it does not matter which columns the subquery selects. The rule therefore checks whether the EXISTS subquery already projects a constant (Project 1); if it does not, it adds a Project 1 on top of the subquery.
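Below is a minimal sketch of what such a rule could look like, assuming Catalyst's Exists expression exposes its subquery through a plan field; the object name and the exact guard conditions are illustrative and not necessarily what this PR implements:

```scala
import org.apache.spark.sql.catalyst.expressions.{Alias, Exists, Literal}
import org.apache.spark.sql.catalyst.plans.logical.{LogicalPlan, Project}
import org.apache.spark.sql.catalyst.rules.Rule

// Illustrative name; the actual rule added by this PR may differ.
object RewriteExistsToScalarProject extends Rule[LogicalPlan] {

  // True if the subquery already projects only constants (e.g. SELECT 1),
  // in which case there is nothing to rewrite.
  private def projectsOnlyConstants(plan: LogicalPlan): Boolean = plan match {
    case Project(projectList, _) => projectList.forall(_.foldable)
    case _ => false
  }

  override def apply(plan: LogicalPlan): LogicalPlan = plan.transformAllExpressions {
    // Replace the EXISTS subquery's select list with a constant 1 so that
    // column pruning can later drop every column that is not needed by the
    // correlation condition or by filters inside the subquery.
    case e: Exists if !projectsOnlyConstants(e.plan) =>
      e.copy(plan = Project(Seq(Alias(Literal(1), "1")()), e.plan))
  }
}
```

Once the subquery only projects a literal, the existing column-pruning rules can remove every column of the inner relation that is not referenced by the correlation condition or the subquery's own filters.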
Why are the changes needed?
These changes prune unnecessary columns for Spark DSv2 scans.
For example, in the test below, t1.col2 can be pruned from the EXISTS subquery when the physical plan is generated.
test("Test exist join with v2 source plan") { import org.apache.spark.sql.functions._ withTempPath { dir => spark.range(100) .withColumn("col1", col("id") + 1) .withColumn("col2", col("id") + 2) .write .mode("overwrite") .parquet(dir.getCanonicalPath + "/t1") spark.range(10).write.mode("overwrite").parquet(dir.getCanonicalPath + "/t2") Seq("parquet", "").foreach { v1SourceList => withSQLConf(SQLConf.USE_V1_SOURCE_LIST.key-> v1SourceList) { spark.read.parquet(dir.getCanonicalPath + "/t1").createOrReplaceTempView("t1") spark.read.parquet(dir.getCanonicalPath + "/t2").createOrReplaceTempView("t2") spark.sql( """ |select t2.id |from t2 |where exists(select col2 from t1 where t1.id == t2.id and t1.col1>5) |""".stripMargin).explain() } } } }
Does this PR introduce any user-facing change?
NO
How was this patch tested?
GitHub Actions.

Currently, only TPCDSV1_4_PlanStabilitySuite and TPCDSV1_4_PlanStabilityWithStatsSuite do not pass. The plans are still correct, but after this rule the positions of some columns and expression IDs in the golden plans change.
Was this patch authored or co-authored using generative AI tooling?
NO