[SPARK-29807][SQL] Rename "spark.sql.ansi.enabled" to "spark.sql.dialect.spark.ansi.enabled" #26444
Conversation
Test build #113487 has finished for PR 26444.
Force-pushed 660e9f9 to 52171db.
Test build #113488 has finished for PR 26444.
Force-pushed 52171db to 0107aac.
Test build #113509 has finished for PR 26444.
Force-pushed 1b8a0da to d76d6a5.
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
@@ -1655,6 +1655,20 @@ object SQLConf {
      .checkValues(Dialect.values.map(_.toString))
      .createWithDefault(Dialect.SPARK.toString)

    val ANSI_ENABLED = buildConf("spark.sql.ansi.enabled")
Will we deprecate this?
I think so, maybe after a few versions, this config can be deleted.
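A deprecated key typically keeps working as a fallback for its replacement for a few releases before deletion. A minimal sketch of that resolution order (illustrative names and logic only; this is not Spark's actual `ConfigBuilder`/`ConfigEntry` code):

```scala
// Resolution-order sketch: prefer the new key, fall back to the deprecated
// old key, then to the default. (Assumption: simplified model of how a
// renamed config could stay backward compatible; not Spark internals.)
object AnsiConfSketch {
  def resolveAnsiEnabled(settings: Map[String, String]): Boolean =
    settings.get("spark.sql.dialect.spark.ansi.enabled")       // new key wins
      .orElse(settings.get("spark.sql.ansi.enabled"))          // deprecated key as fallback
      .map(_.toBoolean)
      .getOrElse(false)                                        // documented default
}
```

Under this sketch, users who still set only the old key keep their behavior until the fallback is removed.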
Test build #113551 has finished for PR 26444.
Test build #113552 has finished for PR 26444.
Looks fine to me
Test build #113662 has finished for PR 26444.
Test build #113671 has finished for PR 26444.
retest this please.
retest this please
Test build #113692 has finished for PR 26444.
Test build #113694 has finished for PR 26444.
cc @gatorsmile
Force-pushed e21de38 to 015539f.
Test build #113803 has finished for PR 26444.
@@ -1673,6 +1673,19 @@ object SQLConf {
      .checkValues(Dialect.values.map(_.toString))
      .createWithDefault(Dialect.SPARK.toString)

    val ANSI_ENABLED = buildConf("spark.sql.ansi.enabled")
      .doc("This configuration will be deprecated in the future releases and replaced by" +
is deprecated and will be removed in future releases
Thanks, done in 1cedb3d.
Test build #113863 has finished for PR 26444.
thanks, merging to master!
Thanks all for the review.
### What changes were proposed in this pull request?
Reprocess all PostgreSQL dialect related PRs, listing in order:
- #25158: PostgreSQL integral division support [revert]
- #25170: UT changes for the integral division support [revert]
- #25458: Accept "true", "yes", "1", "false", "no", "0", and unique prefixes as input and trim input for the boolean data type. [revert]
- #25697: Combine below 2 feature tags into "spark.sql.dialect" [revert]
- #26112: Date subtraction support [keep the ANSI-compliant part]
- #26444: Rename config "spark.sql.ansi.enabled" to "spark.sql.dialect.spark.ansi.enabled" [revert]
- #26463: Cast to boolean support for PostgreSQL dialect [revert]
- #26584: Make the behavior of the PostgreSQL dialect independent of the ansi mode config [keep the ANSI-compliant part]

### Why are the changes needed?
As discussed in http://apache-spark-developers-list.1001551.n3.nabble.com/DISCUSS-PostgreSQL-dialect-td28417.html, we need to remove the PostgreSQL dialect from the code base for several reasons:
1. The current approach makes the codebase complicated and hard to maintain.
2. Fully migrating PostgreSQL workloads to Spark SQL is not our focus for now.

### Does this PR introduce any user-facing change?
Yes, the config `spark.sql.dialect` will be removed.

### How was this patch tested?
Existing UT.

Closes #26763 from xuanyuanking/SPARK-30125.
Lead-authored-by: Yuanjian Li <xyliyuanjian@gmail.com>
Co-authored-by: Maxim Gekk <max.gekk@gmail.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
What changes were proposed in this pull request?
Rename config "spark.sql.ansi.enabled" to "spark.sql.dialect.spark.ansi.enabled"
Why are the changes needed?
The relation between "spark.sql.ansi.enabled" and "spark.sql.dialect" is confusing, since the "PostgreSQL" dialect should already contain the features of "spark.sql.ansi.enabled".
To make things clearer, we rename "spark.sql.ansi.enabled" to "spark.sql.dialect.spark.ansi.enabled", so that the option applies only to the Spark dialect.
For casting and arithmetic operations, runtime exceptions should be thrown if "spark.sql.dialect" is "spark" and "spark.sql.dialect.spark.ansi.enabled" is true, or if "spark.sql.dialect" is "PostgreSQL".
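The rule in the last paragraph can be sketched as a small decision function (a simplified model of the intended semantics under the two configs; identifiers are illustrative, not Spark's actual internals):

```scala
// Sketch: when should casts/arithmetic throw runtime exceptions?
// (Assumption: simplified model of the PR's described semantics.)
sealed trait Dialect
case object SparkDialect extends Dialect       // spark.sql.dialect = "spark"
case object PostgreSQLDialect extends Dialect  // spark.sql.dialect = "PostgreSQL"

object AnsiModeSketch {
  // ANSI runtime checks apply when the Spark dialect has its ANSI flag on,
  // or unconditionally under the ANSI-compliant PostgreSQL dialect.
  def throwsOnOverflowOrBadCast(dialect: Dialect, sparkAnsiEnabled: Boolean): Boolean =
    dialect match {
      case SparkDialect      => sparkAnsiEnabled
      case PostgreSQLDialect => true
    }
}
```

Note that under this model, `spark.sql.dialect.spark.ansi.enabled` has no effect when the dialect is PostgreSQL, which is exactly the clarification the rename is after.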
Does this PR introduce any user-facing change?
Yes, the config name changed.
How was this patch tested?
Existing UT.