Commit 9e8c4aa

[SPARK-46122][SQL] Set spark.sql.legacy.createHiveTableByDefault to false by default
### What changes were proposed in this pull request?

This PR aims to switch `spark.sql.legacy.createHiveTableByDefault` to `false` by default, in order to move away from this legacy behavior starting with `Apache Spark 4.0.0`. The legacy functionality is preserved during the Apache Spark 4.x period by setting `spark.sql.legacy.createHiveTableByDefault=true`.

### Why are the changes needed?

Historically, this behavior change was merged for `Apache Spark 3.0.0` in SPARK-30098 and officially reverted during the `3.0.0 RC` period.

- 2019-12-06: #26736 (58be82a)
- 2019-12-06: https://lists.apache.org/thread/g90dz1og1zt4rr5h091rn1zqo50y759j
- 2020-05-16: #28517

At `Apache Spark 3.1.0`, we had another discussion and defined it as `Legacy` behavior via a new configuration, reusing the JIRA ID SPARK-30098.

- 2020-12-01: https://lists.apache.org/thread/8c8k1jk61pzlcosz3mxo4rkj5l23r204
- 2020-12-03: #30554

Last year, this was proposed again twice, and `Apache Spark 4.0.0` is a good time to make a decision on Apache Spark's future direction.

- SPARK-42603 on 2023-02-27, as an independent idea.
- SPARK-46122 on 2023-11-27, as part of the Apache Spark 4.0.0 idea.

### Does this PR introduce _any_ user-facing change?

Yes, the migration document is updated.

### How was this patch tested?

Pass the CIs with the adjusted test cases.

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes #46207 from dongjoon-hyun/SPARK-46122.

Authored-by: Dongjoon Hyun <dhyun@apple.com>
Signed-off-by: Dongjoon Hyun <dhyun@apple.com>
1 parent ed5aa56 commit 9e8c4aa

File tree: 4 files changed (+7, −9 lines)


docs/sql-migration-guide.md (1 addition, 0 deletions)

```diff
@@ -25,6 +25,7 @@ license: |
 ## Upgrading from Spark SQL 3.5 to 4.0
 
 - Since Spark 4.0, `spark.sql.ansi.enabled` is on by default. To restore the previous behavior, set `spark.sql.ansi.enabled` to `false` or `SPARK_ANSI_SQL_MODE` to `false`.
+- Since Spark 4.0, `CREATE TABLE` syntax without `USING` and `STORED AS` will use the value of `spark.sql.sources.default` as the table provider instead of `Hive`. To restore the previous behavior, set `spark.sql.legacy.createHiveTableByDefault` to `true`.
 - Since Spark 4.0, the default behaviour when inserting elements in a map is changed to first normalize keys -0.0 to 0.0. The affected SQL functions are `create_map`, `map_from_arrays`, `map_from_entries`, and `map_concat`. To restore the previous behaviour, set `spark.sql.legacy.disableMapKeyNormalization` to `true`.
 - Since Spark 4.0, the default value of `spark.sql.maxSinglePartitionBytes` is changed from `Long.MaxValue` to `128m`. To restore the previous behavior, set `spark.sql.maxSinglePartitionBytes` to `9223372036854775807`(`Long.MaxValue`).
 - Since Spark 4.0, any read of SQL tables takes into consideration the SQL configs `spark.sql.files.ignoreCorruptFiles`/`spark.sql.files.ignoreMissingFiles` instead of the core config `spark.files.ignoreCorruptFiles`/`spark.files.ignoreMissingFiles`.
```

python/pyspark/sql/tests/test_readwriter.py (2 additions, 3 deletions)

```diff
@@ -247,10 +247,9 @@ def test_create(self):
 
     def test_create_without_provider(self):
         df = self.df
-        with self.assertRaisesRegex(
-            AnalysisException, "NOT_SUPPORTED_COMMAND_WITHOUT_HIVE_SUPPORT"
-        ):
+        with self.table("test_table"):
             df.writeTo("test_table").create()
+            self.assertEqual(100, self.spark.sql("select * from test_table").count())
 
     def test_table_overwrite(self):
         df = self.df
```

sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala (1 addition, 1 deletion)

```diff
@@ -4457,7 +4457,7 @@ object SQLConf {
       s"instead of the value of ${DEFAULT_DATA_SOURCE_NAME.key} as the table provider.")
     .version("3.1.0")
     .booleanConf
-    .createWithDefault(true)
+    .createWithDefault(false)
 
   val LEGACY_CHAR_VARCHAR_AS_STRING =
     buildConf("spark.sql.legacy.charVarcharAsString")
```
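With the default flipped in SQLConf, users who still rely on Hive-format tables from a plain `CREATE TABLE` can opt back in per session. A minimal sketch (the table name `legacy_t` is illustrative; the config key is the one changed in this commit):

```sql
-- Restore the pre-4.0 behavior for the current session:
SET spark.sql.legacy.createHiveTableByDefault=true;

-- A plain CREATE TABLE then creates a Hive-format table again:
CREATE TABLE legacy_t (id INT);
```

The same key can also be set cluster-wide, e.g. as a `spark.sql.legacy.createHiveTableByDefault true` line in `conf/spark-defaults.conf`.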

sql/core/src/test/scala/org/apache/spark/sql/execution/command/PlanResolutionSuite.scala (3 additions, 5 deletions)

```diff
@@ -2847,11 +2847,9 @@ class PlanResolutionSuite extends AnalysisTest {
     assert(desc.viewText.isEmpty)
     assert(desc.viewQueryColumnNames.isEmpty)
     assert(desc.storage.locationUri.isEmpty)
-    assert(desc.storage.inputFormat ==
-      Some("org.apache.hadoop.mapred.TextInputFormat"))
-    assert(desc.storage.outputFormat ==
-      Some("org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat"))
-    assert(desc.storage.serde == Some("org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe"))
+    assert(desc.storage.inputFormat.isEmpty)
+    assert(desc.storage.outputFormat.isEmpty)
+    assert(desc.storage.serde.isEmpty)
     assert(desc.storage.properties.isEmpty)
     assert(desc.properties.isEmpty)
     assert(desc.comment.isEmpty)
```
