[SPARK-30627][SQL] Disable all the V2 file sources by default #27348

@@ -1728,7 +1728,7 @@ object SQLConf {
       "implementation class names for which Data Source V2 code path is disabled. These data " +
       "sources will fallback to Data Source V1 code path.")
     .stringConf
-    .createWithDefault("kafka")
+    .createWithDefault("avro,csv,json,kafka,orc,parquet,text")
 
   val DISABLED_V2_STREAMING_WRITERS = buildConf("spark.sql.streaming.disabledV2Writers")
     .doc("A comma-separated list of fully qualified data source register class names for which" +
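
With this default, every built-in file source (avro, csv, json, orc, parquet, text) plus kafka falls back to the Data Source V1 code path. A user who wants the V2 behavior has to clear the list explicitly; a minimal sketch, assuming a standard SparkSession setup (the app name and output path below are placeholders, not from this PR):

    // Sketch of opting back in to the DSv2 file sources from an application.
    // An empty useV1SourceList re-enables the V2 code path for all sources.
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("v2-file-sources-example")                 // placeholder app name
      .config("spark.sql.sources.useV1SourceList", "")    // opt back in to DSv2
      .getOrCreate()

    // Reads on the built-in file sources now plan through the V2 path.
    spark.range(10).write.mode("overwrite").parquet("/tmp/v2-demo")
    spark.read.parquet("/tmp/v2-demo").show()
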
@@ -19,6 +19,7 @@ package org.apache.spark.sql.connector
 import scala.collection.JavaConverters._
 import scala.collection.mutable.ArrayBuffer
 
+import org.apache.spark.SparkConf
 import org.apache.spark.sql.{AnalysisException, QueryTest}
 import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
 import org.apache.spark.sql.connector.catalog.{SupportsRead, SupportsWrite, Table, TableCapability}
@@ -86,6 +87,8 @@ class FileDataSourceV2FallBackSuite extends QueryTest with SharedSparkSession {
   private val dummyReadOnlyFileSourceV2 = classOf[DummyReadOnlyFileDataSourceV2].getName
   private val dummyWriteOnlyFileSourceV2 = classOf[DummyWriteOnlyFileDataSourceV2].getName
 
+  override protected def sparkConf: SparkConf = super.sparkConf.set(SQLConf.USE_V1_SOURCE_LIST, "")
+
   test("Fall back to v1 when writing to file with read only FileDataSourceV2") {
     val df = spark.range(10).toDF()
     withTempPath { file =>
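
The sparkConf override above is the pattern this change applies to suites that must keep exercising the V2 code path: SharedSparkSession now starts with the full fallback list, so each such suite clears spark.sql.sources.useV1SourceList itself. A self-contained sketch of the same pattern in a hypothetical suite (MyV2PathSuite and its test body are illustrative only, not code from this PR):

    // Hypothetical suite showing the opt-in pattern used by this PR.
    import org.apache.spark.SparkConf
    import org.apache.spark.sql.QueryTest
    import org.apache.spark.sql.internal.SQLConf
    import org.apache.spark.sql.test.SharedSparkSession

    class MyV2PathSuite extends QueryTest with SharedSparkSession {

      // Clear the fallback list so this suite runs against the DSv2 file sources.
      override protected def sparkConf: SparkConf =
        super.sparkConf.set(SQLConf.USE_V1_SOURCE_LIST, "")

      test("round trip through the V2 parquet path") {
        withTempPath { dir =>
          spark.range(10).write.parquet(dir.getCanonicalPath)
          checkAnswer(spark.read.parquet(dir.getCanonicalPath), spark.range(10).toDF())
        }
      }
    }
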
@@ -169,6 +169,8 @@ abstract class OrcPartitionDiscoveryTest extends OrcTest {
 }
 
 class OrcPartitionDiscoverySuite extends OrcPartitionDiscoveryTest with SharedSparkSession {
+  override protected def sparkConf: SparkConf = super.sparkConf.set(SQLConf.USE_V1_SOURCE_LIST, "")
+
   test("read partitioned table - partition key included in orc file") {
     withTempDir { base =>
       for {
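
OrcPartitionDiscoverySuite gets the same opt-in override so partition discovery is still tested against the V2 orc source. To see which path a given read actually takes, one rough check (an assumption about Spark's plan-node names, not something this PR adds) is to look at the physical plan: the V1 path prints a FileScan node, the DSv2 path a BatchScan node.

    // Illustrative check, not from this PR; the path below is a placeholder.
    spark.range(5).write.mode("overwrite").orc("/tmp/orc-demo")

    // With the new default, orc is in the fallback list: expect "FileScan orc".
    spark.read.orc("/tmp/orc-demo").explain()

    // Clearing the list at session level switches the same read onto DSv2:
    // expect "BatchScan".
    spark.conf.set("spark.sql.sources.useV1SourceList", "")
    spark.read.orc("/tmp/orc-demo").explain()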