[SPARK-24669][SQL] Invalidate tables in case of DROP DATABASE CASCADE #23905

Status: Closed · wants to merge 1 commit
@@ -218,6 +218,11 @@ class SessionCatalog(
if (dbName == DEFAULT_DATABASE) {
throw new AnalysisException(s"Can not drop default database")
}
if (cascade && databaseExists(dbName)) {
listTables(dbName).foreach { t =>
Contributor:
@dongjoon-hyun Do we need to worry about recurring exceptions from this call, like the following?
http://discuss.itversity.com/t/unable-to-perform-listtables-on-spark-catalog-class/15888
Should we fail the drop database, or warn and proceed?

Member:

The best behavior is to stay compatible with the old Spark behavior, so warn and proceed.

Contributor:

Thank you @dongjoon-hyun

invalidateCachedTable(QualifiedTableName(dbName, t.table))
}
}
externalCatalog.dropDatabase(dbName, ignoreIfNotExists, cascade)
}
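The warn-and-proceed behavior discussed in the review thread could be sketched as follows. This is a plain-Scala illustration outside Spark; `listTables` and `tablesToInvalidate` here are hypothetical stand-ins, not Spark's actual `SessionCatalog` signatures.

```scala
import scala.util.{Failure, Success, Try}

// Hypothetical stand-in for a catalog call that may fail, e.g. when the
// metastore is unreachable; not Spark's real API.
def listTables(db: String): Seq[String] =
  if (db == "broken") throw new RuntimeException("metastore unreachable")
  else Seq("t1", "t2")

// Warn-and-proceed: a failure to enumerate tables logs a warning and
// returns nothing to invalidate, instead of failing the whole
// DROP DATABASE ... CASCADE.
def tablesToInvalidate(db: String): Seq[String] =
  Try(listTables(db)) match {
    case Success(tables) => tables
    case Failure(e) =>
      println(s"WARN: could not list tables in '$db': ${e.getMessage}")
      Seq.empty
  }
```

With this shape, the subsequent `externalCatalog.dropDatabase` call still runs even when listing tables fails, matching the "warn and proceed" behavior the reviewers agreed on.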

@@ -17,7 +17,7 @@

package org.apache.spark.sql.execution.command

import java.io.File
import java.io.{File, PrintWriter}
import java.net.URI
import java.util.Locale

@@ -2715,4 +2715,40 @@ abstract class DDLSuite extends QueryTest with SQLTestUtils {
}
assert(ex.getMessage.contains("Spark config"))
}

test("Refresh table before drop database cascade") {
withTempDir { tempDir =>
val file1 = new File(tempDir + "/first.csv")
val writer1 = new PrintWriter(file1)
writer1.write("first")
writer1.close()

val file2 = new File(tempDir + "/second.csv")
val writer2 = new PrintWriter(file2)
writer2.write("second")
writer2.close()

withDatabase("foo") {
withTable("foo.first") {
sql("CREATE DATABASE foo")
sql(
s"""CREATE TABLE foo.first (id STRING)
|USING csv OPTIONS (path='${file1.toURI}')
""".stripMargin)
sql("SELECT * FROM foo.first")
checkAnswer(spark.table("foo.first"), Row("first"))

// Drop the database, then recreate the same table with a different path
sql("DROP DATABASE foo CASCADE")
sql("CREATE DATABASE foo")
sql(
s"""CREATE TABLE foo.first (id STRING)
|USING csv OPTIONS (path='${file2.toURI}')
""".stripMargin)
sql("SELECT * FROM foo.first")
checkAnswer(spark.table("foo.first"), Row("second"))
}
}
}
}
}
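The test above exercises the stale-cache scenario end to end through SQL. The underlying mechanism can be modeled with a minimal cache keyed by a qualified table name; the names below (`QualifiedName`, `RelationCache`) are hypothetical, not Spark's internals.

```scala
import scala.collection.mutable

// A database-qualified table name, modeled after the patch's use of
// QualifiedTableName(dbName, t.table).
case class QualifiedName(db: String, table: String)

// A tiny stand-in for a cached-relation map: entries keyed by (db, table)
// survive a DROP DATABASE unless each one is explicitly removed.
class RelationCache {
  private val cache = mutable.Map[QualifiedName, String]()
  def put(k: QualifiedName, plan: String): Unit = cache(k) = plan
  def get(k: QualifiedName): Option[String] = cache.get(k)
  def invalidate(k: QualifiedName): Unit = cache.remove(k)
}

val cache = new RelationCache
cache.put(QualifiedName("foo", "first"), "csv @ .../first.csv")

// Without per-table invalidation, DROP DATABASE foo CASCADE would leave
// this entry behind, so a recreated foo.first could resolve to stale data.
// The patch invalidates each table before dropping the database:
cache.invalidate(QualifiedName("foo", "first"))
```

After invalidation, a lookup for `foo.first` misses the cache and the recreated table is re-resolved from the catalog, which is what the `checkAnswer(..., Row("second"))` assertion verifies.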