[SPARK-33786][SQL] The storage level for a cache should be respected when a table name is altered. #30774


Closed · wants to merge 3 commits
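
A minimal reproduction of the issue, mirroring the regression test added below (table name and path are illustrative):

    // spark-shell sketch: before this fix, the rename dropped the
    // user-specified storage level and recached at the default
    // MEMORY_AND_DISK.
    Seq(1 -> "a").toDF("i", "j").write.parquet("/tmp/old")
    spark.sql("CREATE TABLE old USING parquet LOCATION '/tmp/old'")
    spark.sql("CACHE TABLE old OPTIONS('storageLevel' 'MEMORY_ONLY')")
    spark.sql("ALTER TABLE old RENAME TO new")
    // `new` should still be cached at MEMORY_ONLY after the rename.
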
Changes from all commits

@@ -21,7 +21,6 @@ import java.net.{URI, URISyntaxException}
 
 import scala.collection.JavaConverters._
 import scala.collection.mutable.ArrayBuffer
-import scala.util.Try
 import scala.util.control.NonFatal
 
 import org.apache.hadoop.fs.{FileContext, FsConstants, Path}
@@ -193,18 +192,19 @@ case class AlterTableRenameCommand(
     } else {
       val table = catalog.getTableMetadata(oldName)
       DDLUtils.verifyAlterTableType(catalog, table, isView)
-      // If an exception is thrown here we can just assume the table is uncached;
-      // this can happen with Hive tables when the underlying catalog is in-memory.
-      val wasCached = Try(sparkSession.catalog.isCached(oldName.unquotedString)).getOrElse(false)

Contributor Author:

The existing implementation uses Catalog APIs (isCached), whereas this PR uses CacheManager directly. If this approach is not desired, we can update the Catalog API to expose StorageLevel.
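
To make the contrast concrete, a rough sketch of the two options (names follow this diff; the Catalog-level getStorageLevel is hypothetical and does not exist today):

    // What this PR does: go through the CacheManager directly.
    val optStorageLevel = sparkSession.sharedState.cacheManager
      .lookupCachedData(sparkSession.table(oldName.unquotedString))
      .map(_.cachedRepresentation.cacheBuilder.storageLevel)

    // The alternative alluded to above: a new Catalog API such as
    //   def getStorageLevel(tableName: String): Option[StorageLevel]
    // which would keep commands from depending on sharedState internals.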

-      if (wasCached) {
+      // If `optStorageLevel` is defined, the old table was cached.
+      val optCachedData = sparkSession.sharedState.cacheManager.lookupCachedData(
+        sparkSession.table(oldName.unquotedString))
+      val optStorageLevel = optCachedData.map(_.cachedRepresentation.cacheBuilder.storageLevel)
+      if (optStorageLevel.isDefined) {
         CommandUtils.uncacheTableOrView(sparkSession, oldName.unquotedString)
       }
       // Invalidate the table last, otherwise uncaching the table would load the logical plan
       // back into the hive metastore cache
       catalog.refreshTable(oldName)
       catalog.renameTable(oldName, newName)
-      if (wasCached) {
-        sparkSession.catalog.cacheTable(newName.unquotedString)
+      optStorageLevel.foreach { storageLevel =>
+        sparkSession.catalog.cacheTable(newName.unquotedString, storageLevel)

Member:

Does this miss the tableName if there is one in the original cache?

Contributor Author:

Sorry, I didn't get this question. This is creating a new cache with a new table name.

Member:

Hm, you can check the change in #30769, especially how it recaches the table. There is a cacheName parameter. If the table was cached with a cache name, I think we should keep that name when recaching:

    val cache = session.sharedState.cacheManager.lookupCachedData(v2Relation)
    session.sharedState.cacheManager.uncacheQuery(session, v2Relation, cascade = true)
    if (recacheTable && cache.isDefined) {
      // save the cache name and cache level for recreation
      val cacheName = cache.get.cachedRepresentation.cacheBuilder.tableName
      val cacheLevel = cache.get.cachedRepresentation.cacheBuilder.storageLevel

      // recache with the same name and cache level.
      val ds = Dataset.ofRows(session, v2Relation)
      session.sharedState.cacheManager.cacheQuery(ds, cacheName, cacheLevel)
    }

Contributor:

The previous code also seems to recache with the new name?

Member:

No, the refresh table command for v2 doesn't recache the table before #30769.

Contributor:

I mean the previous code in AlterTableRenameCommand. We shouldn't change its behavior regarding the cache name in this bug-fix PR.

Member:

Hmm, okay. Actually, it also sounds like a bug if the alter table command changes the cache name. I'm fine with leaving it unchanged here.

Contributor Author:

I believe the cache name is used for debugging purposes only (for the RDD name and InMemoryTableScanExec). So if the cache name, which is tied to the table name, doesn't change when the table is renamed, wouldn't it cause confusion, since it will still refer to the old table name? I can do a follow-up PR if this seems like a bug.
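
A minimal sketch of what such a follow-up could look like, assuming the recache step passes the new table name to CacheManager.cacheQuery (not part of this PR):

    // Sketch: recache under the *new* name so the cache name shown in the
    // RDD name / InMemoryTableScanExec no longer refers to the old table.
    optStorageLevel.foreach { storageLevel =>
      val ds = sparkSession.table(newName.unquotedString)
      sparkSession.sharedState.cacheManager.cacheQuery(
        ds, Some(newName.unquotedString), storageLevel)
    }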

       }
     }
     Seq.empty[Row]

@@ -1285,4 +1285,24 @@ class CachedTableSuite extends QueryTest with SQLTestUtils
       assert(spark.sharedState.cacheManager.lookupCachedData(sql("select 1, 2")).isDefined)
     }
   }
+
+  test("SPARK-33786: Cache's storage level should be respected when a table name is altered.") {
+    withTable("old", "new") {
+      withTempPath { path =>
+        def getStorageLevel(tableName: String): StorageLevel = {
+          val table = spark.table(tableName)
+          val cachedData = spark.sharedState.cacheManager.lookupCachedData(table).get
+          cachedData.cachedRepresentation.cacheBuilder.storageLevel
+        }
+        Seq(1 -> "a").toDF("i", "j").write.parquet(path.getCanonicalPath)
+        sql(s"CREATE TABLE old USING parquet LOCATION '${path.toURI}'")
+        sql("CACHE TABLE old OPTIONS('storageLevel' 'MEMORY_ONLY')")
+        val oldStorageLevel = getStorageLevel("old")
+
+        sql("ALTER TABLE old RENAME TO new")
+        val newStorageLevel = getStorageLevel("new")
+        assert(oldStorageLevel === newStorageLevel)
+      }
+    }
+  }
 }
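
For reference, one way to run just this test from the Spark repo root (a standard sbt/ScalaTest invocation; module and filter syntax assumed, not taken from this PR):

    build/sbt "sql/testOnly *CachedTableSuite -- -z SPARK-33786"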