
Commit cd3d9a5

tdas authored and pwendell committed
[SPARK-7930] [CORE] [STREAMING] Fixed shutdown hook priorities
The shutdown hook for temp directories had priority 100 while SparkContext's was 50, so the local root directories were deleted before the SparkContext was shut down. This leads to scary errors in running jobs at shutdown time. It is especially a problem when running streaming examples, where Ctrl-C is the only way to shut down. The fix in this PR is to make the temp directory shutdown priority lower than SparkContext's, so that the temp dirs are the last thing to get deleted, after the SparkContext has been shut down. Also, the DiskBlockManager shutdown priority is changed from the default 100 to temp_dir_prio + 1, so that it gets invoked just before all temp dirs are cleared.

Author: Tathagata Das <tathagata.das1565@gmail.com>

Closes #6482 from tdas/SPARK-7930 and squashes the following commits:

d7cbeb5 [Tathagata Das] Removed unnecessary line
1514d0b [Tathagata Das] Fixed shutdown hook priorities
1 parent 04ddcd4 commit cd3d9a5
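
For context: Utils.addShutdownHook takes a numeric priority, and hooks with higher priority values run earlier, which is why lowering the temp-dir priority below SparkContext's makes temp-dir deletion happen last. The sketch below is a simplified, hypothetical illustration of that idea only; it is not Spark's actual implementation in org.apache.spark.util.Utils, and the object and member names are invented for the example.

    import scala.collection.mutable

    // Minimal sketch: a priority-ordered shutdown-hook registry layered on the JVM hook.
    object ShutdownHookSketch {

      private val hooks = mutable.ArrayBuffer.empty[(Int, () => Unit)]

      // A single JVM shutdown hook runs all registered hooks in descending priority
      // order, e.g. 50 (SparkContext) before 26 (DiskBlockManager) before 25 (temp dirs).
      Runtime.getRuntime.addShutdownHook(new Thread("shutdown-hook-sketch") {
        override def run(): Unit =
          hooks.synchronized(hooks.sortBy(-_._1).toList).foreach { case (_, hook) => hook() }
      })

      def addShutdownHook(priority: Int)(hook: () => Unit): Unit =
        hooks.synchronized { hooks += ((priority, hook)) }
    }

    // Usage mirroring the priorities in this commit:
    // ShutdownHookSketch.addShutdownHook(50) { () => /* stop SparkContext */ }
    // ShutdownHookSketch.addShutdownHook(25) { () => /* delete temp dirs */ }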

File tree

2 files changed: +12 −4 lines changed

core/src/main/scala/org/apache/spark/storage/DiskBlockManager.scala

Lines changed: 2 additions & 2 deletions
@@ -139,8 +139,8 @@ private[spark] class DiskBlockManager(blockManager: BlockManager, conf: SparkCon
   }
 
   private def addShutdownHook(): AnyRef = {
-    Utils.addShutdownHook { () =>
-      logDebug("Shutdown hook called")
+    Utils.addShutdownHook(Utils.TEMP_DIR_SHUTDOWN_PRIORITY + 1) { () =>
+      logInfo("Shutdown hook called")
       DiskBlockManager.this.doStop()
     }
   }

core/src/main/scala/org/apache/spark/util/Utils.scala

Lines changed: 10 additions & 2 deletions
@@ -73,6 +73,13 @@ private[spark] object Utils extends Logging {
    */
   val SPARK_CONTEXT_SHUTDOWN_PRIORITY = 50
 
+  /**
+   * The shutdown priority of temp directory must be lower than the SparkContext shutdown
+   * priority. Otherwise cleaning the temp directories while Spark jobs are running can
+   * throw undesirable errors at the time of shutdown.
+   */
+  val TEMP_DIR_SHUTDOWN_PRIORITY = 25
+
   private val MAX_DIR_CREATION_ATTEMPTS: Int = 10
   @volatile private var localRootDirs: Array[String] = null
 
@@ -189,10 +196,11 @@ private[spark] object Utils extends Logging {
   private val shutdownDeleteTachyonPaths = new scala.collection.mutable.HashSet[String]()
 
   // Add a shutdown hook to delete the temp dirs when the JVM exits
-  addShutdownHook { () =>
-    logDebug("Shutdown hook called")
+  addShutdownHook(TEMP_DIR_SHUTDOWN_PRIORITY) { () =>
+    logInfo("Shutdown hook called")
     shutdownDeletePaths.foreach { dirPath =>
       try {
+        logInfo("Deleting directory " + dirPath)
         Utils.deleteRecursively(new File(dirPath))
       } catch {
         case e: Exception => logError(s"Exception while deleting Spark temp dir: $dirPath", e)

0 commit comments
