Commit ae00d49

[SPARK-20967][SQL] SharedState.externalCatalog is not really lazy
## What changes were proposed in this pull request?

`SharedState.externalCatalog` is declared as a `lazy val`, but in practice it is not lazy: `SharedState` accesses `externalCatalog` while initializing itself, which defeats the purpose of the `lazy val`. Creating the `ExternalCatalog` connects to the metastore and may throw an error, so it makes sense to keep it truly lazy in `SharedState`.

## How was this patch tested?

Existing tests.

Author: Wenchen Fan <wenchen@databricks.com>

Closes #18187 from cloud-fan/minor.

(cherry picked from commit d1b80ab)
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
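To illustrate the pitfall the commit describes, here is a minimal, self-contained sketch (hypothetical names, not the actual Spark classes) of how a `lazy val` loses its laziness once the enclosing constructor touches it:

```scala
// Sketch only: EagerishState stands in for SharedState, and the String stands in
// for the ExternalCatalog. Marking a field `lazy` buys nothing if the constructor
// body forces it anyway.
class EagerishState {
  lazy val externalCatalog: String = {
    // Imagine this connects to a metastore and may throw.
    println("connecting to metastore...")
    "catalog"
  }

  // Touching the lazy val here, in the constructor body, forces it as soon as
  // EagerishState is instantiated: exactly the non-lazy behaviour the patch removes.
  require(externalCatalog.nonEmpty)
}

object LazyValDemo extends App {
  new EagerishState() // prints "connecting to metastore..." immediately
}
```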
1 parent 25cc800 commit ae00d49

File tree: 1 file changed (+13, −13 lines)


sql/core/src/main/scala/org/apache/spark/sql/internal/SharedState.scala

Lines changed: 13 additions & 13 deletions
@@ -90,38 +90,38 @@ private[sql] class SharedState(val sparkContext: SparkContext) extends Logging {
   /**
    * A catalog that interacts with external systems.
    */
-  lazy val externalCatalog: ExternalCatalog =
-    SharedState.reflect[ExternalCatalog, SparkConf, Configuration](
+  lazy val externalCatalog: ExternalCatalog = {
+    val externalCatalog = SharedState.reflect[ExternalCatalog, SparkConf, Configuration](
       SharedState.externalCatalogClassName(sparkContext.conf),
       sparkContext.conf,
       sparkContext.hadoopConfiguration)
 
-  // Create the default database if it doesn't exist.
-  {
     val defaultDbDefinition = CatalogDatabase(
       SessionCatalog.DEFAULT_DATABASE,
       "default database",
       CatalogUtils.stringToURI(warehousePath),
       Map())
-    // Initialize default database if it doesn't exist
+    // Create default database if it doesn't exist
     if (!externalCatalog.databaseExists(SessionCatalog.DEFAULT_DATABASE)) {
       // There may be another Spark application creating default database at the same time, here we
       // set `ignoreIfExists = true` to avoid `DatabaseAlreadyExists` exception.
       externalCatalog.createDatabase(defaultDbDefinition, ignoreIfExists = true)
     }
-  }
 
-  // Make sure we propagate external catalog events to the spark listener bus
-  externalCatalog.addListener(new ExternalCatalogEventListener {
-    override def onEvent(event: ExternalCatalogEvent): Unit = {
-      sparkContext.listenerBus.post(event)
-    }
-  })
+    // Make sure we propagate external catalog events to the spark listener bus
+    externalCatalog.addListener(new ExternalCatalogEventListener {
+      override def onEvent(event: ExternalCatalogEvent): Unit = {
+        sparkContext.listenerBus.post(event)
+      }
+    })
+
+    externalCatalog
+  }
 
   /**
    * A manager for global temporary views.
    */
-  val globalTempViewManager: GlobalTempViewManager = {
+  lazy val globalTempViewManager: GlobalTempViewManager = {
     // System preserved database should not exists in metastore. However it's hard to guarantee it
     // for every session, because case-sensitivity differs. Here we always lowercase it to make our
     // life easier.
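For reference, the shape the patched `externalCatalog` takes can be sketched as follows; the `Catalog` and `Listener` types below are simplified stand-ins, not the real `ExternalCatalog` API:

```scala
// Hedged sketch of the pattern in the diff: construction, one-time setup, and
// listener wiring all happen inside the lazy val body, and the fully initialized
// value is returned as the final expression, so none of it runs until first use.
trait Listener {
  def onEvent(event: String): Unit
}

class Catalog {
  def databaseExists(name: String): Boolean = false
  def createDatabase(name: String, ignoreIfExists: Boolean): Unit =
    println(s"creating database $name")
  def addListener(listener: Listener): Unit =
    println("listener registered")
}

class LazySharedState {
  lazy val externalCatalog: Catalog = {
    val catalog = new Catalog // potentially expensive; may throw

    // One-time setup, deferred until the catalog is first accessed.
    if (!catalog.databaseExists("default")) {
      catalog.createDatabase("default", ignoreIfExists = true)
    }

    // Forward catalog events, mirroring the listener registration in the diff.
    catalog.addListener(new Listener {
      override def onEvent(event: String): Unit = println(s"forwarding $event")
    })

    catalog // returned only after it is fully set up
  }
}
```

The diff also turns `globalTempViewManager` from a `val` into a `lazy val`, so constructing `SharedState` no longer triggers either field; both are initialized only on first use.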

0 commit comments
