
Commit 25ee047

srowen authored and dongjoon-hyun committed
[SPARK-26936][MINOR][FOLLOWUP] Don't need the JobConf anymore, it seems
## What changes were proposed in this pull request?

On a second look at the comments, it seems the JobConf isn't needed here anymore. It was used inconsistently before, and I don't see any reason a Hadoop Job config is required here anyway.

## How was this patch tested?

Existing tests.

Closes #24491 from srowen/SPARK-26936.2.

Authored-by: Sean Owen <sean.owen@databricks.com>
Signed-off-by: Dongjoon Hyun <dhyun@apple.com>
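The reasoning behind the change, in code form: `org.apache.hadoop.mapred.JobConf` extends `org.apache.hadoop.conf.Configuration`, and `FileSystem.getLocal` accepts any `Configuration`, so wrapping the session's Hadoop conf in a `JobConf` added nothing. A minimal sketch of the equivalence (assuming only hadoop-common on the classpath; the object name `JobConfEquivalence` is illustrative, not part of the commit):

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, LocalFileSystem, Path}
import org.apache.hadoop.mapred.JobConf

object JobConfEquivalence {
  def main(args: Array[String]): Unit = {
    val hadoopConf = new Configuration()

    // Before: the Configuration was wrapped in a JobConf first.
    // JobConf extends Configuration, so the wrapper adds nothing here.
    val viaJobConf: LocalFileSystem = FileSystem.getLocal(new JobConf(hadoopConf))

    // After: the plain Configuration is passed directly.
    val viaConf: LocalFileSystem = FileSystem.getLocal(hadoopConf)

    // Both resolve to the same qualified local path.
    val target = new Path("/tmp/example")
    println(viaJobConf.makeQualified(target) == viaConf.makeQualified(target)) // true
  }
}
```

Either call resolves the same local file system, which is why the change is behavior-preserving.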
1 parent 7432e7d commit 25ee047

File tree

1 file changed (+1, -3 lines)

sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/InsertIntoHiveDirCommand.scala

Lines changed: 1 addition & 3 deletions
@@ -22,7 +22,6 @@ import org.apache.hadoop.hive.common.FileUtils
 import org.apache.hadoop.hive.ql.plan.TableDesc
 import org.apache.hadoop.hive.serde.serdeConstants
 import org.apache.hadoop.hive.serde2.`lazy`.LazySimpleSerDe
-import org.apache.hadoop.mapred._
 
 import org.apache.spark.SparkException
 import org.apache.spark.sql.{Row, SparkSession}
@@ -80,13 +79,12 @@ case class InsertIntoHiveDirCommand(
     )
 
     val hadoopConf = sparkSession.sessionState.newHadoopConf()
-    val jobConf = new JobConf(hadoopConf)
 
     val targetPath = new Path(storage.locationUri.get)
     val qualifiedPath = FileUtils.makeQualified(targetPath, hadoopConf)
     val (writeToPath: Path, fs: FileSystem) =
       if (isLocal) {
-        val localFileSystem = FileSystem.getLocal(jobConf)
+        val localFileSystem = FileSystem.getLocal(hadoopConf)
         (localFileSystem.makeQualified(targetPath), localFileSystem)
       } else {
         val dfs = qualifiedPath.getFileSystem(hadoopConf)
