
Commit d861357

xuanyuanking authored and cloud-fan committed
[SPARK-26700][CORE][FOLLOWUP] Add config spark.network.maxRemoteBlockSizeFetchToMem
### What changes were proposed in this pull request?

Add the new config `spark.network.maxRemoteBlockSizeFetchToMem`, which falls back to the old config `spark.maxRemoteBlockSizeFetchToMem`.

### Why are the changes needed?

For naming consistency.

### Does this PR introduce any user-facing change?

No.

### How was this patch tested?

Existing tests.

Closes #27463 from xuanyuanking/SPARK-26700-follow.

Authored-by: Yuanjian Li <xyliyuanjian@gmail.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
1 parent: 368ee62

File tree: 3 files changed (+4, −3 lines)

core/src/main/scala/org/apache/spark/SparkConf.scala (2 additions, 1 deletion)

```diff
@@ -684,7 +684,8 @@ private[spark] object SparkConf extends Logging {
     "spark.yarn.jars" -> Seq(
       AlternateConfig("spark.yarn.jar", "2.0")),
     MAX_REMOTE_BLOCK_SIZE_FETCH_TO_MEM.key -> Seq(
-      AlternateConfig("spark.reducer.maxReqSizeShuffleToMem", "2.3")),
+      AlternateConfig("spark.reducer.maxReqSizeShuffleToMem", "2.3"),
+      AlternateConfig("spark.maxRemoteBlockSizeFetchToMem", "3.0")),
     LISTENER_BUS_EVENT_QUEUE_CAPACITY.key -> Seq(
       AlternateConfig("spark.scheduler.listenerbus.eventqueue.size", "2.3")),
     DRIVER_MEMORY_OVERHEAD.key -> Seq(
```
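To illustrate why registering an `AlternateConfig` is enough to keep the old key working, here is a hedged, self-contained sketch of the fallback lookup (a simplified model, not the actual Spark source; `AltConfigSketch` and its `get` helper are hypothetical names): when the current key is absent from the user's settings, each deprecated alternate key is tried in order.

```scala
// Simplified model of SparkConf's deprecated-key fallback (not Spark source).
object AltConfigSketch {
  // In Spark, AlternateConfig also carries deprecation metadata; here we
  // keep just the old key and the version it was deprecated in.
  case class AlternateConfig(key: String, version: String)

  // Mirrors the map entry this commit adds: the new key
  // spark.network.maxRemoteBlockSizeFetchToMem gains two alternates.
  val configsWithAlternatives: Map[String, Seq[AlternateConfig]] = Map(
    "spark.network.maxRemoteBlockSizeFetchToMem" -> Seq(
      AlternateConfig("spark.reducer.maxReqSizeShuffleToMem", "2.3"),
      AlternateConfig("spark.maxRemoteBlockSizeFetchToMem", "3.0")))

  // Return the value for `key`, falling back to deprecated alternates.
  def get(settings: Map[String, String], key: String): Option[String] =
    settings.get(key).orElse {
      configsWithAlternatives.getOrElse(key, Nil)
        .flatMap(alt => settings.get(alt.key))
        .headOption
    }

  def main(args: Array[String]): Unit = {
    // A user who still sets the pre-3.0 key is transparently honored.
    val userConf = Map("spark.maxRemoteBlockSizeFetchToMem" -> "200m")
    println(get(userConf, "spark.network.maxRemoteBlockSizeFetchToMem")
      .getOrElse("unset"))
  }
}
```

Because the fallback lives in one shared map, no call site that reads `MAX_REMOTE_BLOCK_SIZE_FETCH_TO_MEM` needs to change.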

core/src/main/scala/org/apache/spark/internal/config/package.scala (1 addition, 1 deletion)

```diff
@@ -895,7 +895,7 @@ package object config {
     .createWithDefault(Int.MaxValue)

   private[spark] val MAX_REMOTE_BLOCK_SIZE_FETCH_TO_MEM =
-    ConfigBuilder("spark.maxRemoteBlockSizeFetchToMem")
+    ConfigBuilder("spark.network.maxRemoteBlockSizeFetchToMem")
       .doc("Remote block will be fetched to disk when size of the block is above this threshold " +
         "in bytes. This is to avoid a giant request takes too much memory. Note this " +
         "configuration will affect both shuffle fetch and block manager remote block fetch. " +
```

docs/configuration.md (1 addition, 1 deletion)

```diff
@@ -1810,7 +1810,7 @@ Apart from these, the following properties are also available, and may be useful
   </td>
 </tr>
 <tr>
-  <td><code>spark.maxRemoteBlockSizeFetchToMem</code></td>
+  <td><code>spark.network.maxRemoteBlockSizeFetchToMem</code></td>
   <td>200m</td>
   <td>
     Remote block will be fetched to disk when size of the block is above this threshold
```
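The documented behavior of this config is a simple size test. Below is a hedged sketch of that decision (hypothetical `FetchDestinationSketch` object, not Spark's fetch code): a remote block larger than the threshold is streamed to disk instead of being buffered in memory, which is what protects the fetcher from a single giant request.

```scala
// Sketch of the fetch-to-disk decision described in docs/configuration.md
// (illustrative only; Spark's real fetch path is far more involved).
object FetchDestinationSketch {
  // Documented default for spark.network.maxRemoteBlockSizeFetchToMem: "200m".
  val maxRemoteBlockSizeFetchToMem: Long = 200L * 1024 * 1024

  // A block above the threshold goes to disk; otherwise it stays in memory.
  def destination(blockSizeBytes: Long): String =
    if (blockSizeBytes > maxRemoteBlockSizeFetchToMem) "disk" else "memory"

  def main(args: Array[String]): Unit = {
    println(destination(64L * 1024 * 1024))   // well under 200m
    println(destination(512L * 1024 * 1024))  // above 200m
  }
}
```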

0 commit comments
