
Commit 2858eaa

BryanCutler authored and gatorsmile committed
[SPARK-22221][SQL][FOLLOWUP] Externalize spark.sql.execution.arrow.maxRecordsPerBatch
## What changes were proposed in this pull request?

This is a followup to #19575, which added a docs section on setting the max number of Arrow records per batch; this change externalizes the conf that was referenced in those docs.

## How was this patch tested?

N/A

Author: Bryan Cutler <cutlerb@gmail.com>

Closes #20423 from BryanCutler/arrow-user-doc-externalize-maxRecordsPerBatch-SPARK-22221.

(cherry picked from commit f235df6)
Signed-off-by: gatorsmile <gatorsmile@gmail.com>
1 parent 75131ee commit 2858eaa
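
Since the conf is now public, users can set it directly on a session. Below is a minimal sketch (not part of this commit) showing how one might cap the Arrow batch size; the object name, local master, and the value 5000 are illustrative assumptions — only the conf key comes from the patch:

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical example app; only the conf key is taken from this commit.
object ArrowBatchSizeExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("ArrowBatchSizeExample")
      .master("local[*]")
      .getOrCreate()

    // Limit each in-memory ArrowRecordBatch to 5000 records; per the conf's
    // doc string, zero or negative means no limit.
    spark.conf.set("spark.sql.execution.arrow.maxRecordsPerBatch", 5000L)

    spark.stop()
  }
}
```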

File tree

1 file changed: +0 -1 lines changed

  • sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala


sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala

Lines changed: 0 additions & 1 deletion
@@ -1051,7 +1051,6 @@ object SQLConf {
 
   val ARROW_EXECUTION_MAX_RECORDS_PER_BATCH =
     buildConf("spark.sql.execution.arrow.maxRecordsPerBatch")
-      .internal()
       .doc("When using Apache Arrow, limit the maximum number of records that can be written " +
         "to a single ArrowRecordBatch in memory. If set to zero or negative there is no limit.")
       .intConf
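
For intuition about what the conf controls (an illustrative sketch only, not Spark's actual Arrow writer code): the limit slices a stream of rows into batches of at most the configured size, and a zero or negative value disables the cap, as the doc string above describes.

```scala
// Standalone sketch of the batching rule from the conf's doc string;
// `maxRecordsPerBatch` here is a stand-in for the conf's value.
val maxRecordsPerBatch = 3
val rows = (1 to 10).iterator

val batches: Iterator[Seq[Int]] =
  if (maxRecordsPerBatch > 0) rows.grouped(maxRecordsPerBatch)
  else Iterator(rows.toSeq) // zero or negative: everything in one batch

batches.foreach(batch => println(batch.mkString(", ")))
// 1, 2, 3
// 4, 5, 6
// 7, 8, 9
// 10
```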
