
Commit ebd2fd7

iRakson authored and srowen committed
[SPARK-30415][SQL] Improve Readability of SQLConf Doc
### What changes were proposed in this pull request?

SQLConf doc updated.

### Why are the changes needed?

Some doc strings were not written properly: a trailing space was missing in many multi-line string concatenations, so adjacent words ran together in the rendered documentation. This patch fixes those doc strings.

### Does this PR introduce any user-facing change?

No.

### How was this patch tested?

Documentation update.

Closes #27091 from iRakson/SQLConfDoc.

Authored-by: root1 <raksonrakesh@gmail.com>
Signed-off-by: Sean Owen <srowen@gmail.com>
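The underlying pitfall is easy to reproduce: Scala's `+` concatenates string literals verbatim, so a doc string split across source lines needs an explicit trailing space on each fragment. A minimal standalone sketch (not Spark code, just an illustration of the bug class this patch fixes):

```scala
// Without a trailing space, the two fragments run together.
val broken = "the new version" +
  "shuffle fetch protocol."
// => "the new versionshuffle fetch protocol."

// With the trailing space the rendered doc reads correctly.
val fixed = "the new version " +
  "shuffle fetch protocol."
// => "the new version shuffle fetch protocol."
```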
1 parent c42fbc7 commit ebd2fd7

File tree

1 file changed: +13 −12 lines

  • sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala


sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala

Lines changed: 13 additions & 12 deletions
@@ -408,7 +408,7 @@ object SQLConf {
       "reduce IO and improve performance. Note, multiple continuous blocks exist in single " +
       s"fetch request only happen when '${ADAPTIVE_EXECUTION_ENABLED.key}' and " +
       s"'${REDUCE_POST_SHUFFLE_PARTITIONS_ENABLED.key}' is enabled, this feature also depends " +
-      "on a relocatable serializer, the concatenation support codec in use and the new version" +
+      "on a relocatable serializer, the concatenation support codec in use and the new version " +
       "shuffle fetch protocol.")
     .booleanConf
     .createWithDefault(true)
@@ -557,7 +557,7 @@ object SQLConf {
   val PARQUET_INT64_AS_TIMESTAMP_MILLIS = buildConf("spark.sql.parquet.int64AsTimestampMillis")
     .doc(s"(Deprecated since Spark 2.3, please set ${PARQUET_OUTPUT_TIMESTAMP_TYPE.key}.) " +
       "When true, timestamp values will be stored as INT64 with TIMESTAMP_MILLIS as the " +
-      "extended type. In this mode, the microsecond portion of the timestamp value will be" +
+      "extended type. In this mode, the microsecond portion of the timestamp value will be " +
       "truncated.")
     .booleanConf
     .createWithDefault(false)
@@ -638,8 +638,9 @@ object SQLConf {
   val PARQUET_OUTPUT_COMMITTER_CLASS = buildConf("spark.sql.parquet.output.committer.class")
     .doc("The output committer class used by Parquet. The specified class needs to be a " +
       "subclass of org.apache.hadoop.mapreduce.OutputCommitter. Typically, it's also a subclass " +
-      "of org.apache.parquet.hadoop.ParquetOutputCommitter. If it is not, then metadata summaries" +
-      "will never be created, irrespective of the value of parquet.summary.metadata.level")
+      "of org.apache.parquet.hadoop.ParquetOutputCommitter. If it is not, then metadata " +
+      "summaries will never be created, irrespective of the value of " +
+      "parquet.summary.metadata.level")
     .internal()
     .stringConf
     .createWithDefault("org.apache.parquet.hadoop.ParquetOutputCommitter")
@@ -676,7 +677,7 @@ object SQLConf {
     .createWithDefault("snappy")

   val ORC_IMPLEMENTATION = buildConf("spark.sql.orc.impl")
-    .doc("When native, use the native version of ORC support instead of the ORC library in Hive." +
+    .doc("When native, use the native version of ORC support instead of the ORC library in Hive. " +
       "It is 'hive' by default prior to Spark 2.4.")
     .internal()
     .stringConf
@@ -1225,8 +1226,8 @@ object SQLConf {
     buildConf("spark.sql.streaming.multipleWatermarkPolicy")
       .doc("Policy to calculate the global watermark value when there are multiple watermark " +
         "operators in a streaming query. The default value is 'min' which chooses " +
-        "the minimum watermark reported across multiple operators. Other alternative value is" +
-        "'max' which chooses the maximum across multiple operators." +
+        "the minimum watermark reported across multiple operators. Other alternative value is " +
+        "'max' which chooses the maximum across multiple operators. " +
         "Note: This configuration cannot be changed between query restarts from the same " +
         "checkpoint location.")
       .stringConf
@@ -1381,7 +1382,7 @@ object SQLConf {
     buildConf("spark.sql.statistics.parallelFileListingInStatsComputation.enabled")
       .internal()
       .doc("When true, SQL commands use parallel file listing, " +
-        "as opposed to single thread listing." +
+        "as opposed to single thread listing. " +
         "This usually speeds up commands that need to list many directories.")
       .booleanConf
       .createWithDefault(true)
@@ -1702,21 +1703,21 @@ object SQLConf {

   val CONCAT_BINARY_AS_STRING = buildConf("spark.sql.function.concatBinaryAsString")
     .doc("When this option is set to false and all inputs are binary, `functions.concat` returns " +
-      "an output as binary. Otherwise, it returns as a string. ")
+      "an output as binary. Otherwise, it returns as a string.")
     .booleanConf
     .createWithDefault(false)

   val ELT_OUTPUT_AS_STRING = buildConf("spark.sql.function.eltOutputAsString")
     .doc("When this option is set to false and all inputs are binary, `elt` returns " +
-      "an output as binary. Otherwise, it returns as a string. ")
+      "an output as binary. Otherwise, it returns as a string.")
     .booleanConf
     .createWithDefault(false)

   val VALIDATE_PARTITION_COLUMNS =
     buildConf("spark.sql.sources.validatePartitionColumns")
       .internal()
       .doc("When this option is set to true, partition column values will be validated with " +
-        "user-specified schema. If the validation fails, a runtime exception is thrown." +
+        "user-specified schema. If the validation fails, a runtime exception is thrown. " +
         "When this option is set to false, the partition column value will be converted to null " +
         "if it can not be casted to corresponding user-specified schema.")
       .booleanConf
@@ -2129,7 +2130,7 @@ object SQLConf {
     buildConf("spark.sql.legacy.fromDayTimeString.enabled")
       .internal()
       .doc("When true, the `from` bound is not taken into account in conversion of " +
-        "a day-time string to an interval, and the `to` bound is used to skip" +
+        "a day-time string to an interval, and the `to` bound is used to skip " +
         "all interval units out of the specified range. If it is set to `false`, " +
         "`ParseException` is thrown if the input does not match to the pattern " +
         "defined by `from` and `to`.")
