
Commit 2f38378

gatorsmile authored and marmbrus committed

[SPARK-11360][DOC] Loss of nullability when writing parquet files

This fix adds one line explaining the current behavior of Spark SQL when writing Parquet files: all columns are forced to be nullable for compatibility reasons.

Author: gatorsmile <gatorsmile@gmail.com>

Closes #9314 from gatorsmile/lossNull.

1 parent 9565c24 commit 2f38378

File tree

1 file changed (+2, -1)

docs/sql-programming-guide.md

Lines changed: 2 additions & 1 deletion
```diff
@@ -982,7 +982,8 @@ when a table is dropped.
 
 [Parquet](http://parquet.io) is a columnar format that is supported by many other data processing systems.
 Spark SQL provides support for both reading and writing Parquet files that automatically preserves the schema
-of the original data.
+of the original data. When writing Parquet files, all columns are automatically converted to be nullable for
+compatibility reasons.
 
 ### Loading Data Programmatically
 
```
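The behavior the added documentation line describes can be illustrated with a short Scala sketch. This is a hypothetical session, not part of the commit: it assumes a running `SparkSession` named `spark`, and the output path `/tmp/ids.parquet` is illustrative.

```scala
// Sketch, assuming an active SparkSession bound to `spark`.
import org.apache.spark.sql.Row
import org.apache.spark.sql.types._

// Build a DataFrame with an explicitly non-nullable column.
val schema = StructType(Seq(StructField("id", IntegerType, nullable = false)))
val df = spark.createDataFrame(
  spark.sparkContext.parallelize(Seq(Row(1), Row(2))),
  schema)

df.schema("id").nullable
// false: the in-memory schema keeps the declared nullability.

// Round-trip through Parquet: on write, Spark SQL forces all
// columns to nullable for compatibility reasons.
df.write.parquet("/tmp/ids.parquet")
spark.read.parquet("/tmp/ids.parquet").schema("id").nullable
// true: nullability was not preserved.
```

This is why the doc change matters: a schema round-tripped through Parquet is not byte-for-byte identical to the original, even though the data itself is preserved.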