
Commit 21a37a7

nchammas authored and MaxGekk committed
[SPARK-50814][DOCS] Remove unused SQL error pages
### What changes were proposed in this pull request?

Remove standalone SQL error pages that were made obsolete by the work completed in #44971.

Also fix the formatting of the error message for `QUERY_ONLY_CORRUPT_RECORD_COLUMN`, since it was incorrect and overflowing the table cell it belongs to.

### Why are the changes needed?

These error pages are either already captured completely in `common/utils/src/main/resources/error/error-conditions.json`, or are obsolete and not needed (and are not being rendered in the documentation output anyway).

The formatting of `QUERY_ONLY_CORRUPT_RECORD_COLUMN` before and after:

<img src="https://github.com/user-attachments/assets/476c57e0-dfa5-403e-8a7d-2d05301eb7a3" width=650 />

<img src="https://github.com/user-attachments/assets/106d5bca-6569-488c-9b9c-1a27345fc7a8" width=450 />

### Does this PR introduce _any_ user-facing change?

Yes, documentation formatting.

### How was this patch tested?

Built the docs locally and reviewed them in my browser.

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes #49486 from nchammas/SPARK-50814-unused-error-docs.

Authored-by: Nicholas Chammas <nicholas.chammas@gmail.com>
Signed-off-by: Max Gekk <max.gekk@gmail.com>
1 parent 47d831e · commit 21a37a7

14 files changed: +5 −599 lines changed

common/utils/src/main/resources/error/error-conditions.json

Lines changed: 5 additions & 5 deletions
@@ -5485,12 +5485,12 @@
     "message" : [
       "Queries from raw JSON/CSV/XML files are disallowed when the",
       "referenced columns only include the internal corrupt record column",
-      "(named _corrupt_record by default). For example:",
-      "spark.read.schema(schema).json(file).filter($\"_corrupt_record\".isNotNull).count()",
-      "and spark.read.schema(schema).json(file).select(\"_corrupt_record\").show().",
+      "(named `_corrupt_record` by default). For example:",
+      "`spark.read.schema(schema).json(file).filter($\"_corrupt_record\".isNotNull).count()`",
+      "and `spark.read.schema(schema).json(file).select(\"_corrupt_record\").show()`.",
       "Instead, you can cache or save the parsed results and then send the same query.",
-      "For example, val df = spark.read.schema(schema).json(file).cache() and then",
-      "df.filter($\"_corrupt_record\".isNotNull).count()."
+      "For example, `val df = spark.read.schema(schema).json(file).cache()` and then",
+      "`df.filter($\"_corrupt_record\".isNotNull).count()`."
     ]
   },
   "REMOVE_NAMESPACE_COMMENT" : {

docs/sql-error-conditions-codec-not-available-error-class.md (deleted, 41 lines)

docs/sql-error-conditions-collation-mismatch-error-class.md (deleted, 41 lines)

docs/sql-error-conditions-failed-read-file-error-class.md (deleted, 52 lines)

docs/sql-error-conditions-illegal-state-store-value-error-class.md (deleted, 41 lines)

docs/sql-error-conditions-invalid-aggregate-filter-error-class.md (deleted, 49 lines)

docs/sql-error-conditions-invalid-conf-value-error-class.md (deleted, 41 lines)

docs/sql-error-conditions-invalid-datetime-pattern-error-class.md (deleted, 41 lines)

docs/sql-error-conditions-invalid-delimiter-value-error-class.md (deleted, 49 lines)
