[SPARK-26108][SQL] Support custom lineSep in CSV datasource #23080
Changes from all commits
a790bb3
7a47990
be2870f
a058a6f
7e3c026
486b090
a0fedbb
5f013f5
65786df
49b91ea
12022ad
bb8a13b
0869b81
1f5399f
918d163
c06899f
a4c4b67
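For context before the diff, here is a rough sketch of how the new option is meant to be used from the DataFrame API once this change lands. The separator value and paths below are illustrative only, not taken from this PR or its tests.

```scala
// Illustrative use of the CSV "lineSep" option added by this PR.
// The separator "\u0001" and the paths are hypothetical examples.
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").appName("lineSep-demo").getOrCreate()
import spark.implicits._

val df = Seq(("fred", 30), ("jill", 31)).toDF("name", "age")

// Write records terminated by a custom single character instead of "\n".
df.write.option("lineSep", "\u0001").csv("/tmp/csv-custom-sep")

// Read them back. Per the require() in CSVOptions below, the separator
// must be exactly one character on this path.
val readBack = spark.read
  .schema("name STRING, age INT")
  .option("lineSep", "\u0001")
  .csv("/tmp/csv-custom-sep")

readBack.show()
```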
Changes to CSVOptions.scala:
@@ -192,6 +192,20 @@ class CSVOptions(
    */
   val emptyValueInWrite = emptyValue.getOrElse("\"\"")
 
+  /**
+   * A string between two consecutive CSV records.
+   */
+  val lineSeparator: Option[String] = parameters.get("lineSep").map { sep =>
+    require(sep.nonEmpty, "'lineSep' cannot be an empty string.")
+    require(sep.length == 1, "'lineSep' can contain only 1 character.")
Review comment (on the sep.length == 1 check):
I currently have a project where we are importing Windows-newline (CRLF) CSV files. I backported these changes but ran into an issue with this check, because to properly parse the Windows CSV files I must be able to set "\r\n" as lineSep in the settings. The reason this require was added appears to no longer apply, since asReaderSettings/asWriterSettings never call that function anymore. After removing the assert I am able to import the Windows-newline CSV files into DataFrames properly. Before that, the very last column would always get a "\r" appended to its name, so "TEXT" would become "TEXT\r", and we could no longer query the TEXT column. Setting lineSep to "\r\n" solved that issue as well.

Reply:
You don't need to set …

Reply:
I am setting multiLine = "true". The problem I am having is that the last column name in the CSV header gets a \r appended to it. So with input like

name,age,text\r\nfred,30,"likes\r\npie,cookies,milk"\njill,30,"likes\ncake,cookies,milk"\r\n

I was getting a schema with StringType("NAME") … Could it be the mixed use of \r\n and \n, so that it only wants to use \n for newlines? Another issue is that the lineSep value is controlled upstream by a separate configuration provided by users who have no knowledge of Spark but know how they formatted their CSV files, and without some re-architecture it is not possible to detect that this setting is "\r\n" and then pass None to CSVOptions instead. lineSeparator.foreach(format.setLineSeparator) already handles 1 to 2 characters, so I figured this is a safe thing to support for the lineSep configuration, no?

Reply:
For multiLine = true, we have fixed the automatic line-separator detection feature in CSV (see #22503). That will do the job.

Reply:
That is taken care of in this by the following line that I backported, no? I am still having the issue that Univocity keeps a \r in the column name with multiLine set to true and lineSeparatorInRead unset. The only way I seem to be able to get Spark to not put a \r in the column name is to explicitly set the lineSep option to the two characters "\r\n". I'm wondering if this is just some really pedantic CSV file I'm working with? It is a CSV exported upstream by the Python pandas.to_csv function with no extra arguments set.

Reply:
Would you be able to file a JIRA after testing against the master branch, if the issue persists?
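A minimal sketch of the scenario described in this thread, assuming a hypothetical CRLF-terminated file exported by pandas.to_csv, and assuming the one-character restriction on lineSep has been lifted as in the commenter's backport:

```scala
// Hypothetical reproduction of the CRLF scenario discussed above.
// Assumes the one-character check on lineSep has been removed (backport).
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").appName("crlf-read").getOrCreate()

val df = spark.read
  .option("header", "true")
  .option("multiLine", "true")     // quoted fields may contain newlines
  .option("lineSep", "\r\n")       // explicit CRLF record terminator
  .csv("/data/windows_export.csv") // hypothetical path

// Without the explicit separator, auto-detection can settle on "\n" and the
// last header column comes back as "text\r" instead of "text".
df.printSchema()
```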
+    sep
+  }
+
+  val lineSeparatorInRead: Option[Array[Byte]] = lineSeparator.map { lineSep =>
+    lineSep.getBytes(charset)
+  }
+  val lineSeparatorInWrite: Option[String] = lineSeparator
+
   def asWriterSettings: CsvWriterSettings = {
     val writerSettings = new CsvWriterSettings()
     val format = writerSettings.getFormat
@@ -200,6 +214,8 @@ class CSVOptions(
     format.setQuoteEscape(escape)
     charToEscapeQuoteEscaping.foreach(format.setCharToEscapeQuoteEscaping)
     format.setComment(comment)
+    lineSeparatorInWrite.foreach(format.setLineSeparator)
+
     writerSettings.setIgnoreLeadingWhitespaces(ignoreLeadingWhiteSpaceFlagInWrite)
     writerSettings.setIgnoreTrailingWhitespaces(ignoreTrailingWhiteSpaceFlagInWrite)
     writerSettings.setNullValue(nullValue)
@@ -216,8 +232,10 @@ class CSVOptions(
     format.setDelimiter(delimiter)
     format.setQuote(quote)
     format.setQuoteEscape(escape)
+    lineSeparator.foreach(format.setLineSeparator)
     charToEscapeQuoteEscaping.foreach(format.setCharToEscapeQuoteEscaping)
     format.setComment(comment)
+
     settings.setIgnoreLeadingWhitespaces(ignoreLeadingWhiteSpaceInRead)
     settings.setIgnoreTrailingWhitespaces(ignoreTrailingWhiteSpaceInRead)
     settings.setReadInputOnSeparateThread(false)
@@ -227,7 +245,10 @@ class CSVOptions(
     settings.setEmptyValue(emptyValueInRead)
     settings.setMaxCharsPerColumn(maxCharsPerColumn)
     settings.setUnescapedQuoteHandling(UnescapedQuoteHandling.STOP_AT_DELIMITER)
-    settings.setLineSeparatorDetectionEnabled(multiLine == true)
+    settings.setLineSeparatorDetectionEnabled(lineSeparatorInRead.isEmpty && multiLine)
+    lineSeparatorInRead.foreach { _ =>
Review comment (on the lineSeparatorInRead.foreach line):
nice!
+      settings.setNormalizeLineEndingsWithinQuotes(!multiLine)
+    }
 
     settings
   }
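To make the parser-settings change above concrete, here is a stand-alone Univocity sketch (not Spark code) of the same configuration: an explicit separator turns auto-detection off, and line-ending normalization inside quotes follows the multiLine flag. The separator value and sample input are illustrative only.

```scala
// Stand-alone Univocity sketch mirroring the asParserSettings logic above.
// "\r\n" is accepted by Univocity itself; the one-character limit is Spark's.
import java.io.StringReader
import com.univocity.parsers.csv.{CsvParser, CsvParserSettings}

val multiLine = false
val explicitSep: Option[String] = Some("\r\n")

val settings = new CsvParserSettings()
explicitSep.foreach(settings.getFormat.setLineSeparator)
// Mirrors setLineSeparatorDetectionEnabled(lineSeparatorInRead.isEmpty && multiLine)
settings.setLineSeparatorDetectionEnabled(explicitSep.isEmpty && multiLine)
// Mirrors setNormalizeLineEndingsWithinQuotes(!multiLine), applied only when a separator is given
explicitSep.foreach(_ => settings.setNormalizeLineEndingsWithinQuotes(!multiLine))

val parser = new CsvParser(settings)
val rows = parser.parseAll(new StringReader("name,age\r\nfred,30\r\n"))
rows.forEach(r => println(r.mkString("|")))  // prints "name|age" then "fred|30"
```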
Review comment:
Not sure if I'm missing something, but has this removed the ability to use \r\n?

Reply:
Spark never supported \r\n in the writing path.

Reply:
Revisiting this since I'd like to get rid of a local patch. Why do you say it doesn't support this? Reverting to the two-character restriction works in my testing, on both the read and write paths and using arbitrary two-character delimiters.

Reply:
Sorry for the extra comments: I hadn't read deeply enough. So the problem is Univocity's normalizedNewLine handling? It fails in the multiLine cases? That's what I'm seeing in the tests, and it would explain why I don't see it in my use cases. If that's the case, I'm wondering if it's okay to allow two characters for the non-multiLine cases?

Reply:
Please file a JIRA and go ahead if you can.

Reply:
ack
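On the write-path question, a small stand-alone Univocity sketch (again, not Spark code) suggests that the underlying writer itself accepts a two-character separator such as \r\n; the one-character limit discussed in this PR is enforced in CSVOptions, not by Univocity. The values below are illustrative.

```scala
// Stand-alone Univocity writer sketch: the writer accepts a 1- or
// 2-character line separator; "\r\n" here is an illustrative choice.
import java.io.StringWriter
import com.univocity.parsers.csv.{CsvWriter, CsvWriterSettings}

val settings = new CsvWriterSettings()
settings.getFormat.setLineSeparator("\r\n") // CRLF record terminator

val out = new StringWriter()
val writer = new CsvWriter(out, settings)
writer.writeRow("name", "age")
writer.writeRow("fred", "30")
writer.close()

// Rows in the output are terminated by "\r\n".
print(out.toString)
```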