[SPARK-25407][SQL] Ensure we pass a compatible pruned schema to ParquetRowConverter #22880

Conversation

@mallman mallman commented Oct 29, 2018

What changes were proposed in this pull request?

(Link to Jira issue: https://issues.apache.org/jira/browse/SPARK-25407)

As part of schema clipping in ParquetReadSupport.scala, fields that are requested in the Catalyst schema but missing from the Parquet file schema are added to the Parquet clipped schema. However, nested schema pruning requires that we ignore unrequested field data when reading from a Parquet file. Therefore, we pass two schemas to ParquetRecordMaterializer: the schema of the file data we want to read and the schema of the rows we want to return. The reader is responsible for reconciling the differences between the two.

Aside from checking whether schema pruning is enabled, there is an additional complication in constructing the Parquet requested schema: Spark's two Parquet readers reconcile the differences between the Parquet requested schema and the Catalyst requested schema in different ways. Spark's vectorized reader does not (currently) support reading Parquet files with complex types in their schema. Further, it assumes that the Parquet requested schema includes all fields requested in the Catalyst requested schema, and it includes logic in its read path to skip fields in the Parquet requested schema which are not present in the file.

Spark's parquet-mr based reader supports reading Parquet files of any kind of complex schema, and it supports nested schema pruning as well. Unlike the vectorized reader, the parquet-mr reader requires that the Parquet requested schema include only those fields present in the underlying Parquet file's schema. Therefore, in the case where we use the parquet-mr reader we intersect the Parquet clipped schema with the Parquet file's schema to construct the Parquet requested schema that's set in the ReadContext.
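For illustration, here is a minimal sketch of what such a recursive group intersection can look like, using the parquet-mr schema API. It is an approximation of the approach, not necessarily the PR's exact implementation:

import org.apache.parquet.schema.{GroupType, Type}
import scala.collection.JavaConverters._

// Keep only the fields of `left` that also appear (by name) in `right`,
// recursing into nested groups; a group with no surviving fields is dropped.
def intersectParquetGroups(left: GroupType, right: GroupType): Option[GroupType] = {
  val fields: Seq[Type] = left.getFields.asScala
    .filter(field => right.containsField(field.getName))
    .flatMap {
      case leftGroup: GroupType =>
        val rightField = right.getType(leftGroup.getName)
        if (rightField.isPrimitive) None
        else intersectParquetGroups(leftGroup, rightField.asGroupType)
      case leftPrimitive => Some(leftPrimitive)
    }
  if (fields.nonEmpty) Some(left.withNewFields(fields.asJava)) else None
}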

How was this patch tested?

A previously ignored test case which exercises the failure scenario this PR addresses has been enabled.

SparkQA commented Oct 29, 2018

Test build #98225 has finished for PR 22880 at commit e5e60ad.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

// parquet-mr reader requires that parquetRequestedSchema include only those fields present in
// the underlying parquetFileSchema. Therefore, in the case where we use the parquet-mr reader
// we intersect the parquetClippedSchema with the parquetFileSchema to construct the
// parquetRequestedSchema set in the ReadContext.
Member

For the vectorized reader, even if we do this additional intersectParquetGroups, will it cause any problem?

Contributor Author

@mallman mallman Oct 30, 2018

For the vectorized reader, even if we do this additional intersectParquetGroups, will it cause any problem?

Yes. The relevant passage is:

Further, [the vectorized reader] assumes that parquetRequestedSchema includes all fields
requested in catalystRequestedSchema. It includes logic in its read path to skip fields in
parquetRequestedSchema which are not present in the file.

If we break this assumption by giving the vectorized reader a Parquet requested schema which does not include all of the fields in the Catalyst requested schema, then it will fail with an exception. This scenario is covered by the tests. (Comment out the relevant code below and run the tests to see.)

Member

Ok. I see.

@@ -202,11 +204,15 @@ private[parquet] class ParquetRowConverter(

 override def start(): Unit = {
   var i = 0
-  while (i < currentRow.numFields) {
+  while (i < fieldConverters.length) {
     fieldConverters(i).updater.start()
     currentRow.setNullAt(i)
Member

Now fieldConverters(i) may not correspond to currentRow(i)?

Contributor Author

That is correct. Now that we're passing a Parquet schema that's a (non-strict) subset of the Catalyst schema, we cannot assume that their fields are in 1:1 correspondence.
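To make the correspondence concrete, here is a hypothetical sketch of how the field converters can be keyed by the Catalyst field's ordinal rather than the Parquet field's position (names follow the snippets quoted in this thread; this is illustrative, not the exact committed code):

// Map each Parquet field to the ordinal of the same-named Catalyst field, so
// each converter writes to the correct row cell even when the Parquet schema
// is a proper subset of the Catalyst schema.
private val fieldConverters =
  parquetType.getFields.asScala.map { parquetField =>
    val ordinal = catalystType.fieldIndex(parquetField.getName)
    val catalystField = catalystType(ordinal)
    newConverter(parquetField, catalystField.dataType, new RowUpdater(currentRow, ordinal))
  }.toArray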

Member

Yea, I think it should be fine to call setNullAt on a non-corresponding field, right?

Contributor Author

Yes. The following while loop

while (i < currentRow.numFields) {
  currentRow.setNullAt(i)
  i += 1
}

ensures that all remaining columns/fields in the current row are nulled out.

Contributor

@attilapiros attilapiros Oct 31, 2018

I am also lost here. The i index seems to follow the Parquet fields, so isn't updater.ordinal the correct index for updating currentRow?

I would expect something like:

val updater = fieldConverters(i).updater
updater.start()
currentRow.setNullAt(updater.ordinal)

Contributor Author

@viirya @attilapiros Hi guys. Does my explanation make sense? If so, do you want me to change the code as I suggested or leave it as-is in the current PR commit?

Contributor

I see. I think doing this separately is better, and you can rewrite it as a one-liner for the setNullAt part:

(0 until currentRow.numFields).foreach(currentRow.setNullAt)

Member

I'm fine with the current commit. It seems it can save some redundant iterations.

Contributor Author

Thank you both for your feedback.

It seems it can save some redundant iterations.

That was my motivation in writing the code this way. While the code is not as clear as it could be, it is very performance-critical.

I'm going to push a new commit keeping the current code but with a brief explanatory comment.

Contributor Author

I'm going to push a new commit keeping the current code but with a brief explanatory comment.

On further consideration, I believe that separating the calls to currentRow.setNullAt(i) into their own loop won't incur any significant performance degradation, if any at all.

The performance of the start() method is dominated by the calls to fieldConverters(i).updater.start() and currentRow.setNullAt(i). Putting the latter calls into their own loop won't change the count of those method calls, just the order. @viirya LMK if you disagree with my analysis.

I will push a new commit with separate while loops. I won't use the more elegant (0 until currentRow.numFields).foreach(currentRow.setNullAt) because that's not a loop, and I doubt either the Scala compiler or the HotSpot optimizer can turn it into one.
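For reference, a minimal sketch of the two-loop shape described above, assuming the surrounding ParquetRowConverter members fieldConverters and currentRow; illustrative, not necessarily the exact committed code:

override def start(): Unit = {
  // Start only the converters for fields actually present in the Parquet
  // requested schema; there may be fewer of these than Catalyst fields.
  var i = 0
  while (i < fieldConverters.length) {
    fieldConverters(i).updater.start()
    i += 1
  }
  // Null out every cell of the row, including any trailing Catalyst fields
  // that have no corresponding Parquet field.
  i = 0
  while (i < currentRow.numFields) {
    currentRow.setNullAt(i)
    i += 1
  }
}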

s"""Going to read the following fields from the Parquet file with the following schema:
|Parquet file schema:
|$fileSchema
|Parquet read schema:
Member

This might increase a lot of log data. Do we need to output fileSchema?

Contributor Author

This detailed, formatted information was very helpful in developing and debugging this patch. Perhaps this should be logged at the debug level instead? Even the original message does seem rather technical for info-level logging. What do you think?

Member

I think it is useful for debugging this patch, but it may not be useful for end users and will increase log size. Making it debug level sounds good to me. But let's wait for others' opinions too.

Member

Yea, we should maybe change this to debug level for them. I would additionally log them somewhere at debug level.
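As a concrete sketch, the debug-level variant discussed here could look like the following, assuming the enclosing class mixes in Spark's Logging trait and that fileSchema and a parquetRequestedSchema value are in scope (the latter name is an assumption):

// Emit the schema dump only when debug logging is enabled.
logDebug(
  s"""Going to read the following fields from the Parquet file with the following schema:
     |Parquet file schema:
     |$fileSchema
     |Parquet read schema:
     |$parquetRequestedSchema
   """.stripMargin)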

dbtsai commented Oct 30, 2018

I can confirm that this fixes https://issues.apache.org/jira/browse/SPARK-25879

cc @cloud-fan @gatorsmile @beettlle

Thanks.

SparkQA commented Oct 30, 2018

Test build #98278 has finished for PR 22880 at commit 6b19f57.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

mallman commented Nov 5, 2018

cc @HyukjinKwon

Would you like to review this PR? It's a bug fix.

Commit: …ParquetRowConverter.start() into their own loop for clarity
SparkQA commented Nov 6, 2018

Test build #98528 has finished for PR 22880 at commit 598d965.

  • This patch fails Spark unit tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

mallman commented Nov 7, 2018

Jenkins retest please.

mallman commented Nov 7, 2018

Can someone with Jenkins retest privileges please kick off a retest?

viirya commented Nov 7, 2018

retest this please.

SparkQA commented Nov 8, 2018

Test build #98572 has finished for PR 22880 at commit 598d965.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

mallman commented Nov 8, 2018

@gatorsmile How do you feel about merging this in? Anyone else I should ping for review?

@HyukjinKwon
Member

Let me take a look this weekend.

@HyukjinKwon
Member

Looks good. I or someone else should take a closer look before getting this in.

-  case ((parquetFieldType, catalystField), ordinal) =>
-    // Converted field value should be set to the `ordinal`-th cell of `currentRow`
-    newConverter(parquetFieldType, catalystField.dataType, new RowUpdater(currentRow, ordinal))
+  parquetType.getFields.asScala.map {
Member

also, nit: parquetType.getFields.asScala.map { parquetField => per https://github.com/databricks/scala-style-guide#pattern-matching

-  parquetType.getFieldCount == catalystType.length,
-  s"""Field counts of the Parquet schema and the Catalyst schema don't match:
+  parquetType.getFieldCount <= catalystType.length,
+  s"""Field count of the Parquet schema is greater than the field count of the Catalyst schema:
Member

@HyukjinKwon HyukjinKwon Nov 11, 2018

Can we assert this only when pruning is enabled? We could fix the condition to something like enabled && parquetType.getFieldCount <= catalystType.length || parquetType.getFieldCount == catalystType.length, for instance.

Contributor Author

Why do you ask? Is it for safety or clarity? My concern is about reducing complexity, but I'm not strictly against this.
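For concreteness, a sketch of that suggested guard with explicit parentheses; schemaPruningEnabled is a hypothetical flag name standing in for however the pruning setting would be threaded through:

// Relax the field-count check only when nested schema pruning is enabled;
// otherwise keep the original strict equality.
assert(
  (schemaPruningEnabled && parquetType.getFieldCount <= catalystType.length) ||
    parquetType.getFieldCount == catalystType.length,
  s"Field count of the Parquet schema (${parquetType.getFieldCount}) exceeds " +
    s"the field count of the Catalyst schema (${catalystType.length})")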

i += 1
}
i = 0
while (i < currentRow.numFields) {
Member

Can we loop once, with an if?

Contributor Author

Yes, but I think it's clearer this way. If @viirya has an opinion either way I'll take it as a "tie-breaker".

SparkQA commented Nov 13, 2018

Test build #98791 has finished for PR 22880 at commit 4dfd459.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

mallman commented Dec 5, 2018

Hi @dbtsai @HyukjinKwon @gatorsmile @viirya. Can we merge this to master?

@MaxGekk MaxGekk left a comment

LGTM

@gatorsmile
Member

ping @hvanhovell

@dongjoon-hyun
Member

Hi, @mallman. I can review and help with this PR. Could you rebase once more?

@dongjoon-hyun
Member

Okay. I'll take over this with @mallman's authorship in a new PR.

dongjoon-hyun commented Apr 6, 2019

Ping @mallman here, too. Since you are back, please rebase this one. I can help you here, as I mentioned. In the PR I made you are also the author, but I don't like creating that kind of PR.
