feat(reader): cache Parquet metadata for when FileScanTasks read the same file #2100
Which issue does this PR close?
While running Spark/Iceberg with DataFusion Comet on a workload that generates ~80,000 `FileScanTask` objects passed into the `ArrowReader`, we see the majority of CPU time spent in `get_metadata` calls via `ArrowReader::create_parquet_record_batch_stream_builder`. This is a screenshot of the CPU time flame graph from one of the executors in this Spark job:

[CPU time flame graph screenshot from a Spark executor]

I suspect the `ArrowReader` is processing `FileScanTask`s for the same Parquet data files and fetching the same metadata repeatedly, burning CPU cycles on parsing and adding extra object store calls.

What changes are included in this PR?
- Added a `ParquetMetadataCache`, modeled after the behavior in `delete_filter.rs` (see the sketch after this list). The cache key is a composite of the file location and whether the page index was requested to be read, since serving a subsequent `true` request from an entry cached with `false` would yield improper results.
- `ArrowReader` now holds a metadata cache.
- `BasicDeleteFileLoader` now holds a metadata cache.

Are these changes tested?
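For illustration, here is a minimal sketch of a composite-keyed metadata cache. The method name `get_or_load`, the parameter `preload_page_index`, and the synchronous `Mutex`-based design are hypothetical, not the PR's actual API; the real reader loads metadata asynchronously from the object store.

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

// Stand-in for the parsed metadata type; a real cache would hold
// `parquet::file::metadata::ParquetMetaData`.
struct ParquetMetaData;

// Sketch of a cache keyed by (file location, page-index flag).
// Keying on location alone would be wrong: metadata fetched with
// `preload_page_index == false` lacks the page index, so a later
// request with `true` must not be served from that entry.
#[derive(Default)]
struct ParquetMetadataCache {
    entries: Mutex<HashMap<(String, bool), Arc<ParquetMetaData>>>,
}

impl ParquetMetadataCache {
    // Returns cached metadata, or loads and caches it. Holding the
    // lock across `load` is a simplification for the sketch.
    fn get_or_load<F>(
        &self,
        location: &str,
        preload_page_index: bool,
        load: F,
    ) -> Arc<ParquetMetaData>
    where
        F: FnOnce() -> ParquetMetaData,
    {
        let key = (location.to_string(), preload_page_index);
        let mut entries = self.entries.lock().unwrap();
        entries
            .entry(key)
            .or_insert_with(|| Arc::new(load()))
            .clone()
    }
}

fn main() {
    let cache = ParquetMetadataCache::default();
    // First call loads; the second call for the same (location, flag)
    // pair is served from the cache.
    let _m = cache.get_or_load("s3://bucket/data/f.parquet", false, || ParquetMetaData);
    let _m = cache.get_or_load("s3://bucket/data/f.parquet", false, || ParquetMetaData);
    // Requesting the page index uses a different key, forcing a fresh load.
    let _m = cache.get_or_load("s3://bucket/data/f.parquet", true, || ParquetMetaData);
}
```

The important property is that the page-index flag is part of the key, so a `true` request never reuses metadata that was parsed without the page index.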