
Avoid Buffering Arrow Data for Entire Row Group in parquet::ArrowWriter #3871

@tustvold

Description

Is your feature request related to a problem or challenge? Please describe what you are trying to do.

Currently ArrowWriter buffers up RecordBatches until it has enough rows to populate an entire row group, and then proceeds to write each column in turn to the output buffer.
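To make the buffering behaviour concrete, here is a minimal sketch in plain Rust. It is not the real ArrowWriter API; the type names, the `Vec<i64>` stand-in for a RecordBatch, and the flush simplification are all hypothetical. The point it illustrates: nothing is encoded until a full row group's worth of rows is held in memory.

```rust
/// Illustrative stand-in for ArrowWriter's buffering strategy. The real
/// writer holds RecordBatches; this sketch holds `Vec<i64>` "batches".
struct BufferingWriter {
    max_row_group_size: usize,
    buffered_rows: usize,
    buffered_batches: Vec<Vec<i64>>,
    flushed_row_groups: usize,
}

impl BufferingWriter {
    fn new(max_row_group_size: usize) -> Self {
        Self {
            max_row_group_size,
            buffered_rows: 0,
            buffered_batches: Vec::new(),
            flushed_row_groups: 0,
        }
    }

    /// Buffer a batch; encoding only happens once an entire row group's
    /// worth of rows has accumulated in memory.
    fn write(&mut self, batch: Vec<i64>) {
        self.buffered_rows += batch.len();
        self.buffered_batches.push(batch);
        if self.buffered_rows >= self.max_row_group_size {
            // The real writer would now encode each column in turn. Peak
            // memory is therefore a whole row group of *unencoded* arrow
            // data, which this issue proposes to avoid.
            self.buffered_batches.clear();
            self.buffered_rows = 0; // simplification: flush everything
            self.flushed_row_groups += 1;
        }
    }
}
```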

Describe the solution you'd like

The encoded parquet data is often orders of magnitude smaller than the corresponding arrow data. The read path goes to great lengths to allow incremental reading of data within a row group. It may therefore be desirable to instead encode arrow data eagerly, writing each ColumnChunk to its own temporary buffer, and then stitching these back together.

This would allow writing larger row groups, whilst potentially consuming less memory in the arrow writer.

This would likely involve extending, or possibly replacing, SerializedRowGroupWriter to allow writing to the same column multiple times.
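A rough sketch of the proposed shape, in plain Rust with byte vectors standing in for encoded ColumnChunks. The names and the trivial "encoding" (a byte copy) are hypothetical, not the parquet crate's API; the sketch only shows the buffering strategy: each batch is encoded into per-column temporary buffers as it arrives, and those buffers are stitched together in column order when the row group is finished.

```rust
/// Hypothetical eager writer: one temporary buffer per column holds the
/// already-encoded bytes (standing in for an encoded ColumnChunk), so only
/// encoded data - typically far smaller than the arrow data - stays in memory.
struct EagerColumnWriter {
    column_buffers: Vec<Vec<u8>>,
}

impl EagerColumnWriter {
    fn new(num_columns: usize) -> Self {
        Self {
            column_buffers: vec![Vec::new(); num_columns],
        }
    }

    /// Encode one batch eagerly, column by column. Real page encoding and
    /// compression are replaced by a plain byte copy for illustration.
    fn write_batch(&mut self, columns: &[Vec<u8>]) {
        for (buf, col) in self.column_buffers.iter_mut().zip(columns) {
            buf.extend_from_slice(col); // append to this column's chunk
        }
    }

    /// Stitch the per-column buffers into one contiguous row group,
    /// matching parquet's column-chunk-after-column-chunk layout.
    fn finish(self) -> Vec<u8> {
        self.column_buffers.into_iter().flatten().collect()
    }
}
```

Because only encoded bytes are retained, memory scales with the encoded size of the row group rather than the arrow size, at the cost of one temporary buffer per column that must be stitched back together on flush.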

Describe alternatives you've considered

We could not do this: parquet is inherently a read-optimised format, and write performance may therefore be less of a priority for many workloads.

Additional context

Labels

arrow (Changes to the arrow crate), enhancement (Any new improvement worthy of an entry in the changelog), parquet (Changes to the parquet crate)
