
perf: Use mmap-ed memory if possible in Parquet reader #17725

Merged · 1 commit · Jul 19, 2024

Conversation

@coastalwhite (Collaborator) commented Jul 19, 2024

This resolves a discussion from #17712.

This seems to perform worse with madvise or without prefetching.

The current implementation prefetches into the L2 cache. This gives roughly 5% better performance multi-threaded and roughly 10% better performance single-threaded. All testing was done on cold file reads; warm file reads also seem faster, but the results are noisier.
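As a rough illustration, prefetching an mmap-ed range into L2 on x86_64 could look like the sketch below (the page-stride loop, 4 KiB page size, and `_MM_HINT_T1` are assumptions, not necessarily the exact code in this PR):

```rust
/// Sketch: hint the CPU to pull a byte range into the L2 cache.
#[cfg(target_arch = "x86_64")]
pub fn prefetch_l2(data: &[u8]) {
    use std::arch::x86_64::{_mm_prefetch, _MM_HINT_T1};
    const PAGE_SIZE: usize = 4096; // assumed page size
    for chunk in data.chunks(PAGE_SIZE) {
        // A prefetch is only a hint: it cannot fault, even on
        // not-yet-resident mmap pages (see the review discussion below).
        unsafe { _mm_prefetch::<_MM_HINT_T1>(chunk.as_ptr() as *const i8) };
    }
}
```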

Multi-threaded:

```
Benchmark 1: ./plparbench-before
  Time (mean ± σ):      6.049 s ±  0.031 s    [User: 5.813 s, System: 5.811 s]
  Range (min … max):    6.013 s …  6.086 s    5 runs

Benchmark 2: ./plparbench-after
  Time (mean ± σ):      5.761 s ±  0.020 s    [User: 5.083 s, System: 5.792 s]
  Range (min … max):    5.735 s …  5.788 s    5 runs

Summary
  ./plparbench-after ran
    1.05 ± 0.01 times faster than ./plparbench-before
```

Single-threaded:

```
Benchmark 1: ./plparbench-before
  Time (mean ± σ):     13.601 s ±  0.184 s    [User: 5.295 s, System: 5.206 s]
  Range (min … max):   13.447 s … 13.858 s    5 runs

Benchmark 2: ./plparbench-after
  Time (mean ± σ):     12.398 s ±  0.152 s    [User: 4.862 s, System: 5.134 s]
  Range (min … max):   12.276 s … 12.664 s    5 runs

Summary
  ./plparbench-after ran
    1.10 ± 0.02 times faster than ./plparbench-before
```

@ruihe774 does this look okay to you, or is there something you would do differently here?

@github-actions bot added the performance, python, and rust labels Jul 19, 2024
codecov bot commented Jul 19, 2024

Codecov Report

Attention: Patch coverage is 79.12621% with 43 lines in your changes missing coverage. Please review.

Project coverage is 80.44%. Comparing base (f70b7f9) to head (adb7b7e).
Report is 5 commits behind head on main.

| Files | Patch % | Lines |
|---|---|---|
| crates/polars-utils/src/mmap.rs | 75.47% | 39 Missing ⚠️ |
| crates/polars-parquet/src/parquet/mod.rs | 75.00% | 3 Missing ⚠️ |
| crates/polars-io/src/mmap.rs | 83.33% | 1 Missing ⚠️ |
Additional details and impacted files
```
@@            Coverage Diff             @@
##             main   #17725      +/-   ##
==========================================
+ Coverage   80.38%   80.44%   +0.05%     
==========================================
  Files        1501     1502       +1     
  Lines      196772   196954     +182     
  Branches     2793     2794       +1     
==========================================
+ Hits       158172   158435     +263     
+ Misses      38087    38005      -82     
- Partials      513      514       +1     
```


@ritchie46 merged commit b331538 into pola-rs:main Jul 19, 2024
21 checks passed

```rust
/// Attempt to prefetch the memory belonging to this [`MemSlice`]
#[inline]
pub fn prefetch(&self) {
```
Contributor commented:

I really doubt whether this can work as expected.

_mm_prefetch prefetches data from RAM into the cache. If the data is not yet resident in RAM (as with untouched mmap-ed pages), it is a no-op. For mmap, I think madvise should be used instead.
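For reference, the madvise route would look roughly like this (a sketch using the libc crate; `addr` must be page-aligned and inside an existing mapping):

```rust
// Sketch: ask the kernel to start read-ahead for a mapped range so the
// pages are resident before the parser touches them.
unsafe fn advise_will_need(addr: *mut libc::c_void, len: usize) {
    // MADV_WILLNEED is page-granular and schedules asynchronous I/O,
    // unlike _mm_prefetch, which only pulls already-resident data into cache.
    let ret = libc::madvise(addr, len, libc::MADV_WILLNEED);
    debug_assert_eq!(ret, 0, "madvise(MADV_WILLNEED) failed");
}
```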

@coastalwhite (Collaborator Author) commented:

I tried madvise, no-op reading, and prefetching. Prefetching was the only one that was faster than the memcpy approach.

Contributor commented:

I did not notice a difference between no-op and prefetch_l2. May I ask how you benched it?

@coastalwhite (Collaborator Author) commented Jul 20, 2024:

For this, I think cold performance on large files matters most, so I am running drop_caches before every run, on an 11 GB dataset I got from someone.

```
hyperfine --warmup 2 \
    -p 'sync; echo 3 | sudo tee /proc/sys/vm/drop_caches' './read-parquet-before' \
    -p 'sync; echo 3 | sudo tee /proc/sys/vm/drop_caches' './read-parquet-after'  \
    --runs 5
```

I have done the same while preparing with pcu-fadvise, but it seemed to produce the same benchmark results.

Warm performance does not really matter here.

@ruihe774 (Contributor) commented Jul 20, 2024:

I wonder whether the input parquet is compressed. I believe the mmap code path is only used when reading uncompressed parquet.

For mmap, I think madvise (Mmap::advise) should be used instead of fadvise. Here is the code I used:

```rust
pub fn prefetch(&self) {
    if self.len() == 0 {
        return;
    }

    // Only the mmap-backed variant needs kernel read-ahead.
    if let MemSliceInner::Mmap(MmapSlice { ref mmap, ptr, len }) = self.0 {
        // Translate the slice pointer back into an offset within the mapping.
        let offset = ptr as usize - mmap.as_ptr() as usize;
        mmap.advise_range(Advice::WillNeed, offset, len).unwrap();
    }
}
```

On my machine, with a 9 GB uncompressed parquet, there is nearly no difference between no-op and either prefetching method.

```
no-op:       13972 ms
prefetch_l2: 14084 ms
madvise:     13967 ms
```

I think it is because not all of the file content is used when parsing Parquet, so aggressive pre-reading or prefetching is not worthwhile.

BTW, I wonder why you perform prefetching page by page. IMO it will prefetch too much. And _mm_prefetch does not prefetch a whole page, only a few cache lines. As stated in the Rust and intrinsics docs, it does not trigger page faults either.
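For contrast, prefetching at cache-line granularity would look more like this (a sketch; 64-byte lines assumed):

```rust
// Sketch: issue one prefetch hint per cache line instead of one per page.
#[cfg(target_arch = "x86_64")]
fn prefetch_all_lines(data: &[u8]) {
    use std::arch::x86_64::{_mm_prefetch, _MM_HINT_T1};
    const CACHE_LINE: usize = 64; // assumed line size
    for offset in (0..data.len()).step_by(CACHE_LINE) {
        // Each hint covers a single line; a page-stride loop leaves
        // the rest of the page's lines untouched.
        unsafe { _mm_prefetch::<_MM_HINT_T1>(data.as_ptr().add(offset) as *const i8) };
    }
}
```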

@coastalwhite (Collaborator Author) commented Jul 20, 2024:

> I wonder whether the input parquet is compressed. I believe the mmap code path is only used when reading uncompressed parquet.

I don't think this is true. MmapSlice is used for both uncompressed and compressed pages at the moment.

> For mmap, I think madvise (Mmap::advise) should be used instead of fadvise.

I meant pcu-fadvise as an alternative to drop_caches, as a file-eviction tool.

> I think it is because not all of the file content is used when parsing Parquet, so aggressive pre-reading or prefetching is not worthwhile.

At the moment, we prefetch per ColumnChunk. All of that data should be used unless you request a limited number of rows.

But I agree with you. This seems like faulty benchmarking on my side; I will investigate further later. The mmap approach, even without prefetching, seems to have some performance benefit over memcpying mmap-ed data to the heap.
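To illustrate the two strategies being compared (a sketch assuming memmap2, not the actual reader code):

```rust
use memmap2::Mmap;
use std::ops::Range;

// mmap path: borrow bytes straight out of the mapping; no allocation,
// pages fault in lazily as the parser touches them.
fn zero_copy(mmap: &Mmap, range: Range<usize>) -> &[u8] {
    &mmap[range]
}

// memcpy path: copy the mapped bytes into a fresh heap buffer first.
fn copy_to_heap(mmap: &Mmap, range: Range<usize>) -> Vec<u8> {
    mmap[range].to_vec()
}
```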

Contributor commented:

> I also want to test Mmap::lock.

It prevents memory from being paged out to swap, and the amount of memory a process can lock is limited by RLIMIT_MEMLOCK. I don't think that effect is desired here. If you want to immediately read all pages into RAM, you can use PopulateRead; however, in my bench it is slower than a no-op.
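Concretely, something like this (a sketch; memmap2's `Advice::PopulateRead` is Linux-only and needs kernel 5.14+):

```rust
use memmap2::{Advice, Mmap};

// Sketch: synchronously fault every page of the mapping into RAM.
// Unlike WillNeed, this blocks until the data is resident, which is
// likely why it benchmarks slower than a no-op here.
fn populate_read(mmap: &Mmap) -> std::io::Result<()> {
    mmap.advise(Advice::PopulateRead)
}
```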

@coastalwhite (Collaborator Author) commented:

> If you want to immediately read all pages into RAM, you can use PopulateRead; however, in my bench it is slower than a no-op.

Yeah, I tried it as well just now. It is quite a bit slower.

@c-peters added the accepted (Ready for implementation) label Jul 22, 2024