
Commit

update benchmark results
ibraheemdev committed Jul 10, 2024
1 parent 2939daa commit b37628f
Showing 8 changed files with 1,000 additions and 832 deletions.
6 changes: 3 additions & 3 deletions BENCHMARKS.md
@@ -2,7 +2,7 @@

*As always, benchmarks should be taken with a grain of salt. Always measure for your workload.*

Below are the benchmark results from the [`conc-map-bench`](https://github.com/xacrimon/conc-map-bench) benchmarking harness under varying workloads. All benchmarks were run on a Ryzen 3700X (16 threads) with [`ahash`](https://github.com/tkaitchuck/aHash) and the [`mimalloc`](https://github.com/microsoft/mimalloc) allocator.
Below are the benchmark results from the [`conc-map-bench`](https://github.com/xacrimon/conc-map-bench) benchmarking harness under varying workloads. All benchmarks were run on a 16-core AMD EPYC processor, using [`ahash`](https://github.com/tkaitchuck/aHash) and the [`mimalloc`](https://github.com/microsoft/mimalloc) allocator.

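As a rough illustration of the setup described above, the following sketch shows how a map might be configured with `ahash` and the `mimalloc` global allocator. This is not the `conc-map-bench` harness itself, and the `with_hasher` constructor on `papaya::HashMap` is assumed here by analogy with `std::collections::HashMap`.

```rust
// Hypothetical benchmark setup (not part of this repository): mimalloc as the
// global allocator and ahash as the hash builder. `HashMap::with_hasher` is
// assumed to follow the std convention.
use mimalloc::MiMalloc;
use papaya::HashMap;

#[global_allocator]
static GLOBAL: MiMalloc = MiMalloc;

fn main() {
    // Build the map with ahash's BuildHasher instead of the default hasher.
    let map: HashMap<u64, u64, ahash::RandomState> =
        HashMap::with_hasher(ahash::RandomState::new());

    let m = map.pin();
    m.insert(1, 1);
    assert_eq!(m.get(&1), Some(&1));
}
```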
### Read Heavy

@@ -24,9 +24,9 @@ Below are the benchmark results from the [`conc-map-bench`](https://github.com/x

# Discussion

`papaya` is read-heavy workloads and outperforms all competitors in the read-heavy benchmark. It falls short in update and write-heavy workloads due to allocator pressure, which is expected. However, an important guarantee of `papaya` is that reads *never* block under any circumstances. This is crucial for providing consistent read latency regardless of write concurrency.
As mentioned in the [performance](../README#performance) section of the guide, `papaya` is optimized for read-heavy workloads. As expected, it outperforms all competitors in the read-heavy benchmark. An important guarantee of `papaya` is that reads *never* block under any circumstances. This is crucial for providing consistent read latency regardless of write concurrency. However, it falls short in update- and insert-heavy workloads due to allocator pressure and the overhead of memory reclamation, which is necessary for lock-free reads. If your workload is write-heavy and does not benefit from any of `papaya`'s features, you may wish to consider an alternative hash-table implementation.

Additionally, `papaya` does a lot better in terms of latency distribution due to incremental resizing and the lack of bucket locks. Comparing histograms of `insert` latency between `papaya` and `dashmap`, we see that `papaya` manages to keep tail latency orders of magnitude lower. Some tail latency is unavoidable due to the large allocations necessary to resize a hash-table, but the distribution is much more consistent (notice the scale of the y-axis).
Additionally, `papaya` does a lot better in terms of latency distribution due to incremental resizing and the lack of bucket locks. Comparing histograms of `insert` latency between `papaya` and `dashmap`, we see that `papaya` manages to keep tail latency lower by a few orders of magnitude. Some latency spikes are unavoidable due to the allocations necessary to maintain a large hash-table, but the distribution is much more consistent (notice the scale of the y-axis).

![](assets/papaya-hist.png)
![](assets/dashmap-hist.png)
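
For context, a latency histogram like the ones above could be collected with something along these lines. This is a minimal sketch, not the code used by `conc-map-bench`; it assumes papaya's pin-based API and the `hdrhistogram` crate.

```rust
// Sketch: record per-insert latency into an HDR histogram and print
// a few quantiles to inspect the tail of the distribution.
use hdrhistogram::Histogram;
use papaya::HashMap;
use std::time::Instant;

fn main() {
    let map = HashMap::new();
    // 3 significant figures is plenty of resolution for latency buckets.
    let mut hist = Histogram::<u64>::new(3).unwrap();

    let m = map.pin();
    for i in 0u64..1_000_000 {
        let start = Instant::now();
        m.insert(i, i);
        hist.record(start.elapsed().as_nanos() as u64).unwrap();
    }

    println!("p50:    {} ns", hist.value_at_quantile(0.50));
    println!("p99:    {} ns", hist.value_at_quantile(0.99));
    println!("p99.99: {} ns", hist.value_at_quantile(0.9999));
    println!("max:    {} ns", hist.max());
}
```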
4 changes: 2 additions & 2 deletions README.md
@@ -186,9 +186,9 @@ The `Guard` trait supports both local and owned guards. Note the `'guard` lifeti

## Performance

`papaya` is built with read-heavy workloads in mind. As such, read operations are extremely high throughput and provide consistent performance that scales with concurrency, meaning `papaya` will excel in any workload in which reads are more common than writes. In write heavy workloads, `papaya` will still provide competitive performance despite not being it's primary use case. See the [benchmarks] for details.
`papaya` is built with read-heavy workloads in mind. As such, read operations are extremely high throughput and provide consistent performance that scales with concurrency, meaning `papaya` will excel in workloads where reads are more common than writes. In write-heavy workloads, `papaya` will still provide competitive performance, even though they are not its primary use case. See the [benchmarks] for details.

`papaya` also aims to provide predictable, consistent latency across all operations. Most operations are lock-free, and those that aren't only block under rare and constrained conditions. `papaya` also features [incremental resizing]. Predictable latency is an important part of performance that doesn't often show up in benchmarks, but has significant implications for real-world usage.
`papaya` aims to provide predictable and consistent latency across all operations. Most operations are lock-free, and those that aren't only block under rare and constrained conditions. `papaya` also features [incremental resizing]. Predictable latency is an important part of performance that doesn't often show up in benchmarks, but has significant implications for real-world usage.

[benchmarks]: ./BENCHMARKS.md
[`seize`]: https://docs.rs/seize/latest
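
To make the read-path claims concrete, here is a minimal sketch (assuming papaya's `pin`-based API) in which several reader threads perform lookups while a writer concurrently inserts; the lookups proceed without blocking, even while the table grows.

```rust
use papaya::HashMap;
use std::thread;

fn main() {
    let map = HashMap::new();
    map.pin().insert(0u64, 0u64);

    thread::scope(|s| {
        // Writer: keeps inserting new keys, forcing the table to grow.
        s.spawn(|| {
            let m = map.pin();
            for i in 1..100_000u64 {
                m.insert(i, i);
            }
        });

        // Readers: lookups run concurrently with the writer and are not
        // blocked by it or by an in-progress resize.
        for _ in 0..4 {
            s.spawn(|| {
                let m = map.pin();
                for i in 0..100_000u64 {
                    let _ = m.get(&i);
                }
            });
        }
    });
}
```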
