
mpsc stream is currently a pessimization #44512

Closed
@JLockerman

Description


TLDR
As of rustc 1.22.0-nightly (981ce7d8d 2017-09-03), cloning an mpsc::Sender speeds up SPSC workloads. My benchmarking seems to indicate that this is due to:

  1. Contention on shared counters in the node cache (push, pop).

  2. False sharing between the producer and consumer fields in the underlying queue.

  3. False sharing or contention in the wakeup code.

Issue 2 can be fixed by simply changing the alignment of the offending members so that the producer and consumer parts live on separate cache lines. Issue 1 can be fixed with a small rewrite so that the queue only tracks its cache size at the consumer (a version of this code can be found here). Issue 3 can be mitigated by reworking the alignment, though I am not sure that is a full fix; rewriting the send and recv logic so that no counter is needed may also fix the issue (a version of the code that does this can be found here and here, but it is not complete).
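To make fix 2 concrete, here is a minimal sketch of the idea, assuming a 64-byte cache line and a hand-rolled CachePadded wrapper (crossbeam ships an equivalent type); the field names are illustrative and the real spsc queue in std carries more state than this:

use std::sync::atomic::AtomicPtr;

// Pads its contents out to (at least) a full cache line, so two CachePadded
// fields can never share a line.
#[repr(align(64))]
struct CachePadded<T>(T);

struct Node<T> {
    value: Option<T>,
    next: AtomicPtr<Node<T>>,
}

// Producer-written state on one line, consumer-written state on another, so a
// store by the sender never invalidates the line the receiver is reading.
struct Queue<T> {
    producer_head: CachePadded<AtomicPtr<Node<T>>>, // written on every push
    consumer_tail: CachePadded<AtomicPtr<Node<T>>>, // written on every pop
}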


In the following benchmark:

fn bench_mpsc_stream() -> f64 {
    let (sender, receiver) = channel();
    bench_spsc(sender, receiver)
}

fn bench_mpsc_shared() -> f64 {
    let (sender, receiver) = channel();
    // this clone forces the queue into shared mode and makes the benchmark faster
    let _clone = sender.clone();
    bench_spsc(sender, receiver)
}

const COUNT: u64 = 10_000_000;

fn bench_spsc(tx: Sender<u64>, rx: Receiver<u64>) -> f64 {
    // ensure that the channel has been upgraded out of oneshot mode
    tx.send(0).unwrap();
    tx.send(0).unwrap();
    rx.recv().unwrap();
    rx.recv().unwrap();

    let start = ::std::time::Instant::now();
    scope(|scope| {
        scope.spawn(move || {
            for x in 0..(COUNT*2) {
                let _ = black_box(tx.send(x));
            }
        });

        for _i in 0..(COUNT*2) {
            let _ = black_box(rx.recv().unwrap());
        }
    });
    let d = start.elapsed();

    nanos(d) / ((COUNT*2) as f64)
}

(derived from crossbeam, full code can be found in this repo)
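For completeness, the snippet assumes roughly the following items; the real definitions live in the linked repo, so treat this as a sketch (in particular, nanos here is a hypothetical helper written to match its usage above):

#![feature(test)]       // for test::black_box on nightly; the repo may use its own black_box instead
extern crate crossbeam; // 2017-era crossbeam, whose scope/spawn matches the zero-argument closure above
extern crate test;

use std::sync::mpsc::{channel, Receiver, Sender};
use std::time::Duration;

use crossbeam::scope;
use test::black_box;

// Hypothetical helper matching its usage above: Duration -> nanoseconds as f64.
fn nanos(d: Duration) -> f64 {
    d.as_secs() as f64 * 1e9 + d.subsec_nanos() as f64
}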

On an early 2014 Intel i5 running macOS, using rustc 1.22.0-nightly (981ce7d8d 2017-09-03) or rustc 1.20.0 (f3d6973f4 2017-08-27), this benchmark runs at roughly 201 ns/send for the stream flavor and 134 ns/send for the shared flavor.
Running on Linux on EC2 and on a Raspberry Pi shows similar behavior.

The underlying data structures show some difference in performance (spsc queue 75 ns/send, mpsc queue 59 ns/send), though not enough to fully explain the gap. I have not yet looked into the mpsc code enough to be sure of the cause, but I did find potential improvements for the spsc queue:

SPSC Performance        Unaligned     Cache Aligned
Default Cache           75 ns/send    97 ns/send
No Cache                46 ns/send    47 ns/send
Unbounded Cache         30 ns/send    12 ns/send
Low Contention Cache    35 ns/send     9 ns/send

Here Unaligned is the current struct layout, and Cache Aligned puts the consumer fields and the producer fields on their own cache lines (code is here).
Default Cache is the current node cache implementation in std, No Cache disables the node cache entirely (producer code, consumer code), and Unbounded Cache should be self-explanatory.

Low Contention Cache rewrites the cache-bounding logic so that it is done entirely on the consumer side. Instead of keeping a count of how many nodes are currently in the cache, it tracks how many nodes have been marked as eligible for caching, and it always (and only) caches the nodes so marked (code for this can be found here; my experimental implementation stores the eligible flag in the node, though it could also be done by stealing a bit from the pointer).
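As a free-standing illustration of that bounding rule (not the actual queue code, and with invented names), the consumer-only bookkeeping could look roughly like this: each node carries a cache_eligible flag decided once against a plain, consumer-local counter; eligible nodes are always recycled and ineligible ones are always freed, so no shared atomic counter is needed to keep the cache bounded:

struct Node<T> {
    value: Option<T>,
    cache_eligible: bool, // decided once; could also be a stolen pointer bit
}

struct ConsumerCache<T> {
    cache: Vec<Box<Node<T>>>,  // recycled nodes, owned by the consumer alone
    eligible_in_flight: usize, // consumer-local count of eligible nodes, never shared
    bound: usize,
}

impl<T> ConsumerCache<T> {
    fn new(bound: usize) -> Self {
        ConsumerCache { cache: Vec::new(), eligible_in_flight: 0, bound: bound }
    }

    // Hand out a node, deciding once whether it may ever be cached. Only the
    // consumer-local counter is consulted, so there is no cross-thread traffic.
    fn alloc(&mut self) -> Box<Node<T>> {
        if let Some(mut node) = self.cache.pop() {
            node.value = None;
            return node; // a recycled node keeps its eligible flag
        }
        let eligible = self.eligible_in_flight < self.bound;
        if eligible {
            self.eligible_in_flight += 1;
        }
        Box::new(Node { value: None, cache_eligible: eligible })
    }

    // Retire a node the consumer has finished with: eligible nodes are always
    // cached, others are always dropped, so the cache can never exceed `bound`.
    fn retire(&mut self, node: Box<Node<T>>) {
        if node.cache_eligible {
            self.cache.push(node);
        }
        // ineligible nodes are simply dropped here
    }
}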

Some of these performance improvements translate to the full stream structure:

Stream Performance      Unaligned     Cache Aligned
Default Cache           205 ns/send   212 ns/send
No Cache                103 ns/send    91 ns/send
Low Contention Cache    179 ns/send   102 ns/send

But to fully see the benefits, contention on the wakeup state needs to be removed as well:

Stream2 Performance     Unaligned     Cache Aligned
Default Cache           131 ns/send   112 ns/send
No Cache                 91 ns/send    58 ns/send
Low Contention Cache    103 ns/send    27 ns/send

(These numbers were collected from a version of stream that does not use a counter at all. I have gotten similar numbers from simply putting every field in stream on its own cache line. I think there should be a layout that uses exactly two cache lines, one for the producer and one for the consumer, with similar performance, but I have not done enough benchmarking to confirm it yet.)
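One way to sanity-check the two-line idea, under heavy assumptions: group the sender-written fields and the receiver-written fields into two 64-byte-aligned halves and assert that the packet fits in exactly 128 bytes. The field names (cnt, steals, to_wake) are only loosely modeled on the counters mentioned for the stream flavor, and the split shown is an assumption, not the actual std layout:

use std::mem::{align_of, size_of};
use std::sync::atomic::{AtomicIsize, AtomicUsize};

#[allow(dead_code)]
#[repr(align(64))]
struct ProducerHalf {
    cnt: AtomicIsize, // sender-side message count
    // ...the producer end of the queue would live here too
}

#[allow(dead_code)]
#[repr(align(64))]
struct ConsumerHalf {
    steals: isize,        // consumer-local steal count
    to_wake: AtomicUsize, // wakeup token, touched on the blocking path
    // ...the consumer end of the queue would live here too
}

#[allow(dead_code)]
struct Packet {
    producer: ProducerHalf,
    consumer: ConsumerHalf,
}

fn main() {
    // Each half is padded out to one 64-byte line, so the whole packet is two lines.
    assert_eq!(align_of::<Packet>(), 64);
    assert_eq!(size_of::<Packet>(), 128);
    println!("Packet spans {} bytes ({} cache lines)",
             size_of::<Packet>(), size_of::<Packet>() / 64);
}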

All the code to reproduce these numbers can be found in this repo, along with the numbers for a Raspberry Pi.
Note that the Raspberry Pi seemed to be throttling as the benchmark ran, so numbers gathered later in the run are significantly worse than the ones gathered at the beginning.

My apologies for the length and quality of writing.

cc @alexcrichton I believe.

Labels

C-enhancement (Category: An issue proposing an enhancement or a PR with one.)
I-slow (Issue: Problems and improvements with respect to performance of generated code.)
