Félix Saparelli edited this page Oct 23, 2018 · 6 revisions

Writing a Backend

Getting started

Set up

  • Rust

  • Cargo

  • Cargo-generate template

Do the reading

  • Streams

  • Tokio

  • Your chosen platform’s docs

  • This guide + notify docs

Base code explainer

extern crate notify_backend as backend;

use backend::prelude::*; // (1)

pub struct YourBackend {
    // (2)
}

impl Backend for YourBackend { // (3)
}

impl Stream for YourBackend { // (4)
}
  1. With

  2. code

  3. listing

  4. callouts

Testing and testing

(Testing as in plugging it in a notify sample and playing around, and testing as in writing tests + the compliance tests.)

The details

Event system

Kind quick reference

Backpressure, queue overflow, and the Buffer

Backpressure is a term used to describe a situation where data builds up on one side of a stream, because of a clog, slowdown in the consumer, or other issues. In push-based streaming systems, backpressure can be an important problem as data will fill up buffers and balloon memory usage.

Tokio and Rust Futures/Streams are poll/pull/lazy systems, where producers only generate data when asked, and backpressure is generally not an issue.

In our domain, most platforms behave reasonably: they issue an overflow event (to let us know some events were dropped) and drop further events while the build-up remains, or they don't expose this mechanism at all and manage it internally without negative consequences. In those cases, leaving events in kernel memory is correct and okay. If available, the overflow event should be translated to a Missed event.
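For illustration, translating a platform overflow notification into a Missed event might look like the sketch below. The PlatformEvent type and its dropped count are hypothetical stand-ins for whatever your platform's API hands you, and EventKind here is a simplified stand-in for Notify's event types; only the overflow-to-Missed mapping reflects the guideline above.

```rust
// Hypothetical stand-in for the raw events your platform delivers.
enum PlatformEvent {
    Changed(String),
    Overflow { dropped: usize }, // the kernel reports how many events it lost
}

// Simplified stand-in for Notify's event kinds.
#[derive(Debug, PartialEq)]
enum EventKind {
    Modify(String),
    Missed(usize),
}

fn translate(raw: PlatformEvent) -> EventKind {
    match raw {
        PlatformEvent::Changed(path) => EventKind::Modify(path),
        // Translate the platform overflow into a Missed event so
        // consumers learn that some events were dropped in kernel space.
        PlatformEvent::Overflow { dropped } => EventKind::Missed(dropped),
    }
}

fn main() {
    let kind = translate(PlatformEvent::Overflow { dropped: 12 });
    assert_eq!(kind, EventKind::Missed(12));
}
```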

But:

  • if the kernel queue limit is too low for typical usage, or

  • if the platform has a bad reaction to overflows, such as dropping all events (even those before the overflow) or closing down the watch,

you should use a Buffer.

More commonly, a Buffer is useful when the platform makes it impossible to retrieve only a single event at a time: it saves you from implementing a custom userspace queue to hold events yourself.

Notify’s Buffer is a FIFO queue with a fixed capacity and a handy Stream endpoint. Events received while the buffer is full are discarded and a Missed event is generated. If a Missed event arrives while the buffer is full, the counters are summed.
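As a rough model of that discard-and-count behaviour, here is a self-contained sketch. This is a toy queue, not Notify's actual Buffer implementation; it only illustrates the semantics described above.

```rust
use std::collections::VecDeque;

// Toy model of a fixed-capacity FIFO that counts discarded events.
struct ToyBuffer {
    queue: VecDeque<u32>,
    capacity: usize,
    missed: usize, // how many events were discarded while full
}

impl ToyBuffer {
    fn new(capacity: usize) -> Self {
        ToyBuffer { queue: VecDeque::with_capacity(capacity), capacity, missed: 0 }
    }

    fn add(&mut self, event: u32) {
        if self.queue.len() < self.capacity {
            self.queue.push_back(event);
        } else {
            // Full: discard the event and bump the missed counter,
            // mirroring how Missed counts accumulate.
            self.missed += 1;
        }
    }

    fn poll(&mut self) -> Option<u32> {
        self.queue.pop_front()
    }
}

fn main() {
    let mut buf = ToyBuffer::new(2);
    buf.add(1);
    buf.add(2);
    buf.add(3); // buffer is full, so this one is counted as missed
    assert_eq!(buf.missed, 1);
    assert_eq!(buf.poll(), Some(1));
}
```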

The default capacity of Buffer is 16KiB divided by the size of Event on the platform. On x64, and at the time of this writing, that’s 292. That should be more than enough for light use, in the common case of holding events when it's not possible to read just one at a time.

However, in the overflow scenarios discussed above, a much larger limit may be chosen. You’ll need to balance memory consumption against the rate of event production and the risk of overflow. Keep in mind that the Event size does not include pathnames or attribute data, and those can add up dramatically. For example, if the average path length is 80, a full Buffer with capacity set to 10'000 would use 1.3MiB, instead of the 560KiB one could naively expect.
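To sanity-check those numbers, here is the arithmetic spelled out, assuming an Event size of about 56 bytes on x64 (consistent with the 16KiB / 292 default above):

```rust
fn main() {
    // Assumed per-Event size on x64, consistent with 16KiB / 292 ≈ 56.
    let event_size: u64 = 56;
    let capacity: u64 = 10_000;
    let avg_path_len: u64 = 80;

    // Naive estimate: the Events alone.
    let naive = capacity * event_size; // 560'000 bytes, roughly "560KiB"

    // Realistic estimate: each event also carries a pathname.
    let with_paths = capacity * (event_size + avg_path_len); // 1'360'000 bytes

    println!("{:.1}MiB", with_paths as f64 / (1024.0 * 1024.0)); // prints "1.3MiB"
    assert!(naive < with_paths);
}
```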

// Buffer is not part of the backend prelude, so you need to import it:
use backend::Buffer;

struct YourBackend {
    buffer: Buffer,
}

impl Backend for YourBackend {
    fn new(...) -> ... {
        // do your thing

        let buffer = Buffer::default();

        // or with a custom capacity, in number of Events:
        let buffer = Buffer::new(768);
    }
}

impl Stream for YourBackend {
    fn poll(...) -> ... {
        // do your thing

        // add to the buffer
        self.buffer.add(event);

        // handy Stream endpoint as return!
        self.buffer.poll()
    }
}

Finishing up

Crate publishing

(or leaving it as a repo crate)

Advertising

  • Telling us (twitter, email)

  • Putting it up on the wiki

  • Telling the world (reddit, twitter)

Making it official

Only for really polished and general-interest backends. Criteria, process, etc.
