Writing a Backend
TBD
extern crate notify_backend as backend;
use backend::prelude::*; // (1)

pub struct YourBackend {
    // (2)
}

impl Backend for YourBackend { // (3)
}

impl Stream for YourBackend { // (4)
}
With code listing callouts.
Backpressure describes a situation where data builds up on one side of a stream because of a clog, a slowdown in the consumer, or other issues. In push-based streaming systems, backpressure can be a serious problem, as data fills up buffers and balloons memory usage.
Tokio and Rust Futures/Streams are poll/pull/lazy systems, where producers only generate data when asked, and backpressure is generally not an issue.
In our domain, most platforms behave reasonably: they issue an overflow event (to let us know some events were dropped) and drop further events while the build-up remains, or they do not expose this mechanism at all and manage it internally without negative consequences. In those cases, leaving events in kernel memory is correct and okay. If available, the overflow event should be translated to a Missed event.
But:

- if the kernel queue limit is too low for typical usage, or
- if the platform reacts badly to overflows, such as dropping all events (even those before the overflow) or closing down the watch,

then you should use a Buffer.
More commonly, a Buffer is useful when it is impossible to retrieve only a single event at a time, as it saves you from implementing a custom userspace queue to hold events yourself.
Notify’s Buffer is a FIFO queue with a fixed capacity and a handy Stream endpoint. Events received while the buffer is full are discarded and a Missed event is generated. If a Missed event is added to the buffer while it is full, the counters are summed.
The default capacity of Buffer is 16 KiB divided by the size of Event on the platform. On x64, at the time of this writing, that is 292 events. That should be more than enough for light use, in the common case of holding events when it is not possible to read just one at a time.
However, in the overflow scenarios discussed above, a much larger limit may be chosen. You will need to balance memory consumption against the rate of event production and the risk of overflow. Keep in mind that the Event size does not include pathnames or attribute data; those can add up dramatically. For example, if the average path length is 80 bytes, a full Buffer with a capacity of 10'000 would use about 1.3 MiB, instead of the 560 KB one might naively expect.
// Buffer is not part of the backend prelude, so you need to import it:
use backend::Buffer;

struct YourBackend {
    buffer: Buffer,
}

impl Backend for YourBackend {
    fn new(...) -> ... {
        // do your thing

        let buffer = Buffer::default();
        // or with custom capacity in number of Events:
        let buffer = Buffer::new(768);
    }
}

impl Stream for YourBackend {
    fn poll(...) -> ... {
        // do your thing

        // add to the buffer
        self.buffer.add(event);

        // handy Stream endpoint as return!
        self.buffer.poll()
    }
}