Description
The documentation for `server.Policy` indicates that when the limits are exceeded, calls will immediately return exceptions. However, while pairing to debug #189, @lthibault and I discovered this is not actually what happens: instead, the receive loop just blocks until an item in the queue clears.
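To make the distinction concrete, here is a minimal Go sketch (not the actual go-capnp code) contrasting the two behaviors. A `select` with a `default` branch is the "return an exception immediately" semantics the docs describe, while a bare send on a full channel is the blocking behavior we actually observed:

```go
package main

import "fmt"

// trySend is a non-blocking send that reports overload -- the
// behavior the server.Policy docs *claim* happens when limits
// are exceeded.
func trySend(queue chan int, v int) bool {
	select {
	case queue <- v:
		return true
	default:
		return false // queue full: the caller would see an exception
	}
}

func main() {
	queue := make(chan int, 2) // bounded queue, capacity 2

	fmt.Println(trySend(queue, 1)) // true
	fmt.Println(trySend(queue, 2)) // true
	fmt.Println(trySend(queue, 3)) // false: limit exceeded

	// By contrast, a bare `queue <- 3` here would block this
	// goroutine until a receiver drains the queue -- the blocking
	// behavior we actually observed in the receive loop.
}
```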
That said, I don't think the behavior the documentation describes is what we want, either: it would mean lost messages whenever the sender sends too fast, and it seems like it would be challenging to program around. What we really want is some kind of backpressure, like that provided by the C++ implementation's streaming support.
I propose the following roadmap for dealing with memory/backpressure instead:
- Short term, fix the docs to describe what the code actually does.
- Slightly longer term, get rid of the bounded queue and replace it with an unbounded one.
- Remove the `server.Policy` type entirely.
- Implement something like the C++ implementation's per-connection flow limit, for some soft backpressure at the connection level. Probably as a field on `rpc.Options`.
- Implement per-object flow control as is done in the C++ implementation for streaming specifically. I think we can do this for all clients, irrespective of the streaming annotation, without changing any APIs; we just need to block the caller in the appropriate places.
- In `rpc.Options`, add a field for a hard limit on memory consumed by outstanding RPC messages (of all types, including returns), after which the connection will just be dropped.
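To sketch what the per-connection flow limit might look like: the idea is a byte-counting limiter where callers block (rather than fail) while too many message bytes are outstanding, and are released as returns arrive. The names below (`flowLimiter`, `Acquire`, `Release`) are hypothetical, not an existing API in this repo or the C++ implementation:

```go
package main

import (
	"fmt"
	"sync"
)

// flowLimiter provides soft backpressure: callers block while the
// number of outstanding message bytes exceeds the limit.
type flowLimiter struct {
	mu       sync.Mutex
	cond     *sync.Cond
	limit    int64 // soft cap on outstanding bytes
	inFlight int64
}

func newFlowLimiter(limit int64) *flowLimiter {
	fl := &flowLimiter{limit: limit}
	fl.cond = sync.NewCond(&fl.mu)
	return fl
}

// Acquire blocks the caller until size bytes fit under the limit,
// applying backpressure instead of dropping the call.
func (fl *flowLimiter) Acquire(size int64) {
	fl.mu.Lock()
	defer fl.mu.Unlock()
	for fl.inFlight+size > fl.limit {
		fl.cond.Wait()
	}
	fl.inFlight += size
}

// Release is called when a return arrives, freeing budget and
// waking any blocked senders.
func (fl *flowLimiter) Release(size int64) {
	fl.mu.Lock()
	defer fl.mu.Unlock()
	fl.inFlight -= size
	fl.cond.Broadcast()
}

func main() {
	fl := newFlowLimiter(1 << 16) // e.g. a 64 KiB soft limit per connection
	fl.Acquire(1 << 15)
	fmt.Println("first call admitted")
	go func() {
		// Simulate the return for the first call arriving later.
		fl.Release(1 << 15)
	}()
	fl.Acquire(1 << 16) // blocks until the release above frees budget
	fmt.Println("second call admitted after backpressure")
}
```

The same mechanism could back the per-object flow control step: each client would hold its own limiter and block in `Acquire` before sending, with no API changes visible to callers.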
Thoughts?

@zombiezen, I would particularly appreciate your input, since I'm proposing removing something you added for v3.