Description
Periodically we get cache misses through our hierarchy where the initial client has terribly slow throughput. In practice, this means that the transfers through the hierarchy are limited by that client's throughput -- we only buffer enough to send to the client. If a later, faster client asks for the same object, it also gets tied to the original slow client's throughput.
It'd be nice to have a way for the cache itself to be the primary consumer -- assume that the cache nodes in the hierarchy are the fastest consumers -- so all requests effectively become read-while-writer. In the slow-client scenario above, the cache node closest to the origin would get written to as fast as it can, and so on down through the hierarchy layers. The node closest to the client might still be tied to the slow client ... but a second, faster client requesting the same object from a different node could be served quickly as that node fills from the others.
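To make the proposal concrete, here is a minimal, hypothetical sketch (not code from this project) of the decoupling being asked for: a writer fills the cached object from origin at full speed, independent of any client, while each reader consumes the shared buffer at its own pace (read-while-writer). A slow reader no longer throttles the fill, so a later fast reader can drain the object as quickly as it has been written.

```python
import threading
import time

class CacheEntry:
    """A shared object being filled from origin; readers consume at their own pace."""
    def __init__(self):
        self.data = bytearray()
        self.complete = False
        self.cond = threading.Condition()

    def fill_from_origin(self, chunks):
        # Writer: consume the origin as fast as possible, never blocking on clients.
        for chunk in chunks:
            with self.cond:
                self.data.extend(chunk)
                self.cond.notify_all()
        with self.cond:
            self.complete = True
            self.cond.notify_all()

    def read(self, delay=0.0):
        # Reader (a client, or a downstream cache node): read-while-writer.
        # `delay` simulates the reader's own limited throughput.
        pos = 0
        out = bytearray()
        while True:
            with self.cond:
                while pos >= len(self.data) and not self.complete:
                    self.cond.wait()
                chunk = bytes(self.data[pos:])
                pos = len(self.data)
                done = self.complete
            out.extend(chunk)
            if done:
                return bytes(out)
            time.sleep(delay)

entry = CacheEntry()
origin = (b"x" * 1024 for _ in range(8))  # hypothetical 8 KiB origin response
writer = threading.Thread(target=entry.fill_from_origin, args=(origin,))
writer.start()

slow = threading.Thread(target=entry.read, kwargs={"delay": 0.05})
slow.start()
fast_result = entry.read()  # the fast reader is not throttled by the slow one
writer.join(); slow.join()
print(len(fast_result))
```

In the current behavior being reported, the writer would instead pause whenever the single consuming client falls behind; the key change sketched here is that the fill loop has no back-pressure from readers, only the readers wait on the writer.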