Pipelining support #927
I think it should be possible, but I'm not entirely sure that implementing a simple pipeline, as in … My understanding of the … This approach, if possible, would be a very good fit for … That all said, it's a little odd that the … There are two active protocols in Dalli.
In both cases there's a recommendation to use the … Existing … So @casperisfine, if you have the time to put together a PR and confirm that it works, that would be welcome.
Pretty much, yes. It's purely a client-side feature: the client simply writes all the commands, then reads all the responses sequentially, e.g. (pseudo code):

```ruby
def call_pipelined(commands)
  # Write every command before reading anything back.
  commands.each do |command|
    @socket.write(command)
  end
  # Then read one response per command, in order.
  commands.size.times.map do
    read_one_response(@socket)
  end
end
```

That alone allows executing many commands while paying the network round trip only once.
It does indeed seem that Memcached has some server-side support for pipelining. I'll dig more into why, but I suspect it's to allow the server to process the commands out of order or concurrently.
That would surprise me, but I guess the best way to know is to try.
Thanks! I'll try to find time to explore that and at the very least report my findings. One issue that may come up is … Another thing I didn't think of before opening this issue is that Memcached is generally used with client-side distribution (or hashing), so the pipelined commands would need to be sent to the different servers concurrently and the responses then put back in order. That may be why Memcached offers some server-side pipelining features while Redis doesn't (and why …
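To make that ordering concern concrete, here's a rough sketch of how a pipelined multi-key operation could fan out across a ring of servers and reassemble the results in the caller's order. The `ring.server_for` and per-server `pipelined_get` helpers are hypothetical stand-ins, not Dalli's actual Ring/Server API:

```ruby
# Sketch only: `ring` and its servers are hypothetical stand-ins,
# not Dalli's actual Ring/Server API.
def sharded_pipelined_get(ring, keys)
  # Group the keys by the server the ring hashes them to.
  by_server = keys.group_by { |key| ring.server_for(key) }

  # Each server gets its sub-batch as one pipelined burst (ideally the
  # requests would be written to all sockets before reading anything).
  results = {}
  by_server.each do |server, server_keys|
    server.pipelined_get(server_keys).each_with_index do |value, i|
      results[server_keys[i]] = value
    end
  end

  # Stitch the per-server responses back into the caller's key order.
  keys.map { |key| results[key] }
end
```

Grouping by server and then re-mapping over the original `keys` array is what restores the caller-visible ordering, regardless of how the batch was split.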
Seems like the quiet mode is meant to simplify the client's work, and to avoid sending useless NOT_FOUND responses:
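For context, that quiet behavior can be sketched at the wire level. This is a minimal illustration assuming memcached's meta protocol: `mg` is meta-get, the `v` flag requests the value, `q` suppresses the miss response, and a trailing `mn` no-op acts as an end-of-batch marker (the helper name is made up):

```ruby
# Minimal sketch of a "quiet" pipelined multi-get using memcached's meta
# protocol: the `q` flag tells the server not to reply on a miss, so the
# client only parses hits, and the final `mn` no-op marks the batch end.
def quiet_multi_get_payload(keys)
  keys.map { |key| "mg #{key} v q\r\n" }.join + "mn\r\n"
end
```

The client can then read value responses for hits and stop at the `MN` terminator, without ever seeing a miss line for absent keys.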
OK, so I started looking at implementing this, and I must admit it's a lot more work than I initially envisioned. More importantly, it would require either a heavy refactoring or duplicating a lot of code. The main issue is that the protocol implementations are not well suited to delaying the read of the response, e.g.:

```ruby
def get(key, options = nil)
  req = RequestFormatter.standard_request(opkey: :get, key: key)
  write(req)
  response_processor.get(cache_nils: cache_nils?(options))
end
```

So to be able to write multiple commands and only later read the multiple responses, all of this would need to be decomposed, etc. It's totally doable, but I'm not sure I'll get enough time to do this soon, and also not sure you'd agree to that many changes. So I'd rather pause for now. If you still think it's a good idea I might get back to it, but also feel free to close this issue as too much of a change.
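As an illustration of that decomposition (toy code, not Dalli's internals; the fake socket merely stands in for a real connection), each operation could be split into a "write request" half and a "read response" half, so a pipelined wrapper can flush every request before reading:

```ruby
require "stringio"

# Toy stand-in for a socket: writes accumulate in one buffer while
# reads consume canned response lines from another.
class FakeSocket
  attr_reader :written

  def initialize(canned_responses)
    @written = StringIO.new
    @responses = StringIO.new(canned_responses)
  end

  def write(data)
    @written.write(data)
  end

  def gets(sep)
    @responses.gets(sep)
  end
end

# Sketch only: illustrative names, not Dalli's API. The point is that
# write_get never reads, so many writes can precede any read.
class PipelinedConnection
  def initialize(socket)
    @socket = socket
  end

  def write_get(key)             # write half: no read here
    @socket.write("get #{key}\r\n")
  end

  def read_response              # read half: consumes one reply line
    @socket.gets("\r\n")&.chomp("\r\n")
  end

  def pipelined_get(keys)
    keys.each { |k| write_get(k) } # one burst of writes...
    keys.map { read_response }     # ...then read replies in order
  end
end
```

With this split, the existing one-shot `get` could be rebuilt as `write_get` followed immediately by `read_response`, while a pipelined path interleaves them differently.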
Also, for the record, my quick experiment branch is here: main...Shopify:dalli:client-pipelining
Let me keep it open for now. I have some thoughts on how to do that refactoring, as well as how to deal with the ring. I can't look at it right now, but I may be able to get back to it in a bit. And I do think it's valuable, assuming it works.
So as far as I can tell, Dalli's pipelining support is limited to the `#quiet` method, which prevents getting the commands' results.

Use case

In Rails' `MemCacheStore`, I'd like to issue two combined commands on `#increment`:

The problem is that this doubles the latency, as we now need two full round trips to the Memcached server. If `Dalli` had a `pipelined` method like `redis-rb`, we could do:

And only wait for a single round trip.
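For illustration, the desired API could look something like the following. This is a hypothetical sketch modeled on `redis-rb`'s `#pipelined`, not anything Dalli provides; the block only queues commands, and a real client would then write them all in one burst and read the responses back after a single round trip:

```ruby
# Hypothetical sketch, not Dalli's API: Pipeline records the commands
# issued inside the block instead of executing them one at a time.
class Pipeline
  attr_reader :commands

  def initialize
    @commands = []
  end

  # Queue any command; results are unavailable until the flush.
  def method_missing(name, *args)
    @commands << [name, *args]
    nil
  end

  def respond_to_missing?(_name, _include_private = false)
    true
  end
end

def pipelined
  pipe = Pipeline.new
  yield pipe
  # A real client would flush the queued commands in one write here
  # and return the responses in order; this sketch returns the queue.
  pipe.commands
end
```

With that, the two commands could be expressed as `pipelined { |p| p.incr("counter", 1); p.touch("counter", 300) }` and sent in one burst.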
Is there a reason such a method wasn't implemented? And if not, would a PR be welcome for it?