
Option to issue GET in parallel #5

@matsken

Description

First of all, thanks for creating this package - it is a perfect fit for our use case: we have two endpoints, each configured with a Sentinel cluster, but sometimes one of them goes down for maintenance. With this package, we can configure both once and forget about switching endpoints during maintenance.

Question/problem
If I understand correctly, GET commands are executed in series. So if there are two instances and the first instance is down, the GET command is not sent to the second one until the first one times out (after the childCommandTimeout value).

That means that while the first instance is down, every GET command takes an additional X to complete, where X is the childCommandTimeout value. This was quite apparent when we used it with a Redis session store, where all HTTP requests were taking 10+ seconds.
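
Roughly, here is what I think happens today. This is just an illustrative sketch, not the package's actual code; RedisLikeClient, withTimeout, and getInSequence are names I made up:

```ts
// Illustrative only: each client is tried in order, so an unreachable client
// makes the caller wait the full childCommandTimeout before the next client
// is even attempted.

interface RedisLikeClient {
  get(key: string): Promise<string | null>;
}

function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(() => reject(new Error('child command timed out')), ms);
    promise.then(
      (value) => { clearTimeout(timer); resolve(value); },
      (err) => { clearTimeout(timer); reject(err); },
    );
  });
}

async function getInSequence(
  clients: RedisLikeClient[],
  key: string,
  childCommandTimeout: number,
): Promise<string | null> {
  let lastError: unknown = new Error('no clients configured');
  for (const client of clients) {
    try {
      // If this client is down, we sit here for childCommandTimeout
      // before falling through to the next client.
      return await withTimeout(client.get(key), childCommandTimeout);
    } catch (err) {
      lastError = err;
    }
  }
  throw lastError;
}
```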

Describe the solution you'd like
I would like an option that, when set to true, runs the GET (or any) command in parallel where possible. In the case of a GET command, it would simply issue the command to all clients and respond with the first non-error result. Even if the results differed among the clients, I would be OK with that.
I imagine it would not be too hard to implement a global option that simply issues all commands in parallel (overriding the runInSequence flag if it is set). What do you think?
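
As a rough sketch of what I have in mind (again, illustrative names only, reusing the RedisLikeClient interface and withTimeout helper from the sketch above):

```ts
// Hypothetical parallel mode: issue the command to every client at once and
// resolve with the first non-error result. Promise.any only rejects (with an
// AggregateError) once every client has failed or timed out.

async function getInParallel(
  clients: RedisLikeClient[],
  key: string,
  childCommandTimeout: number,
): Promise<string | null> {
  return Promise.any(
    clients.map((client) => withTimeout(client.get(key), childCommandTimeout)),
  );
}

// With this, the healthy instance answers immediately even while the other one
// is down for maintenance, so GETs no longer pay the childCommandTimeout penalty.
```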
