
cache: respond to watches in parallel #75


Closed
wants to merge 1 commit into from

Conversation

snowp
Contributor

@snowp snowp commented Oct 14, 2018

This responds to watches in parallel by executing the watch callbacks on
a worker thread scoped to each group. By using one thread per group, we
maintain a strict response order while avoiding holding the lock
while issuing gRPC responses.

Fixes #59

Signed-off-by: Snow Pettersen snowp@squareup.com

Signed-off-by: Snow Pettersen <snowp@squareup.com>
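The per-group threading described above can be sketched as follows. This is a minimal illustration, not the PR's actual code; the class and method names (`GroupExecutors`, `forGroup`, `respond`) are hypothetical.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch: one single-threaded executor per group. Callbacks submitted
// for the same group run one at a time in submission order, so response
// order stays strict, while the caller never holds the cache lock
// during the (potentially slow) gRPC write.
public class GroupExecutors {
  private final Map<String, ExecutorService> executors = new ConcurrentHashMap<>();

  // Returns the executor bound to a group, creating it on first use.
  public ExecutorService forGroup(String group) {
    return executors.computeIfAbsent(group, g -> Executors.newSingleThreadExecutor());
  }

  // Runs a watch callback off the caller's thread; tasks for the same
  // group are serialized, tasks for different groups run in parallel.
  public void respond(String group, Runnable watchCallback) {
    forGroup(group).execute(watchCallback);
  }
}
```

The single-threaded executor is what preserves ordering: it is effectively a per-group FIFO queue drained by one worker.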
@snowp
Contributor Author

snowp commented Oct 15, 2018

This causes flaky tests at the moment; I'll try to fix them.

@sschepens
Contributor

@snowp how about reviving this? Do we really need an executor per group? Isn't it only necessary to maintain order on a given connection?

@snowp
Contributor Author

snowp commented Nov 9, 2018

@sschepens I haven't had time to work on this, but I definitely still want something like this. I agree that one executor per group is probably overkill, but I haven't figured out exactly what kind of concurrency we want here. Maybe a single executor is sufficient?

Yes, for ADS we only need to maintain order on the given connection, so another approach would be to maintain an executor per stream.

If you're interested in this feel free to pick it up, otherwise I'll try to get to it when I have free cycles, but I don't think that will be anytime soon.

@sschepens
Contributor

sschepens commented Nov 23, 2018

@snowp we could probably do something like Netty does and bind channels to executors: keep a pool of executors sized by the number of CPUs, then round-robin streams across them.
This could probably work; my only concern is that some streams could be more demanding than others, leaving the executors unbalanced, but I don't know if that would be much of a problem in practice.

Let me know if you're ok with this, I could probably have a go at it.
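The round-robin idea above could look roughly like this. A hedged sketch only; `StreamExecutorPool` and `assign` are hypothetical names, not code from this project or from Netty.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: a fixed pool of single-threaded executors, with each new
// stream pinned to one by round-robin. Work for a given stream stays
// ordered (it always lands on the same executor), and the pool size
// bounds total threads regardless of how many streams are open.
public class StreamExecutorPool {
  private final ExecutorService[] executors;
  private final AtomicInteger next = new AtomicInteger();

  public StreamExecutorPool(int size) {
    executors = new ExecutorService[size];
    for (int i = 0; i < size; i++) {
      executors[i] = Executors.newSingleThreadExecutor();
    }
  }

  // Called once per stream: pick the next executor in rotation.
  // floorMod keeps the index valid even if the counter overflows.
  public ExecutorService assign() {
    int idx = Math.floorMod(next.getAndIncrement(), executors.length);
    return executors[idx];
  }
}
```

As the comment notes, round-robin assignment is blind to load, so a few busy streams can pile up on one executor while others sit idle; that is the trade-off against a per-stream executor.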

@joeyb
Contributor

joeyb commented Dec 10, 2018

Closing this since #85 was merged.

@joeyb joeyb closed this Dec 10, 2018
@joeyb joeyb deleted the para branch December 10, 2018 17:44