Support custom yarpc peer chooser for p2p connections #6345
Conversation
LGTM.
Some questions about choices: Lazy vs upfront connection setup?
For now I'll vote lazy (unless upfront is somehow easier) since it's what we currently do and it's apparently fast enough. Plus, lazy is needed to some degree since hosts can enter and exit, so I would generally expect "only lazy" to be simpler and more obviously correct (because it's always exercised, rather than only in edge cases / long-running deployments). I have not read the changes yet though, so that's just going off initial impressions.
And after reading: yeah, I'll vote for lazy connections only / primarily, and probably idle-shutdown too. You'll need lazy to be able to dynamically turn this on and off, and have it actually be off and not just unused. Overall makes sense though. Not too bad for the part leading up to "and then draw the rest of the horse" :)
The lazy approach is the more efficient way to go, and it's how it's done currently. There's no reason to switch to upfront connection setup with this change; I don't think it makes the implementation any simpler.
Problem
Cadence services use yarpc's "direct chooser" implementation for p2p connections over gRPC. These p2p calls happen frequently between service hosts.
The existing "direct chooser" implementation doesn't retain connections. Closing connections immediately after each request is not ideal because the same hosts will most likely keep connecting again and again.
The fix is to replace the direct chooser with a custom implementation that retains connections and closes them only when the target host disappears from the peer member list.
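A minimal sketch of that retention behavior: connections are dialed lazily, kept across requests, and closed only when a membership update shows the host has left. The names (`retainingChooser`, `choose`, `updatePeers`) are hypothetical stand-ins, not the PR's actual API:

```go
package main

import (
	"fmt"
	"sync"
)

// retainingChooser keeps one long-lived "connection" per peer and closes it
// only when the peer drops out of the membership list.
type retainingChooser struct {
	mu    sync.Mutex
	conns map[string]bool // addr -> has an open connection
}

func newRetainingChooser() *retainingChooser {
	return &retainingChooser{conns: map[string]bool{}}
}

// choose returns a connection for addr, dialing lazily on first use and
// keeping it open afterwards (unlike the direct chooser, which closes it).
func (r *retainingChooser) choose(addr string) string {
	r.mu.Lock()
	defer r.mu.Unlock()
	if !r.conns[addr] {
		r.conns[addr] = true // stand-in for dialing a grpc connection
	}
	return addr
}

// updatePeers closes connections to hosts that left the member list.
func (r *retainingChooser) updatePeers(members []string) {
	alive := map[string]bool{}
	for _, m := range members {
		alive[m] = true
	}
	r.mu.Lock()
	defer r.mu.Unlock()
	for addr := range r.conns {
		if !alive[addr] {
			delete(r.conns, addr) // stand-in for closing the connection
		}
	}
}

func main() {
	c := newRetainingChooser()
	c.choose("host-a:7833")
	c.choose("host-b:7833")
	c.choose("host-a:7833") // reuses the retained connection
	fmt.Println(len(c.conns)) // 2
	c.updatePeers([]string{"host-a:7833"}) // host-b left the ring
	fmt.Println(len(c.conns)) // 1
}
```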
What changed
This PR contains the prep work needed to swap the yarpc chooser implementation for a custom one that retains connections. The swap will be toggleable via a dynamic config.
Changes to propagate membership updates to the new chooser implementation:
- `resourceImpl` propagates membership updates to the new `Outbounds` object.
- `Outbounds` is a new object that `OutboundsBuilder`s return instead of `yarpc.Outbounds`. `Outbounds` encapsulates `yarpc.Outbounds` and adds an `UpdatePeers()` func.
- `Outbounds.UpdatePeers` forwards the callback to the underlying `PeerChooser`, which encapsulates yarpc's `peer.Chooser` and adds an `UpdatePeers()` func.
- Added a new custom chooser implementation which receives membership updates but currently behaves like the existing "direct chooser" and doesn't use the membership update information yet. The actual connection-retaining chooser will be added in a follow-up PR.
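The propagation chain above can be sketched as a wrapper that fans `UpdatePeers` out to every chooser underneath it. The interfaces below are simplified stand-ins for the real yarpc/Cadence types, just to show the shape of the plumbing:

```go
package main

import "fmt"

// peerUpdater is a simplified stand-in for the PR's PeerChooser: anything
// that can receive the current membership list.
type peerUpdater interface {
	UpdatePeers(members []string)
}

// Outbounds wraps the yarpc outbounds (elided here) and fans membership
// updates out to every registered chooser.
type Outbounds struct {
	choosers []peerUpdater
}

func (o *Outbounds) UpdatePeers(members []string) {
	for _, c := range o.choosers {
		c.UpdatePeers(members)
	}
}

// recordingChooser mimics the current state of the PR: it receives updates
// but doesn't act on them yet (the retaining chooser comes in a follow-up).
type recordingChooser struct{ last []string }

func (r *recordingChooser) UpdatePeers(members []string) { r.last = members }

func main() {
	rc := &recordingChooser{}
	o := &Outbounds{choosers: []peerUpdater{rc}}
	o.UpdatePeers([]string{"host-a", "host-b"}) // e.g. triggered by a ring change
	fmt.Println(len(rc.last)) // 2
}
```

This indirection is what makes the dynamic-config toggle possible: `resourceImpl` always pushes updates through `Outbounds`, and only the chosen chooser implementation decides whether to act on them.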
How did you test it?