
Connection retaining mode for p2p peer chooser #6471

Merged

Conversation

@taylanisikdemir (Member) commented on Nov 2, 2024

What changed?
This PR is a continuation of PR #6345 and completes the implementation of the custom peer chooser.
The custom chooser differs from yarpc's default "direct" chooser in how it handles connections: the direct chooser creates and releases a p2p connection for each request, whereas the new chooser retains connections until the peer disappears from the member list. Peers are added on an as-needed basis, i.e. when a request is about to be made to a target peer.
There will be one peer chooser instance per target service, and each instance caches its active peers internally. Matching and history are the only such target services; all other services communicate with them p2p.

This is hidden behind the feature flag system.enableConnectionRetainingDirectChooser; yarpc's direct peer chooser is still used by default.
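To make the retaining behavior concrete, here is a minimal sketch of such a chooser built on yarpc's peer.Transport (RetainPeer/ReleasePeer). This is an illustration rather than the PR's actual code: the name retainingChooser, the address map, and the use of the request's ShardKey as the target address are assumptions, and the Start/Stop/IsRunning lifecycle methods required by peer.Chooser are omitted for brevity.

```go
package rpc

import (
	"context"
	"sync"

	"go.uber.org/yarpc/api/peer"
	"go.uber.org/yarpc/api/transport"
	"go.uber.org/yarpc/peer/hostport"
)

// retainingChooser caches one retained peer per target address.
type retainingChooser struct {
	transport peer.Transport

	mu    sync.RWMutex
	peers map[string]peer.Peer // keyed by "host:port"
}

// NotifyStatusChanged implements peer.Subscriber; a no-op in this sketch.
func (c *retainingChooser) NotifyStatusChanged(peer.Identifier) {}

// Choose returns the retained peer for the request's target address,
// retaining a new one on first use. Unlike the direct chooser, the
// returned onFinish func does not release the connection.
func (c *retainingChooser) Choose(ctx context.Context, req *transport.Request) (peer.Peer, func(error), error) {
	addr := req.ShardKey // assumption: p2p callers put the target address here

	c.mu.RLock()
	p, ok := c.peers[addr]
	c.mu.RUnlock()
	if ok {
		return p, func(error) {}, nil
	}

	c.mu.Lock()
	defer c.mu.Unlock()
	if p, ok := c.peers[addr]; ok { // lost the race; another caller retained it
		return p, func(error) {}, nil
	}
	p, err := c.transport.RetainPeer(hostport.Identify(addr), c)
	if err != nil {
		return nil, nil, err
	}
	c.peers[addr] = p
	return p, func(error) {}, nil
}

// releaseRemoved releases retained peers that left the member list.
func (c *retainingChooser) releaseRemoved(current map[string]struct{}) {
	c.mu.Lock()
	defer c.mu.Unlock()
	for addr := range c.peers {
		if _, ok := current[addr]; !ok {
			_ = c.transport.ReleasePeer(hostport.Identify(addr), c)
			delete(c.peers, addr)
		}
	}
}
```

Retaining amortizes connection setup across requests; the trade-off is holding connections open until a membership change removes the peer.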

Other changes:

  • Start/Stop and logging improvements to the rpc factory
  • Carry the target service name to the UpdatePeers callbacks so it can be used to honor or ignore membership updates accordingly (see the sketch after this list)
  • Handle Start/Stop for the lazily initialized legacyChooser scenario so the feature can be toggled at runtime via dynamic config
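As a sketch of the UpdatePeers change mentioned above (PeerChooser and membership.HostInfo exist in the cadence codebase, but treat this exact shape as illustrative, not the PR's definitive interface):

```go
// PeerChooser combines yarpc's chooser with a membership update hook.
type PeerChooser interface {
	peer.Chooser

	// UpdatePeers now receives the name of the service whose membership
	// changed (previously only the member list), so a chooser can ignore
	// updates that belong to a different target service.
	UpdatePeers(serviceName string, members []membership.HostInfo)
}
```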

Why?
Avoid the unnecessary cost of recreating a connection for every p2p request.

How did you test it?

  • Unit tests
  • Deployed to a dev environment and validated via logs & metrics


codecov bot commented Nov 4, 2024

Codecov Report

Attention: Patch coverage is 81.06509% with 32 lines in your changes missing coverage. Please review.

Project coverage is 81.10%. Comparing base (a6c36bf) to head (03318db).
Report is 2 commits behind head on master.

| Files with missing lines | Patch % | Lines |
|---|---|---|
| common/rpc/direct_peer_chooser.go | 81.20% | 18 Missing and 7 partials ⚠️ |
| common/resource/resource_impl.go | 0.00% | 2 Missing and 1 partial ⚠️ |
| common/rpc/outbounds.go | 60.00% | 1 Missing and 1 partial ⚠️ |
| common/rpc/factory.go | 94.73% | 1 Missing ⚠️ |
| common/rpc/peer_chooser.go | 83.33% | 1 Missing ⚠️ |
Additional details and impacted files

| Files with missing lines | Coverage Δ |
|---|---|
| common/rpc/params.go | 77.94% <100.00%> (ø) |
| common/rpc/factory.go | 72.26% <94.73%> (+4.85%) ⬆️ |
| common/rpc/peer_chooser.go | 83.33% <83.33%> (+1.51%) ⬆️ |
| common/rpc/outbounds.go | 82.14% <60.00%> (ø) |
| common/resource/resource_impl.go | 86.18% <0.00%> (-0.74%) ⬇️ |
| common/rpc/direct_peer_chooser.go | 79.54% <81.20%> (+11.36%) ⬆️ |

... and 9 files with indirect coverage changes



Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@taylanisikdemir merged commit d2d1d47 into cadence-workflow:master on Nov 12, 2024
20 checks passed
@taylanisikdemir deleted the taylan/p2p_retain branch on November 12, 2024 at 18:32
Review thread on common/rpc/direct_peer_chooser.go (resolved):

	return c.Start()
}

return fmt.Errorf("failed to start direct peer chooser because direct peer chooser initialization failed, err: %v", g.legacyChooserErr)

Member: nit: %w
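The nit refers to Go's error-wrapping verb: %v only formats the error's message, while %w (Go 1.13+) wraps the error so callers can inspect it with errors.Is/errors.As. The suggested fix would be:

```go
return fmt.Errorf("failed to start direct peer chooser because direct peer chooser initialization failed, err: %w", g.legacyChooserErr)
```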
Review thread on the following snippet:

)

const (
	defaultGRPCSizeLimit = 4 * 1024 * 1024
	factoryComponentName = "rpc-factory"
)

var (
	// P2P outbounds are only needed for history and matching services
	servicesToTalkP2P = []string{service.History, service.Matching}
Member: Is frontend → history not in scope? I see a strong case for having frontend retain connections to history and matching instances as peers.

Member: I also think that worker might need membership updates for the domain config stack (@Shaddoll can correct me).

Member (author): These are the sharded services that all other services communicate with via p2p connections; the RPC factory subscribes to membership updates of these services for that reason. Frontend and worker will still use this rpc factory and will be able to communicate with matching & history via p2p.
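As a hedged sketch of that filtering, extending the hypothetical retainingChooser from the description above with an assumed targetService field (GetAddress mirrors cadence's membership.HostInfo; the wiring is illustrative, not the PR's code):

```go
// UpdatePeers honors updates for the chooser's own target service and
// ignores updates for other sharded services.
func (c *retainingChooser) UpdatePeers(serviceName string, members []membership.HostInfo) {
	if serviceName != c.targetService {
		return // e.g. a history chooser ignores matching membership changes
	}
	current := make(map[string]struct{}, len(members))
	for _, m := range members {
		current[m.GetAddress()] = struct{}{}
	}
	c.releaseRemoved(current) // drop retained connections to departed peers
}
```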
