Prevent LoadBalancer updates on follower. #6872

Merged
merged 1 commit into projectcontour:main from lbstatus-on-follower on May 5, 2025

Conversation

@tsaarni tsaarni commented Jan 20, 2025

This PR fixes a memory leak triggered by LoadBalancer status updates. Only the leader instance runs loadBalancerStatusWriter, so on a follower nothing reads from the channel that receives status updates. Followers still watch LoadBalancer status updates and send them to the channel, causing each goroutine that calls ServiceStatusLoadBalancerWatcher.notify() to block. The blocked LoadBalancerStatus updates piled up and consumed memory, eventually causing an out-of-memory condition that killed the Contour process.

With this change, Service updates are watched only after the instance becomes the leader and starts the loadBalancerStatusWriter.

Fixes #6860
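
To make the failure mode concrete, here is a minimal, self-contained Go sketch of the blocking behaviour described above (standard library only; the watcher, channel, and status type are simplified stand-ins for ServiceStatusLoadBalancerWatcher, its LBStatus channel, and core_v1.LoadBalancerStatus, not Contour's actual code). On a follower nothing reads from the channel, so every send blocks its goroutine and the pending update stays in memory:

package main

import (
    "fmt"
    "runtime"
    "time"
)

// lbStatus is a simplified stand-in for core_v1.LoadBalancerStatus.
type lbStatus struct{ ip string }

// watcher is a simplified stand-in for ServiceStatusLoadBalancerWatcher.
type watcher struct {
    lbStatus chan lbStatus // drained by loadBalancerStatusWriter, but only on the leader
}

// notify mirrors the blocking send: it does not return until something
// receives from the channel.
func (w *watcher) notify(s lbStatus) {
    w.lbStatus <- s
}

func main() {
    w := &watcher{lbStatus: make(chan lbStatus)}

    // Follower: informer callbacks keep producing status updates, but there
    // is no loadBalancerStatusWriter to consume them.
    for i := 1; i <= 5; i++ {
        go w.notify(lbStatus{ip: fmt.Sprintf("10.0.0.%d", i)}) // each goroutine blocks forever
    }

    time.Sleep(100 * time.Millisecond)
    fmt.Println("goroutines still blocked:", runtime.NumGoroutine()-1)
}

Each blocked send pins a goroutine and its payload until the process dies, which is why follower memory only ever grows; the change avoids the sends entirely by not watching Services until the instance is the leader.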

@tsaarni tsaarni requested a review from a team as a code owner January 20, 2025 18:20
@tsaarni tsaarni requested review from skriss and sunjayBhatia and removed request for a team January 20, 2025 18:20
@sunjayBhatia sunjayBhatia requested review from a team, davinci26 and izturn and removed request for a team January 20, 2025 18:20
@tsaarni tsaarni force-pushed the lbstatus-on-follower branch from 8b2af7d to 920d0ce on January 20, 2025 18:22
@tsaarni tsaarni added the release-note/small A small change that needs one line of explanation in the release notes. label Jan 20, 2025

codecov bot commented Jan 20, 2025

Codecov Report

Attention: Patch coverage is 0% with 30 lines in your changes missing coverage. Please review.

Project coverage is 80.73%. Comparing base (7e2dcaf) to head (0ea6f5c).
Report is 3 commits behind head on main.

Files with missing lines        Patch %   Lines
cmd/contour/ingressstatus.go    0.00%     27 Missing ⚠️
cmd/contour/serve.go            0.00%     3 Missing ⚠️
Additional details and impacted files

@@            Coverage Diff             @@
##             main    #6872      +/-   ##
==========================================
- Coverage   80.77%   80.73%   -0.05%     
==========================================
  Files         131      131              
  Lines       19936    19946      +10     
==========================================
  Hits        16104    16104              
- Misses       3540     3550      +10     
  Partials      292      292              
Files with missing lines        Coverage Δ
cmd/contour/serve.go            22.30% <0.00%> (+0.41%) ⬆️
cmd/contour/ingressstatus.go    24.59% <0.00%> (-6.99%) ⬇️

func (s *ServiceStatusLoadBalancerWatcher) notify(lbstatus core_v1.LoadBalancerStatus) {
    s.LBStatus <- lbstatus
    if s.leader.Load() {

Is it possible to lose the envoy service event in this way?

@tsaarni tsaarni Jan 21, 2025

True. Currently the event processing logic relies on Added/Updated/Deleted events alone. That works for other resource types, since they are processed by all Contour instances, but I think a new approach is needed for the load balancer status updates.

Maybe we can handle it simply: notify with the latest envoy Service when the instance becomes the leader.

@tsaarni tsaarni (Member, Author)

I've changed the approach to start the service watcher only after the instance becomes the leader.
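
For illustration, here is a hedged sketch of that pattern using controller-runtime, which Contour builds on: a Runnable whose NeedLeaderElection() returns true is only started once this instance wins the election, so its Service watch never gets registered on followers. The serviceStatusWatcher type and the manager options below are assumptions for the sketch, not the code in this PR.

package main

import (
    "context"

    ctrl "sigs.k8s.io/controller-runtime"
    "sigs.k8s.io/controller-runtime/pkg/manager"
)

// serviceStatusWatcher is an illustrative runnable; in Contour the equivalent
// work happens alongside loadBalancerStatusWriter.
type serviceStatusWatcher struct{}

// Start would register the Service event handlers and drain status updates.
// The manager calls it only after this instance becomes the leader.
func (w *serviceStatusWatcher) Start(ctx context.Context) error {
    // register informer handlers, forward LoadBalancer status, ...
    <-ctx.Done()
    return nil
}

// NeedLeaderElection tells the manager to hold Start until leadership is won.
func (w *serviceStatusWatcher) NeedLeaderElection() bool { return true }

var (
    _ manager.Runnable               = &serviceStatusWatcher{}
    _ manager.LeaderElectionRunnable = &serviceStatusWatcher{}
)

func main() {
    mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), manager.Options{
        LeaderElection:          true,
        LeaderElectionID:        "example-leader-lock", // illustrative lock name
        LeaderElectionNamespace: "projectcontour",
    })
    if err != nil {
        panic(err)
    }
    if err := mgr.Add(&serviceStatusWatcher{}); err != nil {
        panic(err)
    }
    _ = mgr.Start(context.Background())
}

Because the watch only starts on the leader, the informer's initial list delivers the current envoy Service as an Add event, so the latest status is still observed even if it changed while the instance was a follower.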

@tsaarni tsaarni marked this pull request as draft January 21, 2025 18:21

The Contour project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 14d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, the PR is closed

You can:

  • Ensure your PR is passing all CI checks. PRs that are fully green are more likely to be reviewed. If you are having trouble with CI checks, reach out to the #contour channel in the Kubernetes Slack workspace.
  • Mark this PR as fresh by commenting or pushing a commit
  • Close this PR
  • Offer to help out with triage

Please send feedback to the #contour channel in the Kubernetes Slack

@github-actions github-actions bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 10, 2025

@github-actions github-actions bot closed this Mar 13, 2025
@tsaarni tsaarni removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 13, 2025
@tsaarni tsaarni reopened this Mar 13, 2025

@github-actions github-actions bot added and removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 28, 2025

@github-actions github-actions bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 12, 2025
@tsaarni tsaarni force-pushed the lbstatus-on-follower branch 3 times, most recently from fc9581e to a2a792e on May 3, 2025 10:32
@github-actions github-actions bot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 4, 2025
Start watching Service updates only after becoming leader, when starting the
loadBalancerStatusWriter. This avoids piling up unprocessed Service updates on
follower instances, which consumed memory and eventually caused an
out-of-memory condition that killed the Contour process.

Signed-off-by: Tero Saarni <tero.saarni@est.tech>
@tsaarni tsaarni force-pushed the lbstatus-on-follower branch from a2a792e to 0ea6f5c on May 5, 2025 07:36
@tsaarni tsaarni commented May 5, 2025

Some notes for manual testing:

Prepare environment with modified Contour

kubectl apply -f https://projectcontour.io/quickstart/contour.yaml
make container 
docker tag ghcr.io/projectcontour/contour:$(git rev-parse --short HEAD) localhost/contour:latest
kind load docker-image localhost/contour:latest --name contour
kubectl -n projectcontour set image deployment/contour contour=localhost/contour:latest

Then create HTTPProxy

kubectl apply -f https://raw.githubusercontent.com/tsaarni/devenvs/refs/heads/main/contour/manifests/echoserver.yaml

Set a LoadBalancer IP address on the envoy Service status

kubectl -n projectcontour patch service envoy --type=merge --subresource=status --patch '{"status": {"loadBalancer":{"ingress":[{"ip":"1.2.3.4"}]}}}'

Check that the address is reflected on HTTPProxy

kubectl get httpproxy echoserver -o jsonpath='{.status.loadBalancer.ingress[*].ip}'

Scale the Contour deployment down to zero replicas and change the address while Contour is down

kubectl -n projectcontour scale deployment --replicas=0 contour
kubectl -n projectcontour patch service envoy --type=merge --subresource=status --patch '{"status": {"loadBalancer":{"ingress":[{"ip":"1.2.3.5"}]}}}'

Scale the deployment back up (the quickstart default is two replicas) and check that the new leader picks up the latest address and updates the HTTPProxy; the command should now print 1.2.3.5

kubectl -n projectcontour scale deployment --replicas=2 contour
kubectl get httpproxy echoserver -o jsonpath='{.status.loadBalancer.ingress[*].ip}'

@tsaarni tsaarni marked this pull request as ready for review May 5, 2025 07:42
@sunjayBhatia sunjayBhatia requested a review from a team May 5, 2025 07:42
@sunjayBhatia sunjayBhatia left a comment

Nice fix! Yeah, the load balancer status writer already has other informers for updating resources that need the load balancer status, so it makes sense to start the load balancer service watcher there too, since it only needs to run on the leader instance.

@sunjayBhatia sunjayBhatia merged commit 154999b into projectcontour:main May 5, 2025
26 of 27 checks passed
Labels
release-note/small A small change that needs one line of explanation in the release notes.
Development

Successfully merging this pull request may close these issues.

Non-leader contour controller pod memory keeps increasing until OOM
3 participants