
Epic: Contour Benchmarking and Performance Improvements #4154

Open
4 of 15 tasks
sunjayBhatia opened this issue Nov 4, 2021 · 1 comment
Labels
Epic performance priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release.

Comments

@sunjayBhatia
Member

sunjayBhatia commented Nov 4, 2021

Starting an epic on this topic to ensure the work is tracked and we stay organized.

While we have worked to ensure Contour's code performs well, through optimizations during feature implementation, in code reviews, and by fixing performance regressions reported by production users, we do not yet have a dedicated set of tests or benchmarks that demonstrate how we expect Contour to perform in production (under specific conditions we test against).

When we say "performance" here, we are mostly talking about measuring how long Contour takes to reconcile Kubernetes resources into Envoy configuration, since Contour as a software component is primarily the xDS server and control plane for Envoy. A secondary performance concern may be benchmarking request throughput in Envoy. Contour does not directly participate in the data plane; however, it does configure Envoy, and we provide deployment guidelines for Envoy.

Ultimately we don't want to provide "absolute" performance numbers, as users may have wildly varying environments, but we could establish benchmarks of the form: given a cluster with specific characteristics and workloads running in it (resembling a plausible production cluster), users should expect Contour to make apps exposed by Ingress/HTTPProxy/HTTPRoute resources available within X time, Y percent of the time.

We also would like to make performance improvements and be able to assess how changes to Contour internals will affect Contour's performance. Building an automated test suite will help us perform these sorts of experiments more easily. This work dovetails with improvements we would like to make to Contour's xDS code, for example.

List of items/issues that this work will encompass:

@github-actions

github-actions bot commented Feb 5, 2023

The Contour project currently lacks enough contributors to adequately respond to all Issues.

This bot triages Issues according to the following rules:

  • After 60d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, the Issue is closed

You can:

  • Mark this Issue as fresh by commenting
  • Close this Issue
  • Offer to help out with triage

Please send feedback to the #contour channel in the Kubernetes Slack.

@github-actions github-actions bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 5, 2023
@sunjayBhatia sunjayBhatia added priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 6, 2023