[feature] Add rate limiting support #370
Comments
Thanks for raising this issue. To be honest, this isn't high on my priority list, because I don't see how to implement this cleanly with a deployment of more than one Contour pod, as each will keep its own counter of requests in flight. You could probably do some back-of-the-envelope math and set the rate limit to N divided by the number of pods in your Contour deployment, but that's pretty fragile.
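The back-of-the-envelope math in the comment above can be sketched as a simple division of a desired global limit across independent pods (the function name and shape here are illustrative, not a real Contour API):

```python
def per_pod_limit(global_rps: int, pod_count: int) -> int:
    """Divide a desired global rate limit across independent pods.

    Fragile in practice, as the comment notes: it assumes traffic is
    spread evenly across pods and must be recomputed whenever the
    deployment is scaled up or down.
    """
    if pod_count < 1:
        raise ValueError("pod_count must be >= 1")
    # Floor division, but never drop below 1 request per second per pod.
    return max(1, global_rps // pod_count)

print(per_pod_limit(1000, 4))  # each of 4 pods allows 250 rps
```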
Just dropping a reference into this ticket... The Lyft reference implementation for rate limiting in Envoy would require some support in Contour to work.
That reference implementation is built to address exactly the kinds of issues raised earlier in this ticket about how to apply limits globally vs. per-pod.
@jhohertz thanks for the update. I had a 5-second glance at the link you included to make sure I understood how this was implemented -- it looks like another xDS API, RLS. That looks pretty straightforward. I'm not sure I can look at this for the 0.8 timeline, which is due at the end of next week. This is probably a medium-to-large piece of work (depending on your knowledge of Envoy and Contour), but it should be straightforward to implement.
I did some work on implementing this feature and there's a roadblock in how rate limiting is currently implemented. Right now, rate limiting can only be configured via the bootstrap config. There is an open issue to redesign the implementation so that it can be configured dynamically, as we'd want in Contour (envoyproxy/envoy#4669). If someone wanted this sooner, they'd need to pass a custom bootstrap config to Envoy; it wouldn't be clean, but it would work in a pinch. Here's my WIP branch with some examples (https://github.com/stevesloka/contour/tree/ratelimit). Also, I have a PR to add a Dockerfile to the rate limit service from Lyft, which will allow us to make a build of the rate limit service (envoyproxy/ratelimit#58).
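For the custom-bootstrap workaround described above, the Envoy bootstrap of that era let you name the rate limit service statically; a hypothetical fragment would look roughly like this (the cluster name is illustrative, and the exact field layout varied by Envoy version):

```yaml
# Hypothetical static bootstrap fragment (before envoyproxy/envoy#5242):
# the rate limit service could only be named here, not delivered via xDS.
rate_limit_service:
  grpc_service:
    envoy_grpc:
      cluster_name: ratelimit  # a cluster pointing at the Lyft ratelimit service
```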
Looks like a PR was merged (envoyproxy/envoy#5242) to enable rate limiting dynamically. The only blocker now is having a new enough version of Envoy to utilize this feature. I'm going to do a build of
Scheduling to the 0.15 milestone to talk about implementing RLS for Contour 1.0 |
Moving to the rc.1 milestone to decide to include this feature or not. |
We will make a call on if this feature will be included for 1.0.0 on Wednesday. |
I'm sorry to report that we will not be attempting this feature for Contour 1.0.0. Assigning to the backlog to prioritise after 1.0.
Related #65 |
Including my use case here per a discussion on the #contour slack channel, though it looks like my request is a bit different and may well be considered outside the scope of what Contour aims to provide. So feel free to shoot it down!

What I'd like is something akin to Traefik's RateLimit middleware. I assume there's no mechanism in Envoy for communicating per-client request counters to its sibling pods, but I would be OK with knowing that the real limit is equal to the defined limit multiplied by the number of Envoy daemonset pods controlled by Contour.

I definitely don't know everything Envoy is capable of, but from a user's perspective I'd also be happy with either the ability to implement my own rate limiting service that Envoy would consult, or providing Envoy with an in-cluster redis/memcached instance. The advantage of both of these approaches would of course be that the defined limit is the real limit, and the first approach would let us get as fancy as we want.

I'm fairly agnostic regarding where the limit should be defined (as a service annotation or within the HTTPProxy CRD), but if I had to pick, I'd say the HTTPProxy CRD, as its ability to function either standalone or by including other HTTPProxies could satisfy both folks who like to centralize and those who don't.
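The "implement my own rate limiting service" idea above boils down to keeping a per-client counter that every request consults. A minimal token-bucket sketch of that counter (illustrative only -- a real Envoy rate limit service speaks the gRPC RateLimitService API, and a shared redis/memcached backend would replace the in-memory dict):

```python
import time


class TokenBucket:
    """Minimal token-bucket limiter: the kind of per-client counter a
    custom rate limit service (or a redis-backed one) would keep."""

    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec      # steady-state refill rate
        self.capacity = burst         # maximum burst size
        self.tokens = float(burst)    # start full
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False


# One bucket per client; in a real deployment this state would live in
# a shared store so all Envoy pods see the same counters.
buckets: dict[str, TokenBucket] = {}


def should_limit(client_ip: str) -> bool:
    """Return True when the request should be rejected (e.g. with a 429)."""
    bucket = buckets.setdefault(client_ip,
                                TokenBucket(rate_per_sec=5, burst=10))
    return not bucket.allow()
```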
Local rate limiting that does not depend on an external rate limiting service was added in 1.13.0 - https://www.envoyproxy.io/docs/envoy/latest/configuration/listeners/network_filters/local_rate_limit_filter#config-network-filters-local-rate-limit. Might be a simpler way to implement rate limiting in Contour.
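Per the linked Envoy docs, the local rate limit network filter is configured with a token bucket, roughly like this (values illustrative; note the limit applies per Envoy process, so the per-pod vs. global concern raised earlier in this thread still applies):

```yaml
name: envoy.filters.network.local_ratelimit
typed_config:
  "@type": type.googleapis.com/envoy.extensions.filters.network.local_ratelimit.v3.LocalRateLimit
  stat_prefix: local_rate_limiter
  token_bucket:
    max_tokens: 1000       # burst size
    tokens_per_fill: 1000  # tokens added each interval
    fill_interval: 1s      # i.e. roughly 1k connections/second per pod
```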
Thanks for sharing @moderation. It looks like that new feature is intended to be used in conjunction with the global rate limiting service, but it's good to take into account in the design.
Updates projectcontour#370. Signed-off-by: Steve Kriss <krisss@vmware.com>
Closes projectcontour#370. Signed-off-by: Steve Kriss <krisss@vmware.com>
Adds support for configuring an external Rate Limit Service (RLS) that is used for making global rate limit decisions. Adds support to HTTPProxy for global rate limit policies, that generate descriptors for requests to be sent to the RLS for rate-limit decisions. Closes projectcontour#370. Signed-off-by: Steve Kriss <krisss@vmware.com>
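The HTTPProxy policy described in this change looks roughly like the following (field names per the Contour docs; treat this as an illustrative sketch rather than the authoritative schema):

```yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: echo
spec:
  virtualhost:
    fqdn: echo.example.com
    rateLimitPolicy:
      global:
        descriptors:
          # Each descriptor is a list of entries generated per request
          # and sent to the RLS for a rate-limit decision.
          - entries:
              - remoteAddress: {}
              - genericKey:
                  value: vhost-limit
```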
From annotations.md, Contour currently supports several limits, e.g. `max-connections` and `max-requests`. In some scenarios, we may also need rate limits. For example, we don't expect Envoy to send more than 1k requests per second to service foo, since exceeding that would overload it. This is particularly useful when service foo is doing multimedia jobs (transcoding/thumbnails/watermarks/etc.) with an estimable QPS capacity.

However, I'm not clear on how to make a rate limit apply globally across the Envoy cluster, since all the previous limits are per-instance.

Hope this issue is worth discussing.