another dynamic limit need #225
This can be easily implemented. Example with the Gin framework:

```go
// 5 requests per second for /api/v1
rate1, _ := limiter.NewRateFromFormatted("5-S")
middleware1 := mgin.NewMiddleware(limiter.New(store, rate1))

// 100 requests per second for /api/v2
rate2, _ := limiter.NewRateFromFormatted("100-S")
middleware2 := mgin.NewMiddleware(limiter.New(store, rate2))

router.Use(func(ctx *gin.Context) {
	if ctx.Request.URL.Path == "/api/v1" {
		middleware1(ctx)
		return
	}
	if ctx.Request.URL.Path == "/api/v2" {
		middleware2(ctx)
		return
	}
})
```
Thanks for the example, but what if some users want higher quotas for the same resource? Should I generate all possible middlewares?
This depends on the specific business. Below is pseudocode provided as a reference:

```go
var cache sync.Map

router.Use(func(ctx *gin.Context) {
	uid := getUserID(ctx)
	if v, ok := cache.Load(uid); ok {
		v.(gin.HandlerFunc)(ctx)
		return
	}
	rate := genRateByUserID(uid)
	h := mgin.NewMiddleware(limiter.New(store, rate))
	cache.Store(uid, h)
	h(ctx)
})
```

Please pay attention to the store used by the limiter: if you are using Redis for storage, you will need to use a different prefix per user. With in-memory storage, you should not encounter any issues.
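One caveat with the `sync.Map` pattern above: two concurrent requests from the same new user can both miss the cache and each build a middleware. `LoadOrStore` closes that race by keeping only the first stored value. A minimal self-contained sketch, with a plain function type standing in for `gin.HandlerFunc` (the `handler` type and `handlerFor` name are illustrative, not part of any library):

```go
package main

import (
	"fmt"
	"sync"
)

// handler stands in for gin.HandlerFunc in this sketch.
type handler func() string

var cache sync.Map

// handlerFor returns the cached per-user handler, building it and
// letting LoadOrStore discard the duplicate if two goroutines race.
func handlerFor(uid string) handler {
	if v, ok := cache.Load(uid); ok {
		return v.(handler)
	}
	h := handler(func() string { return "limited:" + uid })
	// If another goroutine stored first, actual is its handler, not ours.
	actual, _ := cache.LoadOrStore(uid, h)
	return actual.(handler)
}

func main() {
	fmt.Println(handlerFor("42")()) // limited:42
	fmt.Println(handlerFor("42")()) // served from the cache
}
```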
Thank you for this very nice library. I was wondering, performance-wise, how scalable is this approach for, say, 10k users? My current understanding is that my program would need to keep every user's middleware in memory.
Regarding the performance and scalability of this approach, I haven't personally tested it. However, using Redis as the limiter's store can be a viable option for achieving scalability.
Thank you @myml for answering these questions.
Thank you for your answer. Indeed, that could be interesting. I'm going to try the local approach first and then see how well it scales.
@myml Thank you for your answer. I guess using lower-level code is probably more elegant. I checked the store interface; it looks more flexible for something like dynamic rate limiting. I can just pass in different user IDs as keys and set different rates.
@myml With the approach above, I still find that a request to endpoint /api/v1 may be counted against endpoint /api/v2.
I have multiple resources being requested, each with a different capacity, so I need to rate-limit based on the capacity of each individual resource.
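One common way to keep counters separate per resource is to include the request path (or a resource name) in the limiter key, so /api/v1 and /api/v2 never share a bucket. A tiny sketch of the key construction only; the `rl:` prefix and `limiterKey` helper are illustrative conventions, not library defaults:

```go
package main

import "fmt"

// limiterKey namespaces the counter by both user and endpoint, so a
// request to /api/v1 is never counted against /api/v2's quota.
func limiterKey(uid, path string) string {
	return "rl:" + uid + ":" + path
}

func main() {
	fmt.Println(limiterKey("42", "/api/v1")) // rl:42:/api/v1
	fmt.Println(limiterKey("42", "/api/v2")) // rl:42:/api/v2
}
```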