Problem
No rate limiting on API endpoints:
- Denial of service risk
- Resource exhaustion possible
- No fair usage enforcement
Proposed Solution
Token Bucket Algorithm

```go
import (
	"sync"
	"time"
)

// RateLimiter implements a token bucket: tokens accumulate at refillRate
// up to maxTokens, and each request consumes one token.
type RateLimiter struct {
	tokens     float64
	maxTokens  float64
	refillRate float64 // tokens per second
	lastRefill time.Time
	mu         sync.Mutex
}

// Allow reports whether a request may proceed, refilling the bucket
// based on the time elapsed since the last call.
func (rl *RateLimiter) Allow() bool {
	rl.mu.Lock()
	defer rl.mu.Unlock()
	now := time.Now()
	elapsed := now.Sub(rl.lastRefill).Seconds()
	// min is the builtin introduced in Go 1.21.
	rl.tokens = min(rl.maxTokens, rl.tokens+elapsed*rl.refillRate)
	rl.lastRefill = now
	if rl.tokens >= 1 {
		rl.tokens--
		return true
	}
	return false
}
```
Middleware
```go
// RateLimitMiddleware rejects requests with 429 Too Many Requests once
// the limiter's bucket is empty. The header values here are illustrative;
// the limit should come from configuration.
func RateLimitMiddleware(limiter *RateLimiter) func(http.Handler) http.Handler {
	return func(next http.Handler) http.Handler {
		return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			if !limiter.Allow() {
				w.Header().Set("Retry-After", "1")
				w.Header().Set("X-RateLimit-Limit", "100")
				w.Header().Set("X-RateLimit-Remaining", "0")
				http.Error(w, "Rate limit exceeded", http.StatusTooManyRequests)
				return
			}
			next.ServeHTTP(w, r)
		})
	}
}
```
Acceptance Criteria
- Per-client rate limiting (by API key)
- Configurable limits (default: 100 req/min)
- Rate limit headers in responses
- 429 Too Many Requests response
- Retry-After header
- Different limits per endpoint category
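The first criterion, per-client limiting keyed by API key, can be sketched as a small registry that lazily creates one bucket per key on top of the `RateLimiter` proposed above. The names `NewRateLimiter`, `ClientLimiter`, and `NewClientLimiter` are illustrative and not part of the issue's snippet, and the bucket parameters below are demo values rather than the proposed 100 req/min default.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// RateLimiter is the token bucket from the proposal above.
type RateLimiter struct {
	tokens     float64
	maxTokens  float64
	refillRate float64 // tokens per second
	lastRefill time.Time
	mu         sync.Mutex
}

// NewRateLimiter starts with a full bucket.
func NewRateLimiter(maxTokens, refillRate float64) *RateLimiter {
	return &RateLimiter{tokens: maxTokens, maxTokens: maxTokens, refillRate: refillRate, lastRefill: time.Now()}
}

func (rl *RateLimiter) Allow() bool {
	rl.mu.Lock()
	defer rl.mu.Unlock()
	now := time.Now()
	elapsed := now.Sub(rl.lastRefill).Seconds()
	rl.tokens = min(rl.maxTokens, rl.tokens+elapsed*rl.refillRate)
	rl.lastRefill = now
	if rl.tokens >= 1 {
		rl.tokens--
		return true
	}
	return false
}

// ClientLimiter lazily creates one bucket per API key, so clients
// cannot consume each other's quota.
type ClientLimiter struct {
	mu         sync.Mutex
	limiters   map[string]*RateLimiter
	maxTokens  float64
	refillRate float64
}

func NewClientLimiter(maxTokens, refillRate float64) *ClientLimiter {
	return &ClientLimiter{limiters: make(map[string]*RateLimiter), maxTokens: maxTokens, refillRate: refillRate}
}

// Get returns the bucket for apiKey, creating it on first use.
func (cl *ClientLimiter) Get(apiKey string) *RateLimiter {
	cl.mu.Lock()
	defer cl.mu.Unlock()
	rl, ok := cl.limiters[apiKey]
	if !ok {
		rl = NewRateLimiter(cl.maxTokens, cl.refillRate)
		cl.limiters[apiKey] = rl
	}
	return rl
}

func main() {
	clients := NewClientLimiter(2, 1) // burst of 2, 1 token/s (demo values; keys are hypothetical)
	fmt.Println(clients.Get("alice").Allow()) // true
	fmt.Println(clients.Get("alice").Allow()) // true
	fmt.Println(clients.Get("alice").Allow()) // false: alice's bucket is empty
	fmt.Println(clients.Get("bob").Allow())   // true: bob has his own bucket
}
```

A production registry would also need to evict buckets for idle keys, otherwise the map grows without bound, which is itself a resource-exhaustion vector.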
Configuration
```yaml
rate_limiting:
  enabled: true
  default_limit: 100        # requests per minute
  endpoints:
    "/v1/tasks/*/run": 10   # more restrictive for execution
```
References
- Evaluation Report Section: 1.1 API Design Quality