refactor: rate limit store #1497
Conversation
Utilize a DB-backed rate limit store instead of the cross-process in-memory store. This setup works better when multiple nodes of the application are running, and it removes a 3rd-party dependency. The store was written from scratch to work with Prisma rather than relying on an available 3rd-party solution.
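For context, a custom store plugs into express-rate-limit through its `store` option. A minimal sketch of what the wiring could look like; the import path, export name, and option values are placeholders, not taken from this PR:

```ts
import rateLimit from 'express-rate-limit';
// Import path is illustrative; in this PR the class lives in
// libs/api-config/src/lib/api-rate-limit.config.ts.
import { PrismaRateLimitStore } from '@api-config/api-rate-limit.config';

// Placeholder limits; the project's real values live in the config file above.
export const apiRateLimiter = rateLimit({
  windowMs: 60_000, // 1 minute window
  limit: 100,       // 100 requests per window per client ("max" in express-rate-limit v6)
  standardHeaders: true,
  legacyHeaders: false,
  // All API nodes share the same Postgres-backed counters.
  store: new PrismaRateLimitStore(),
});
```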
Pull request overview
This PR refactors the rate limiting implementation from an in-memory cross-process store to a database-backed Prisma store, enabling better multi-node scalability and removing the @express-rate-limit/cluster-memory-store dependency.
Key Changes:
- Implements a custom PrismaRateLimitStore class that stores rate limit data in PostgreSQL (a skeleton sketch follows below)
- Adds a new RateLimit model to the Prisma schema with an indexed resetTime for efficient cleanup
- Removes cluster-specific rate limit initialization code from main.ts
Reviewed changes
Copilot reviewed 6 out of 7 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| libs/api-config/src/lib/api-rate-limit.config.ts | Implements new PrismaRateLimitStore class with get/increment/decrement/reset operations and automatic cleanup |
| prisma/schema.prisma | Adds RateLimit model with key, hits, resetTime fields and appropriate indexes (sketched after this table) |
| prisma/migrations/20260104191734_add_rate_limit_store/migration.sql | Creates rate_limit table with primary key on key field and index on resetTime |
| package.json | Removes @express-rate-limit/cluster-memory-store dependency |
| yarn.lock | Updates lockfile to reflect removed dependency and its transitive dependencies |
| apps/api/src/main.ts | Removes cluster-specific rate limiter store initialization code |
| .env.example | Removes HONEYCOMB environment variable examples (appears unrelated to this PR) |
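Based on the file descriptions above, the RateLimit model presumably looks something like the following. This is a sketch: the exact attributes, defaults, and `@@map` name are assumptions inferred from the field and table names mentioned in the summary, not the PR's actual schema.

```prisma
model RateLimit {
  key       String   @id
  hits      Int      @default(0)
  resetTime DateTime
  createdAt DateTime @default(now())
  updatedAt DateTime @updatedAt

  // Index on resetTime so expired rows can be swept efficiently.
  @@index([resetTime])
  @@map("rate_limit")
}
```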
Comments suppressed due to low confidence (1)
.env.example:106
- The removal of HONEYCOMB environment variables appears to be unrelated to this rate limit refactoring PR. These variables are still referenced in the codebase (see libs/api-config/src/lib/env-config.ts lines 227-228). If this is intentional, it should be part of a separate PR focused on removing Honeycomb integration.
```
# Nx 18 enables using plugins to infer targets by default
# This is disabled for existing workspaces to maintain compatibility
# For more info, see: https://nx.dev/concepts/inferred-tasks
NX_ADD_PLUGINS=false
```
```ts
// Only decrement if the entry exists and hasn't expired
// Don't let hits go below 0
await prisma.rateLimit.updateMany({
  where: { key: prefixedKey, resetTime: { gt: now } },
  data: {
    hits: { decrement: 1 },
    updatedAt: now,
  },
});
```
Copilot AI · Jan 4, 2026
The comment states "Don't let hits go below 0" but there's no actual constraint preventing negative values. Prisma's decrement operation will allow hits to become negative. Add a WHERE clause condition to only decrement when hits > 0, for example: where: { key: prefixedKey, resetTime: { gt: now }, hits: { gt: 0 } }
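A minimal sketch of the guarded decrement the comment suggests, using the same `prisma`, `prefixedKey`, and `now` as the excerpt above:

```ts
// Only decrement live, non-expired entries, and never push hits below 0.
await prisma.rateLimit.updateMany({
  where: { key: prefixedKey, resetTime: { gt: now }, hits: { gt: 0 } },
  data: {
    hits: { decrement: 1 },
    updatedAt: now,
  },
});
```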
```ts
// Use an atomic upsert (INSERT ... ON CONFLICT) for optimal performance
// If the key exists and hasn't expired, increment the counter
// If the key doesn't exist or has expired, create/reset with count of 1
const row = await prisma.rateLimit.upsert({
  where: { key: prefixedKey },
  select: { hits: true, resetTime: true },
  update: {
    hits: { increment: 1 },
    resetTime,
    updatedAt: now,
  },
  create: {
    key: prefixedKey,
    hits: 1,
    resetTime,
    createdAt: now,
    updatedAt: now,
  },
```
Copilot AI · Jan 4, 2026
The increment logic has a race condition with expired entries. When an entry exists but has expired, the upsert will increment the expired counter instead of resetting to 1. The logic should check if the existing entry has expired (resetTime <= now) and either delete it first or use conditional logic in the update to reset hits to 1 when the entry is expired. Currently, an expired entry with 50 hits would become 51 hits instead of resetting to 1.
Suggested change (replace the upsert shown above with a read/modify/write inside a transaction):

```ts
// Perform the read/modify/write logic in a transaction to avoid race conditions
const row = await prisma.$transaction(async (tx) => {
  // Fetch the existing entry, if any
  const existing = await tx.rateLimit.findUnique({
    where: { key: prefixedKey },
    select: { hits: true, resetTime: true },
  });
  // If the key doesn't exist or has expired, reset the counter to 1
  if (!existing || existing.resetTime <= now) {
    return tx.rateLimit.upsert({
      where: { key: prefixedKey },
      select: { hits: true, resetTime: true },
      update: {
        hits: 1,
        resetTime,
        updatedAt: now,
      },
      create: {
        key: prefixedKey,
        hits: 1,
        resetTime,
        createdAt: now,
        updatedAt: now,
      },
    });
  }
  // If the key exists and hasn't expired, increment the counter
  return tx.rateLimit.update({
    where: { key: prefixedKey },
    select: { hits: true, resetTime: true },
    data: {
      hits: { increment: 1 },
      resetTime,
      updatedAt: now,
    },
  });
```
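Note that an interactive transaction still performs a read followed by a write, so under Postgres's default Read Committed isolation two concurrent requests can both observe an expired row and both reset it. Running the transaction with Serializable isolation (and retrying on serialization failure), or collapsing the logic into one conditional upsert statement, avoids that. A sketch of the latter, assuming the `rate_limit` table and column names described in the migration summary above:

```ts
// Single-statement upsert: reset to 1 when the stored window has expired,
// otherwise increment. Table/column names are assumptions from the PR summary.
const rows = await prisma.$queryRaw<{ hits: number; resetTime: Date }[]>`
  INSERT INTO "rate_limit" ("key", "hits", "resetTime", "createdAt", "updatedAt")
  VALUES (${prefixedKey}, 1, ${resetTime}, ${now}, ${now})
  ON CONFLICT ("key") DO UPDATE SET
    "hits"      = CASE WHEN "rate_limit"."resetTime" <= ${now} THEN 1
                       ELSE "rate_limit"."hits" + 1 END,
    "resetTime" = CASE WHEN "rate_limit"."resetTime" <= ${now} THEN ${resetTime}
                       ELSE "rate_limit"."resetTime" END,
    "updatedAt" = ${now}
  RETURNING "hits", "resetTime"
`;
const row = rows[0];
```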
```ts
select: { hits: true, resetTime: true },
update: {
  hits: { increment: 1 },
  resetTime,
```
Copilot AI · Jan 4, 2026
The resetTime is being updated on every increment (line 132), which means the window keeps sliding forward with each request. This is incorrect behavior for rate limiting. The resetTime should remain fixed from when the window was first created, not update on every request. Only set resetTime in the create branch, not in the update branch.
Suggested change: remove the `resetTime,` line from the `update` block.
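A sketch of the fixed-window version of the upsert, using the same variables as the excerpts above (this only addresses the sliding resetTime, not the expired-entry race discussed earlier):

```ts
const row = await prisma.rateLimit.upsert({
  where: { key: prefixedKey },
  select: { hits: true, resetTime: true },
  update: {
    // resetTime intentionally omitted: the window's end stays fixed once created.
    hits: { increment: 1 },
    updatedAt: now,
  },
  create: {
    key: prefixedKey,
    hits: 1,
    resetTime,
    createdAt: now,
    updatedAt: now,
  },
});
```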
```ts
const prefixedKey = this.prefixKey(key);

try {
  await prisma.rateLimit.delete({
```
Copilot AI · Jan 4, 2026
The resetKey method uses prisma.rateLimit.delete which will throw an error if the key doesn't exist (Prisma throws RecordNotFound). This should either catch and ignore the specific not found error, or use deleteMany instead which won't throw if the record doesn't exist.
Suggested change (use `deleteMany` instead of `delete`):

```ts
await prisma.rateLimit.deleteMany({
```
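Put together, the resetKey method with the suggested `deleteMany` would look roughly like this. A sketch only; `this.prefixKey` and the surrounding class come from the excerpts above:

```ts
async resetKey(key: string): Promise<void> {
  const prefixedKey = this.prefixKey(key);
  // deleteMany is a no-op (rather than a thrown P2025 "record not found" error)
  // when the key is absent, so no try/catch is needed for the missing-record case.
  await prisma.rateLimit.deleteMany({
    where: { key: prefixedKey },
  });
}
```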