
Conversation

@rameerez
Owner

Summary

Fixes a PostgreSQL error when creating API keys with type limits configured:

PG::FeatureNotSupported: ERROR: FOR UPDATE is not allowed with aggregate functions

Problem

The within_key_type_limit validation used .lock(true).count which generates:

SELECT COUNT(*) FROM api_keys WHERE ... FOR UPDATE

PostgreSQL doesn't allow FOR UPDATE with aggregate functions like COUNT(*).

Solution

Changed to .lock(true).ids.size which generates:

SELECT "api_keys"."id" FROM api_keys WHERE ... FOR UPDATE

The returned IDs are then counted in Ruby. This achieves the same race condition protection while generating SQL that all supported databases accept.
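
For illustration, here is a rough sketch of what the changed validation might look like. The method name within_key_type_limit comes from the description above; the owner association, active scope, and limit lookup are assumptions for the sketch and may not match the gem's actual code:

def within_key_type_limit
  limit = self.class.key_type_limits[key_type] # hypothetical lookup of the configured limit
  return if limit.nil?

  existing = owner.api_keys
                  .active
                  .where(key_type: key_type)
                  .lock(true) # SELECT "api_keys"."id" ... FOR UPDATE — row locking without an aggregate
                  .ids        # load only the primary keys of the locked rows
                  .size       # count them in Ruby instead of via SQL COUNT(*)

  errors.add(:key_type, "limit of #{limit} reached") if existing >= limit
end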

Database Compatibility

Database     Before     After
PostgreSQL   ❌ Fails   ✅ Works
MySQL        ✅ Works   ✅ Works
SQLite       ✅ Works   ✅ Works

Test plan

  • All 125 existing tests pass
  • Verified fix works in production PostgreSQL environment

🤖 Generated with Claude Code

PostgreSQL doesn't allow FOR UPDATE with aggregate functions like COUNT(*).
This caused a PG::FeatureNotSupported error when creating keys with limits.

Changed from .lock(true).count to .lock(true).ids.size which:
- Selects IDs with row locking (valid SQL on all databases)
- Counts the results in Ruby

This fix is compatible with PostgreSQL, MySQL, and SQLite.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@claude
Contributor

claude bot commented Jan 22, 2026

Code Review for PR #7

Summary

This PR fixes a PostgreSQL-specific bug where creating API keys with type limits fails due to FOR UPDATE being used with aggregate functions. The solution is elegant and maintains race condition protection. All tests pass across multiple Ruby/Rails versions.


Strengths

  1. Correct Problem Identification: The root cause analysis is spot-on. PostgreSQL indeed doesn't support FOR UPDATE with aggregate functions like COUNT(*), as per SQL standards.

  2. Elegant Solution: Changing from .lock(true).count to .lock(true).ids.size is the idiomatic Rails approach:

    • Maintains pessimistic locking for race condition protection
    • Generates valid SQL across all database engines
    • Memory overhead is minimal since we're only loading IDs, not full records
  3. Good Documentation: The updated comment clearly explains why this approach is needed, which will help future maintainers.

  4. Test Coverage: The existing test suite at test/key_types_test.rb includes comprehensive coverage:

    • Lines 294-307: Tests key limit enforcement
    • Lines 309-321: Tests per-environment limits
    • All 125 tests passing confirms no regressions
  5. Database Compatibility: The solution works across PostgreSQL, MySQL, and SQLite.


🔍 Observations

  1. Query Execution Order: The PR moves .lock(true) after the .where() filters in the chain. Active Record composes the chain into a single query either way, so the FOR UPDATE still applies only to the filtered subset of rows.

  2. Memory Considerations: For the use case (enforcing a limit: 1 for publishable keys per environment), loading IDs into memory is negligible. Even with larger limits, ID arrays are lightweight.

  3. Performance: For small result sets, loading only the id column and calling .size in Ruby costs roughly the same as a COUNT(*) query, and the rows must be read to take the lock anyway, so there is no practical performance difference.


💡 Suggestions for Future Enhancement (Not blockers)

While the current implementation is solid, here are some thoughts for future consideration:

  1. Large Limit Handling: If you ever support very large limits (e.g., limit: 10000), consider capping how much the check has to load, e.g. a .limit-bounded count or an .exists? check:

    # For example, if limits ever get much larger:
    if owner.api_keys.active.where(...).limit(limit + 1).count > limit

    However, for the current use case (limit: 1 for publishable keys), this is an unnecessary optimization.

  2. Test for Concurrent Creation: While the existing tests verify limits work, a concurrent stress test could verify the locking mechanism under race conditions:

    # Example test structure (assumes over-limit creations raise ActiveRecord::RecordInvalid):
    threads = 5.times.map do
      Thread.new do
        user.create_api_key!(...)
      rescue ActiveRecord::RecordInvalid
        # expected for the threads that lose the race
      end
    end
    threads.each(&:join)
    assert_equal 1, user.api_keys.where(key_type: "publishable").count

    This would provide additional confidence in the locking behavior, though the current fix should handle this correctly.


🎯 Verdict

APPROVED

This is a well-reasoned fix for a legitimate PostgreSQL compatibility issue. The solution:

  • Solves the immediate problem
  • Maintains all safety guarantees (race condition protection)
  • Has no performance regressions
  • Works across all supported databases
  • Is well-documented

The PR is ready to merge.


📊 Technical Details

Before (PostgreSQL fails):

SELECT COUNT(*) FROM api_keys WHERE ... FOR UPDATE
-- ❌ ERROR: FOR UPDATE is not allowed with aggregate functions

After (All databases work):

SELECT "api_keys"."id" FROM api_keys WHERE ... FOR UPDATE
-- ✅ Returns [123, 456] → .size in Ruby → 2

Race Condition Protection Preserved:
Both approaches lock the rows during the transaction, preventing concurrent inserts from bypassing the limit check.
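
As a rough illustration of what that means in Active Record terms (the ApiKey model name, owner.api_keys association, and active scope are assumptions for the sketch, not the gem's verified internals):

limit = 1 # the configured limit for publishable keys in this example

ApiKey.transaction do
  # FOR UPDATE is acquired here and held until the transaction commits,
  # so a competing creation for the same owner and key type has to wait.
  locked_ids = owner.api_keys.active.where(key_type: "publishable").lock(true).ids

  raise ActiveRecord::RecordNotSaved, "publishable key limit reached" if locked_ids.size >= limit

  # The INSERT runs while the lock is still held, so the check above is not bypassed mid-transaction.
  owner.api_keys.create!(key_type: "publishable")
end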


Great work on the fix! 🚀

@rameerez rameerez merged commit ffa9f09 into main Jan 22, 2026
7 checks passed
@rameerez rameerez deleted the fix/postgresql-for-update-count branch January 22, 2026 19:44