fix: resolve race condition in compound trigger evaluation#33

Open
tomerqodo wants to merge 2 commits into coderabbit_full_base_fix_resolve_race_condition_in_compound_trigger_evaluation_pr7 from coderabbit_full_head_fix_resolve_race_condition_in_compound_trigger_evaluation_pr7

Conversation

@tomerqodo tomerqodo commented Jan 29, 2026

Benchmark PR from agentic-review-benchmarks#7

Summary by CodeRabbit

Release Notes

  • Bug Fixes

    • Resolved race conditions that could cause composite triggers to fire multiple times when child triggers fire concurrently by implementing serialization mechanisms.
  • Tests

    • Added comprehensive concurrency test suite for composite triggers to verify correct single-fire behavior under simultaneous child trigger evaluations.

✏️ Tip: You can customize this high-level summary in your review settings.

desertaxle and others added 2 commits January 25, 2026 12:10
Fixes two race conditions in compound trigger evaluation:

1. **Never-firing race** (transactional): When two child triggers fire
   concurrently in separate transactions, each only sees its own
   uncommitted insert due to READ COMMITTED isolation. Neither sees
   enough firings to trigger the parent.

   Fix: Use PostgreSQL advisory locks to serialize concurrent evaluations
   for the same compound trigger.

2. **Double-firing race** (autocommit): When both transactions see all
   firings, both delete and both fire the parent.

   Fix: Use DELETE ... RETURNING to make clearing a claim operation.
   Only the worker that successfully deletes the expected firings
   proceeds; others bail out.
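The interaction of the two mechanisms can be sketched with an in-memory stand-in: an `asyncio.Lock` plays the role of the advisory lock, and a shared set plays the role of the child-firing table. The names `evaluate`, `pending_firings`, and `trigger_locks` are illustrative, not the real API.

```python
import asyncio
import uuid

# Stand-in for the child-firing table, keyed by firing id.
pending_firings: set[uuid.UUID] = set()
# One asyncio.Lock per trigger id stands in for pg_advisory_xact_lock.
trigger_locks: dict[uuid.UUID, asyncio.Lock] = {}

fired = 0  # how many times the "parent" trigger actually fired

async def evaluate(trigger_id: uuid.UUID, expected: set[uuid.UUID]) -> None:
    global fired
    lock = trigger_locks.setdefault(trigger_id, asyncio.Lock())
    async with lock:  # serialize evaluations, as the advisory lock does
        # "DELETE ... RETURNING": atomically remove rows and observe
        # exactly which ones we removed (our claim).
        deleted = pending_firings & expected
        pending_firings.difference_update(deleted)
        if len(deleted) != len(expected):
            return  # another worker already claimed some firings; bail out
        fired += 1

async def main() -> None:
    trigger_id = uuid.uuid4()
    firings = {uuid.uuid4(), uuid.uuid4()}
    pending_firings.update(firings)
    # Both workers see "all children fired" and race to evaluate.
    await asyncio.gather(
        evaluate(trigger_id, firings),
        evaluate(trigger_id, firings),
    )

asyncio.run(main())
print(fired)  # prints 1: only one worker wins the claim
```

The lock alone prevents the never-firing race (evaluations no longer interleave), while the claim-by-delete check alone prevents the double-fire; the sketch shows why both are needed together.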

Based on the fix in PrefectHQ/nebula#10716.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
coderabbitai bot commented Jan 29, 2026

Walkthrough

The changes add concurrency safety to composite trigger evaluations by introducing PostgreSQL advisory locks and race-condition detection. A new acquire_composite_trigger_lock function serializes concurrent child trigger evaluations, while clear_child_firings now returns deleted IDs to verify that actual deletions match expected firings, preventing double-firing scenarios.

Changes

  • Advisory Lock & Race Detection (src/prefect/server/events/models/composite_trigger_child_firing.py, src/prefect/server/events/triggers.py): Added acquire_composite_trigger_lock for PostgreSQL advisory lock acquisition to serialize concurrent evaluations. Modified clear_child_firings to return a set[UUID] of deleted IDs. Enhanced evaluate_composite_trigger to acquire the lock and detect race conditions by comparing the count of deleted IDs against the expected firing IDs, exiting early on mismatch.
  • Concurrency Test Suite (tests/events/server/triggers/test_composite_triggers.py): Added a TestCompoundTriggerConcurrency class with fixture compound_automation_concurrent and two tests (test_compound_trigger_does_not_double_fire_when_children_race, test_concurrent_child_firings_still_triggers_parent) that exercise concurrent child firing scenarios and verify a single parent trigger firing.

Sequence Diagram

sequenceDiagram
    participant Worker1 as Worker 1
    participant Worker2 as Worker 2
    participant DB as PostgreSQL Database
    participant Trigger as Composite Trigger Evaluator

    Worker1->>DB: acquire_composite_trigger_lock(trigger_id)
    Worker2->>DB: acquire_composite_trigger_lock(trigger_id)<br/>(blocks)
    
    Worker1->>Trigger: evaluate_composite_trigger()
    Trigger->>DB: clear_child_firings() → returns deleted_ids
    Trigger->>Trigger: Compare len(deleted_ids)<br/>vs expected firing_ids
    
    alt Race Condition Detected
        Trigger->>Trigger: Debug log race condition
        Trigger->>Trigger: Return early (no double-fire)
    else No Race Condition
        Trigger->>Trigger: Proceed with parent trigger firing
    end
    
    DB->>Worker1: Release advisory lock
    Worker2->>DB: acquire_composite_trigger_lock(trigger_id)<br/>(acquires)
    Worker2->>Trigger: evaluate_composite_trigger()
    Trigger->>DB: clear_child_firings() → returns empty (already deleted)
    Trigger->>Trigger: Race condition detected (0 ≠ expected)
    Trigger->>Trigger: Return early (prevent double-fire)
    DB->>Worker2: Release advisory lock

Estimated Code Review Effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Poem

🐰 Two triggers race to fire at once,
A lock steps in—no double-dunce!
We clear the firings, check the count,
And catch the race at every mount.
Now composite triggers fire just right,
One bloom of logic, shining bright! ✨

🚥 Pre-merge checks | ✅ 1 | ❌ 2
❌ Failed checks (2 warnings)
  • Description check ⚠️ Warning: The description is minimal and only references a benchmark PR without explaining the changes, required checklist items, or issue references, and it does not follow the repository's description template. Resolution: add a comprehensive overview of the changes addressing both race conditions, reference the related issue, confirm test coverage, and complete the provided checklist items.
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 62.50%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (1 passed)
  • Title check ✅ Passed: The title 'fix: resolve race condition in compound trigger evaluation' clearly and specifically describes the main change, as shown across all modified files.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing touches
  • 📝 Generate docstrings

Comment @coderabbitai help to get the list of available commands and usage tips.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🤖 Fix all issues with AI agents
In `@src/prefect/server/events/models/composite_trigger_child_firing.py`:
- Around line 147-155: The delete query in the function in
composite_trigger_child_firing.py is returning
db.CompositeTriggerChildFiring.child_trigger_id but the function is documented
to return child_firing_ids; change the .returning(...) column to
db.CompositeTriggerChildFiring.child_firing_id so the result contains the
deleted child_firing_id values (and ensure any subsequent use of the query
result still extracts those child_firing_id values).
- Around line 38-45: The current advisory lock key uses Python's randomized
hash(str(trigger.id)) which is not stable across processes; replace that with
the UUID's deterministic integer representation (use trigger.id.int) truncated
to fit a signed bigint (e.g. modulo 2**63 or bitmask) when computing lock_key in
composite_trigger_child_firing.py so all workers derive the same key; keep the
existing pg_advisory_xact_lock call using the new lock_key.

In `@tests/events/server/triggers/test_composite_triggers.py`:
- Around line 1634-1639: The test function compound_automation_concurrent
currently takes side-effect-only fixtures cleared_buckets and
cleared_automations as parameters which triggers ARG002; instead remove those
unused parameters and apply pytest.mark.usefixtures("cleared_buckets",
"cleared_automations") to the compound_automation_concurrent definition so
pytest still runs the fixtures' side effects while keeping the function
signature as async def compound_automation_concurrent(self, automations_session:
AsyncSession) -> Automation.
📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 86468ec and 8c360ef.

📒 Files selected for processing (3)
  • src/prefect/server/events/models/composite_trigger_child_firing.py
  • src/prefect/server/events/triggers.py
  • tests/events/server/triggers/test_composite_triggers.py
🧰 Additional context used
🧬 Code graph analysis (3)
src/prefect/server/events/models/composite_trigger_child_firing.py (2)
src/prefect/server/utilities/database.py (1)
  • get_dialect (741-769)
src/prefect/server/database/interface.py (1)
  • dialect (141-142)
tests/events/server/triggers/test_composite_triggers.py (3)
src/prefect/server/events/schemas/automations.py (2)
  • triggers (516-518)
  • Firing (649-718)
src/prefect/server/events/triggers.py (2)
  • load_automation (713-728)
  • act (424-471)
src/prefect/utilities/asyncutils.py (1)
  • gather (533-544)
src/prefect/server/events/triggers.py (1)
src/prefect/server/events/models/composite_trigger_child_firing.py (2)
  • acquire_composite_trigger_lock (17-47)
  • clear_child_firings (134-157)
🪛 Ruff (0.14.14)
tests/events/server/triggers/test_composite_triggers.py

1637-1637: Unused method argument: cleared_buckets

(ARG002)


1638-1638: Unused method argument: cleared_automations

(ARG002)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (38)
  • GitHub Check: Runner, Worker, Settings, Input, and CLI Tests - python:3.11, postgres:14
  • GitHub Check: Runner, Worker, Settings, Input, and CLI Tests - python:3.14, postgres:14
  • GitHub Check: Client Tests - python:3.11, postgres:14
  • GitHub Check: Database and Orchestration Tests - python:3.13, sqlite
  • GitHub Check: Database and Orchestration Tests - python:3.12, postgres:14
  • GitHub Check: Database and Orchestration Tests - python:3.10, postgres:14
  • GitHub Check: Client Tests - python:3.10, postgres:14
  • GitHub Check: Database and Orchestration Tests - python:3.14, postgres:14
  • GitHub Check: Runner, Worker, Settings, Input, and CLI Tests - python:3.12, postgres:14
  • GitHub Check: Runner, Worker, Settings, Input, and CLI Tests - python:3.13, postgres:14
  • GitHub Check: Client Tests - python:3.12, postgres:14
  • GitHub Check: Database and Orchestration Tests - python:3.13, postgres:14
  • GitHub Check: Runner, Worker, Settings, Input, and CLI Tests - python:3.10, postgres:14
  • GitHub Check: Client Tests - python:3.13, postgres:14
  • GitHub Check: Database and Orchestration Tests - python:3.14, sqlite
  • GitHub Check: Database and Orchestration Tests - python:3.12, sqlite
  • GitHub Check: Client Tests - python:3.14, postgres:14
  • GitHub Check: Database and Orchestration Tests - python:3.11, sqlite
  • GitHub Check: Database and Orchestration Tests - python:3.10, sqlite
  • GitHub Check: Server Tests - python:3.11, postgres:14
  • GitHub Check: Server Tests - python:3.13, postgres:14
  • GitHub Check: Database and Orchestration Tests - python:3.11, postgres:14
  • GitHub Check: Server Tests - python:3.14, postgres:14
  • GitHub Check: Server Tests - python:3.14, sqlite
  • GitHub Check: Server Tests - python:3.12, postgres:14
  • GitHub Check: Server Tests - python:3.11, sqlite
  • GitHub Check: Server Tests - python:3.12, sqlite
  • GitHub Check: Server Tests - python:3.10, postgres:14
  • GitHub Check: Server Tests - python:3.13, sqlite
  • GitHub Check: Server Tests - python:3.10, sqlite
  • GitHub Check: Pre-commit checks
  • GitHub Check: Benchmark
  • GitHub Check: Test SQLite 3.24.0 Compatibility
  • GitHub Check: Benchmark
  • GitHub Check: Integration tests @main
  • GitHub Check: Prefect Client Compatibility Tests
  • GitHub Check: Test SQLite 3.31.1 Compatibility
  • GitHub Check: typesafety
🔇 Additional comments (4)
src/prefect/server/events/triggers.py (3)

71-74: Standard logger initialization looks good.

Using the module logger is clear and consistent.


349-353: Good serialization point for composite trigger evaluation.

Acquiring the advisory lock here is the right place to prevent concurrent evaluations from racing.


390-410: Race-detection guard on delete is solid.

The “claim by delete” check prevents double-firing in concurrent evaluations.

tests/events/server/triggers/test_composite_triggers.py (1)

1674-1747: Concurrency regression tests are a nice addition.

These cases exercise the race-fix paths and validate single-fire behavior under concurrent child firings.

✏️ Tip: You can disable this entire section by setting review_details to false in your review settings.

Comment on lines +38 to +45
if dialect.name == "postgresql":
# Use the trigger's UUID as the lock key
# pg_advisory_xact_lock takes a bigint, so we use the UUID's int representation
# truncated to fit (collision is extremely unlikely and benign)
lock_key = hash(str(trigger.id)) % (2**63)
await session.execute(
sa.text("SELECT pg_advisory_xact_lock(:key)"), {"key": lock_key}
)

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

cat -n src/prefect/server/events/models/composite_trigger_child_firing.py | head -80

Repository: qodo-benchmark/prefect-combined-coderabbit

Length of output: 3630


🏁 Script executed:

# Search for trigger.id type and imports in the file
rg -A 5 -B 5 "trigger\.id" src/prefect/server/events/models/composite_trigger_child_firing.py

Repository: qodo-benchmark/prefect-combined-coderabbit

Length of output: 2265


🏁 Script executed:

# Check UUID imports and usage in this file
rg "UUID|uuid" src/prefect/server/events/models/composite_trigger_child_firing.py

Repository: qodo-benchmark/prefect-combined-coderabbit

Length of output: 292


🏁 Script executed:

# Check the trigger model definition to see what type trigger.id is
fd composite_trigger -t f

Repository: qodo-benchmark/prefect-combined-coderabbit

Length of output: 361


🌐 Web query:

Python hash randomization PYTHONHASHSEED how does it affect hash(str()) consistency across processes

💡 Result:

hash(str_obj) in CPython is stable only within a single interpreter process unless you control the hash seed.

What hash randomization does

  • By default (PYTHONHASHSEED unset or set to random), CPython chooses a different per-process secret seed at startup, so hash("same text") will generally differ across separate runs / processes. (docs.python.org)
  • The randomization applies to str and bytes (and, at the implementation level, also covers the same hashing machinery used for memoryview). (docs.python.org)
  • The motivation is security (mitigating hash-collision DoS); CPython uses a keyed string/bytes hash (SipHash variants on most modern builds). (peps.python.org)

How PYTHONHASHSEED affects cross-process consistency

From the Python docs: (docs.python.org)

  • Unset / random → different hash seed each process → hash("x") not consistent across processes.
  • Set to an integer N (0…4294967295) → fixed seed → hash("x") becomes repeatable across processes that use the same seed.
  • Set to 0 → “disable hash randomization” (i.e., no per-process random seed), giving repeatable results across runs. (docs.python.org)

Example (Unix-like shells):

PYTHONHASHSEED=123 python -c 'print(hash("abc"))'
PYTHONHASHSEED=123 python -c 'print(hash("abc"))'  # same output
PYTHONHASHSEED=random python -c 'print(hash("abc"))'
PYTHONHASHSEED=random python -c 'print(hash("abc"))'  # likely different

Important caveats

  • Even with a fixed PYTHONHASHSEED, don’t treat hash() as a stable persistent identifier: results can differ across Python versions/build options/word size/platforms (the hash algorithm is an internal detail exposed via sys.hash_info.algorithm). (docs.python.org)
  • PYTHONHASHSEED must be set before the interpreter starts; changing os.environ["PYTHONHASHSEED"] inside a running process won’t retroactively change hash() behavior for that process. (stackoverflow.com)


🏁 Script executed:

# Find CompositeTrigger definition
rg -A 20 "class CompositeTrigger" src/prefect/server/events/schemas/

Repository: qodo-benchmark/prefect-combined-coderabbit

Length of output: 1950


🏁 Script executed:

# Verify UUID.int attribute exists and works
python3 << 'EOF'
from uuid import UUID
test_uuid = UUID('550e8400-e29b-41d4-a716-446655440000')
print(f"UUID: {test_uuid}")
print(f"UUID.int: {test_uuid.int}")
print(f"UUID.int % (2**63): {test_uuid.int % (2**63)}")
print(f"Type of UUID.int: {type(test_uuid.int)}")
EOF

Repository: qodo-benchmark/prefect-combined-coderabbit

Length of output: 250


🏁 Script executed:

# Find the Trigger base class to confirm trigger.id type
rg -B 5 -A 30 "^class Trigger" src/prefect/server/events/schemas/automations.py | head -60

Repository: qodo-benchmark/prefect-combined-coderabbit

Length of output: 1909


Use a deterministic advisory-lock key across processes.

hash(str(trigger.id)) is randomized per process by default (controlled by PYTHONHASHSEED), so different workers compute different keys and skip mutual exclusion. Use the UUID's stable integer representation instead, which is identical across all processes and aligns with the comment's intent.

🔒 Proposed fix
-        lock_key = hash(str(trigger.id)) % (2**63)
+        lock_key = trigger.id.int % (2**63)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
if dialect.name == "postgresql":
# Use the trigger's UUID as the lock key
# pg_advisory_xact_lock takes a bigint, so we use the UUID's int representation
# truncated to fit (collision is extremely unlikely and benign)
lock_key = hash(str(trigger.id)) % (2**63)
await session.execute(
sa.text("SELECT pg_advisory_xact_lock(:key)"), {"key": lock_key}
)
if dialect.name == "postgresql":
# Use the trigger's UUID as the lock key
# pg_advisory_xact_lock takes a bigint, so we use the UUID's int representation
# truncated to fit (collision is extremely unlikely and benign)
lock_key = trigger.id.int % (2**63)
await session.execute(
sa.text("SELECT pg_advisory_xact_lock(:key)"), {"key": lock_key}
)
🤖 Prompt for AI Agents
In `@src/prefect/server/events/models/composite_trigger_child_firing.py` around
lines 38 - 45, The current advisory lock key uses Python's randomized
hash(str(trigger.id)) which is not stable across processes; replace that with
the UUID's deterministic integer representation (use trigger.id.int) truncated
to fit a signed bigint (e.g. modulo 2**63 or bitmask) when computing lock_key in
composite_trigger_child_firing.py so all workers derive the same key; keep the
existing pg_advisory_xact_lock call using the new lock_key.
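As a quick sanity check of the proposed key derivation, the snippet below shows that `uuid.UUID.int` yields a stable key that fits a signed Postgres bigint. `advisory_lock_key` is an illustrative helper, not the codebase's API:

```python
import uuid

def advisory_lock_key(trigger_id: uuid.UUID) -> int:
    # uuid.UUID.int is a deterministic 128-bit integer, identical in every
    # process, unlike hash(str(...)), which is salted via PYTHONHASHSEED.
    # Reducing modulo 2**63 keeps the key within a signed Postgres bigint.
    return trigger_id.int % (2**63)

tid = uuid.UUID("550e8400-e29b-41d4-a716-446655440000")
key = advisory_lock_key(tid)
assert key == advisory_lock_key(uuid.UUID(str(tid)))  # stable for equal UUIDs
assert 0 <= key < 2**63
```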

Comment on lines +147 to 155
result = await session.execute(
sa.delete(db.CompositeTriggerChildFiring)
.filter(
db.CompositeTriggerChildFiring.automation_id == trigger.automation.id,
db.CompositeTriggerChildFiring.parent_trigger_id == trigger.id,
db.CompositeTriggerChildFiring.child_firing_id.in_(firing_ids),
)
.returning(db.CompositeTriggerChildFiring.child_trigger_id)
)

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# First, read the specific file and lines mentioned
echo "=== Reading the file at lines 147-155 ==="
sed -n '140,160p' src/prefect/server/events/models/composite_trigger_child_firing.py

echo -e "\n=== Full file to understand context ==="
wc -l src/prefect/server/events/models/composite_trigger_child_firing.py

Repository: qodo-benchmark/prefect-combined-coderabbit

Length of output: 996


🏁 Script executed:

#!/bin/bash
# Look at the ORM model definition
echo "=== Finding CompositeTriggerChildFiring model ==="
rg -n "class CompositeTriggerChildFiring" -A 20 src/

Repository: qodo-benchmark/prefect-combined-coderabbit

Length of output: 1771


🏁 Script executed:

#!/bin/bash
# Search for the function containing lines 147-155 to understand docstring and intent
echo "=== Finding the function context ==="
sed -n '130,175p' src/prefect/server/events/models/composite_trigger_child_firing.py

Repository: qodo-benchmark/prefect-combined-coderabbit

Length of output: 1065


🏁 Script executed:

#!/bin/bash
# Search for callers of this function to see what they expect
echo "=== Finding callers of the function ==="
rg -n "\.child_firing_id\|\.child_trigger_id" src/prefect/server/events/ -A 2 -B 2

Repository: qodo-benchmark/prefect-combined-coderabbit

Length of output: 125


Return deleted child_firing_id values, not child_trigger_id.

The docstring explicitly states the function returns child_firing_ids that were successfully deleted, and callers use this to detect races. Returning child_trigger_id violates the documented contract and breaks the intended comparison logic with the input firing_ids.

🧩 Proposed fix
-        .returning(db.CompositeTriggerChildFiring.child_trigger_id)
+        .returning(db.CompositeTriggerChildFiring.child_firing_id)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
result = await session.execute(
sa.delete(db.CompositeTriggerChildFiring)
.filter(
db.CompositeTriggerChildFiring.automation_id == trigger.automation.id,
db.CompositeTriggerChildFiring.parent_trigger_id == trigger.id,
db.CompositeTriggerChildFiring.child_firing_id.in_(firing_ids),
)
.returning(db.CompositeTriggerChildFiring.child_trigger_id)
)
result = await session.execute(
sa.delete(db.CompositeTriggerChildFiring)
.filter(
db.CompositeTriggerChildFiring.automation_id == trigger.automation.id,
db.CompositeTriggerChildFiring.parent_trigger_id == trigger.id,
db.CompositeTriggerChildFiring.child_firing_id.in_(firing_ids),
)
.returning(db.CompositeTriggerChildFiring.child_firing_id)
)
🤖 Prompt for AI Agents
In `@src/prefect/server/events/models/composite_trigger_child_firing.py` around
lines 147 - 155, The delete query in the function in
composite_trigger_child_firing.py is returning
db.CompositeTriggerChildFiring.child_trigger_id but the function is documented
to return child_firing_ids; change the .returning(...) column to
db.CompositeTriggerChildFiring.child_firing_id so the result contains the
deleted child_firing_id values (and ensure any subsequent use of the query
result still extracts those child_firing_id values).
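With the column corrected, the caller's race check reduces to comparing the returned ids against the ids it attempted to delete. A minimal sketch of that comparison (`detect_race` is an illustrative name, not the codebase's API):

```python
from uuid import UUID, uuid4

def detect_race(expected_firing_ids: set[UUID], deleted_ids: set[UUID]) -> bool:
    # True when another worker already claimed some of the firings,
    # i.e. the DELETE removed fewer rows than this worker expected.
    return deleted_ids != expected_firing_ids

a, b = uuid4(), uuid4()
print(detect_race({a, b}, {a, b}))  # False: we claimed everything, safe to fire
print(detect_race({a, b}, {a}))     # True: someone else deleted b's row first
```

Had the query kept returning child_trigger_id values, this comparison would always mismatch against the input firing ids, so every evaluation would look like a race and the parent would never fire.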

Comment on lines +1634 to +1639
async def compound_automation_concurrent(
self,
automations_session: AsyncSession,
cleared_buckets: None,
cleared_automations: None,
) -> Automation:

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

# First, locate the test file and check if it exists
fd "test_composite_triggers.py" --type f

Repository: qodo-benchmark/prefect-combined-coderabbit

Length of output: 141


🏁 Script executed:

# Get line count to understand file size
wc -l tests/events/server/triggers/test_composite_triggers.py

Repository: qodo-benchmark/prefect-combined-coderabbit

Length of output: 146


🏁 Script executed:

# Read the specific lines mentioned in the review (1634-1639)
sed -n '1634,1650p' tests/events/server/triggers/test_composite_triggers.py

Repository: qodo-benchmark/prefect-combined-coderabbit

Length of output: 771


🌐 Web query:

Ruff ARG002 pytest fixture unused arguments pattern

💡 Result:

If Ruff is flagging a pytest-injected fixture parameter as unused (often in class-based tests/fixtures, showing up as ARG002 for methods), you have three practical options:

  1. If you can rename the parameter, rename it to match Ruff’s “dummy” pattern (leading underscore). Ruff treats underscore-prefixed args as intentionally unused via lint.dummy-variable-rgx. [1]
class TestX:
    def test_something(self, _unused_fixture): ...

(Only works if pytest will still resolve the fixture name as written.)

  2. If the fixture is only needed for side effects, prefer usefixtures instead of an unused parameter (Ruff even has a dedicated pytest rule for this pattern: PT019). [2]
import pytest

@pytest.mark.usefixtures("db")
def test_something():
    ...
  3. If you must keep the parameter name for pytest injection (common case), suppress ARG002 either locally with noqa [3] or via per-file ignores: [4]
# local
def test_something(self, db):  # noqa: ARG002
    ...

# pyproject.toml / ruff.toml
[tool.ruff.lint.per-file-ignores]
"tests/**/*.py" = ["ARG001", "ARG002"]

Notes

  • ARG002 is specifically “unused method argument” (instance methods). [1]
  • per-file-ignores is the standard Ruff mechanism to ignore rules by path/pattern. [4]

Sources: Ruff ARG002 rule docs [1], Ruff PT019 rule docs [2], Ruff suppression (noqa) docs [3], Ruff per-file-ignores setting docs [4].


🏁 Script executed:

# Read more lines to see complete method body and verify fixtures aren't used
sed -n '1634,1680p' tests/events/server/triggers/test_composite_triggers.py

Repository: qodo-benchmark/prefect-combined-coderabbit

Length of output: 1766


🌐 Web query:

pytest fixture parameter naming underscore prefix ARG002 pytest inject fixture by name

💡 Result:

In pytest, fixture injection is name-based: pytest looks at a test function’s parameter names and injects fixtures with the same names. Changing a parameter from db to _db means pytest will now look for a fixture literally named _db (and db will no longer be injected). [1]

If you’re adding a leading underscore only to silence a linter warning like Ruff ARG002 (“unused argument”), note that Ruff treats leading-underscore names as intentionally unused. [2] But that conflicts with pytest’s “inject-by-name” behavior unless your fixture is also named with the underscore.

Practical options:

  1. If you don’t need the fixture value (setup/side-effects only): use @pytest.mark.usefixtures instead of a function argument (this is the idiomatic pytest solution, and Ruff even has a dedicated rule recommending it). [3]

    @pytest.mark.usefixtures("db")
    def test_something():
        ...
  2. If you really want the argument to be _db, then the fixture must also be named _db (either define it with that name, or alias it with @pytest.fixture(name="...")). Pytest supports explicitly naming fixtures via the name= parameter. [1]

  3. Alternatively, keep the real fixture name (db) and silence/configure the linter (e.g., per-file ignores for tests, or inline # noqa: ARG002), so you don’t have to rename the parameter and break injection. [2]

Sources: pytest fixture injection & name= parameter [1]; Ruff ARG002 underscore guidance [2]; Ruff PT019 recommending usefixtures when the fixture argument is unused [3].


🏁 Script executed:

# Search for `@pytest.mark.usefixtures` usage in the test file
rg "@pytest.mark.usefixtures" tests/events/server/triggers/test_composite_triggers.py -A 2

Repository: qodo-benchmark/prefect-combined-coderabbit

Length of output: 68


🏁 Script executed:

# Also search for fixtures decorator definitions to understand fixture naming conventions
rg "@pytest.fixture" tests/events/server/triggers/test_composite_triggers.py -A 1 | head -30

Repository: qodo-benchmark/prefect-combined-coderabbit

Length of output: 821


🏁 Script executed:

# Check if cleared_buckets or cleared_automations are defined/used elsewhere in the test file
rg "cleared_buckets|cleared_automations" tests/events/server/triggers/test_composite_triggers.py

Repository: qodo-benchmark/prefect-combined-coderabbit

Length of output: 985


🏁 Script executed:

# Also check the broader context - search for similar fixture cleanup patterns in the file
rg "@pytest.fixture.*cleared" tests/events/server/triggers/test_composite_triggers.py -B 1 -A 3

Repository: qodo-benchmark/prefect-combined-coderabbit

Length of output: 68


🏁 Script executed:

# Search for fixture definitions in conftest.py files
fd "conftest.py" --type f | xargs grep -l "cleared_buckets\|cleared_automations" 2>/dev/null

Repository: qodo-benchmark/prefect-combined-coderabbit

Length of output: 158


🏁 Script executed:

# Search the entire test directory for fixture definitions
rg "def cleared_buckets|def cleared_automations" tests/ -A 3

Repository: qodo-benchmark/prefect-combined-coderabbit

Length of output: 648


🏁 Script executed:

# Check if there's a conftest in the test directory or parent
find tests/events/server/triggers -name "conftest.py" -exec cat {} \;

Repository: qodo-benchmark/prefect-combined-coderabbit

Length of output: 2112


Use @pytest.mark.usefixtures for side-effect fixtures instead of renaming parameters.

cleared_buckets and cleared_automations trigger side effects (clearing database/resetting triggers) but aren't used within the method body. Ruff flags this with ARG002. However, renaming the parameters with an underscore prefix will break the test because pytest resolves fixtures by exact name matching—it would look for fixtures named _cleared_buckets and _cleared_automations, which don't exist.

The correct approach for side-effect-only fixtures is to use the @pytest.mark.usefixtures decorator:

@pytest.mark.usefixtures("cleared_buckets", "cleared_automations")
async def compound_automation_concurrent(
    self,
    automations_session: AsyncSession,
) -> Automation:

This is the idiomatic pytest pattern and aligns with Ruff's PT019 recommendation.

🧰 Tools
🪛 Ruff (0.14.14)

1637-1637: Unused method argument: cleared_buckets

(ARG002)


1638-1638: Unused method argument: cleared_automations

(ARG002)

🤖 Prompt for AI Agents
In `@tests/events/server/triggers/test_composite_triggers.py` around lines 1634 -
1639, The test function compound_automation_concurrent currently takes
side-effect-only fixtures cleared_buckets and cleared_automations as parameters
which triggers ARG002; instead remove those unused parameters and apply
pytest.mark.usefixtures("cleared_buckets", "cleared_automations") to the
compound_automation_concurrent definition so pytest still runs the fixtures'
side effects while keeping the function signature as async def
compound_automation_concurrent(self, automations_session: AsyncSession) ->
Automation.
