fix: resolve race condition in compound trigger evaluation #33
Conversation
Fixes two race conditions in compound trigger evaluation:

1. **Never-firing race** (transactional): When two child triggers fire concurrently in separate transactions, each only sees its own uncommitted insert due to READ COMMITTED isolation, so neither sees enough firings to fire the parent. Fix: use PostgreSQL advisory locks to serialize concurrent evaluations for the same compound trigger.
2. **Double-firing race** (autocommit): When both transactions see all firings, both delete and both fire the parent. Fix: use `DELETE ... RETURNING` to make clearing a claim operation. Only the worker that successfully deletes the expected firings proceeds; others bail out.

Based on the fix in PrefectHQ/nebula#10716.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
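The claim semantics of `DELETE ... RETURNING` described above can be sketched with a plain in-memory stand-in. All names here are illustrative, not Prefect's actual API:

```python
# In-memory sketch of "clearing as a claim operation": only the worker whose
# delete actually removes the expected firings gets to fire the parent.
# `store` stands in for the composite trigger child-firing table.

def clear_child_firings(store: set, firing_ids: list) -> list:
    """Analogue of DELETE ... RETURNING: remove firings, return what was removed."""
    deleted = [fid for fid in firing_ids if fid in store]
    store.difference_update(deleted)
    return deleted

def evaluate_compound_trigger(store: set, expected: list) -> bool:
    """Fire the parent only if this worker claimed every expected firing."""
    deleted = clear_child_firings(store, expected)
    if len(deleted) != len(expected):
        return False  # another worker already claimed some firings: bail out
    return True

store = {"firing-a", "firing-b"}
first = evaluate_compound_trigger(store, ["firing-a", "firing-b"])   # wins the claim
second = evaluate_compound_trigger(store, ["firing-a", "firing-b"])  # nothing left to claim
```

The second call finds nothing left to delete and bails out, which is exactly the double-fire protection the PR adds.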
Walkthrough
The changes add concurrency safety to composite trigger evaluations by introducing PostgreSQL advisory locks and race-condition detection.
Sequence Diagram
```mermaid
sequenceDiagram
    participant Worker1 as Worker 1
    participant Worker2 as Worker 2
    participant DB as PostgreSQL Database
    participant Trigger as Composite Trigger Evaluator
    Worker1->>DB: acquire_composite_trigger_lock(trigger_id)
    Worker2->>DB: acquire_composite_trigger_lock(trigger_id)<br/>(blocks)
    Worker1->>Trigger: evaluate_composite_trigger()
    Trigger->>DB: clear_child_firings() → returns deleted_ids
    Trigger->>Trigger: Compare len(deleted_ids)<br/>vs expected firing_ids
    alt Race Condition Detected
        Trigger->>Trigger: Debug log race condition
        Trigger->>Trigger: Return early (no double-fire)
    else No Race Condition
        Trigger->>Trigger: Proceed with parent trigger firing
    end
    DB->>Worker1: Release advisory lock
    Worker2->>DB: acquire_composite_trigger_lock(trigger_id)<br/>(acquires)
    Worker2->>Trigger: evaluate_composite_trigger()
    Trigger->>DB: clear_child_firings() → returns empty (already deleted)
    Trigger->>Trigger: Race condition detected (0 ≠ expected)
    Trigger->>Trigger: Return early (prevent double-fire)
    DB->>Worker2: Release advisory lock
```
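The flow in the diagram can be simulated end to end; in this sketch a `threading.Lock` stands in for `pg_advisory_xact_lock` and an in-memory set stands in for the child-firing table (illustrative names only, not the real implementation):

```python
# Stand-in simulation of the diagram above: a per-trigger threading.Lock plays
# the role of pg_advisory_xact_lock, and claim-by-delete prevents the second
# worker from double-firing.
import threading

firings = {"f1", "f2"}           # pending child firings for one compound trigger
trigger_lock = threading.Lock()  # stand-in for the advisory lock keyed by trigger id
fired = []                       # parent firings that actually happened

def worker(name: str, expected: list) -> None:
    with trigger_lock:                    # serialize concurrent evaluations
        deleted = [f for f in expected if f in firings]
        firings.difference_update(deleted)
        if len(deleted) == len(expected):  # this worker claimed every firing
            fired.append(name)
        # else: race detected, return early without firing the parent

threads = [
    threading.Thread(target=worker, args=(name, ["f1", "f2"]))
    for name in ("worker-1", "worker-2")
]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Exactly one worker fires the parent, regardless of thread scheduling.
```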
Estimated Code Review Effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed (2 warnings)
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@src/prefect/server/events/models/composite_trigger_child_firing.py`:
- Around line 147-155: The delete query in the function in
composite_trigger_child_firing.py is returning
db.CompositeTriggerChildFiring.child_trigger_id but the function is documented
to return child_firing_ids; change the .returning(...) column to
db.CompositeTriggerChildFiring.child_firing_id so the result contains the
deleted child_firing_id values (and ensure any subsequent use of the query
result still extracts those child_firing_id values).
- Around line 38-45: The current advisory lock key uses Python's randomized
hash(str(trigger.id)) which is not stable across processes; replace that with
the UUID's deterministic integer representation (use trigger.id.int) truncated
to fit a signed bigint (e.g. modulo 2**63 or bitmask) when computing lock_key in
composite_trigger_child_firing.py so all workers derive the same key; keep the
existing pg_advisory_xact_lock call using the new lock_key.
In `@tests/events/server/triggers/test_composite_triggers.py`:
- Around line 1634-1639: The test function compound_automation_concurrent
currently takes side-effect-only fixtures cleared_buckets and
cleared_automations as parameters which triggers ARG002; instead remove those
unused parameters and apply pytest.mark.usefixtures("cleared_buckets",
"cleared_automations") to the compound_automation_concurrent definition so
pytest still runs the fixtures' side effects while keeping the function
signature as async def compound_automation_concurrent(self, automations_session:
AsyncSession) -> Automation.
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- src/prefect/server/events/models/composite_trigger_child_firing.py
- src/prefect/server/events/triggers.py
- tests/events/server/triggers/test_composite_triggers.py
🧰 Additional context used
🧬 Code graph analysis (3)
src/prefect/server/events/models/composite_trigger_child_firing.py (2)
- src/prefect/server/utilities/database.py (1): get_dialect (741-769)
- src/prefect/server/database/interface.py (1): dialect (141-142)

tests/events/server/triggers/test_composite_triggers.py (3)
- src/prefect/server/events/schemas/automations.py (2): triggers (516-518), Firing (649-718)
- src/prefect/server/events/triggers.py (2): load_automation (713-728), act (424-471)
- src/prefect/utilities/asyncutils.py (1): gather (533-544)

src/prefect/server/events/triggers.py (1)
- src/prefect/server/events/models/composite_trigger_child_firing.py (2): acquire_composite_trigger_lock (17-47), clear_child_firings (134-157)
🪛 Ruff (0.14.14)
tests/events/server/triggers/test_composite_triggers.py
1637-1637: Unused method argument: cleared_buckets
(ARG002)
1638-1638: Unused method argument: cleared_automations
(ARG002)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (38)
- GitHub Check: Runner, Worker, Settings, Input, and CLI Tests - python:3.11, postgres:14
- GitHub Check: Runner, Worker, Settings, Input, and CLI Tests - python:3.14, postgres:14
- GitHub Check: Client Tests - python:3.11, postgres:14
- GitHub Check: Database and Orchestration Tests - python:3.13, sqlite
- GitHub Check: Database and Orchestration Tests - python:3.12, postgres:14
- GitHub Check: Database and Orchestration Tests - python:3.10, postgres:14
- GitHub Check: Client Tests - python:3.10, postgres:14
- GitHub Check: Database and Orchestration Tests - python:3.14, postgres:14
- GitHub Check: Runner, Worker, Settings, Input, and CLI Tests - python:3.12, postgres:14
- GitHub Check: Runner, Worker, Settings, Input, and CLI Tests - python:3.13, postgres:14
- GitHub Check: Client Tests - python:3.12, postgres:14
- GitHub Check: Database and Orchestration Tests - python:3.13, postgres:14
- GitHub Check: Runner, Worker, Settings, Input, and CLI Tests - python:3.10, postgres:14
- GitHub Check: Client Tests - python:3.13, postgres:14
- GitHub Check: Database and Orchestration Tests - python:3.14, sqlite
- GitHub Check: Database and Orchestration Tests - python:3.12, sqlite
- GitHub Check: Client Tests - python:3.14, postgres:14
- GitHub Check: Database and Orchestration Tests - python:3.11, sqlite
- GitHub Check: Database and Orchestration Tests - python:3.10, sqlite
- GitHub Check: Server Tests - python:3.11, postgres:14
- GitHub Check: Server Tests - python:3.13, postgres:14
- GitHub Check: Database and Orchestration Tests - python:3.11, postgres:14
- GitHub Check: Server Tests - python:3.14, postgres:14
- GitHub Check: Server Tests - python:3.14, sqlite
- GitHub Check: Server Tests - python:3.12, postgres:14
- GitHub Check: Server Tests - python:3.11, sqlite
- GitHub Check: Server Tests - python:3.12, sqlite
- GitHub Check: Server Tests - python:3.10, postgres:14
- GitHub Check: Server Tests - python:3.13, sqlite
- GitHub Check: Server Tests - python:3.10, sqlite
- GitHub Check: Pre-commit checks
- GitHub Check: Benchmark
- GitHub Check: Test SQLite 3.24.0 Compatibility
- GitHub Check: Benchmark
- GitHub Check: Integration tests
- GitHub Check: Prefect Client Compatibility Tests
- GitHub Check: Test SQLite 3.31.1 Compatibility
- GitHub Check: typesafety
🔇 Additional comments (4)
src/prefect/server/events/triggers.py (3)
71-74: Standard logger initialization looks good. Using the module logger is clear and consistent.
349-353: Good serialization point for composite trigger evaluation. Acquiring the advisory lock here is the right place to prevent concurrent evaluations from racing.
390-410: Race-detection guard on delete is solid. The "claim by delete" check prevents double-firing in concurrent evaluations.
tests/events/server/triggers/test_composite_triggers.py (1)
1674-1747: Concurrency regression tests are a nice addition. These cases exercise the race-fix paths and validate single-fire behavior under concurrent child firings.
```python
if dialect.name == "postgresql":
    # Use the trigger's UUID as the lock key
    # pg_advisory_xact_lock takes a bigint, so we use the UUID's int representation
    # truncated to fit (collision is extremely unlikely and benign)
    lock_key = hash(str(trigger.id)) % (2**63)
    await session.execute(
        sa.text("SELECT pg_advisory_xact_lock(:key)"), {"key": lock_key}
    )
```
🧩 Analysis chain
🏁 Script executed:
```shell
cat -n src/prefect/server/events/models/composite_trigger_child_firing.py | head -80
```
Repository: qodo-benchmark/prefect-combined-coderabbit
Length of output: 3630
🏁 Script executed:
```shell
# Search for trigger.id type and imports in the file
rg -A 5 -B 5 "trigger\.id" src/prefect/server/events/models/composite_trigger_child_firing.py
```
Repository: qodo-benchmark/prefect-combined-coderabbit
Length of output: 2265
🏁 Script executed:
```shell
# Check UUID imports and usage in this file
rg "UUID|uuid" src/prefect/server/events/models/composite_trigger_child_firing.py
```
Repository: qodo-benchmark/prefect-combined-coderabbit
Length of output: 292
🏁 Script executed:
```shell
# Check the trigger model definition to see what type trigger.id is
fd composite_trigger -t f
```
Repository: qodo-benchmark/prefect-combined-coderabbit
Length of output: 361
🌐 Web query:
Python hash randomization PYTHONHASHSEED how does it affect hash(str()) consistency across processes
💡 Result:
hash(str_obj) in CPython is stable only within a single interpreter process unless you control the hash seed.
What hash randomization does
- By default (`PYTHONHASHSEED` unset or set to `random`), CPython chooses a different per-process secret seed at startup, so `hash("same text")` will generally differ across separate runs / processes. (docs.python.org)
- The randomization applies to `str` and `bytes` (and, at the implementation level, also covers the same hashing machinery used for `memoryview`). (docs.python.org)
- The motivation is security (mitigating hash-collision DoS); CPython uses a keyed string/bytes hash (SipHash variants on most modern builds). (peps.python.org)

How PYTHONHASHSEED affects cross-process consistency
From the Python docs: (docs.python.org)
- Unset / `random` → different hash seed each process → `hash("x")` not consistent across processes.
- Set to an integer N (0…4294967295) → fixed seed → `hash("x")` becomes repeatable across processes that use the same seed.
- Set to `0` → "disable hash randomization" (i.e., no per-process random seed), giving repeatable results across runs. (docs.python.org)

Example (Unix-like shells):
```shell
PYTHONHASHSEED=123 python -c 'print(hash("abc"))'
PYTHONHASHSEED=123 python -c 'print(hash("abc"))'    # same output
PYTHONHASHSEED=random python -c 'print(hash("abc"))'
PYTHONHASHSEED=random python -c 'print(hash("abc"))' # likely different
```

Important caveats
- Even with a fixed `PYTHONHASHSEED`, don't treat `hash()` as a stable persistent identifier: results can differ across Python versions/build options/word size/platforms (the hash algorithm is an internal detail exposed via `sys.hash_info.algorithm`). (docs.python.org)
- `PYTHONHASHSEED` must be set before the interpreter starts; changing `os.environ["PYTHONHASHSEED"]` inside a running process won't retroactively change `hash()` behavior for that process. (stackoverflow.com)
Citations:
- 1: https://docs.python.org/3.12/using/cmdline.html?utm_source=openai
- 2: https://docs.python.org/3.12/using/cmdline.html?utm_source=openai
- 3: https://peps.python.org/pep-0456/?utm_source=openai
- 4: https://docs.python.org/3.12/using/cmdline.html?utm_source=openai
- 5: https://docs.python.org/3.12/using/cmdline.html?utm_source=openai
- 6: https://docs.python.org/3/c-api/hash.html?utm_source=openai
- 7: https://stackoverflow.com/questions/25684349/can-os-environpythonhashseed-be-set-dynamically-from-within-an-application?utm_source=openai
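As a quick check of the behavior described above (assuming a Python interpreter is reachable as `sys.executable`), the seed dependence can be observed from fresh processes:

```python
# Observing hash randomization across processes: the same string hashes
# identically only when the interpreter is started with the same PYTHONHASHSEED.
import os
import subprocess
import sys

def hash_in_fresh_interpreter(seed: str) -> int:
    """Compute hash('abc') in a new interpreter started with the given hash seed."""
    env = dict(os.environ, PYTHONHASHSEED=seed)
    result = subprocess.run(
        [sys.executable, "-c", "print(hash('abc'))"],
        env=env, capture_output=True, text=True, check=True,
    )
    return int(result.stdout)

a = hash_in_fresh_interpreter("123")
b = hash_in_fresh_interpreter("123")  # same seed: same hash
c = hash_in_fresh_interpreter("124")  # different seed: almost surely different
```

This is exactly why `hash(str(trigger.id))` cannot serve as a cross-process lock key: each worker process would compute its own value.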
🏁 Script executed:
# Find CompositeTrigger definition
rg -A 20 "class CompositeTrigger" src/prefect/server/events/schemas/Repository: qodo-benchmark/prefect-combined-coderabbit
Length of output: 1950
🏁 Script executed:
```shell
# Verify UUID.int attribute exists and works
python3 << 'EOF'
from uuid import UUID
test_uuid = UUID('550e8400-e29b-41d4-a716-446655440000')
print(f"UUID: {test_uuid}")
print(f"UUID.int: {test_uuid.int}")
print(f"UUID.int % (2**63): {test_uuid.int % (2**63)}")
print(f"Type of UUID.int: {type(test_uuid.int)}")
EOF
```
Repository: qodo-benchmark/prefect-combined-coderabbit
Length of output: 250
🏁 Script executed:
```shell
# Find the Trigger base class to confirm trigger.id type
rg -B 5 -A 30 "^class Trigger" src/prefect/server/events/schemas/automations.py | head -60
```
Repository: qodo-benchmark/prefect-combined-coderabbit
Length of output: 1909
Use a deterministic advisory-lock key across processes.
`hash(str(trigger.id))` is randomized per process by default (controlled by `PYTHONHASHSEED`), so different workers compute different keys and skip mutual exclusion. Use the UUID's stable integer representation instead, which is identical across all processes and aligns with the comment's intent.
🔒 Proposed fix
```diff
- lock_key = hash(str(trigger.id)) % (2**63)
+ lock_key = trigger.id.int % (2**63)
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
if dialect.name == "postgresql":
    # Use the trigger's UUID as the lock key
    # pg_advisory_xact_lock takes a bigint, so we use the UUID's int representation
    # truncated to fit (collision is extremely unlikely and benign)
    lock_key = trigger.id.int % (2**63)
    await session.execute(
        sa.text("SELECT pg_advisory_xact_lock(:key)"), {"key": lock_key}
    )
```
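A small check of the proposed key (using an arbitrary example UUID) confirms it is deterministic and fits PostgreSQL's signed 64-bit `bigint`:

```python
# trigger.id is a uuid.UUID, and UUID.int is its full 128-bit value, which is
# the same in every process; truncating mod 2**63 keeps it in bigint range.
from uuid import UUID

trigger_id = UUID("550e8400-e29b-41d4-a716-446655440000")  # example UUID
lock_key = trigger_id.int % (2**63)

# Stable: recomputing from the string form gives the identical key; no
# process-local randomization is involved, unlike hash().
assert lock_key == UUID(str(trigger_id)).int % (2**63)
# In range for pg_advisory_xact_lock's signed 64-bit bigint argument.
assert 0 <= lock_key < 2**63
```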
🤖 Prompt for AI Agents
In `@src/prefect/server/events/models/composite_trigger_child_firing.py` around
lines 38 - 45, The current advisory lock key uses Python's randomized
hash(str(trigger.id)) which is not stable across processes; replace that with
the UUID's deterministic integer representation (use trigger.id.int) truncated
to fit a signed bigint (e.g. modulo 2**63 or bitmask) when computing lock_key in
composite_trigger_child_firing.py so all workers derive the same key; keep the
existing pg_advisory_xact_lock call using the new lock_key.
```python
result = await session.execute(
    sa.delete(db.CompositeTriggerChildFiring)
    .filter(
        db.CompositeTriggerChildFiring.automation_id == trigger.automation.id,
        db.CompositeTriggerChildFiring.parent_trigger_id == trigger.id,
        db.CompositeTriggerChildFiring.child_firing_id.in_(firing_ids),
    )
    .returning(db.CompositeTriggerChildFiring.child_trigger_id)
)
```
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
# First, read the specific file and lines mentioned
echo "=== Reading the file at lines 147-155 ==="
sed -n '140,160p' src/prefect/server/events/models/composite_trigger_child_firing.py
echo -e "\n=== Full file to understand context ==="
wc -l src/prefect/server/events/models/composite_trigger_child_firing.py
```
Repository: qodo-benchmark/prefect-combined-coderabbit
Length of output: 996
🏁 Script executed:
```shell
#!/bin/bash
# Look at the ORM model definition
echo "=== Finding CompositeTriggerChildFiring model ==="
rg -n "class CompositeTriggerChildFiring" -A 20 src/
```
Repository: qodo-benchmark/prefect-combined-coderabbit
Length of output: 1771
🏁 Script executed:
```shell
#!/bin/bash
# Search for the function containing lines 147-155 to understand docstring and intent
echo "=== Finding the function context ==="
sed -n '130,175p' src/prefect/server/events/models/composite_trigger_child_firing.py
```
Repository: qodo-benchmark/prefect-combined-coderabbit
Length of output: 1065
🏁 Script executed:
```shell
#!/bin/bash
# Search for callers of this function to see what they expect
echo "=== Finding callers of the function ==="
rg -n "\.child_firing_id\|\.child_trigger_id" src/prefect/server/events/ -A 2 -B 2
```
Repository: qodo-benchmark/prefect-combined-coderabbit
Length of output: 125
Return deleted `child_firing_id` values, not `child_trigger_id`.
The docstring explicitly states the function returns the `child_firing_id`s that were successfully deleted, and callers use this to detect races. Returning `child_trigger_id` violates the documented contract and breaks the intended comparison logic with the input `firing_ids`.
🧩 Proposed fix
```diff
- .returning(db.CompositeTriggerChildFiring.child_trigger_id)
+ .returning(db.CompositeTriggerChildFiring.child_firing_id)
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
result = await session.execute(
    sa.delete(db.CompositeTriggerChildFiring)
    .filter(
        db.CompositeTriggerChildFiring.automation_id == trigger.automation.id,
        db.CompositeTriggerChildFiring.parent_trigger_id == trigger.id,
        db.CompositeTriggerChildFiring.child_firing_id.in_(firing_ids),
    )
    .returning(db.CompositeTriggerChildFiring.child_firing_id)
)
```
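One way the caller-side race check can be written is the following hedged sketch: the sequence diagram earlier compares lengths, and a set comparison is a stricter variant that would also expose this wrong-column bug (all names here are illustrative):

```python
# Compare the ids actually deleted against the ids this worker expected to
# claim; any mismatch means we lost the race (or got the wrong column back).

def should_fire_parent(deleted_ids: list, expected_firing_ids: list) -> bool:
    return set(deleted_ids) == set(expected_firing_ids)

# Clean claim: every expected firing was deleted by this worker.
claim_won = should_fire_parent(["f1", "f2"], ["f2", "f1"])
# Lost race: another worker already deleted (claimed) the firings.
claim_lost = should_fire_parent([], ["f1", "f2"])
# Wrong RETURNING column: trigger ids don't match firing ids, even if counts do,
# which is why a length-only comparison would miss this bug.
wrong_column = should_fire_parent(["trigger-1", "trigger-2"], ["f1", "f2"])
```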
🤖 Prompt for AI Agents
In `@src/prefect/server/events/models/composite_trigger_child_firing.py` around
lines 147 - 155, The delete query in the function in
composite_trigger_child_firing.py is returning
db.CompositeTriggerChildFiring.child_trigger_id but the function is documented
to return child_firing_ids; change the .returning(...) column to
db.CompositeTriggerChildFiring.child_firing_id so the result contains the
deleted child_firing_id values (and ensure any subsequent use of the query
result still extracts those child_firing_id values).
```python
async def compound_automation_concurrent(
    self,
    automations_session: AsyncSession,
    cleared_buckets: None,
    cleared_automations: None,
) -> Automation:
```
🧩 Analysis chain
🏁 Script executed:
```shell
# First, locate the test file and check if it exists
fd "test_composite_triggers.py" --type f
```
Repository: qodo-benchmark/prefect-combined-coderabbit
Length of output: 141
🏁 Script executed:
```shell
# Get line count to understand file size
wc -l tests/events/server/triggers/test_composite_triggers.py
```
Repository: qodo-benchmark/prefect-combined-coderabbit
Length of output: 146
🏁 Script executed:
```shell
# Read the specific lines mentioned in the review (1634-1639)
sed -n '1634,1650p' tests/events/server/triggers/test_composite_triggers.py
```
Repository: qodo-benchmark/prefect-combined-coderabbit
Length of output: 771
🌐 Web query:
Ruff ARG002 pytest fixture unused arguments pattern
💡 Result:
If Ruff is flagging a pytest-injected fixture parameter as unused (often in class-based tests/fixtures, showing up as ARG002 for methods), you have three practical options:
- If you can rename the parameter, rename it to match Ruff's "dummy" pattern (leading underscore). Ruff treats underscore-prefixed args as intentionally unused via `lint.dummy-variable-rgx`. [1]
```python
class TestX:
    def test_something(self, _unused_fixture): ...
```
(Only works if pytest will still resolve the fixture name as written.)
- If the fixture is only needed for side effects, prefer `usefixtures` instead of an unused parameter (Ruff even has a dedicated pytest rule for this pattern: PT019). [2]
```python
import pytest

@pytest.mark.usefixtures("db")
def test_something():
    ...
```
- If you must keep the parameter name for pytest injection (common case), suppress ARG002 either locally with `noqa` [3] or via per-file ignores: [4]
```python
# local
def test_something(self, db):  # noqa: ARG002
    ...
```
```toml
# pyproject.toml / ruff.toml
[tool.ruff.lint.per-file-ignores]
"tests/**/*.py" = ["ARG001", "ARG002"]
```
Notes
- ARG002 is specifically "unused method argument" (instance methods). [1]
- `per-file-ignores` is the standard Ruff mechanism to ignore rules by path/pattern. [4]

Sources: Ruff ARG002 rule docs [1], Ruff PT019 rule docs [2], Ruff suppression (noqa) docs [3], Ruff per-file-ignores setting docs [4].
🏁 Script executed:
```shell
# Read more lines to see complete method body and verify fixtures aren't used
sed -n '1634,1680p' tests/events/server/triggers/test_composite_triggers.py
```
Repository: qodo-benchmark/prefect-combined-coderabbit
Length of output: 1766
🌐 Web query:
pytest fixture parameter naming underscore prefix ARG002 pytest inject fixture by name
💡 Result:
In pytest, fixture injection is name-based: pytest looks at a test function’s parameter names and injects fixtures with the same names. Changing a parameter from db to _db means pytest will now look for a fixture literally named _db (and db will no longer be injected). [1]
If you’re adding a leading underscore only to silence a linter warning like Ruff ARG002 (“unused argument”), note that Ruff treats leading-underscore names as intentionally unused. [2] But that conflicts with pytest’s “inject-by-name” behavior unless your fixture is also named with the underscore.
Practical options:
-
If you don’t need the fixture value (setup/side-effects only): use
@pytest.mark.usefixturesinstead of a function argument (this is the idiomatic pytest solution, and Ruff even has a dedicated rule recommending it). [3]`@pytest.mark.usefixtures`("db") def test_something(): ...
-
If you really want the argument to be
_db, then the fixture must also be named_db(either define it with that name, or alias it with@pytest.fixture(name="...")). Pytest supports explicitly naming fixtures via thename=parameter. [1] -
Alternatively, keep the real fixture name (
db) and silence/configure the linter (e.g., per-file ignores for tests, or inline# noqa: ARG002), so you don’t have to rename the parameter and break injection. [2]
Sources: pytest fixture injection & name= parameter [1]; Ruff ARG002 underscore guidance [2]; Ruff PT019 recommending usefixtures when the fixture argument is unused [3].
🏁 Script executed:
```shell
# Search for @pytest.mark.usefixtures usage in the test file
rg "@pytest.mark.usefixtures" tests/events/server/triggers/test_composite_triggers.py -A 2
```
Repository: qodo-benchmark/prefect-combined-coderabbit
Length of output: 68
🏁 Script executed:
```shell
# Also search for fixtures decorator definitions to understand fixture naming conventions
rg "@pytest.fixture" tests/events/server/triggers/test_composite_triggers.py -A 1 | head -30
```
Repository: qodo-benchmark/prefect-combined-coderabbit
Length of output: 821
🏁 Script executed:
```shell
# Check if cleared_buckets or cleared_automations are defined/used elsewhere in the test file
rg "cleared_buckets|cleared_automations" tests/events/server/triggers/test_composite_triggers.py
```
Repository: qodo-benchmark/prefect-combined-coderabbit
Length of output: 985
🏁 Script executed:
```shell
# Also check the broader context - search for similar fixture cleanup patterns in the file
rg "@pytest.fixture.*cleared" tests/events/server/triggers/test_composite_triggers.py -B 1 -A 3
```
Repository: qodo-benchmark/prefect-combined-coderabbit
Length of output: 68
🏁 Script executed:
```shell
# Search for fixture definitions in conftest.py files
fd "conftest.py" --type f | xargs grep -l "cleared_buckets\|cleared_automations" 2>/dev/null
```
Repository: qodo-benchmark/prefect-combined-coderabbit
Length of output: 158
🏁 Script executed:
```shell
# Search the entire test directory for fixture definitions
rg "def cleared_buckets|def cleared_automations" tests/ -A 3
```
Repository: qodo-benchmark/prefect-combined-coderabbit
Length of output: 648
🏁 Script executed:
```shell
# Check if there's a conftest in the test directory or parent
find tests/events/server/triggers -name "conftest.py" -exec cat {} \;
```
Repository: qodo-benchmark/prefect-combined-coderabbit
Length of output: 2112
Use `@pytest.mark.usefixtures` for side-effect fixtures instead of renaming parameters.
`cleared_buckets` and `cleared_automations` trigger side effects (clearing database/resetting triggers) but aren't used within the method body. Ruff flags this with ARG002. However, renaming the parameters with an underscore prefix will break the test because pytest resolves fixtures by exact name matching: it would look for fixtures named `_cleared_buckets` and `_cleared_automations`, which don't exist.
The correct approach for side-effect-only fixtures is to use the `@pytest.mark.usefixtures` decorator:
```python
@pytest.mark.usefixtures("cleared_buckets", "cleared_automations")
async def compound_automation_concurrent(
    self,
    automations_session: AsyncSession,
) -> Automation:
```
This is the idiomatic pytest pattern and aligns with Ruff's PT019 recommendation.
🧰 Tools
🪛 Ruff (0.14.14)
1637-1637: Unused method argument: cleared_buckets
(ARG002)
1638-1638: Unused method argument: cleared_automations
(ARG002)
🤖 Prompt for AI Agents
In `@tests/events/server/triggers/test_composite_triggers.py` around lines 1634 -
1639, The test function compound_automation_concurrent currently takes
side-effect-only fixtures cleared_buckets and cleared_automations as parameters
which triggers ARG002; instead remove those unused parameters and apply
pytest.mark.usefixtures("cleared_buckets", "cleared_automations") to the
compound_automation_concurrent definition so pytest still runs the fixtures'
side effects while keeping the function signature as async def
compound_automation_concurrent(self, automations_session: AsyncSession) ->
Automation.
Benchmark PR from agentic-review-benchmarks#7
Summary by CodeRabbit
Release Notes
Bug Fixes
Tests