
feat(runner): Global concurrency pool for file analyses #158

Merged

dcramer merged 2 commits into main from feat/global-concurrency-pool on Feb 17, 2026
Conversation

@dcramer (Member) commented Feb 17, 2026

Replace the two-level concurrency model (skill pool x file pool) with a
single global semaphore. Previously, runner.concurrency (default: 4)
controlled how many skills ran in parallel, but each skill also spawned up
to 5 file workers internally. With default settings this meant up to 20
concurrent API calls (4 skills x 5 files).

Now all skills launch immediately (so the UI shows them all as "running"),
but each file analysis must acquire a semaphore permit before starting.
The configured concurrency value becomes the hard cap on simultaneous
file analyses across all skills.

Approach: A Semaphore class in src/utils/async.ts is the sole
concurrency gate. runSkillTask gains an optional semaphore param that
wraps each file's processFile call with acquire/release. The three
callers (runSkillTasks, runSkillTasksWithInk, executeAllTriggers)
create a semaphore from the concurrency config and launch all skills in
parallel with unlimited fileConcurrency, letting the semaphore do the
throttling.

Fixes #154
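
For illustration, a minimal sketch of what this gate could look like. The actual `Semaphore` in `src/utils/async.ts` and the `runSkillTask` signature may differ; `analyzeFile` and its parameters below are hypothetical stand-ins showing how each file's `processFile` call gets wrapped in acquire/release:

```ts
// Minimal counting semaphore sketch; the real src/utils/async.ts version may differ.
class Semaphore {
  private queue: Array<() => void> = [];
  private available: number;

  constructor(readonly initialPermits: number) {
    this.available = initialPermits;
  }

  async acquire(): Promise<void> {
    if (this.available > 0) {
      this.available--;
      return;
    }
    // No permit free: wait until release() wakes us up.
    await new Promise<void>((resolve) => this.queue.push(resolve));
  }

  release(): void {
    const next = this.queue.shift();
    if (next) {
      next(); // Hand the permit directly to the next waiter.
    } else {
      this.available++;
    }
  }
}

// Hypothetical per-file wrapper: every file analysis must hold a permit,
// regardless of which skill it belongs to.
async function analyzeFile(
  file: string,
  processFile: (file: string) => Promise<void>,
  semaphore?: Semaphore,
): Promise<void> {
  if (!semaphore) {
    return processFile(file); // No gate configured: run directly.
  }
  await semaphore.acquire();
  try {
    await processFile(file);
  } finally {
    semaphore.release(); // Always release, even if processFile throws.
  }
}
```

Because the semaphore is shared across all skills, launching every skill at once is safe: the number of in-flight `processFile` calls can never exceed the configured permit count.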

@vercel

vercel bot commented Feb 17, 2026

The latest updates on your projects.

Project | Deployment | Actions | Updated (UTC)
warden | Ready | Preview, Comment | Feb 17, 2026 10:57pm



@cursor cursor bot left a comment


Cursor Bugbot has reviewed your changes and found 3 potential issues.


When fileConcurrency was set to MAX_SAFE_INTEGER, two things broke:

1. The batchDelayMs rate-limit check (index >= fileConcurrency) could
   never trigger, silently disabling the delay feature.
2. runPool spawned one worker per file, grabbing all items before
   shouldAbort could intervene, making Ctrl+C ineffective for queued files.

Fix: use semaphore.initialPermits for the delay threshold, and check the
abort signal after acquiring the semaphore (before processing).

Co-Authored-By: Claude <noreply@anthropic.com>
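
As a rough sketch of the described fix (building on the `Semaphore` sketch above; `shouldAbort`, `batchDelayMs`, and `processFile` come from the description, while the worker function and its parameters are assumed), the per-file path might look like:

```ts
// Illustrative per-file worker after the fix; the real runPool wiring may differ.
async function processWithGate(
  index: number,
  file: string,
  semaphore: Semaphore,
  opts: {
    batchDelayMs?: number;
    shouldAbort: () => boolean;
    processFile: (file: string) => Promise<void>;
  },
): Promise<void> {
  await semaphore.acquire();
  try {
    // Fix 2: re-check the abort signal *after* acquiring a permit, so queued
    // files bail out on Ctrl+C instead of all being grabbed up front.
    if (opts.shouldAbort()) {
      return;
    }

    // Fix 1: base the rate-limit delay on the semaphore's real permit count,
    // not on a fileConcurrency that may be MAX_SAFE_INTEGER.
    if (opts.batchDelayMs && index >= semaphore.initialPermits) {
      await new Promise((resolve) => setTimeout(resolve, opts.batchDelayMs));
    }

    await opts.processFile(file);
  } finally {
    semaphore.release();
  }
}
```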
@dcramer dcramer merged commit 0cc4753 into main Feb 17, 2026
13 checks passed
@dcramer dcramer deleted the feat/global-concurrency-pool branch February 17, 2026 23:13


Development

Successfully merging this pull request may close these issues.

bug: Concurrency limit applies per-skill, not globally across all tasks
