Make local RLM REPL concurrency configurable #777
Conversation
Cursor Bugbot has reviewed your changes and found 2 potential issues.
sandbox_client_max_workers: int = 10,
sandbox_client_max_connections: int = 100,
sandbox_client_max_keepalive_connections: int = 50,
local_repl_max_workers: int | None = None,
Missing documentation for new configuration parameter
Low Severity · Bugbot Rules
The PR adds a new user-facing parameter local_repl_max_workers to RLMEnv, but the documentation in docs/environments.md is not updated to reflect this. The existing RLMEnv documentation at line 775 describes various configuration options but doesn't mention this new concurrency setting. Per the project's review rules, documentation updates are required when adding user-facing functionality to classes described in docs/.
Docs for the RLM will have to be properly updated in a separate PR; right now it changes so quickly that this doesn't make sense (and a bunch of other args are undocumented as well). Will do this later.
Description
Make local RLM REPL concurrency configurable and set a high default.
Type of Change
Testing
Ran `uv run pytest` locally.
Checklist
Note
Introduces configurable local REPL concurrency and wires it into the local executor.
- Adds a `local_repl_max_workers` param to `RLMEnv.__init__` and documents it
- Defaults to `clamp(os.cpu_count(), 1..64)` and validates `>= 1`
- `LocalRLMExecutor` now constructs `ThreadPoolExecutor(max_workers=self.env.local_repl_max_workers)`

Written by Cursor Bugbot for commit ebbcf3d.