Merged
Cursor Bugbot has reviewed your changes and found 3 potential issues.
Description
Introduces `sub_prompt_verbosity` and `root_prompt_verbosity`. Both can take the values `"light"`, `"medium"`, and `"heavy"`; the prompt verbosity is adjusted accordingly. Ran vf-eval tests with gpt-5-nano (n=10, r=1) on the rlm-secrets environment; here are the accuracy results for different combinations of root- and sub-prompt verbosity:
Heavier prompting improves performance. The control ran the same sweep with gpt-5-mini; that model is insensitive to prompt verbosity and simply nails the task almost every time, suggesting that as models become better, less prompting is needed (good news for training and for generalization from training).
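As a rough illustration of the parameters this PR introduces, the sketch below shows how the two verbosity settings might be validated and stored on init. `RLMEnv`'s real constructor signature is not shown in this page, so the class body and defaults here are assumptions:

```python
# Hypothetical sketch of the verbosity parameters described in this PR.
# The RLMEnv name comes from the PR; everything else is illustrative.

VALID_VERBOSITY = ("light", "medium", "heavy")

class RLMEnv:
    def __init__(self,
                 root_prompt_verbosity: str = "medium",
                 sub_prompt_verbosity: str = "medium",
                 **kwargs):
        # Validate both settings up front so a typo fails fast at init time.
        for name, value in (("root_prompt_verbosity", root_prompt_verbosity),
                            ("sub_prompt_verbosity", sub_prompt_verbosity)):
            if value not in VALID_VERBOSITY:
                raise ValueError(
                    f"{name} must be one of {VALID_VERBOSITY}, got {value!r}")
        self.root_prompt_verbosity = root_prompt_verbosity
        self.sub_prompt_verbosity = sub_prompt_verbosity
```

Validating on init (rather than at prompt-build time) means a bad value surfaces before any rollout starts.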
Type of Change
Testing
Ran `uv run pytest` locally.

Checklist
Note
Medium Risk
Changes how system prompts are generated for both root and sub-LLM requests, which can meaningfully affect model behavior and eval outcomes, though it does not touch auth/data persistence.
Overview
Adds `root_prompt_verbosity` and `sub_prompt_verbosity` parameters to `RLMEnv` (validated and stored on init) to control how verbose the system prompts are for the root REPL (Python/Bash) and for sub-LLMs invoked via `llm_batch`.

Replaces the single hardcoded root/sub system prompts with per-verbosity prompt stores and updates prompt construction to select and format the appropriate variant (including `{filesystem_summary}` and `{num_turns}`). Adds tests asserting that the correct snippets/system message are used for each verbosity level.

Written by Cursor Bugbot for commit 3ffda4f.
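The per-verbosity prompt store and placeholder formatting the review describes could look roughly like this. The template texts below are invented for illustration; only the verbosity keys and the `{filesystem_summary}`/`{num_turns}` placeholders come from the PR:

```python
# Illustrative sketch (not the actual implementation): one template per
# verbosity level, formatted with the placeholders documented in the PR.

ROOT_PROMPTS = {
    "light": "You are in a Python/Bash REPL. {filesystem_summary} "
             "Turns left: {num_turns}.",
    "medium": "You are the root agent in a Python/Bash REPL.\n"
              "{filesystem_summary}\nYou have {num_turns} turns.",
    "heavy": "You are the root agent controlling a Python/Bash REPL. "
             "Inspect files before acting and delegate to sub-LLMs via "
             "llm_batch when useful.\n"
             "{filesystem_summary}\nYou have {num_turns} turns remaining.",
}

def build_root_prompt(verbosity: str,
                      filesystem_summary: str,
                      num_turns: int) -> str:
    # Select the variant for this verbosity, then fill in the placeholders.
    template = ROOT_PROMPTS[verbosity]
    return template.format(filesystem_summary=filesystem_summary,
                           num_turns=num_turns)
```

A test of the kind the review mentions would then assert that a distinguishing snippet from each template appears in the built system message for the matching verbosity level.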