RLM: add prompt verbosity parameters #814

Merged

snimu merged 2 commits into main from sebastian/rlm-system-prompts-2026-02-02 on Feb 2, 2026
Conversation

@snimu (Contributor) commented Feb 2, 2026

Description

Introduce sub_prompt_verbosity and root_prompt_verbosity. Both can take the values "light", "medium", and "heavy". The prompt verbosity is adjusted accordingly.
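A minimal sketch of how such parameters might be validated and stored on init. The class and constant names below are illustrative, not the actual RLMEnv implementation:

```python
# Hypothetical sketch: validating verbosity parameters on init.
# Names and structure are illustrative, not the actual RLMEnv code.
VALID_VERBOSITY_LEVELS = ("light", "medium", "heavy")


class PromptVerbosityConfig:
    def __init__(
        self,
        root_prompt_verbosity: str = "medium",
        sub_prompt_verbosity: str = "medium",
    ):
        # Reject anything outside the three supported levels up front,
        # so a typo fails at construction time rather than mid-rollout.
        for name, value in (
            ("root_prompt_verbosity", root_prompt_verbosity),
            ("sub_prompt_verbosity", sub_prompt_verbosity),
        ):
            if value not in VALID_VERBOSITY_LEVELS:
                raise ValueError(
                    f"{name} must be one of {VALID_VERBOSITY_LEVELS}, got {value!r}"
                )
        self.root_prompt_verbosity = root_prompt_verbosity
        self.sub_prompt_verbosity = sub_prompt_verbosity
```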

Ran vf-eval tests with gpt-5-nano (n=10, r=1) on the rlm-secrets environment; here are the accuracy results for different combinations of root- and sub-prompt-verbosity:

| root verbosity | sub verbosity | answer | filesystem |
| --- | --- | --- | --- |
| light | light | 0.3 | 0.0 |
| light | heavy | 0.6 | 0.5 |
| heavy | light | 0.5 | 0.3 |
| heavy | heavy | 0.9 | 0.3 |

Heavier prompting improves performance. As a control, the same runs with gpt-5-mini showed no sensitivity to prompt verbosity: that model simply nails the task almost every time. This suggests that as models get better, less prompting is needed (good news for training & generalization from training).

Type of Change

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Documentation update
  • Test improvement

Testing

  • All existing tests pass when running uv run pytest locally.
  • New tests have been added to cover the changes

Checklist

  • My code follows the style guidelines of this project as outlined in AGENTS.md
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • Any dependent changes have been merged and published

Note

Medium Risk
Changes how system prompts are generated for both root and sub-LLM requests, which can meaningfully affect model behavior and eval outcomes, though it does not touch auth/data persistence.

Overview
Adds root_prompt_verbosity and sub_prompt_verbosity parameters to RLMEnv (validated and stored on init) to control how verbose the system prompts are for the root REPL (Python/Bash) and for sub-LLMs invoked via llm_batch.

Replaces the single hardcoded root/sub system prompts with per-verbosity prompt stores and updates prompt construction to select and format the appropriate variant (including {filesystem_summary} and {num_turns}). Adds tests asserting the correct snippets/system message are used for each verbosity level.
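The per-verbosity prompt store described above can be sketched as a dict of templates keyed by verbosity level, formatted at request time. The prompt texts and helper names here are placeholders, not the PR's actual prompts:

```python
# Hypothetical sketch of a per-verbosity prompt store; the real
# prompt texts and helper names in the PR differ.
ROOT_SYSTEM_PROMPTS = {
    "light": "You have a REPL. Filesystem: {filesystem_summary}. Turns left: {num_turns}.",
    "medium": "...",  # elided placeholder
    "heavy": "...",   # elided placeholder
}


def build_root_system_prompt(
    verbosity: str, filesystem_summary: str, num_turns: int
) -> str:
    # Select the variant for the requested verbosity level, then fill
    # in the runtime placeholders mentioned in the PR description.
    template = ROOT_SYSTEM_PROMPTS[verbosity]
    return template.format(
        filesystem_summary=filesystem_summary, num_turns=num_turns
    )
```

The tests in the PR then only need to assert that the expected snippet for each verbosity level appears in the constructed system message.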

Written by Cursor Bugbot for commit 3ffda4f.


@cursor (bot) left a comment


Cursor Bugbot has reviewed your changes and found 3 potential issues.

Bugbot Autofix is OFF. To automatically fix reported issues with Cloud Agents, enable Autofix in the Cursor dashboard.

@snimu snimu merged commit 44762ee into main Feb 2, 2026
6 checks passed
