
Conversation

@ReyNeill commented Dec 29, 2025

Problem

When auto-compaction triggers mid-implementation, the summary plus preserved user messages can cause the model to re-check the entire session instead of continuing the current task.

Solution

Refocus the compaction prompt and summary prefix so the summary targets only the active task and explicitly avoids re-opening completed work unless the user asks.

Notes / Future options

  • Keep only the most recent N user messages after compaction
  • Store both an active-task summary and a full-session summary, reintroducing the latter only if needed
  • Use retrieval for prior context instead of always re-injecting full history
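The first future option above can be sketched as follows. This is a hypothetical illustration only, in Python rather than the repository's Rust, and the function and field names (`compact_history`, `role`, `content`, `keep_last_n`) are assumptions, not the actual codex-rs API:

```python
def compact_history(messages, summary, keep_last_n=3):
    """Replace the full conversation history with the compaction summary
    plus only the most recent `keep_last_n` user messages.

    `messages` is a list of dicts with "role" and "content" keys;
    older user messages and all assistant turns are dropped, so the
    model sees the active-task summary rather than the whole session.
    """
    user_messages = [m for m in messages if m["role"] == "user"]
    recent_users = user_messages[-keep_last_n:]
    # The summary is injected as a single system message up front.
    return [{"role": "system", "content": summary}] + recent_users
```

Under this sketch, everything the model needs about completed work lives in the summary, while only the last few user messages survive verbatim, which limits the material that could tempt the model into re-checking earlier tasks.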

Changes

  • codex-rs/core/templates/compact/prompt.md
  • codex-rs/core/templates/compact/summary_prefix.md

Tests

Not run (content-only changes).


github-actions bot commented Dec 29, 2025

All contributors have signed the CLA ✍️ ✅
Posted by the CLA Assistant Lite bot.

@ReyNeill (Author)

I have read the CLA Document and I hereby sign the CLA

github-actions bot added a commit that referenced this pull request Dec 29, 2025
@etraut-openai (Collaborator)

Is there a bug report filed for this issue? In our contributor's guidance we ask that all bug fix PRs start with a bug report. This helps us prioritize and track issues, especially in cases where we decide to reject a PR.

@etraut-openai etraut-openai added the needs-response Additional information is requested label Dec 30, 2025
@etraut-openai (Collaborator)

We do significant testing with our system prompts. Changes to these prompts can easily introduce regressions, so we have a high bar for accepting PRs that modify prompts.

@ReyNeill (Author)

> Is there a bug report filed for this issue? In our contributor's guidance we ask that all bug fix PRs start with a bug report. This helps us prioritize and track issues, especially in cases where we decide to reject a PR.

Understood. Would that mean deleting this PR and opening a new one that references the bug report, or can I simply add the link here?

@etraut-openai (Collaborator)

@ReyNeill, no need to create a new PR. Just add a link to a bug report.

@ReyNeill (Author) commented Dec 30, 2025

I happened to run into the issue just now and submitted the bug report: https://github.com/openai/codex/issues/new?template=2-bug-report.yml&steps=Uploaded%20thread:%20019b704d-04f3-7031-b1bb-fb5af42be238

thread ID 019b704d-04f3-7031-b1bb-fb5af42be238

The model wastes a lot of tokens and time re-verifying all of the session's tasks instead of continuing the task it was working on when the session history auto-compacted.

@etraut-openai (Collaborator)

How much testing have you done with your modified prompt? I'm trying to get a sense for whether the solution is speculative or whether you have strong evidence that it improves the behavior, at least for your usage.

@ReyNeill (Author)

Not enough testing yet, but I'll do more and come back with results. Thanks for the quick feedback.
