Description
What version of Codex is running?
codex-cli 0.34.0
Which model were you using?
gpt-5-high
What platform is your computer?
Windows 11 Pro
What steps can reproduce the bug?
It's difficult to say exactly when this happens, but a typical way to trigger it: use Codex CLI for research that combines web search with the 'context7' MCP server to look up package docs, and/or keep a single chat running for a long time (e.g. continue from that research thread). In both cases the tokens used and % context left diverge in the ways described below: often not many tokens are reported as used (well under 400k) while context left shows 0%, and in long chats the tokens keep accumulating while % context drops low again.
What is the expected behavior?
I'd expect tokens used and % context to be linked. Also, if I run codex --resume on a chat, the tokens shown were used in the previous chat, so shouldn't I start with 0 tokens and 100% context available for the new chat session?
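For illustration, here is a minimal sketch of the relationship I'd expect the two numbers to have. The 400k window size and all names here are my assumptions for the example, not Codex internals:

```python
# Hypothetical model: tokens used and % context left are directly linked.
CONTEXT_WINDOW = 400_000  # assumed window size, not confirmed from Codex source


def context_left_pct(tokens_in_session: int) -> float:
    """Percent of context remaining if tokens and % context track each other."""
    used = min(tokens_in_session, CONTEXT_WINDOW)
    return 100.0 * (1 - used / CONTEXT_WINDOW)


# Expected: a fresh session (including one started via `codex --resume`,
# if resume really starts a new session) begins at 0 tokens / 100% left.
assert context_left_pct(0) == 100.0
assert context_left_pct(400_000) == 0.0
```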
What do you see instead?
If you use Codex CLI for research combining web search with the 'context7' MCP server to look up package docs, you'll often see relatively few tokens used (well under 400k) while context left shows 0%.

Other times, when you keep a very long chat running (e.g. continuing from the research thread), the tokens keep accumulating, but the % context drops low again (see image below).

Or maybe I'm misinterpreting this, and tokens are counted cumulatively while % context can be reset? But then I've run into 1.77M tokens used and suddenly the entire chat session auto-terminated. That's how I found out how long a chat session can keep going.
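If that interpretation is right, the two numbers would come from independent counters, roughly like this sketch. Again, the names, the compaction step, and the window size are assumptions for illustration, not actual Codex code:

```python
# Alternative interpretation: a cumulative token counter that never resets,
# plus a separate current-window counter that can be freed (e.g. by compaction).
class SessionStats:
    def __init__(self, window: int = 400_000):  # assumed window size
        self.window = window
        self.total_tokens = 0    # cumulative across the session, never resets
        self.context_tokens = 0  # current window usage, can be reset

    def add(self, tokens: int) -> None:
        self.total_tokens += tokens
        self.context_tokens += tokens

    def compact(self) -> None:
        # Hypothetical compaction: frees the window without touching the total.
        self.context_tokens = 0

    def context_left_pct(self) -> float:
        return 100.0 * max(0.0, 1 - self.context_tokens / self.window)


# Under this model, 1.77M cumulative tokens alongside a nonzero % context
# left is possible, which would match what I'm seeing.
```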
Additional information
No response