Keeping up with token limit #51
-
ChatGPT rolls older messages out (but normally not custom instructions) when the context maximum is in danger of being overrun. It may also do a generative summary, but as far as I know that hasn't been demonstrated conclusively. With the AutoExpert Developer Edition, the epilogue "history/symbol tree" section does a good job of keeping things moving even if the context is overflowed. In the Standard Edition (non-coding), it depends more on what you're doing. The custom instructions themselves take about 7% of the 8192-token context buffer, and the ADA system message isn't much more than that, so IMO there's plenty of space. I've found no reliable way of counting tokens used that doesn't itself eat up token space; asking ChatGPT how many tokens have been used is hit-or-miss.
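If you want a rough sense of how close a conversation is to the limit without spending tokens asking the model, you can estimate client-side. This is just a sketch using the common ~4 characters-per-token heuristic for English text; the function names and the heuristic itself are my own assumptions, and for exact counts you'd want OpenAI's tiktoken library instead:

```python
# Rough client-side estimate of remaining context budget.
# Assumption: ~4 characters per token on average for English text.
# For exact counts, use OpenAI's tiktoken library; this avoids the
# dependency and is only meant to ballpark how full the window is.

def estimate_tokens(text: str) -> int:
    """Approximate token count via the ~4 chars/token heuristic."""
    return max(1, len(text) // 4)

def remaining_budget(messages: list[str], limit: int = 8192) -> int:
    """Tokens left in an 8K context window after the given messages."""
    used = sum(estimate_tokens(m) for m in messages)
    return limit - used

if __name__ == "__main__":
    history = [
        "You are a helpful assistant specializing in data analysis.",
        "Explain how ChatGPT trims older messages from the context.",
    ]
    print(remaining_budget(history))
```

It undercounts code-heavy or non-English text (those tokenize worse than 4 chars/token), so treat the result as a floor-ish estimate, not a guarantee.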
-
The token limit for Advanced Data Analysis is 8K tokens, and keeping the system instructions in mind, the conversation budget could run out pretty quickly. Does the length of the conversation impact its quality significantly? Do you track the token count in any way?