Description
The extra logic added to support this functionality is questionable (#5195 (comment)), and it introduces too much complexity into the context management. Now that newer models offer ample training context (32k and even 128k), we should remove this feature to simplify the server implementation, and potentially look to re-introduce it in the future in a better way.