Description
Prerequisites
- I am running the latest code. Mention the version if possible as well.
- I carefully followed the README.md.
- I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
- I reviewed the Discussions, and have a new and useful enhancement to share.
Feature Description
A new flag, `--swa-extra`, that specifies how many tokens to extend the sliding window beyond the training window length in models that use SWA (sliding-window attention).
Motivation
Using this parameter would allow speculative decoding to run without losing any information and without requiring `--swa-full`. Speculative decoding gives roughly a 2x speedup, and there is no reason to give up this gain on SWA models or to demand `--swa-full` to get it (i.e. the low memory footprint of SWA and the high speed of speculative decoding can both be achieved at once).
Possible Implementation
Example use: here the sliding window is extended by 8 tokens to accommodate a speculation of at most 8 tokens.

`--swa-extra 8`
When this flag is used, the extra tokens must be masked out in the attention computation, similar to the way they are already masked out with `--swa-full` when the context exceeds the sliding window length.
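A minimal sketch of what that masking could look like, under stated assumptions: the KV cache keeps `n_swa + n_swa_extra` entries, while the attention mask still enforces the trained window of `n_swa`. The names `swa_params`, `n_swa`, `n_swa_extra`, `pos_query`, and `pos_key` are illustrative placeholders, not actual llama.cpp identifiers.

```cpp
#include <cstdint>

// Hypothetical parameters: n_swa is the trained sliding window length,
// n_swa_extra is the value passed via the proposed --swa-extra flag.
struct swa_params {
    int32_t n_swa;
    int32_t n_swa_extra;
};

// Sketch of the mask decision for one (query, key) pair.
// The cache may hold up to n_swa + n_swa_extra tokens so that a speculation
// batch can be verified without evicting anything, but tokens outside the
// trained window are still masked out, so the model never attends beyond
// the window length it was trained with.
static bool swa_should_mask(const swa_params & p, int32_t pos_query, int32_t pos_key) {
    if (pos_key > pos_query) {
        return true; // causal mask: no attending to future tokens
    }
    return pos_query - pos_key >= p.n_swa; // outside the trained sliding window
}
```

The extra slots only buffer the in-flight speculative tokens; once a speculation batch is accepted or rejected, the cache can be pruned back to the normal SWA window.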