Further fixes for performance with internal bucketing. #781
Calculate the KV cache sliding index for the decode phase only.
This PR adds further enhancements on top of #720.
`token_idx_cpu` is introduced as a plain integer rather than a tensor to keep track of buckets, and the switch between buckets happens after the prefill phase.
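As a rough illustration (not the PR's actual code), the bucket tracking could look like the sketch below; the concrete values, `bucket_size`, and the loop structure are assumptions made for the example:

```python
import math

# Illustrative values matching the example below; in practice these come
# from the prompt and the generation config.
input_tokens = 128    # prompt length
max_new_tokens = 512  # decode steps
bucket_size = 128     # assumed bucket granularity

# Host-side position counter: a plain Python int rather than a device
# tensor, so the bucket arithmetic below stays on the CPU and does not
# force a device synchronization.
token_idx_cpu = input_tokens  # set once after the prefill phase

for _ in range(max_new_tokens):
    token_idx_cpu += 1
    # Length of the KV cache slice used for this decode step, rounded up
    # to the next bucket boundary.
    kv_len = math.ceil(token_idx_cpu / bucket_size) * bucket_size
```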
Assume an input of 128 tokens and 512 new tokens.
Without the bucketing and slicing changes in these two PRs, every decode step computed attention scores against the full KV cache of size 512 + 128 = 640.
With the changes in these two PRs, the KV cache is sliced as follows:
Decode steps for tokens 128-256 -> KV cache sliced up to sequence length 256.
Decode steps for tokens 256-384 -> KV cache sliced up to sequence length 384.
Decode steps for tokens 384-512 -> KV cache sliced up to sequence length 512.
And so on.
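For illustration only, a minimal sketch of a decode-phase attention computation over such a sliced KV cache is shown below; `key_cache`, `value_cache`, and the function name are hypothetical, and the mask over the not-yet-filled positions inside the last bucket is omitted for brevity:

```python
import math
import torch

def decode_attention(query, key_cache, value_cache, token_idx_cpu, bucket_size=128):
    # Slice the KV cache up to the next bucket boundary instead of using the
    # full preallocated cache (e.g. 640 entries in the example above).
    kv_len = math.ceil(token_idx_cpu / bucket_size) * bucket_size
    key = key_cache[:, :, :kv_len, :]
    value = value_cache[:, :, :kv_len, :]

    # Standard scaled dot-product attention over the sliced cache.
    scores = torch.matmul(query, key.transpose(-2, -1)) / math.sqrt(query.size(-1))
    probs = torch.softmax(scores, dim=-1)
    return torch.matmul(probs, value)
```

With `bucket_size=128`, decode steps for tokens 129-256 attend over a 256-long slice, matching the list above.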
These bucketing changes, together with cache reuse, improve performance; see the improvements in the table below.