fix: prefer inplace softmax to avoid copy (#2661)
* fix: prefer inplace softmax to avoid copy

* Update server/text_generation_server/models/flash_causal_lm.py

Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>

---------

Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
drbh and Narsil authored Oct 17, 2024
1 parent 1b97e08 commit 5f32dea
Showing 1 changed file with 3 additions and 2 deletions.
5 changes: 3 additions & 2 deletions server/text_generation_server/models/flash_causal_lm.py
@@ -1922,8 +1922,9 @@ def generate_token(
     batch.adapter_meta.adapter_indices = next_adapter_indices

     if prefill and prefill_logprobs:
-        # Get prefill logprobs
-        prefill_logprobs_tensor = torch.log_softmax(out, -1)
+        # Get prefill logprobs with an in-place softmax, avoiding a copy of the `out` tensor (max_batch_prefill_tokens * vocab_size)
+        torch.log_softmax(out, -1, out=out)
+        prefill_logprobs_tensor = out
         prefill_logprobs = torch.gather(
             prefill_logprobs_tensor, 1, prefill_tokens_indices.view(-1, 1)
         )
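The point of the change is that `torch.log_softmax(out, -1, out=out)` writes the log-probabilities back into the logits buffer instead of allocating a second tensor of shape (max_batch_prefill_tokens, vocab_size). The same idea can be sketched in plain Python on a single row of logits (the helper name `log_softmax_inplace` is hypothetical, for illustration only; it is not part of the repository):

```python
import math

def log_softmax_inplace(row):
    """Numerically stable log-softmax computed in place on a list of floats.

    Mirrors the idea of `torch.log_softmax(out, -1, out=out)`: the input
    buffer is overwritten with the result, so no second buffer of the same
    size is allocated.
    """
    # Subtract the max before exponentiating for numerical stability.
    m = max(row)
    # logsumexp(row) = m + log(sum_i exp(row_i - m))
    lse = m + math.log(sum(math.exp(x - m) for x in row))
    for i, x in enumerate(row):
        row[i] = x - lse  # log p_i = x_i - logsumexp(x)
    return row

logits = [1.0, 2.0, 3.0]
log_softmax_inplace(logits)
# The log-probs now live in `logits`; exponentiating them sums to 1.
```

In the actual kernel-backed PyTorch call the saving matters because the prefill logits tensor can be very large; overwriting it is safe here since the raw logits are not needed again after the log-probabilities are gathered.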
