Commit 2664659

Merge pull request vllm-project#30 from HabanaAI/private/kzawora/cumsum_wa
WA: Disable cumsum in HPU _prepare_prompt
2 parents: ae3d612 + fdf282b

File tree

1 file changed: 0 additions, 9 deletions

vllm/worker/habana_model_runner.py

Lines changed: 0 additions & 9 deletions
@@ -402,15 +402,6 @@ def _prepare_prompt(
                                             dtype=torch.int32,
                                             device=self.device)
 
-        torch.cumsum(query_lens_tensor,
-                     dim=0,
-                     dtype=subquery_start_loc.dtype,
-                     out=subquery_start_loc[1:])
-
-        torch.cumsum(seq_lens_tensor,
-                     dim=0,
-                     dtype=seq_start_loc.dtype,
-                     out=seq_start_loc[1:])
         attn_metadata = self.attn_backend.make_metadata(
             is_prompt=True,
             seq_lens=seq_lens,
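For context on what the workaround removes: the deleted `torch.cumsum` calls filled `subquery_start_loc[1:]` and `seq_start_loc[1:]` with prefix sums of the per-sequence lengths, i.e. the start offset of each sequence in a packed batch (index 0 stays 0). A minimal pure-Python sketch of the equivalent computation (the lengths below are illustrative values, not taken from the commit):

```python
from itertools import accumulate

def start_locs(lens):
    # Prefix sums of sequence lengths give each sequence's start offset
    # in a flattened batch; the leading 0 mirrors writing cumsum output
    # into start_loc[1:] while start_loc[0] remains 0.
    return [0] + list(accumulate(lens))

query_lens = [3, 5, 2]        # hypothetical per-prompt token counts
print(start_locs(query_lens)) # [0, 3, 8, 10]
```

On HPU this cumsum was disabled as a workaround (WA); the attention metadata built afterwards no longer receives these precomputed start locations from `_prepare_prompt`.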
