[Misc][Gaudi] Avoid torch.compile and enable lazy collectives (vllm-project#10897)

Signed-off-by: Konrad Zawora <kzawora@habana.ai>
kzawora-intel authored and BKitor committed Dec 30, 2024
1 parent 353aecd commit 67df918
Showing 1 changed file with 14 additions and 0 deletions.
vllm/plugins/__init__.py

--- a/vllm/plugins/__init__.py
+++ b/vllm/plugins/__init__.py
@@ -29,6 +29,20 @@ def load_general_plugins():
     if current_platform.is_xpu():
         # see https://github.com/pytorch/pytorch/blob/8cada5cbe5450e17c26fb8b358116785324537b2/torch/_dynamo/config.py#L158 # noqa
         os.environ['TORCH_COMPILE_DISABLE'] = 'True'
+    if current_platform.is_hpu():
+        # NOTE(kzawora): the PT HPU lazy backend (PT_HPU_LAZY_MODE = 1)
+        # does not support torch.compile; the eager backend
+        # (PT_HPU_LAZY_MODE = 0) must be selected for torch.compile support.
+        is_lazy = os.environ.get('PT_HPU_LAZY_MODE', '1') == '1'
+        if is_lazy:
+            # see https://github.com/pytorch/pytorch/blob/43c5f59/torch/_dynamo/config.py#L158
+            torch._dynamo.config.disable = True
+            # NOTE(kzawora): multi-HPU inference with HPU Graphs (lazy-only)
+            # requires enabling lazy collectives;
+            # see https://docs.habana.ai/en/latest/PyTorch/Inference_on_PyTorch/Inference_Using_HPU_Graphs.html # noqa: E501
+            os.environ['PT_HPU_ENABLE_LAZY_COLLECTIVES'] = 'true'
 
     global plugins_loaded
     if plugins_loaded:
         return
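For context, the gating added here can be exercised in isolation. Below is a minimal sketch, assuming a PyTorch build that exposes `torch._dynamo`; the helper name `configure_hpu_compile` is hypothetical and simply mirrors the added lines:

```python
# Minimal sketch mirroring the gating logic added in this commit.
# The helper name is hypothetical; only the env var names and the
# torch._dynamo.config.disable flag come from the diff above.
import os

import torch._dynamo


def configure_hpu_compile() -> bool:
    """Disable torch.compile and enable lazy collectives when the
    HPU lazy backend is active. Returns True if lazy mode is on."""
    # PT_HPU_LAZY_MODE defaults to '1' (lazy backend) when unset.
    is_lazy = os.environ.get('PT_HPU_LAZY_MODE', '1') == '1'
    if is_lazy:
        # The lazy backend cannot run torch.compile, so turn dynamo off.
        torch._dynamo.config.disable = True
        # Multi-HPU inference with HPU Graphs needs lazy collectives.
        os.environ['PT_HPU_ENABLE_LAZY_COLLECTIVES'] = 'true'
    return is_lazy


if __name__ == '__main__':
    # e.g. run with PT_HPU_LAZY_MODE=0 to keep torch.compile enabled.
    print('lazy mode (torch.compile disabled):', configure_hpu_compile())
```

Running with PT_HPU_LAZY_MODE=0 opts into the eager backend and leaves torch.compile available, which is the route the NOTE in the diff points at.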
