
Commit f6b992a

fyabc authored and minpeter committed
[Bugfix] Fix distributed bug again in Qwen2.5-VL & Qwen2.5-Omni (vllm-project#16974)
Signed-off-by: fyabc <suyang.fy@alibaba-inc.com>
Signed-off-by: minpeter <kali2005611@gmail.com>
1 parent 62effc7 commit f6b992a

File tree: 1 file changed, +4 −1 lines changed

vllm/model_executor/models/qwen2_5_vl.py

Lines changed: 4 additions & 1 deletion
@@ -198,8 +198,11 @@ def forward(self, x: torch.Tensor):
 
 def all_gather_interleave(local_tensor, hidden_size: int, tp_size: int):
     """All-gather the input tensor interleavely across model parallel group."""
+    import torch.distributed as dist
     gathered_tensors = [torch.zeros_like(local_tensor) for _ in range(tp_size)]
-    parallel_state.get_tp_group().all_gather(gathered_tensors, local_tensor)
+    dist.all_gather(gathered_tensors,
+                    local_tensor,
+                    group=parallel_state.get_tp_group().device_group)
 
     gathered_tensors_split = [
         torch.split(tensor, hidden_size // tp_size, -1)
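
For context beyond the hunk: torch.distributed.all_gather expects a pre-allocated list of output tensors, while vLLM's tensor-parallel group wrapper (parallel_state.get_tp_group()) exposes an all_gather with a different signature, which is presumably why passing the list to the wrapper misbehaved. The fix calls torch.distributed.all_gather directly against the group's underlying device_group. Below is a minimal, self-contained sketch of the interleaved all-gather pattern the function implements; the plain torch.distributed group handling, the single-rank demo, and the split-and-reorder tail past the hunk's last line are illustrative assumptions, not part of this diff.

import os

import torch
import torch.distributed as dist


def all_gather_interleave(local_tensor: torch.Tensor, hidden_size: int,
                          tp_size: int, group=None):
    """Gather last-dim shards from every rank and re-interleave them."""
    # torch.distributed.all_gather fills a pre-allocated list of output
    # tensors in rank order; this is the call the commit switches to.
    gathered = [torch.zeros_like(local_tensor) for _ in range(tp_size)]
    dist.all_gather(gathered, local_tensor, group=group)

    # Split each rank's gathered shard into blocks along the hidden dim,
    # then interleave them (block j of rank 0, block j of rank 1, ...)
    # so that concatenation restores the unsharded column order.
    splits = [torch.split(t, hidden_size // tp_size, dim=-1)
              for t in gathered]
    ordered = [block for blocks in zip(*splits) for block in blocks]
    return torch.cat(ordered, dim=-1)


if __name__ == "__main__":
    # Single-process demo on the gloo backend; tp_size == 1 degenerates
    # to an identity but exercises the code path end to end.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("gloo", rank=0, world_size=1)
    x = torch.arange(8, dtype=torch.float32).reshape(1, 8)
    out = all_gather_interleave(x, hidden_size=8, tp_size=1)
    assert torch.equal(out, x)
    dist.destroy_process_group()

With multiple ranks (e.g. launched via torchrun), each rank would pass its own shard plus the shared tensor-parallel process group, mirroring how the patched code hands device_group to dist.all_gather.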

0 commit comments