[MISC] Remove useless patch #1366

Merged
1 commit merged on Jun 24, 2025

10 changes: 0 additions & 10 deletions vllm_ascend/patch/__init__.py
```diff
@@ -56,16 +56,6 @@
 #       Need a PR to vllm to support getting the port from the environment.
 #    Future Plan:
 #       Remove those patches when vllm merges them.
-# 3. `vllm.config.ParallelConfig.stateless_init_dp_group`
-#    Why:
-#       vLLM uses the gloo backend by default to initialize the stateless dp process group, but we want to use hccl here to
-#       get better performance.
-#    How:
-#       Adopt the nccl backend to init the process group. (For now we still use gloo as a placeholder; we will switch to nccl in the future.)
-#    Related PR (if no, explain why):
-#       Need a PR to vllm to support more backends.
-#    Future Plan:
-#       Remove this patch when vllm supports more backends.
 #
 # * Worker Patch:
 # ===============
```
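For context, the block deleted above documented a monkey-patch that rebound `ParallelConfig.stateless_init_dp_group` at import time, with hccl as the eventual backend. Below is a minimal sketch of what that patch would have looked like once the backend switch happened; the function name `stateless_init_dp_group_hccl` is illustrative, and `backend="hccl"` reflects the comment's stated goal rather than the removed code, which still passed gloo:

```python
# Hedged sketch of the removed patch pattern; assumes vLLM is installed.
# The call mirrors the deleted function in the next file's diff; only the
# backend string differs.
from vllm.config import ParallelConfig
from vllm.distributed.utils import \
    stateless_init_torch_distributed_process_group


def stateless_init_dp_group_hccl(self):
    # Build the stateless data-parallel process group with hccl instead of
    # the default gloo backend (hccl availability here is an assumption).
    return stateless_init_torch_distributed_process_group(
        self.data_parallel_master_ip,
        self.get_next_dp_init_port(),
        self.data_parallel_rank,
        self.data_parallel_size,
        backend="hccl")


# Rebind at import time, the same pattern every patch in this package uses.
ParallelConfig.stateless_init_dp_group = stateless_init_dp_group_hccl
```
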
20 changes: 0 additions & 20 deletions vllm_ascend/patch/platform/patch_common/patch_distributed.py
```diff
@@ -21,10 +21,7 @@
 import vllm
 import vllm.distributed
 import vllm.envs as envs
-from torch.distributed import ProcessGroup
 from vllm.config import ParallelConfig
-from vllm.distributed.utils import \
-    stateless_init_torch_distributed_process_group
 
 from vllm_ascend.utils import NullHandle, is_310p
 
@@ -65,25 +62,8 @@ def parallel_config_get_dp_port(self) -> int:
     return port
 
 
-def stateless_init_dp_group(self) -> "ProcessGroup":
-    # TODO(Yizhou): Currently we have to set the backend to gloo
-    # because in vllm.config.ParallelConfig.has_unfinished_dp the
-    # device is set to cpu. We need to fix this in the future.
-    # We need to compare the performance of gloo and hccl and then
-    # decide which one to use.
-    dp_group = stateless_init_torch_distributed_process_group(
-        self.data_parallel_master_ip,
-        self.get_next_dp_init_port(),
-        self.data_parallel_rank,
-        self.data_parallel_size,
-        backend="gloo")
-
-    return dp_group
-
-
 vllm.distributed.parallel_state.destroy_model_parallel = ascend_destroy_model_parallel
 ParallelConfig.get_next_dp_init_port = parallel_config_get_dp_port
-ParallelConfig.stateless_init_dp_group = stateless_init_dp_group
 
 
 def communication_adaptation_310p():
```
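With the override gone, importing the patch module applies only the two remaining rebinds, and `stateless_init_dp_group` falls back to upstream vLLM, which already uses gloo by default, so behavior is unchanged. An illustrative sanity check, assuming vllm and vllm-ascend are installed (the `__name__` comparison is just one way to observe which implementation is bound):

```python
# Illustrative check of the post-PR state; assumes vllm and vllm_ascend
# are installed. Importing the module applies its patches as a side effect.
import vllm_ascend.patch.platform.patch_common.patch_distributed  # noqa: F401

from vllm.config import ParallelConfig

# The DP port helper is still the Ascend override defined in this file...
assert (ParallelConfig.get_next_dp_init_port.__name__ ==
        "parallel_config_get_dp_port")

# ...while stateless_init_dp_group is upstream vLLM's implementation again;
# the removed gloo-based override merely duplicated its default behavior.
assert (ParallelConfig.stateless_init_dp_group.__name__ ==
        "stateless_init_dp_group")
```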