[DP] Support external DP Load Balancer mode #19790


Open

njhill wants to merge 9 commits into main

Conversation

@njhill (Member) commented Jun 18, 2025

See https://docs.google.com/document/d/1mSYsWQEbp4Oq50ghFWUun7OgagbuUJivAC8Ds-pu-xU/edit?tab=t.0#heading=h.ggq72ssewhm3

Some utility classes/functions related to engine process management were moved from v1/utils.py to v1/engine/utils.py to avoid circular imports.


👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small, essential subset of tests to quickly catch errors. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Summary of Changes

Hello @njhill, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces comprehensive support for an external data parallel load balancer mode, allowing vLLM deployments to integrate more seamlessly with external orchestration systems. It refines the internal communication patterns, configures instances for direct local engine interaction, and adjusts stats reporting to align with an externally managed load balancing strategy.

Highlights

  • External Data Parallel Load Balancer Mode: Introduced a new data_parallel_external_lb configuration option and a --data-parallel-rank CLI argument to enable an external load balancer mode for data parallelism. When enabled, vLLM instances are expected to be managed by an external system for request distribution.
  • Decoupled Coordinator and Engine Communication: In external LB mode, the data parallel coordinator (responsible for internal load balancing and stats) now only runs on the rank 0 instance. Non-rank-0 instances will communicate directly with their local engine, streamlining the setup for external orchestration.
  • Refined Engine Handshake Process: The engine startup handshake mechanism has been updated to support the new external LB mode. This includes a two-stage handshake for non-rank-0 instances to retrieve both global coordinator information (from rank 0) and local client I/O addresses.
  • Adjusted Stats Reporting: Engines operating in external LB mode will no longer publish detailed request queue statistics to the coordinator. Instead, they will only report high-level wave completion and running state changes, reducing overhead when an external system handles load metrics.
  • Client-Side Adaptations for DP Modes: The MPClient hierarchy has been refactored to differentiate between internal (load-balancing) and external (direct-to-local-engine) data parallel modes. A new DPLBAsyncMPClient class was introduced to specifically handle the internal load-balancing logic.
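
To make the mode concrete: in external LB mode, each vLLM server is launched with its own data-parallel rank, and an outside process decides which instance receives each request. The sketch below is purely illustrative (the `ExternalRoundRobinLB` class, the endpoint URLs, and the round-robin policy are hypothetical, not part of this PR; a real balancer would likely route on load metrics):

```python
from itertools import cycle


class ExternalRoundRobinLB:
    """Minimal sketch of an external DP load balancer.

    Each vLLM instance is assumed to be started with its own
    --data-parallel-rank, and this process picks which instance
    serves each incoming request.
    """

    def __init__(self, endpoints: list[str]):
        if not endpoints:
            raise ValueError("need at least one endpoint")
        # Round-robin over the per-rank server endpoints.
        self._next = cycle(list(endpoints))

    def pick(self) -> str:
        # A real external balancer would use queue depth or other
        # load metrics here instead of simple round-robin.
        return next(self._next)


# One endpoint per DP rank (hypothetical hosts/ports):
lb = ExternalRoundRobinLB(["http://host-a:8000", "http://host-b:8000"])
print(lb.pick())  # http://host-a:8000
print(lb.pick())  # http://host-b:8000
```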

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request adds support for an external Data Parallel (DP) Load Balancer mode. The changes are comprehensive, touching configuration, argument parsing, server entrypoints, and core engine components. The introduction of a dual handshake mechanism for non-rank-0 external LB instances and the refactoring of client types (DPAsyncMPClient vs. DPLBAsyncMPClient) are notable improvements for handling different DP scenarios. The overall logic appears sound. Two minor issues need addressing: a docstring in vllm/config.py, and a check for data_parallel_rank in EngineArgs.


mergify bot commented Jun 24, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @njhill.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Jun 24, 2025
@mergify mergify bot removed the needs-rebase label Jun 24, 2025
@njhill njhill marked this pull request as ready for review June 25, 2025 04:41
@njhill njhill added the ready ONLY add when PR is ready to merge/full CI is needed label Jun 25, 2025
njhill added 5 commits June 26, 2025 13:40
wip

Signed-off-by: Nick Hill <nhill@redhat.com>
@mergify mergify bot added the ci/build label Jun 26, 2025
njhill added 3 commits June 26, 2025 21:13
Labels
ci/build frontend ready ONLY add when PR is ready to merge/full CI is needed v1