[PD][Nixl] Remote consumer READ timeout for clearing request blocks #20139
base: main
Conversation
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default; only a limited subset of checks runs automatically. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can add 🚀.
Summary of Changes
Hello @NickLucche, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
I've addressed a critical edge case in the disaggregated decode-prefill setup where KV cache blocks on the remote producer (P) could become permanently stranded if the consumer (D) failed to communicate request abortion or completion. This pull request introduces a robust timeout mechanism that automatically frees these blocks after a configurable period, significantly improving resource utilization and system stability in failure scenarios.
Highlights
- KV Cache Block Timeout: I've implemented a timeout mechanism to automatically clear KV cache blocks on the remote producer (P) if the consumer (D) fails to read them within a specified duration. This prevents resource starvation in disaggregated decode-prefill setups.
- Configurable Timeout: A new environment variable, `VLLM_NIXL_ABORT_REQUEST_TIMEOUT`, has been introduced, allowing users to configure the timeout duration for remote consumer reads (defaulting to 120 seconds); a minimal sketch of reading this knob follows the list.
- NixlConnector Enhancements: I've modified the `NixlConnector` and `NixlConnectorWorker` to track requests that have completed prefill on the producer and are awaiting consumption by the decoder, enabling the new timeout logic to be applied.
- Unit Test Coverage: A dedicated unit test (`test_abort_timeout_on_prefiller`) has been added to validate the end-to-end functionality of the remote consumer read timeout, simulating a scenario where communication fails and blocks are eventually cleared.
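As a quick illustration of how such a knob is typically consumed (a sketch only; vLLM actually resolves environment variables through its `envs` module, and the helper name below is an assumption):

```python
import os

# Sketch (hypothetical helper): read the timeout described above, falling
# back to the documented 120-second default when the variable is unset.
def nixl_abort_request_timeout() -> float:
    return float(os.getenv("VLLM_NIXL_ABORT_REQUEST_TIMEOUT", "120"))
```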
Code Review
This PR introduces a timeout mechanism to clear request blocks in the remote producer when the router fails to communicate request abortion, addressing an edge case in the NixlConnector. The changes include adding a TTL to requests, updating metadata, and implementing timeout handling in the worker. The code also includes a new unit test to verify the timeout functionality.
```python
for req_id, finish_time in self._reqs_to_send.items():
    if finish_time < 0:
        # Request just finished, start timeout.
        self._reqs_to_send[req_id] = now
    elif now - finish_time >= envs.VLLM_NIXL_ABORT_REQUEST_TIMEOUT:
        # Timeout exceeded, clear the request blocks.
        timed_out_requests.append(req_id)

for req_id in timed_out_requests:
    # Skip communication with other ranks, but
    if self.tp_rank == 0:
        self._done_sending_count[req_id] += self.world_size
    done_sending.add(req_id)
    del self._reqs_to_send[req_id]
```
The timeout mechanism implemented here relies on time.perf_counter(). For timeout tracking, time.monotonic() is the more appropriate clock: it is guaranteed never to go backwards and is unaffected by system clock adjustments, whereas perf_counter() is intended for fine-grained benchmarking. Consider using time.monotonic() here.
```python
import time

# Use time.monotonic() instead of time.perf_counter()
now = time.monotonic()
timed_out_requests: list[str] = []
for req_id, finish_time in self._reqs_to_send.items():
    if finish_time < 0:
        # Request just finished, start timeout.
        self._reqs_to_send[req_id] = now
    elif now - finish_time >= envs.VLLM_NIXL_ABORT_REQUEST_TIMEOUT:
        # Timeout exceeded, clear the request blocks.
        timed_out_requests.append(req_id)
```
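As an aside on the clock choice (illustrative, not part of the PR): within a single process, deadlines computed from `time.monotonic()` are immune to wall-clock adjustments, which is what makes it the conventional choice for timeouts:

```python
import time

# Sketch: deadline arithmetic on a monotonic clock. Even if the system
# wall clock is changed, monotonic() keeps advancing steadily, so the
# timeout fires neither early nor late because of clock adjustments.
TIMEOUT_S = 120.0  # e.g. VLLM_NIXL_ABORT_REQUEST_TIMEOUT

deadline = time.monotonic() + TIMEOUT_S

def timed_out() -> bool:
    return time.monotonic() >= deadline
```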
This pull request has merge conflicts that must be resolved before it can be merged.
Force-pushed from b52b3d5 to 2616d32.
Thanks @NickLucche
```python
meta.reqs_to_send = copy.copy(self._reqs_need_send)
# Clear the list once workers start the transfers
self._reqs_need_recv.clear()
self._reqs_need_send.clear()
```
We can avoid copying here:
```diff
-meta.reqs_to_send = copy.copy(self._reqs_need_send)
-# Clear the list once workers start the transfers
-self._reqs_need_recv.clear()
-self._reqs_need_send.clear()
+# Clear the list once workers start the transfers
+self._reqs_need_recv.clear()
+# Transfer reqs to send to the metadata
+meta.reqs_to_send = self._reqs_need_send
+self._reqs_need_send = set()
```
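To spell out the rationale: once the set is handed to the metadata, the scheduler no longer needs it, so transferring ownership by rebinding avoids an O(n) copy. A toy illustration of the pattern (names here are hypothetical):

```python
# Transfer ownership of a set by rebinding instead of copying its contents.
pending: set[str] = {"req-1", "req-2"}

meta_reqs = pending  # the metadata now holds the populated set
pending = set()      # the scheduler rebinds to a fresh, empty set

assert meta_reqs == {"req-1", "req-2"}
assert pending == set()
```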
```python
elif params is not None and params.get("do_remote_decode"):
    # Prefill request on remote. It will be read from D upon completion
    self._reqs_need_send.add(request.request_id)
```
I think this should go in `request_finished()`, and only do it if we return `True` from that.
We can also set the absolute deadline at this point (the set can be of `tuple(req_id, deadline)`) and include it in the transfer params that are returned (so the D worker can check it in its `get_num_matched_tokens` method).
It's also clearer to set a deadline than the finished time, but it should include some buffer to allow for transfer time and slightly misaligned clocks, e.g. a 60 s deadline on the D side and a 90 s expiry on the P side.
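A rough sketch of that suggestion (the function signature, field name, and constants below are assumptions for illustration, not the actual vLLM API): the producer records a generous local expiry, while the decode side is advertised a tighter deadline via the transfer params, leaving buffer for transfer time and clock skew:

```python
import time

# Illustrative constants per the comment above: D is told to read within
# 60 s, while P waits 90 s before reclaiming the blocks.
D_READ_DEADLINE_S = 60.0
P_EXPIRY_S = 90.0

def on_request_finished(req_id: str,
                        reqs_to_send: dict[str, float],
                        kv_transfer_params: dict) -> None:
    # Producer-side expiry, tracked on the local clock.
    reqs_to_send[req_id] = time.time() + P_EXPIRY_S
    # Deadline advertised to the D worker via the transfer params.
    kv_transfer_params["read_deadline"] = time.time() + D_READ_DEADLINE_S
```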
```python
# Track the requests that are waiting to be read and abort on timeout.
# Set to -1 so that timeout does not depend on model latency.
for req_id in metadata.reqs_to_send:
    self._reqs_to_send[req_id] = -1
```
I don't think this would be needed, per my other comments.
```python
# Handle timeout to avoid stranding blocks on remote.
now = time.monotonic()
timed_out_requests: list[str] = []
for req_id, finish_time in self._reqs_to_send.items():
    if finish_time < 0:
        # Request just finished, start timeout.
        self._reqs_to_send[req_id] = now
    elif now - finish_time >= envs.VLLM_NIXL_ABORT_REQUEST_TIMEOUT:
        # Timeout exceeded, clear the request blocks.
        timed_out_requests.append(req_id)

for req_id in timed_out_requests:
    # Skip communication with other ranks, but
    if self.tp_rank == 0:
        self._done_sending_count[req_id] += self.world_size
    done_sending.add(req_id)
    del self._reqs_to_send[req_id]
```
I think it's better to keep things simple and omit the TP optimization. I think we'll likely make this logic generic and move it outside of the connector impl anyhow (aggregating the finished events in TP case).
Dicts are ordered so we only need to peek the oldest entry.
```diff
-# Handle timeout to avoid stranding blocks on remote.
-now = time.monotonic()
-timed_out_requests: list[str] = []
-for req_id, finish_time in self._reqs_to_send.items():
-    if finish_time < 0:
-        # Request just finished, start timeout.
-        self._reqs_to_send[req_id] = now
-    elif now - finish_time >= envs.VLLM_NIXL_ABORT_REQUEST_TIMEOUT:
-        # Timeout exceeded, clear the request blocks.
-        timed_out_requests.append(req_id)
-for req_id in timed_out_requests:
-    # Skip communication with other ranks, but
-    if self.tp_rank == 0:
-        self._done_sending_count[req_id] += self.world_size
-    done_sending.add(req_id)
-    del self._reqs_to_send[req_id]
+# Handle timeout to avoid stranding blocks on remote.
+now = time.time()
+while self._reqs_to_send:
+    req_id, expires = next(iter(self._reqs_to_send.items()))
+    if now < expires:
+        break
+    del self._reqs_to_send[req_id]
+    done_sending.add(req_id)
```
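This works because dicts preserve insertion order (guaranteed since Python 3.7) and every entry receives the same TTL, so the first entry is always the next to expire. A minimal self-contained check of that invariant:

```python
import time

# All entries share one TTL, so insertion order == expiry order and the
# scan can stop at the first unexpired entry.
TTL = 1.0
reqs_to_send: dict[str, float] = {}
reqs_to_send["first"] = time.time() + TTL
time.sleep(0.01)
reqs_to_send["second"] = time.time() + TTL

oldest_id, oldest_expiry = next(iter(reqs_to_send.items()))
assert oldest_id == "first"
assert oldest_expiry == min(reqs_to_send.values())
```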
With #19223, we're addressing most of the cases where P request blocks may be left stranded.
However, there are still cases where the remote producer may be left with blocks that won't be cleared: the router may fail to communicate a request abortion for whatever reason (e.g. the in-flight request is lost, or the router goes down) before the request has reached D, or D may fail to communicate the abortion to P.
This PR addresses these final edge cases by attaching a simple TTL to every request that needs to be read from local (D) <- remote (P).
cc @njhill