
Conversation

@Jeffwan (Collaborator) commented Aug 9, 2025

  • Add kv_transfer_params configuration to prefill requests and decode requests

Pull Request Description

This is a follow-up PR to #1425, addressing the problems reported in #1407.
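
For context, here is a minimal sketch of the prefill-side injection (illustrative only; it assumes vLLM's NIXL connector JSON contract with the do_remote_decode/do_remote_prefill flags, and setting max_tokens to 1 follows the common prefill-only convention — the function shape is not the exact PR code):

import (
	"encoding/json"
	"fmt"
)

// preparePrefillPayload (sketch): inject kv_transfer_params into a vLLM
// prefill request so the engine keeps its KV cache for a remote decode step.
func preparePrefillPayload(body []byte) ([]byte, error) {
	var payload map[string]any
	if err := json.Unmarshal(body, &payload); err != nil {
		return nil, fmt.Errorf("invalid request body: %w", err)
	}
	// Signal that the KV cache produced here will be consumed remotely.
	payload["kv_transfer_params"] = map[string]any{
		"do_remote_decode":  true,
		"do_remote_prefill": false,
	}
	payload["max_tokens"] = 1 // prefill only; no real generation needed
	payload["stream"] = false
	return json.Marshal(payload)
}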

Related Issues: #1407, #1425

Benchmark command:

python3 benchmark_serving.py --port 8000 --host 192.168.0.138 --seed $(date +%s) \
    --model qwen3-8B \
    --tokenizer /models/Qwen3-8B \
    --dataset-name random --random-input-len 8000 --random-output-len 200 \
    --num-prompts 200 --burstiness 100 --request-rate 1 --metric-percentiles 95 \
    --backend openai-chat --endpoint /v1/chat/completions --ignore-eos

1st try: without the change (aibrix router)

Traffic request rate: 1.0 RPS.
Burstiness factor: 100.0 (Gamma distribution)
Maximum request concurrency: None
100%|██████████████████████████████████████████████████████████████████████████████████████████| 200/200 [03:56<00:00,  1.18s/it]
============ Serving Benchmark Result ============
Successful requests:                     200
Benchmark duration (s):                  236.22
Total input tokens:                      1599488
Total generated tokens:                  40000
Request throughput (req/s):              0.85
Output token throughput (tok/s):         169.33
Total Token throughput (tok/s):          6940.49
---------------Time to First Token----------------
Mean TTFT (ms):                          14193.76
Median TTFT (ms):                        12422.17
P95 TTFT (ms):                           33107.99
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms):                          301.22
Median TPOT (ms):                        349.75
P95 TPOT (ms):                           353.99
---------------Inter-token Latency----------------
Mean ITL (ms):                           299.71
Median ITL (ms):                         54.00
P95 ITL (ms):                            1053.71
==================================================

2nd try: with the change (image aibrix-container-registry-cn-beijing.cr.volces.com/aibrix/gateway-plugins:012f7dd20d281df36c4964eb5d455d40ce0abc7e)

Traffic request rate: 1.0 RPS.
Burstiness factor: 100.0 (Gamma distribution)
Maximum request concurrency: None
100%|██████████████████████████████████████████████████████████████████████████████████████████████| 200/200 [03:20<00:00,  1.00s/it]
============ Serving Benchmark Result ============
Successful requests:                     200
Benchmark duration (s):                  200.21
Total input tokens:                      1599642
Total generated tokens:                  40000
Request throughput (req/s):              1.00
Output token throughput (tok/s):         199.79
Total Token throughput (tok/s):          8189.69
---------------Time to First Token----------------
Mean TTFT (ms):                          1256.15
Median TTFT (ms):                        1211.70
P95 TTFT (ms):                           1568.65
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms):                          9.00
Median TPOT (ms):                        9.01
P95 TPOT (ms):                           9.36
---------------Inter-token Latency----------------
Mean ITL (ms):                           8.96
Median ITL (ms):                         8.98
P95 ITL (ms):                            10.09
==================================================

Another sample: Python router version

============ Serving Benchmark Result ============
Successful requests:                     200
Benchmark duration (s):                  201.63
Total input tokens:                      1599434
Total generated tokens:                  40000
Request throughput (req/s):              0.99
Output token throughput (tok/s):         198.38
Total Token throughput (tok/s):          8130.74
---------------Time to First Token----------------
Mean TTFT (ms):                          1232.50
Median TTFT (ms):                        1185.72
P95 TTFT (ms):                           1515.11
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms):                          9.10
Median TPOT (ms):                        9.05
P95 TPOT (ms):                           10.13
---------------Inter-token Latency----------------
Mean ITL (ms):                           9.06
Median ITL (ms):                         9.06
P95 ITL (ms):                            10.26
==================================================


@gemini-code-assist bot (Contributor) left a comment

Summary of Changes

Hello @Jeffwan, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

I'm addressing an issue in the P/D disaggregation router to correctly handle kv_transfer_params as specified by Nixl. This change primarily involves configuring and processing these parameters during prefill and decode requests, especially for the vLLM engine, which is crucial for enabling efficient disaggregated inference.

Highlights

  • Enhanced vLLM Prefill Handling: I've updated the doPrefillRequest function to specifically manage vLLM prefill requests. This now includes receiving and processing kv_transfer_params from the prefill response, which are essential for transferring KV cache state between disaggregated components.
  • Dynamic KV Transfer Parameter Injection: The preparePrefillPayload function now dynamically injects kv_transfer_params into the request payload for vLLM. This ensures that the prefill request properly signals its intent for remote decoding and prefill.
  • Improved HTTP Request Utility: I've refactored executeHTTPRequest to return the parsed JSON response body, allowing subsequent functions to extract necessary information like kv_transfer_params. I also added the X-Request-Id header for better traceability.
  • Dedicated KV Transfer Context Update: A new helper function, updateRoutingContextWithKVTransferParams, has been introduced. This function is responsible for extracting kv_transfer_params from the prefill response, merging them into the original request body, and including the prefill pod's IP as remote_host for subsequent decode requests (see the sketch after this list).
  • Comprehensive Test Coverage: I've added new unit tests to pd_disaggregation_test.go to thoroughly validate the correct behavior of preparePrefillPayload and updateRoutingContextWithKVTransferParams, ensuring that kv_transfer_params are handled as expected for vLLM and other engines.
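
As referenced above, a minimal sketch of the decode-side merge (illustrative only; it assumes the prefill response body carries a kv_transfer_params object as returned by vLLM's NIXL connector, and that the prefill pod's IP is already known from routing; the function shape is an assumption, not the actual PR code):

import (
	"encoding/json"
	"fmt"
)

// updateRoutingContextWithKVTransferParams (sketch): copy kv_transfer_params
// from the prefill response into the original request body so the decode
// engine can pull the KV cache from the prefill pod.
func updateRoutingContextWithKVTransferParams(origBody []byte, prefillResp map[string]any, prefillPodIP string) ([]byte, error) {
	params, ok := prefillResp["kv_transfer_params"].(map[string]any)
	if !ok {
		// Non-vLLM engines (or no params returned): leave the body unchanged.
		return origBody, nil
	}
	// The decode engine needs the host that holds the prefilled KV cache.
	params["remote_host"] = prefillPodIP

	var payload map[string]any
	if err := json.Unmarshal(origBody, &payload); err != nil {
		return nil, fmt.Errorf("invalid original body: %w", err)
	}
	payload["kv_transfer_params"] = params
	return json.Marshal(payload)
}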

@Jeffwan Jeffwan marked this pull request as draft August 9, 2025 08:12
@gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request correctly implements the logic to handle kv_transfer_params for P/D disaggregation with the vLLM engine. The changes involve modifying the prefill request to include these parameters, and then processing them from the prefill response to update the routing context for the subsequent decode request. The new functions are well-structured and error handling is properly implemented. The accompanying tests provide good coverage for the new functionality. I've pointed out a minor issue in one of the new tests that could cause confusion, with a suggestion for improvement.

@Jeffwan Jeffwan force-pushed the jiaxin/fix-nixl-vllm-router branch 8 times, most recently from 012f7dd to 066ff5e on August 15, 2025 02:25
@Jeffwan Jeffwan marked this pull request as ready for review August 15, 2025 02:25
- Add kv_transfer_params configuration to prefill requests and decode requests

Signed-off-by: Jiaxin Shan <seedjeffwan@gmail.com>
@Jeffwan Jeffwan force-pushed the jiaxin/fix-nixl-vllm-router branch from 066ff5e to 0222247 on August 15, 2025 02:32
@DwyaneShi (Collaborator) left a comment

LGTM

@Jeffwan Jeffwan merged commit 6979abd into vllm-project:main Aug 15, 2025
14 checks passed
@Jeffwan Jeffwan deleted the jiaxin/fix-nixl-vllm-router branch August 15, 2025 03:41
Jeffwan added a commit to Jeffwan/aibrix that referenced this pull request Aug 18, 2025
…-project#1429)

- Add kv_transfer_params configuration to prefill requests and decode requests

Signed-off-by: Jiaxin Shan <seedjeffwan@gmail.com>
Jeffwan added a commit that referenced this pull request Aug 18, 2025
…se-0.4 branch (#1468)

* Select PD workers in same roleset (#1409)

* Select PD workers in same roleset
* nit
* update ut
---------

Signed-off-by: Varun Gupta <varungup90@gmail.com>

* [Bug] fix webhook config output when using make manifests (#1412)

fix webhook config output when using make manifests

Signed-off-by: googs1025 <googs1025@gmail.com>

* [Fix] Fix vLLM NIXL-based P/D samples (#1425)

Signed-off-by: Haiyang Shi <haiyang.shi@bytedance.com>
Co-authored-by: Haiyang Shi <haiyang.shi@bytedance.com>

* [Fix] Disable GGA in NIXL samples (#1436)

[Fix] Fix NIXL samples

Explicitly set UCX_TLS to let UCX not use GGA (GPU Direct) transport

Signed-off-by: Haiyang Shi <haiyang.shi@bytedance.com>
Co-authored-by: Haiyang Shi <haiyang.shi@bytedance.com>

* Fix P/D disaggregation router to follow Nixl kv_transfer_params (#1429)

- Add kv_transfer_params configuration to prefill requests and decode requests

Signed-off-by: Jiaxin Shan <seedjeffwan@gmail.com>

* [Bug] Corrected naming convention for AIBRIX_MODEL_GPU_PROFILE_CACHING_FLAG (#1427)

Corrected naming convention for AIBRIX_MODEL_GPU_PROFILE_CACHING_FLAG

Signed-off-by: Jonathon Shea <sheajonathon0@gmail.com>

* [Bug] stormservice's headless service not set ownerRef (#1442)

* fix: stormservice's headless service not set ownerRef

Signed-off-by: dajun.cui <dajun.cui@bytedance.com>

* fix: patch ut test for service sync

Signed-off-by: dajun.cui <dajun.cui@bytedance.com>

---------

Signed-off-by: dajun.cui <dajun.cui@bytedance.com>

* [Bug] stormservice's headless service need set PublishNotReadyAddresses (#1441)

* fix: stormservice's headless service need set PublishNotReadyAddresses

Signed-off-by: dajun.cui <dajun.cui@bytedance.com>

* fix: isServiceEqual check PublishNotReadyAddresses

Signed-off-by: dajun.cui <dajun.cui@bytedance.com>

---------

Signed-off-by: dajun.cui <dajun.cui@bytedance.com>

---------

Signed-off-by: Varun Gupta <varungup90@gmail.com>
Signed-off-by: googs1025 <googs1025@gmail.com>
Signed-off-by: Haiyang Shi <haiyang.shi@bytedance.com>
Signed-off-by: Jiaxin Shan <seedjeffwan@gmail.com>
Signed-off-by: Jonathon Shea <sheajonathon0@gmail.com>
Signed-off-by: dajun.cui <dajun.cui@bytedance.com>
Co-authored-by: Varun Gupta <varungup90@gmail.com>
Co-authored-by: CYJiang <86391540+googs1025@users.noreply.github.com>
Co-authored-by: Haiyang Shi <dwyane.shi@gmail.com>
Co-authored-by: Haiyang Shi <haiyang.shi@bytedance.com>
Co-authored-by: Jonathon Shea <35823125+JonathonShea@users.noreply.github.com>
Co-authored-by: cuidajun <dajun.cui@bytedance.com>
@Jeffwan (Collaborator, Author) commented Aug 18, 2025

During development, a few issues bugged me.

ERROR 08-11 18:58:29 [core.py:588]     self._free_blocks(self.requests[req_id])
ERROR 08-11 18:58:29 [core.py:588]                       ~~~~~~~~~~~~~^^^^^^^^
ERROR 08-11 18:58:29 [core.py:588] KeyError: 'chatcmpl-07579ba5-5ef6-481e-9116-8487f8a066fd'
Process EngineCore_0:
Traceback (most recent call last):
  File "/usr/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/lib/python3.12/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core.py", line 590, in run_engine_core
    raise e
  File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core.py", line 579, in run_engine_core
    engine_core.run_busy_loop()
  File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core.py", line 606, in run_busy_loop
    self._process_engine_step()
  File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core.py", line 631, in _process_engine_step
    outputs, model_executed = self.step_fn()
                              ^^^^^^^^^^^^^^
ERROR 08-11 18:58:29 [async_llm.py:419] AsyncLLM output_handler failed.
ERROR 08-11 18:58:29 [async_llm.py:419] Traceback (most recent call last):
ERROR 08-11 18:58:29 [async_llm.py:419]   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/async_llm.py", line 378, in output_handler
  File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core.py", line 236, in step
    engine_core_outputs = self.scheduler.update_from_output(
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/v1/core/sched/scheduler.py", line 875, in update_from_output
    self._update_from_kv_xfer_finished(model_runner_output)
  File "/usr/local/lib/python3.12/dist-packages/vllm/v1/core/sched/scheduler.py", line 1115, in _update_from_kv_xfer_finished
    self._free_blocks(self.requests[req_id])
                      ~~~~~~~~~~~~~^^^^^^^^
KeyError: 'chatcmpl-07579ba5-5ef6-481e-9116-8487f8a066fd'
ERROR 08-11 18:58:29 [async_llm.py:419]     outputs = await engine_core.get_output_async()
ERROR 08-11 18:58:29 [async_llm.py:419]               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 08-11 18:58:29 [async_llm.py:419]   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core_client.py", line 740, in get_output_async
ERROR 08-11 18:58:29 [async_llm.py:419]     raise self._format_exception(outputs) from None
ERROR 08-11 18:58:29 [async_llm.py:419] vllm.v1.engine.exceptions.EngineDeadError: EngineCore encountered an issue. See stack trace (above) for the root cause.
INFO 08-11 18:58:29 [async_llm.py:345] Request chatcmpl-0355bb9e-3731-49d3-9e02-45f6a2ffcbcd failed (engine dead).
INFO 08-11 18:58:29 [async_llm.py:345] Request chatcmpl-308c0d24-03c2-472e-ad46-f210ac593851 failed (engine dead).

Symptom

  1. The 1st request always succeeds.
  2. The 2nd request always fails.

We actually could not find the key, which was confusing:

  1. The vLLM client request uses the x-request-id we passed.
  2. remote_engine_id comes from the NIXL connector initialization, so it is not related.

Initially, I thought it was a prefill reconnection issue. I tried optimizing the HTTP client to disable keep-alive, close the connection, etc., but it didn't work; I wasted a lot of time there.

Eventually, we figured out that the missing key came from the decode machine: I had set the x-request-id in routingContext.Headers but forgot to apply the change to the actual Envoy request headers.
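
For reference, a hypothetical sketch of the missing piece using Envoy's external processing API (go-control-plane types; the handler shape and function name are assumptions, not the actual gateway-plugins code). Storing the header in the routing context alone is not enough; a header mutation must be returned to Envoy so the upstream request actually carries it:

import (
	corev3 "github.com/envoyproxy/go-control-plane/envoy/config/core/v3"
	extProcPb "github.com/envoyproxy/go-control-plane/envoy/service/ext_proc/v3"
)

// buildHeadersResponse (sketch): return a header mutation so x-request-id
// reaches the upstream decode engine, not just the in-memory routing context.
func buildHeadersResponse(requestID string) *extProcPb.ProcessingResponse {
	return &extProcPb.ProcessingResponse{
		Response: &extProcPb.ProcessingResponse_RequestHeaders{
			RequestHeaders: &extProcPb.HeadersResponse{
				Response: &extProcPb.CommonResponse{
					HeaderMutation: &extProcPb.HeaderMutation{
						SetHeaders: []*corev3.HeaderValueOption{{
							Header: &corev3.HeaderValue{
								Key:      "x-request-id",
								RawValue: []byte(requestID),
							},
						}},
					},
				},
			},
		},
	}
}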
