
[V1] TPU - Add tensor parallel support via Ray #13618


Merged · 1 commit merged into main on Mar 8, 2025

Conversation

@alexm-redhat (Collaborator) commented Feb 20, 2025

This PR adds tensor-parallel support to [V1] TPU via the Ray executor, without changing the SPMD and Ray compile flags used for the NVIDIA codepath. As a result, NVIDIA's Ray executor is mostly reused for the TPU codepath. Correctness was verified via:

VLLM_USE_V1=1 pytest -s -v tests/entrypoints/llm/test_accuracy.py::test_lm_eval_accuracy_v1_engine
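
For readers who want to try the new path, a minimal usage sketch (mine, not from the PR; the model name and sizes are illustrative assumptions):

```python
# Minimal sketch of exercising TPU tensor parallelism via the Ray executor.
import os

os.environ["VLLM_USE_V1"] = "1"  # opt into the V1 engine, as in the test above

from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",  # assumed model; any TP-friendly model works
    tensor_parallel_size=4,                    # shard weights across 4 TPU chips
    distributed_executor_backend="ray",        # the Ray executor this PR enables on TPU
)
outputs = llm.generate(["The capital of France is"],
                       SamplingParams(max_tokens=16))
print(outputs[0].outputs[0].text)
```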


👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which covers a small and essential subset of tests to catch errors quickly. You can run additional CI tests on top of it by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀


mergify bot commented Feb 20, 2025

This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @alexm-redhat.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Feb 20, 2025
@alexm-redhat (Collaborator, Author)

@bvrockwell this is the TP PR. I did not fully mimic the logic from V0, since I wanted to preserve the Ray DAG compilation, which is actually used here. The PR is basically done; it just needs to go green and have the debug cruft removed.

@alexm-redhat (Collaborator, Author)

/ready

@alexm-redhat (Collaborator, Author)

@mgoin what's Nicolò's username?

@mergify mergify bot removed the needs-rebase label Feb 20, 2025
@alexm-redhat (Collaborator, Author)

@mgoin The PR is ready for review.

@mgoin (Member) commented Feb 20, 2025

cc @NickLucche

@alexm-redhat alexm-redhat self-assigned this Feb 20, 2025
```
@@ -583,6 +584,7 @@ def _prepare_decode(
    def execute_model(
        self,
        scheduler_output: "SchedulerOutput",
        intermediate_tensors: Optional[IntermediateTensors] = None,
```
Member

Is this necessary? I thought intermediate_tensors was just needed for PP

Collaborator Author

It is not used, but it is part of the API; omitting it raises an error.
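
To illustrate the constraint with a toy sketch (my assumption of the mechanism, not the PR's code): the shared executor calls every runner with the same keyword set, so dropping the parameter would raise a TypeError even though the value only matters for pipeline parallelism.

```python
# Toy sketch of why the unused keyword must stay in the signature.
from typing import Any, Optional

class TPUModelRunner:
    def execute_model(
        self,
        scheduler_output: Any,
        intermediate_tensors: Optional[Any] = None,  # PP-only; always None on TPU
    ) -> str:
        assert intermediate_tensors is None, "PP is not supported on this path"
        return "model output"

def executor_step(runner: TPUModelRunner) -> str:
    # The executor passes the keyword unconditionally, for every backend;
    # a runner lacking the parameter would fail with a TypeError here.
    return runner.execute_model(scheduler_output=None, intermediate_tensors=None)

print(executor_step(TPUModelRunner()))  # -> "model output"
```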

@robertgshaw2-redhat robertgshaw2-redhat added the ready ONLY add when PR is ready to merge/full CI is needed label Feb 20, 2025
@brittrock

> @bvrockwell this is the TP PR. I did not fully mimic the logic from V0, since I wanted to preserve the Ray DAG compilation, which is actually used here. The PR is basically done; it just needs to go green and have the debug cruft removed.

Thanks @alexm-redhat! Tagging @lsy323, who is currently working on this section of the code.

@@ -6,11 +6,13 @@
from typing import TYPE_CHECKING, Dict, List, Optional, Tuple, Union

import msgspec
import torch
Collaborator

I think this needs to be a lazy import

Member

@alexm-redhat can you move the torch import back down?

Collaborator Author

Moved down
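
For context, a small sketch of the lazy-import pattern the reviewers asked for (illustrative, not the file's actual contents): `torch` stays out of module-import time and is pulled in only when a tensor is actually needed.

```python
# Lazy-import pattern: no torch cost when the module is merely imported.
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    import torch  # visible to type checkers only; no runtime import

def to_cpu_tensor(values: list) -> "torch.Tensor":
    import torch  # deferred until the first call that actually needs torch
    return torch.tensor(values)
```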

@NickLucche (Contributor) left a comment

LGTM!
Any chance we can add a tiny unit test under tests/v1/tpu to make sure TP is working and using the TPUs as intended, without running the whole correctness test?

```python
assert self.worker is not None, "Worker is not initialized"
if not self.compiled_dag_cuda_device_set:
    torch.cuda.set_device(self.worker.device)
if current_platform.is_tpu():
    # TODO: [AlexM] Verify if set_device is necessary here
```
Member

Has this been verified?

Collaborator Author

Yeah

@alexm-redhat (Collaborator, Author)

@NickLucche thanks for the suggestion, added a new test in tests/v1/tpu/test_basic.py that performs a quick correctness sanity check without running the full evaluation suite.
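
A hedged sketch of what such a sanity check could look like (the model, prompt, and parameter values are my assumptions, not the PR's exact code):

```python
# Quick TP sanity check in the spirit of tests/v1/tpu/test_basic.py.
import pytest
from vllm import LLM, SamplingParams
from vllm.platforms import current_platform

@pytest.mark.skipif(not current_platform.is_tpu(),
                    reason="This is a basic test for TPU only")
@pytest.mark.parametrize("tensor_parallel_size", [1, 4])
def test_tp_sanity(monkeypatch, tensor_parallel_size):
    monkeypatch.setenv("VLLM_USE_V1", "1")
    llm = LLM(model="Qwen/Qwen2.5-1.5B-Instruct",  # assumed small model
              max_model_len=128,
              tensor_parallel_size=tensor_parallel_size)
    out = llm.generate(["Hello, my name is"], SamplingParams(max_tokens=8))
    assert out[0].outputs[0].text.strip()  # non-empty completion
```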

@NickLucche (Contributor) left a comment

Thanks for adding the test!
I wasn't able to run it on my TPU pod (TP=4), despite tuning down all the args that could cause OOM. I end up with:

INFO 03-07 09:53:04 [kv_cache_utils.py:537] GPU KV cache size: 1,106,000 tokens
INFO 03-07 09:53:04 [kv_cache_utils.py:540] Maximum concurrency for 128 tokens per request: 8640.62x
INFO 03-07 09:53:04 [kv_cache_utils.py:537] GPU KV cache size: 1,106,000 tokens
INFO 03-07 09:53:04 [kv_cache_utils.py:540] Maximum concurrency for 128 tokens per request: 8640.62x
INFO 03-07 09:53:04 [kv_cache_utils.py:537] GPU KV cache size: 1,106,000 tokens
INFO 03-07 09:53:04 [kv_cache_utils.py:540] Maximum concurrency for 128 tokens per request: 8640.62x
INFO 03-07 09:53:04 [kv_cache_utils.py:537] GPU KV cache size: 1,106,000 tokens
INFO 03-07 09:53:04 [kv_cache_utils.py:540] Maximum concurrency for 128 tokens per request: 8640.62x
INFO 03-07 09:53:04 [core.py:116] init engine (profile, create kv cache, warmup model) took 19.98 seconds
Processed prompts:   0%|                                                                                                                  | 0/1 [00:00<?, ?it/s, est. speed input: 0.00 toks/s, output: 0.00 toks/s]INFO 03-07 09:53:04 [ray_distributed_executor.py:534] VLLM_USE_RAY_COMPILED_DAG_NCCL_CHANNEL = False
INFO 03-07 09:53:04 [ray_distributed_executor.py:536] VLLM_USE_RAY_COMPILED_DAG_OVERLAP_COMM = False
ERROR 03-07 09:53:14 [core.py:303] EngineCore hit an exception: Traceback (most recent call last):
ERROR 03-07 09:53:14 [core.py:303]   File "/home/nick/vllm/.venv/lib/python3.11/site-packages/ray/dag/compiled_dag_node.py", line 2344, in _execute_until
ERROR 03-07 09:53:14 [core.py:303]     result = self._dag_output_fetcher.read(timeout)
ERROR 03-07 09:53:14 [core.py:303]              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-07 09:53:14 [core.py:303]   File "/home/nick/vllm/.venv/lib/python3.11/site-packages/ray/experimental/channel/common.py", line 318, in read
ERROR 03-07 09:53:14 [core.py:303]     outputs = self._read_list(timeout)
ERROR 03-07 09:53:14 [core.py:303]               ^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-07 09:53:14 [core.py:303]   File "/home/nick/vllm/.venv/lib/python3.11/site-packages/ray/experimental/channel/common.py", line 409, in _read_list
ERROR 03-07 09:53:14 [core.py:303]     raise e
ERROR 03-07 09:53:14 [core.py:303]   File "/home/nick/vllm/.venv/lib/python3.11/site-packages/ray/experimental/channel/common.py", line 391, in _read_list
ERROR 03-07 09:53:14 [core.py:303]     result = c.read(min(remaining_timeout, iteration_timeout))
ERROR 03-07 09:53:14 [core.py:303]              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-07 09:53:14 [core.py:303]   File "/home/nick/vllm/.venv/lib/python3.11/site-packages/ray/experimental/channel/shared_memory_channel.py", line 776, in read
ERROR 03-07 09:53:14 [core.py:303]     return self._channel_dict[self._resolve_actor_id()].read(timeout)
ERROR 03-07 09:53:14 [core.py:303]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-07 09:53:14 [core.py:303]   File "/home/nick/vllm/.venv/lib/python3.11/site-packages/ray/experimental/channel/shared_memory_channel.py", line 612, in read
ERROR 03-07 09:53:14 [core.py:303]     output = self._buffers[self._next_read_index].read(timeout)
ERROR 03-07 09:53:14 [core.py:303]              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-07 09:53:14 [core.py:303]   File "/home/nick/vllm/.venv/lib/python3.11/site-packages/ray/experimental/channel/shared_memory_channel.py", line 480, in read
ERROR 03-07 09:53:14 [core.py:303]     ret = self._worker.get_objects(
ERROR 03-07 09:53:14 [core.py:303]           ^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-07 09:53:14 [core.py:303]   File "/home/nick/vllm/.venv/lib/python3.11/site-packages/ray/_private/worker.py", line 893, in get_objects
ERROR 03-07 09:53:14 [core.py:303]     ] = self.core_worker.get_objects(
ERROR 03-07 09:53:14 [core.py:303]         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-07 09:53:14 [core.py:303]   File "python/ray/_raylet.pyx", line 3189, in ray._raylet.CoreWorker.get_objects
ERROR 03-07 09:53:14 [core.py:303]   File "python/ray/includes/common.pxi", line 106, in ray._raylet.check_status
ERROR 03-07 09:53:14 [core.py:303] ray.exceptions.RayChannelTimeoutError: System error: Timed out waiting for object available to read. ObjectID: 00025bfe7d0aed89fa1684762ecae37e05ee43550100000002e1f505
ERROR 03-07 09:53:14 [core.py:303] 
ERROR 03-07 09:53:14 [core.py:303] The above exception was the direct cause of the following exception:
ERROR 03-07 09:53:14 [core.py:303] 
ERROR 03-07 09:53:14 [core.py:303] Traceback (most recent call last):
ERROR 03-07 09:53:14 [core.py:303]   File "/home/nick/vllm/vllm/v1/engine/core.py", line 296, in run_engine_core
ERROR 03-07 09:53:14 [core.py:303]     engine_core.run_busy_loop()
ERROR 03-07 09:53:14 [core.py:303]   File "/home/nick/vllm/vllm/v1/engine/core.py", line 339, in run_busy_loop
ERROR 03-07 09:53:14 [core.py:303]     outputs = step_fn()
ERROR 03-07 09:53:14 [core.py:303]               ^^^^^^^^^
ERROR 03-07 09:53:14 [core.py:303]   File "/home/nick/vllm/vllm/v1/engine/core.py", line 154, in step
ERROR 03-07 09:53:14 [core.py:303]     output = self.model_executor.execute_model(scheduler_output)
ERROR 03-07 09:53:14 [core.py:303]              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-07 09:53:14 [core.py:303]   File "/home/nick/vllm/vllm/v1/executor/ray_distributed_executor.py", line 57, in execute_model
ERROR 03-07 09:53:14 [core.py:303]     return refs[0].get()
ERROR 03-07 09:53:14 [core.py:303]            ^^^^^^^^^^^^^
ERROR 03-07 09:53:14 [core.py:303]   File "/home/nick/vllm/.venv/lib/python3.11/site-packages/ray/experimental/compiled_dag_ref.py", line 124, in get
ERROR 03-07 09:53:14 [core.py:303]     self._dag._execute_until(
ERROR 03-07 09:53:14 [core.py:303]   File "/home/nick/vllm/.venv/lib/python3.11/site-packages/ray/dag/compiled_dag_node.py", line 2350, in _execute_until
ERROR 03-07 09:53:14 [core.py:303]     raise RayChannelTimeoutError(
ERROR 03-07 09:53:14 [core.py:303] ray.exceptions.RayChannelTimeoutError: System error: If the execution is expected to take a long time, increase RAY_CGRAPH_get_timeout which is currently 10 seconds. Otherwise, this may indicate that the execution is hanging.
ERROR 03-07 09:53:14 [core.py:303] 
INFO 03-07 09:53:14 [ray_distributed_executor.py:108] Shutting down Ray distributed executor. If you see error log from logging.cc regarding SIGTERM received, please ignore because this is the expected termination process in Ray.

Is this working fine for you with ray==2.43.0?
It might even be some issue with libtpu, but I'm posting it here to double-check.
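
One knob suggested by the error text itself: raising Ray's compiled-graph timeout before the engine starts. This only helps if execution is slow rather than truly hung. A sketch:

```python
# Env var name and the 10-second default come straight from the
# RayChannelTimeoutError above; set it before Ray/vLLM initialize.
import os
os.environ["RAY_CGRAPH_get_timeout"] = "300"  # seconds; default is 10
```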

```python
@pytest.mark.skipif(not current_platform.is_tpu(),
                    reason="This is a basic test for TPU only")
@pytest.mark.parametrize("model", MODELS)
@pytest.mark.parametrize("dtype", ["half"])
```
Contributor

We should set this to bfloat16 to avoid:

    The TPU backend currently does not support torch.float16. Using bfloat16 instead.

Collaborator Author

Yeah, I removed the dtype, since it is not really necessary

```python
@pytest.mark.parametrize("tensor_parallel_size", TENSOR_PARALLEL_SIZES)
def test_models(
    monkeypatch,
    hf_runner,
```
Contributor

hf_runner is not used

Collaborator Author

Good catch, removed

@alexm-redhat (Collaborator, Author)

@NickLucche I have rebased over the new ragged paged attention v2; however, it now fails for TP==4 due to a limitation of the new kernel: "ValueError: Not implemented: num_kv_heads=1 can not be XLA fully tiled." TP==1 works fine.
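
For context (my inference from the thread, not stated explicitly above): tensor parallelism shards KV heads across ranks, so a config with few KV heads bottoms out at num_kv_heads=1 per chip as TP grows. Illustrative arithmetic, assuming a Qwen-like config with 4 KV heads versus Llama 3.1 8B's 8:

```python
# 4 KV heads at TP==4 leaves one KV head per rank, tripping the
# "num_kv_heads=1 can not be XLA fully tiled" check; Llama 3.1 8B
# keeps 2 per rank at TP==4 and works, matching the thread below.
for name, total_kv_heads in (("Qwen-like config (assumed)", 4),
                             ("Llama 3.1 8B", 8)):
    for tp in (1, 4):
        print(f"{name}: TP={tp} -> num_kv_heads per rank = {total_kv_heads // tp}")
```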

@alexm-redhat (Collaborator, Author)

@yaochengji is aware of the TP==4 issue. Anyway, I think the PR can be merged so that Chengji and Jevin can reproduce the issue locally.

@alexm-redhat (Collaborator, Author)

@NickLucche btw, I was testing this PR with use_kernel=True earlier because I was getting constant OOMs.

@mgoin (Member) left a comment

LGTM

@mgoin mgoin added the tpu Related to Google TPUs label Mar 7, 2025
@alexm-redhat (Collaborator, Author)

@mgoin @NickLucche I was able to verify correctness for TP==4 with Llama 3.1 8B (the new kernel works for this model). The other issue is Qwen-specific.

Signed-off-by: Alexander Matveev <amatveev@redhat.com>
@robertgshaw2-redhat robertgshaw2-redhat merged commit cb8bdfa into main Mar 8, 2025
39 checks passed
@robertgshaw2-redhat robertgshaw2-redhat deleted the v1_tpu_tp branch March 8, 2025 13:19
@NickLucche (Contributor)

Great job here @alexm-redhat! I didn't know about use_kernel=True; perhaps pallas.py could use a few more comments.

Alexei-V-Ivanov-AMD added a commit to ROCm/vllm that referenced this pull request Mar 11, 2025
* [V1] TPU - Add tensor parallel support via Ray (vllm-project#13618)

Signed-off-by: Alexander Matveev <amatveev@redhat.com>
captainzmc pushed a commit to captainzmc/vllm that referenced this pull request Mar 12, 2025
Signed-off-by: Alexander Matveev <amatveev@redhat.com>
lulmer pushed a commit to lulmer/vllm that referenced this pull request Apr 7, 2025
Signed-off-by: Alexander Matveev <amatveev@redhat.com>
Signed-off-by: Louis Ulmer <ulmerlouis@gmail.com>
shreyankg pushed a commit to shreyankg/vllm that referenced this pull request May 3, 2025
Signed-off-by: Alexander Matveev <amatveev@redhat.com>
Labels
ci/build · ready (ONLY add when PR is ready to merge/full CI is needed) · tpu (Related to Google TPUs) · v1
5 participants