Automatically bind CPU OMP Threads of a rank to CPU ids of a NUMA node. #17930
Conversation
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default; only a small and essential subset of CI tests runs to quickly catch errors. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can add the ready label to the PR. 🚀
@bigPYJ1151 please help to review
@louie-tsai Thanks for your good work, and apologies for the late response. I have some suggestions, please take a look.
BTW, can you help update the CPU document for this PR as well? You can find it at docs/source/getting_started/installation/cpu.md. Thanks :)
vllm/worker/cpu_worker.py
Outdated
```python
logger.info("[ERROR] NO AUTO OMP Bind support because request world size: %d is more than allowed numa_size: %d",
            world_size, len(node_to_cpus))
else:
    rank_to_cpus = str(node_to_cpus[self.rank][0]) + '-' + str(node_to_cpus[self.rank][cpu_count_per_numa - 1 - num_of_no_bind_cpu])
```
Building the rank_to_cpus string assumes that the CPU ids in node_to_cpus[self.rank] are contiguous. But the node could have CPUs 0-39,240-279. If so, rank_to_cpus would contain CPUs 0-279. The safest option would be to construct the CPU list without assuming anything about how CPUs are numbered inside a NUMA node. (There's a lot of variation across platforms.)
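For illustration, a minimal sketch of a contiguity-free construction: emit an explicit comma-separated id list instead of a "first-last" range string. The helper name and its arguments are hypothetical, standing in for the variables in the surrounding PR code:

```python
# Build an explicit CPU-id list rather than assuming ids form a range.
# node_cpus: the (possibly non-contiguous) CPU ids of this rank's NUMA
# node, e.g. [0, 1, ..., 39, 240, ..., 279]; num_reserved: CPUs to leave
# unbound at the end of the node.
def build_cpu_list(node_cpus, num_reserved):
    usable = sorted(node_cpus)[:len(node_cpus) - num_reserved]
    return ",".join(str(cpu) for cpu in usable)

# build_cpu_list([0, 2, 4, 6], 1) -> "0,2,4"
```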
@askervin
We used info.node_to_cpus(i) to get the list of CPU ids in a NUMA node, so we don't assume CPU ids are contiguous within a node.
For CPUs 0-39,240-279, we only get node_to_cpus = [0-39], since 240-279 is also in NUMA node 0 when HT is on.
@askervin
Changed the code to address non-contiguous CPU ids in a NUMA node. Hope it addresses your feedback.
vllm/worker/cpu_worker.py
Outdated
cpu_count = psutil.cpu_count(logical=False) | ||
cpus_allow_list = psutil.Process().cpu_affinity() | ||
numa_size = info.get_num_configured_nodes() | ||
cpu_count_per_numa = cpu_count // numa_size |
Some systems include NUMA nodes with CPUs and other NUMA nodes without CPUs. CPU-less NUMA nodes might be, for instance, HBM (high-bandwidth memory), PMEM (persistent memory) or CXL memory blocks.
I'd suggest calculating cpu_count_per_numa by dividing the number of CPUs by the number of NUMA nodes that contain CPUs.
Or, to be even more accurate, calculate node_to_cpus first, so that
node_to_cpus[node] = intersection of the CPUs on the node and the set of allowed CPUs
...and skip all nodes where the intersection is empty.
After this, if all node_to_cpus entries contain the same number of CPUs, there is your cpu_count_per_numa. Then your algorithm will work even if the allowed CPUs are equal-sized chunks from separate NUMA nodes. I think this would be very nice.
If the CPU sets in node_to_cpus are of different sizes, it would be fine to print a warning about not doing auto affinity for ranks.
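A minimal sketch of that construction, assuming the same py-libnuma info module and psutil calls used elsewhere in this thread (not the PR's final code):

```python
import psutil
from numa import info  # py-libnuma, as used in the PR

cpus_allowed = set(psutil.Process().cpu_affinity())

# Keep only the allowed CPUs of each node; CPU-less (or fully disallowed)
# NUMA nodes yield an empty intersection and are skipped entirely.
node_to_cpus = []
for node in range(info.get_num_configured_nodes()):
    cpus = set(info.node_to_cpus(node)) & cpus_allowed
    if cpus:
        node_to_cpus.append(cpus)

sizes = {len(cpus) for cpus in node_to_cpus}
if len(sizes) == 1:
    cpu_count_per_numa = sizes.pop()
else:
    print("warning: unequal CPU counts per NUMA node, skipping auto affinity")
```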
@askervin
Thanks for your detailed input. cpu_count_per_numa is only used to get the index of the last CPU in one node; it isn't used to find node_to_cpus.
The main logic that finds the CPU ids for each node is in the code below, and it should address your request "node_to_cpus[node] = intersection of cpus on the node and the set of allowed CPUs".
Also, info.node_to_cpus(i) should cover the CPU ids of one NUMA node only. In general, we should have the same number of CPUs per node (not the case for GNR SNC=3).
I assume that info.node_to_cpus(i) will return an empty list for CPU-less NUMA nodes. Please correct me if I have misunderstood.
Let's use a two-socket Xeon MAX CPU as an example. It has high-bandwidth memory (HBM) integrated directly into the CPU package, and DDR5 memory DIMMs next to each socket.
Assume that this system is configured to expose 4 NUMA nodes. (Depending on configuration, it could have even 16 nodes in real life.) Assume that nodes 0 and 2 include both CPUs and memory (DDR5), while NUMA nodes 1 and 3 include only memory (HBM) and no CPUs.
info.get_num_configured_nodes() returns the number of all NUMA nodes, that is 4. This would be the numa_size. Let's assume that
info.node_to_cpus(0) returns [0, 1, ..., 30, 31, 64, 65, ..., 94, 95]
info.node_to_cpus(1) returns []
info.node_to_cpus(2) returns [32, 33, ..., 62, 63, 96, 97, ..., 126, 127]
info.node_to_cpus(3) returns []
This gives you node_to_cpus = [[0-31, 64-95], [], [32-63, 96-127], []]. And cpu_count_per_numa = cpu_count / numa_size = 64 / 4 = 16.
Now, in the case of rank=0,
rank_to_cpus = str(node_to_cpus[self.rank][0]) + '-' + str(node_to_cpus[self.rank][cpu_count_per_numa - 1 - num_of_reserved_cpu])
gives "0-15" (with num_of_reserved_cpu = 0). rank=1 crashes on an IndexError.
And this example is still kind of easy: the CPU ids of thread0 and thread1 in the NUMA nodes are contiguous. But there are platforms where they are not. The code should work even if
info.node_to_cpus(0) returns [0, 2, 4, 6]
info.node_to_cpus(1) returns [1, 3, 5, 7]
NUMA ordering and CPU numbering can be quite exciting, so let's not make any assumptions about it.
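For illustration, a tiny self-contained repro of that IndexError under the example's assumptions, with the values hard-coded instead of querying numa/psutil and num_of_reserved_cpu taken as 0:

```python
# node_to_cpus as in the Xeon MAX example: nodes 1 and 3 are CPU-less.
node_to_cpus = [list(range(0, 32)) + list(range(64, 96)),
                [],
                list(range(32, 64)) + list(range(96, 128)),
                []]
cpu_count, numa_size, num_of_reserved_cpu = 64, 4, 0
cpu_count_per_numa = cpu_count // numa_size  # 16

for rank in (0, 1):
    # rank 0 prints "0-15" but covers only part of its node;
    # rank 1 raises IndexError because node_to_cpus[1] is empty.
    cpus = node_to_cpus[rank]
    print(rank, str(cpus[0]) + '-' + str(cpus[cpu_count_per_numa - 1 - num_of_reserved_cpu]))
```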
So what I'm suggesting is:
- Do not require set(info.node_to_cpus(i)).issubset(cpus_allow_list). This requirement is very hard, because it means that the resource policy managing the server has been able to allocate every single CPU from a NUMA node. But there are many other containers (other vLLM containers, databases, whatever servers) that might have exclusive CPUs from every node, and therefore cpus_allow_list may not include all CPUs from any NUMA node. But if it includes 16 CPUs from one NUMA node and 16 CPUs from another NUMA node, this automatic optimization could still work.
- So what do you think: what if, instead of this requirement, you constructed node_to_cpus like node_to_cpus.append(set(info.node_to_cpus(i)).intersection(cpus_allow_list)) whenever the intersection is not empty? This would automatically solve the problem of CPU-less NUMA nodes (the IndexError that I mentioned above). And it would enable running nicely in GNR SNC3, if only the resource policy allocates an equal number of CPUs from every node.
One quick input:
we won't have node_to_cpus = [[0-31, 64-95], [], [32-63, 96-127], []], because [] is not a subset of cpu_allow_list.
I used intersection according to your input in the PR, and I saw the same test output for issubset and intersection.
I actually don't understand the difference between using intersection and using issubset, but they both look OK to me.
Let me know whether the new changes work for you or not.
Since I don't have an empty NUMA node, I couldn't test empty NUMA nodes with our code.
However, the code below might still have issues with empty NUMA nodes after changing to intersection:
numa_size = info.get_num_configured_nodes(); cpu_count_per_numa = cpu_count / numa_size
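A hedged sketch of one possible fix for that line (not the PR's final code): divide by the number of NUMA nodes that actually contribute allowed CPUs, so CPU-less nodes drop out of the denominator. It assumes the info, cpus_allow_list and cpu_count variables from the surrounding PR code:

```python
# Count only NUMA nodes whose CPUs intersect the allowed set; CPU-less
# nodes contribute an empty intersection and are skipped.
nodes_with_cpus = [
    n for n in range(info.get_num_configured_nodes())
    if set(info.node_to_cpus(n)) & set(cpus_allow_list)
]
cpu_count_per_numa = cpu_count // max(len(nodes_with_cpus), 1)
```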
because [] is not a subset of cpu_allow_list
An empty set is a subset of any set, including an empty set.
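For a concrete illustration of the difference (plain Python, nothing vLLM-specific):

```python
allowed = {0, 1, 2}
print(set().issubset(allowed))        # True  -> an empty node passes the subset test
print(set() & allowed or "filtered")  # "filtered" -> an empty intersection is falsy
```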
I actually don't understand the difference between using intersection and using issubset, but they both look OK to me.
These are strong indications that this logic should be written as its own function so that you can write unit tests for it. That will give everyone an understanding of how the logic behaves on different systems with different CPU id numbering and different allowed-CPU sets. And it will be very helpful for ensuring that the logic keeps working in the future, too, when new developers touch this part of the code. Without unit tests they will very easily break it, because quite likely they will change the code with only the platforms they know in mind.
Maybe something like:

```python
def auto_cpu_binding(world_size, rank, node_cpus, cpus_allowed, cpus_reserved):
    """
    Bind the process to a set of CPUs based on its rank and the available CPUs on the node.

    Args:
        world_size (int): number of processes.
        rank (int): rank of the current process.
        node_cpus (list of sets): node_cpus[i] contains CPUs on node i, can be empty.
        cpus_allowed (set): set of CPUs allowed for binding.
        cpus_reserved (int): number of CPUs per rank reserved for other purposes.

    Returns a pair:
        - set of CPUs that the process should be bound to, or empty if no binding is possible.
        - message indicating the binding decision.
    """
    if world_size <= 0:
        return set(), "invalid world size"
    if rank < 0:
        return set(), "invalid rank"
    if cpus_reserved < 0:
        return set(), "invalid reserved CPUs"
    if rank >= world_size:
        return set(), "rank exceeds world size"
    nodes_with_cpus = [cpus & cpus_allowed for cpus in node_cpus if cpus & cpus_allowed]
    if len(nodes_with_cpus) != world_size:
        return set(), f"number of nodes with allowed CPUs ({len(nodes_with_cpus)}) does not match world size ({world_size})"
    min_cpus_per_node = min(len(cpus) for cpus in nodes_with_cpus)
    if min_cpus_per_node - cpus_reserved <= 0:
        return set(), f"not enough CPUs per node ({min_cpus_per_node}) for reserved CPUs ({cpus_reserved}) and at least one CPU per rank"
    bind_cpus = sorted(nodes_with_cpus[rank])[:min_cpus_per_node - cpus_reserved]
    return set(bind_cpus), "binding rank per node"


# This shows how to play with auto_cpu_binding without having to test the
# function in more exotic hardware setups. This also works as a non-trivial
# case for unit tests:
# - asymmetric NUMA node sizes
# - asymmetric cpus_allowed on different NUMAs
# - exotic CPU numbering (still fully realistic)
if __name__ == "__main__":
    world_size = 2
    node_cpus = [{0, 2, 4, 6, 8, 10}, set(), {1, 3, 5, 7, 9}, set()]
    cpus_allowed = {2, 4, 6, 1, 5, 7, 9}
    reserved_cpus = 1
    for rank in range(world_size):
        cpus = auto_cpu_binding(world_size, rank, node_cpus, cpus_allowed, reserved_cpus)
        print(f"Rank {rank} of {world_size - 1}: CPUs: {cpus[0]}, Message: {cpus[1]}")
```
Moved the implementation into a separate function accordingly. Also handled the non-contiguous CPU-id cases.
We are close to being ready.
Please add a note about the newly added env var to the CPU doc, and fix the code-style checks; you can refer to this for auto-format and lint.
This pull request has merge conflicts that must be resolved before it can be merged.
Force-pushed from 50a3d08 to 6ff056a.
@bigPYJ1151 updated the cpu.md accordingly
Force-pushed from 706b5de to 9295509.
Signed-off-by: Tsai, Louie <louie.tsai@intel.com>
Hi @DarkLight1337 @Isotr0py, this PR has gone through some review rounds and looks good to me. Can you help take a look at it? Thanks!
(VllmWorker rank=0 pid=1186204) INFO 06-05 16:59:15 [cpu_worker.py:145] auto thread-binding list: 0,1,2,3,4,5,6,7,8,9,10,11
(VllmWorker rank=1 pid=1186205) INFO 06-05 16:59:15 [cpu_worker.py:145] auto thread-binding list: 12,13,14,15,16,17,18,19,20,21,22,23
(VllmWorker rank=1 pid=1186205) INFO 06-05 16:59:15 [cpu_worker.py:51] OMP threads binding of Process 1186205:
(VllmWorker rank=1 pid=1186205) INFO 06-05 16:59:15 [cpu_worker.py:51] OMP tid: 1186205, core 12
(VllmWorker rank=1 pid=1186205) INFO 06-05 16:59:15 [cpu_worker.py:51] OMP tid: 1186674, core 13
(VllmWorker rank=1 pid=1186205) INFO 06-05 16:59:15 [cpu_worker.py:51] OMP tid: 1186675, core 14
(VllmWorker rank=1 pid=1186205) INFO 06-05 16:59:15 [cpu_worker.py:51] OMP tid: 1186676, core 15
(VllmWorker rank=1 pid=1186205) INFO 06-05 16:59:15 [cpu_worker.py:51] OMP tid: 1186677, core 16
(VllmWorker rank=1 pid=1186205) INFO 06-05 16:59:15 [cpu_worker.py:51] OMP tid: 1186678, core 17
(VllmWorker rank=1 pid=1186205) INFO 06-05 16:59:15 [cpu_worker.py:51] OMP tid: 1186679, core 18
(VllmWorker rank=1 pid=1186205) INFO 06-05 16:59:15 [cpu_worker.py:51] OMP tid: 1186680, core 19
(VllmWorker rank=1 pid=1186205) INFO 06-05 16:59:15 [cpu_worker.py:51] OMP tid: 1186681, core 20
(VllmWorker rank=1 pid=1186205) INFO 06-05 16:59:15 [cpu_worker.py:51] OMP tid: 1186682, core 21
(VllmWorker rank=1 pid=1186205) INFO 06-05 16:59:15 [cpu_worker.py:51] OMP tid: 1186683, core 22
(VllmWorker rank=1 pid=1186205) INFO 06-05 16:59:15 [cpu_worker.py:51] OMP tid: 1186684, core 23
(VllmWorker rank=1 pid=1186205) INFO 06-05 16:59:15 [cpu_worker.py:51]
(VllmWorker rank=0 pid=1186204) INFO 06-05 16:59:16 [cpu_worker.py:51] OMP threads binding of Process 1186204:
(VllmWorker rank=0 pid=1186204) INFO 06-05 16:59:16 [cpu_worker.py:51] OMP tid: 1186204, core 0
(VllmWorker rank=0 pid=1186204) INFO 06-05 16:59:16 [cpu_worker.py:51] OMP tid: 1186720, core 1
(VllmWorker rank=0 pid=1186204) INFO 06-05 16:59:16 [cpu_worker.py:51] OMP tid: 1186721, core 2
(VllmWorker rank=0 pid=1186204) INFO 06-05 16:59:16 [cpu_worker.py:51] OMP tid: 1186722, core 3
(VllmWorker rank=0 pid=1186204) INFO 06-05 16:59:16 [cpu_worker.py:51] OMP tid: 1186724, core 4
(VllmWorker rank=0 pid=1186204) INFO 06-05 16:59:16 [cpu_worker.py:51] OMP tid: 1186725, core 5
(VllmWorker rank=0 pid=1186204) INFO 06-05 16:59:16 [cpu_worker.py:51] OMP tid: 1186726, core 6
(VllmWorker rank=0 pid=1186204) INFO 06-05 16:59:16 [cpu_worker.py:51] OMP tid: 1186727, core 7
(VllmWorker rank=0 pid=1186204) INFO 06-05 16:59:16 [cpu_worker.py:51] OMP tid: 1186728, core 8
(VllmWorker rank=0 pid=1186204) INFO 06-05 16:59:16 [cpu_worker.py:51] OMP tid: 1186729, core 9
(VllmWorker rank=0 pid=1186204) INFO 06-05 16:59:16 [cpu_worker.py:51] OMP tid: 1186730, core 10
(VllmWorker rank=0 pid=1186204) INFO 06-05 16:59:16 [cpu_worker.py:51] OMP tid: 1186731, core 11
(VllmWorker rank=0 pid=1186204) INFO 06-05 16:59:16 [cpu_worker.py:51]
Given the auto bind results locally, this PR LGTM! Thanks for this improvement!
vllm/v1/worker/cpu_worker.py
Outdated
```python
rank_to_cpus = self.local_omp_cpuid
# Setup OpenMP thread affinity based on NUMA nodes automatically
world_size = self.vllm_config.parallel_config.world_size
from importlib import util
```
I think there is no need to lazy-import importlib.
Do we want to remove the numa and psutil checks, assuming they are installed via cpu.txt?
I prefer to keep the numa and psutil checks here, since we don't have them installed for macOS.
I meant we can directly import importlib.util at the top level :)
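For illustration, a minimal sketch of that top-level style with the availability check kept (find_spec is the standard way to probe for a module without importing it; the HAS_* flag names are just for this sketch):

```python
from importlib import util  # top-level import, no lazy import needed

# Probe for the optional deps; they come from the CPU requirements file
# on Linux but are absent on macOS.
HAS_NUMA = util.find_spec("numa") is not None
HAS_PSUTIL = util.find_spec("psutil") is not None

if HAS_NUMA and HAS_PSUTIL:
    from numa import info  # safe to import now
    import psutil
```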
Thanks for the input. Addressed it accordingly.
vllm/worker/cpu_worker.py
Outdated
```python
rank_to_cpus = self.local_omp_cpuid
# Setup OpenMP thread affinity based on NUMA nodes automatically
world_size = self.vllm_config.parallel_config.world_size
from importlib import util
```
ditto.
Hi @DarkLight1337 @bigPYJ1151, if the PR looks good to you, please help to approve.
Signed-off-by: Tsai, Louie <louie.tsai@intel.com>
```diff
@@ -208,6 +208,9 @@ def check_and_update_config(cls, vllm_config: VllmConfig) -> None:
     # Disable torch async compiling which won't work with daemonic processes
     os.environ["TORCHINDUCTOR_COMPILE_THREADS"] = "1"
+
+    # Share the cpusets list among ranks by spawning process instead
+    os.environ["VLLM_WORKER_MULTIPROC_METHOD"] = "spawn"
```
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
I think we set this twice; it duplicates L199. This may be due to a rebase not auto-merging this change.
* [Bugfix] disable processor cache (vllm-project#19068) Signed-off-by: raushan <raushan@huggingface.co>
* [Doc] Improve the Pull Request template with key components (vllm-project#19086) Signed-off-by: Lu Fang <lufang@fb.com>
* [Misc] Add missing `_Backend` enums (vllm-project#19081) Signed-off-by: nicklucche <nlucches@redhat.com>
* [Misc] fix: add missing best_of param validation (vllm-project#18555) Signed-off-by: googs1025 <googs1025@gmail.com>
* [Misc] Add SPDX-FileCopyrightText (vllm-project#19100) Signed-off-by: simon-mo <simon.mo@hey.com>
* [Doc] Readme standardization (vllm-project#18695) Co-authored-by: Soren Dreano <soren@numind.ai>
* [doc] update docker version (vllm-project#19074) Signed-off-by: reidliu41 <reid201711@gmail.com> Co-authored-by: reidliu41 <reid201711@gmail.com>
* [Kernel] DeepEP dispatch-combine kernel integration (vllm-project#18434) Signed-off-by: Varun <vsundarr@redhat.com> Co-authored-by: Varun Sundar Rabindranath <vsundarr@redhat.com>
* [V1] Support cross-layer KV sharing (vllm-project#18212) Signed-off-by: Yong Hoon Shin <yhshin@meta.com>
* [Perf] Tune `scaled_fp8_quant` by increasing vectorization (vllm-project#18844) Signed-off-by: mgoin <mgoin64@gmail.com>
* Fix interaction between `Optional` and `Annotated` in CLI typing (vllm-project#19093) Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com> Co-authored-by: Yikun Jiang <yikun@apache.org>
* [v1] Re-init input batch for multiple kv cache groups (vllm-project#18654) Signed-off-by: Chen Zhang <zhangch99@outlook.com>
* [V1][Spec Decode][Ngram] 1.35x gain -> 1.95x gain on InstructCoder with prompt fix (vllm-project#18971)
* [Bugfix] get_num_blocks_to_allocate with null_block (vllm-project#19031) Signed-off-by: Chen Zhang <zhangch99@outlook.com>
* [Bugfix]: Fix the incompatibility issue with tool_choice 'required' when Thinking is enabled (vllm-project#19075) Signed-off-by: chaunceyjiang <chaunceyjiang@gmail.com>
* [Bugfix][P/D] Fix Prefix Cache Bug (vllm-project#18411) Signed-off-by: nicklucche <nlucches@redhat.com> Co-authored-by: Robert Shaw <114415538+robertgshaw2-redhat@users.noreply.github.com>
* [Bugfix] Max concurrency estimation and check_enough_kv_cache_memory for models with sliding window layers (vllm-project#19029) Signed-off-by: Chen Zhang <zhangch99@outlook.com>
* feat: add data parallel rank to KVEventBatch (vllm-project#18925)
* [Misc] Fix path and python alias errors in disagg_prefill examples (vllm-project#18919)
* [Docs] Add developer doc about CI failures (vllm-project#18782) Signed-off-by: Russell Bryant <rbryant@redhat.com> Co-authored-by: Mark McLoughlin <markmc@redhat.com> Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>
* [CPU] V1 support for the CPU backend (vllm-project#16441)
* [Core] Cast multimodal input in hf processor (vllm-project#18862) Signed-off-by: Lukas Geiger <lukas.geiger94@gmail.com>
* [KERNEL] Sampler. CUDA kernel for applying repetition penalty (vllm-project#18437)
* [Cleanup][v1]: remove guided-decoding-backend for example (vllm-project#19059) Signed-off-by: calvin chen <120380290@qq.com>
* [NVIDIA] Add Cutlass MLA backend (vllm-project#17625)
* [Bugfix] Fix FA3 full cuda graph correctness (vllm-project#19106) Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
* Fix vllm-project#19130 (vllm-project#19132) Signed-off-by: 汪志鹏 <wangzhipeng628@gmail.com>
* [TPU] Skip hanging tests (vllm-project#19115) Signed-off-by: Siyuan Liu <lsiyuan@google.com>
* Fix ValueError: Missing value for tag key(s): model_name,engine. (vllm-project#19113) Signed-off-by: Seiji Eicher <seiji@anyscale.com>
* [Misc] Add packages for benchmark as extra dependency (vllm-project#19089) Signed-off-by: Isotr0py <2037008807@qq.com>
* Improve the output precision of embedding models (vllm-project#19092)
* [CI/Build][Bugfix] Ensure compatibility with transformers 4.52 (vllm-project#18678) Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* Add DeepSeek-R1-0528 function call chat template (vllm-project#18874) Signed-off-by: 许文卿 <xwq391974@alibaba-inc.com>
* Sm100 blockwise fp8 swap ab (vllm-project#18564)
* [Doc] Update V1 Guide for embedding models (vllm-project#19141) Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* Allow AsyncLLMEngine.generate to target a specific DP rank (vllm-project#19102) Signed-off-by: Jon Swenson <jmswen@gmail.com>
* [Bugfix][EP+DP] Fix internode check (vllm-project#19112) Signed-off-by: Tyler Michael Smith <tysmith@redhat.com>
* [Perf] Tunings for SM100 FP8 CUTLASS kernel (vllm-project#18778) Signed-off-by: mgoin <mgoin64@gmail.com>
* [TPU] Update dynamo dump file name in compilation test (vllm-project#19108) Signed-off-by: Siyuan Liu <lsiyuan@google.com>
* [Bugfix] fix v1 cpu worker fails on macOS (vllm-project#19121)
* [Kernel] Integrate batched/masked deepgemm kernel (vllm-project#19111) Signed-off-by: Varun <vsundarr@redhat.com> Co-authored-by: Varun <vsundarr@redhat.com>
* [Misc] refactor: simplify EngineCoreClient.make_async_mp_client in AsyncLLM (vllm-project#18817) Signed-off-by: googs1025 <googs1025@gmail.com>
* [P/D] Heterogeneous TP (vllm-project#18833) Signed-off-by: nicklucche <nlucches@redhat.com>
* [doc] small fix (vllm-project#19167) Signed-off-by: reidliu41 <reid201711@gmail.com> Co-authored-by: reidliu41 <reid201711@gmail.com>
* [Bugfix][Nixl] Fix full prefix cache hit bug (vllm-project#18632) Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> Signed-off-by: Nick Hill <nhill@redhat.com> Co-authored-by: Nick Hill <nhill@redhat.com>
* [Bugfix] Fix port handling in make_zmq_path (vllm-project#19117)
* [Torch Nightly] add missing dependency (vllm-project#18770) Signed-off-by: Yang Wang <elainewy@meta.com>
* Handle non-serializable objects when dumping benchmark results (vllm-project#19114)
* [BugFix][Minor] Fix full cuda graph bug when max_num_seqs < 512 (vllm-project#19171) Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
* [Bugfix]: Fix the incompatibility issue with stream when Thinking is disabled (vllm-project#19135) Signed-off-by: chaunceyjiang <chaunceyjiang@gmail.com>
* [Build] Annotate wheel and container path for release workflow (vllm-project#19162) Signed-off-by: simon-mo <simon.mo@hey.com> Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* [Misc] Remove unnecessary fallback to prefill-decode attention (vllm-project#19138) Signed-off-by: vllmellm <vllm.ellm@embeddedllm.com>
* [Misc] Do not override NCCL_CUMEM_ENABLE if set explicitly (vllm-project#19105) Signed-off-by: 22quinn <33176974+22quinn@users.noreply.github.com>
* [Frontend] improve vllm run-batch --help display (vllm-project#19187) Signed-off-by: reidliu41 <reid201711@gmail.com> Co-authored-by: reidliu41 <reid201711@gmail.com>
* [Bugfix] properly catch PIL-related errors for vision models when incorrect data urls are provided (vllm-project#19202) Signed-off-by: Guillaume Calmettes <gcalmettes@scaleway.com>
* [mistral_common] Add v11 tokenizer (vllm-project#19193) Signed-off-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Add H20-3e fused MoE kernel tuning configs for DeepSeek-R1/V3 (vllm-project#19205)
* [Hardware][NVIDIA] FP4 MoE kernel optimization (vllm-project#19110) Signed-off-by: Chiyue Wei <chiyuew@nvidia.com> Co-authored-by: Chiyue Wei <chiyuew@nvidia.com>
* [MISC][Bugfix] Use less CPU when message queue has been empty for some time (vllm-project#16226) Signed-off-by: Povilas Kanapickas <povilas@radix.lt>
* [P/D][NixlConnector] Enable FlashInfer backend (vllm-project#19090)
* [Quantization] Skip Fp4 Test for `compressed-tensors` (vllm-project#19217)
* [V1] Use FlashInfer by default on Blackwell GPUs (vllm-project#19118)
* [Model] NemotronH support (vllm-project#18863) Signed-off-by: Luis Vega <2478335+vegaluisjose@users.noreply.github.com> Co-authored-by: Luis Vega <2478335+vegaluisjose@users.noreply.github.com>
* Fix AOPerModuleConfig name changes (vllm-project#18869) Signed-off-by: Jerry Zhang <jerryzh168@gmail.com>
* [Bugfix] Fix EAGLE vocab embedding construction for Llama 70B (vllm-project#19033) Signed-off-by: Benjamin Chislett <benjamin.chislett@centml.ai>
* [v1] Hybrid Memory Allocator (vllm-project#17996) Signed-off-by: Chen Zhang <zhangch99@outlook.com>
* [TPU] update torch_xla pin (vllm-project#19231) Signed-off-by: Chengji Yao <chengjiyao@google.com>
* Support allowed_token_ids in ChatCompletionRequest (vllm-project#19143) Signed-off-by: Xu Song <xusong.vip@gmail.com>
* [Chore] update CODEOWNERS (vllm-project#19247) Signed-off-by: Aaron Pham <contact@aarnphm.xyz>
* [v1][P/D] Fix an edge case in kv cache schedule (vllm-project#19182) Co-authored-by: jinghui <jinghui@fb.com>
* [TPU] fix kv cache dtype in model runner (vllm-project#19244) Signed-off-by: Chengji Yao <chengjiyao@google.com>
* [Quantization] Bump compressed-tensors version; update NVFP4A16 test model (vllm-project#19224) Signed-off-by: Dipika Sikka <dipikasikka1@gmail.com>
* [Docs] Improve V1 KVConnector interface documentation (vllm-project#19172) Signed-off-by: Nick Hill <nhill@redhat.com>
* Fix CompilationConfig repr (vllm-project#19091) Signed-off-by: rzou <zou3519@gmail.com>
* Unit Test for run_dp_sharded_vision_model (vllm-project#19103) Signed-off-by: Siqi Yan <siqi@meta.com> Co-authored-by: Siqi Yan <siqi@meta.com>
* [Model] Optimize nemotron_h implementation (vllm-project#19249) Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
* [Core] Raise when non-multi-instance DP clients target a DP rank (vllm-project#19227) Signed-off-by: Jon Swenson <jmswen@gmail.com>
* improve logits bias (vllm-project#19041)
* Fixed ppc build when it runs on non-RHEL based Linux distros (vllm-project#18422) Signed-off-by: Nishidha Panpaliya <nishidha.panpaliya@partner.ibm.com> Signed-off-by: Md. Shafi Hussain <Md.Shafi.Hussain@ibm.com> Signed-off-by: npanpaliya <nishidha.panpaliya@partner.ibm.com> Co-authored-by: Md. Shafi Hussain <Md.Shafi.Hussain@ibm.com>
* [BugFix] Fix MultiConnector test after HMA changes (vllm-project#19291) Signed-off-by: Nick Hill <nhill@redhat.com>
* [Bugfix][Core] Update cancellation logic in `generate()` to handle Generator exits (vllm-project#19225) Co-authored-by: Adolfo Victoria <adovi@meta.com>
* [Core] Fix abrupt request abort (vllm-project#18485) Signed-off-by: nicklucche <nlucches@redhat.com> Signed-off-by: Nick Hill <nhill@redhat.com> Co-authored-by: Nick Hill <nhill@redhat.com>
* [BugFix] Fix tpu_model_runner block_id concatenation (vllm-project#19228) Signed-off-by: Nick Hill <nhill@redhat.com>
* [Misc][Tools][Benchmark] Fix and improve auto tune script (vllm-project#19163) Signed-off-by: Chenyaaang <chenyangli@google.com>
* [Build][ROCm] Update Dockerfile.rocm (vllm-project#19296) Signed-off-by: Alexei V. Ivanov <alexei.ivanov@amd.com>
* [Easy][Test] Simplify test_function_tool_use with multiple parametrizes (vllm-project#19269) Signed-off-by: Lu Fang <lufang@fb.com>
* [Kernel] Integrate CUTLASS MoE kernel with PPLX (vllm-project#18762) Signed-off-by: ElizaWszola <ewszola@redhat.com> Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com> Co-authored-by: Tyler Michael Smith <tyler@neuralmagic.com>
* [TPU][Test] Add script to run benchmark on TPU for buildkite (vllm-project#19039) Signed-off-by: Qiliang Cui <derrhein@gmail.com>
* [CI][PowerPC] Use a more appropriate way to select testcase in tests/models/language/pooling/test_embedding.py (vllm-project#19253) Signed-off-by: Aaruni Aggarwal <aaruniagg@gmail.com>
* Add FlexAttention to V1 (vllm-project#16078) Signed-off-by: drisspg <drisspguessous@gmail.com>
* [Misc] refactor context extension (vllm-project#19246) Signed-off-by: reidliu41 <reid201711@gmail.com> Co-authored-by: reidliu41 <reid201711@gmail.com>
* [CI/Build] Improve Llama GGUF test robustness (vllm-project#19287) Signed-off-by: Isotr0py <2037008807@qq.com>
* [Nit][Benchmark] Fix example in benchmark_serving_structured_output.py (vllm-project#19311) Signed-off-by: Lifan Shen <lifans@meta.com>
* [AMD] Update compatible packaging version (vllm-project#19309) Signed-off-by: pramkuma <Pramendra.Kumar@amd.com>
* [BugFix][V1] Fix memory profiling bug (vllm-project#18974) Signed-off-by: luka <luka@neuralmagic.com>
* [Bugfix]: Fix TypeError: 'float' object cannot be interpreted as an integer (vllm-project#19283) Signed-off-by: chaunceyjiang <chaunceyjiang@gmail.com>
* [Bugfix] Re-enable use_cudagraph in vLLM v1 (vllm-project#19299) Signed-off-by: Richard Zou <zou3519@gmail.com>
* [Misc] Change tests/compile to use VLLM_V1 by default (vllm-project#19302) Signed-off-by: rzou <zou3519@gmail.com>
* Add H20-3e fused MoE kernel tuning configs for Qwen3-235B-A22B (vllm-project#19315) Signed-off-by: Xu Wenqing <xuwq1993@qq.com>
* [Hardware][POWER] Add IBM POWER11 Support to CPU Extension Detection (vllm-project#19082) Signed-off-by: Akash Kaothalkar <akash.kaothalkar@ibm.com> Co-authored-by: Akash Kaothalkar <akash.kaothalkar@ibm.com>
* [Quantization] Add compressed-tensors NVFP4 support (vllm-project#18312)
* [Multi Modal] Add an env var for message queue max chunk bytes (vllm-project#19242) Signed-off-by: yZhen <yZhen@fb.com> Co-authored-by: yZhen <yZhen@fb.com>
* [Bugfix] model_max_length should consider max_model_len in tokenizer_config (vllm-project#19201)
* [Deprecation] Remove `inputs` arg fallback in Engine classes (vllm-project#18799) Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [Misc] Add documentation update reminder to PR template (vllm-project#19289) Signed-off-by: Isotr0py <2037008807@qq.com>
* [Frontend] Remove unreachable code from llm.py (vllm-project#19288) Signed-off-by: KsuParkhamchuk <k.parkhamchuk@gmail.com>
* [Misc] Cleanup compilation tests (vllm-project#19343) Signed-off-by: rzou <zou3519@gmail.com>
* [doc] improve ci doc (vllm-project#19307) Signed-off-by: reidliu41 <reid201711@gmail.com> Co-authored-by: reidliu41 <reid201711@gmail.com>
* [Doc] Fix description in the Automatic Prefix Caching design doc (vllm-project#19333) Signed-off-by: cr7258 <chengzw258@163.com>
* [CI/Build] Fix LoRA test (vllm-project#19350) Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
* [Fix] Allow kernel compilation for CUDA capability 8.7 (vllm-project#19328) Signed-off-by: Conroy Cheers <conroy@corncheese.org>
* [CI] Introduce rules for llama auto-label (vllm-project#19323) Signed-off-by: Lu Fang <lufang@fb.com>
* [Docs] Fix a bullet list in usage/security.md (vllm-project#19358) Signed-off-by: windsonsea <haifeng.yao@daocloud.io>
* [full_graph] Fix query_start_loc padding (vllm-project#19321) Signed-off-by: Yinghai Lu <yinghai@thinkingmachines.ai>
* [v1] Add fp32 support to v1 engine through flex attn (vllm-project#19319) Signed-off-by: Isotr0py <2037008807@qq.com> Signed-off-by: Isotr0py <mozf@mail2.sysu.edu.cn>
* [Misc] Fixes and Optimizations for DeepEP + DeepGEMM combination. (vllm-project#19298) Signed-off-by: Varun <vsundarr@redhat.com> Co-authored-by: Varun <vsundarr@redhat.com>
* [Bugfix][Core] Prevent token lengths exceeding `max_model_len` in V0 (vllm-project#19348) Signed-off-by: 22quinn <33176974+22quinn@users.noreply.github.com>
* [Quantization] Bump compressed-tensors version (vllm-project#19295) Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>
* [Frontend] Make TIMEOUT_KEEP_ALIVE configurable through env var (vllm-project#18472) Signed-off-by: liusiqian <liusiqian@tal.com>
* [TPU] Fix KV cache sharing tests (vllm-project#19371)
* [HOT-FIX] Add `kv_sharing_target_layer_name` argument to cutlass_mla backend (vllm-project#19374) Signed-off-by: Pavani Majety <pmajety@nvidia.com>
* [Misc] Fix a config typo in disable_hybrid_kv_cache_manager configuration (vllm-project#19383) Signed-off-by: Siyuan Liu <lsiyuan@google.com>
* [V1] Reuse V0's memory_profiling util for gpu worker memory profiling (vllm-project#19312) Signed-off-by: Ye (Charlotte) Qi <yeq@meta.com>
* [Bugfix] Fix benchmark_moe.py (vllm-project#19016) Signed-off-by: Tianyu Guo <guoty9@mail2.sysu.edu.cn>
* Use xla flag to improve the quantized model performance (vllm-project#19303) Signed-off-by: Xiongfei Wei <isaacwxf23@gmail.com>
* Fix docs/mkdocs/hooks/remove_announcement.py (vllm-project#19382)
* [Frontend] Add tqdm_leave_pbar to control progress bar visibility (vllm-project#19357) Signed-off-by: reidliu41 <reid201711@gmail.com> Co-authored-by: reidliu41 <reid201711@gmail.com>
* [Core] Use tuple for kv cache group block ids (vllm-project#19175) Signed-off-by: Nick Hill <nhill@redhat.com>
* [Bugfix] Fix modelscope token passed in (vllm-project#19389) Signed-off-by: wangli <wangli858794774@gmail.com> Signed-off-by: Jee Jee Li <pandaleefree@gmail.com> Co-authored-by: Jee Jee Li <pandaleefree@gmail.com>
* [Core] Batch multi modal input using pinned memory (vllm-project#19169) Signed-off-by: Lukas Geiger <lukas.geiger94@gmail.com>
* Add security warning to bug report template (vllm-project#19365) Signed-off-by: Russell Bryant <rbryant@redhat.com> Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* [Misc] refactor neuron_multimodal and profiling (vllm-project#19397) Signed-off-by: reidliu41 <reid201711@gmail.com> Co-authored-by: reidliu41 <reid201711@gmail.com>
* Add clear documentation around the impact of debugging flag (vllm-project#19369) Signed-off-by: Anna Pendleton <pendleton@google.com>
* Automatically bind CPU OMP Threads of a rank to CPU ids of a NUMA node. (vllm-project#17930) Signed-off-by: Tsai, Louie <louie.tsai@intel.com> Co-authored-by: Li, Jiang <bigpyj64@gmail.com> (see the illustrative sketch after this commit message)
* Revert "[v1] Add fp32 support to v1 engine through flex attn" (vllm-project#19404)
* [BugFix][FlashInfer] Fix attention backend interface mismatch with unexpected keyword `use_irope` (vllm-project#19134) Signed-off-by: Yunqiu Guo <guorachel@meta.com>
* [BugFix][CPU] Fix CPU CI by ignoring collection of test_pixtral (vllm-project#19411) Signed-off-by: jiang.li <jiang1.li@intel.com>
* Simplify ep kernels installation (vllm-project#19412) Signed-off-by: youkaichao <youkaichao@gmail.com>
* [Misc] Slight improvement of the BNB (vllm-project#19418) Signed-off-by: Jee Jee Li <pandaleefree@gmail.com> Co-authored-by: Isotr0py <2037008807@qq.com> Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* [Docs] Note that alternative structured output backends are supported (vllm-project#19426) Signed-off-by: Russell Bryant <rbryant@redhat.com>
* [ROCm][V1] Adding ROCm to the list of platforms using V1 by default (vllm-project#19440) Signed-off-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com>
* [Model] use AutoWeightsLoader for commandr (vllm-project#19399) Signed-off-by: py-andy-c <pychen1017@gmail.com>
* Add H20-3e fused MoE kernel tuning configs for Qwen3-235B-A22B-FP8 (vllm-project#19401) Signed-off-by: 许文卿 <xwq391974@alibaba-inc.com>
* [BugFix] Allow use_cudagraph to work with dynamic VLLM_USE_V1 (vllm-project#19390) Signed-off-by: rzou <zou3519@gmail.com>
* [New Model]: Support Qwen3 Embedding & Reranker (vllm-project#19260)
* [BugFix] Fix docker build cpu-dev image error (vllm-project#19394) Signed-off-by: niu_he <carlton2tang@gmail.com>
* Fix test_max_model_len in tests/entrypoints/llm/test_generate.py (vllm-project#19451) Signed-off-by: Lu Fang <lufang@fb.com>
* [CI] Disable failing GGUF model test (vllm-project#19454) Signed-off-by: mgoin <mgoin64@gmail.com>
* [Misc] Remove unused `MultiModalHasher.hash_prompt_mm_data` (vllm-project#19422) Signed-off-by: Lukas Geiger <lukas.geiger94@gmail.com>
* Add fused MOE config for Qwen3 30B A3B on B200 (vllm-project#19455) Signed-off-by: Junhao Li <junhao@ubicloud.com>
* Fix Typo in Documentation and Function Name (vllm-project#19442)
* [ROCm] Add rules to automatically label ROCm related PRs (vllm-project#19405) Signed-off-by: Lu Fang <lufang@fb.com>
* [Kernel] Support deep_gemm for linear methods (vllm-project#19085) Signed-off-by: artetaout <lulala341@gmail.com>
* [Doc] Update V1 User Guide for Hardware and Models (vllm-project#19474) Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [Doc] Fix quantization link titles (vllm-project#19478) Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [Doc] Support "important" and "announcement" admonitions (vllm-project#19479) Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [Misc] Reduce warning message introduced in env_override (vllm-project#19476) Signed-off-by: Lu Fang <lufang@fb.com>
* Support non-string values in JSON keys from CLI (vllm-project#19471) Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* Add cache to cuda get_device_capability (vllm-project#19436) Signed-off-by: mgoin <mgoin64@gmail.com>
* Fix some typos (vllm-project#19475) Signed-off-by: ximing.wxm <ximing.wxm@antgroup.com> Co-authored-by: ximing.wxm <ximing.wxm@antgroup.com>
* Support no privileged mode on CPU for docker and kubernetes deployments (vllm-project#19241) Signed-off-by: Tsai, Louie <louie.tsai@intel.com>
* [Bugfix] Update the example code, make it work with the latest lmcache (vllm-project#19453) Signed-off-by: Runzhen Wang <wangrunzhen@gmail.com>
* [CI] Update FlashInfer to 0.2.6.post1 (vllm-project#19297) Signed-off-by: mgoin <mgoin64@gmail.com>
* [doc] fix "Other AI accelerators" getting started page (vllm-project#19457) Signed-off-by: David Xia <david@davidxia.com>
* [Misc] Fix misleading ROCm warning (vllm-project#19486) Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
* [Docs] Remove WIP features in V1 guide (vllm-project#19498) Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
* [Kernels] Add activation chunking logic to FusedMoEModularKernel (vllm-project#19168) Signed-off-by: Bill Nell <bnell@redhat.com>
* [AMD] [Quantization] Add override flag for attention dtype instead of using kv_cache_dtype trigger (vllm-project#17331) Signed-off-by: Randall Smith <Randall.Smith@amd.com>
* [UX] Add Feedback During CUDAGraph Capture (vllm-project#19501) Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com>
* [CI/Build] Fix torch nightly CI dependencies (vllm-project#19505) Signed-off-by: Richard Zou <zou3519@gmail.com>
* [CI] change spell checker from codespell to typos (vllm-project#18711) Signed-off-by: Andy Xie <andy.xning@gmail.com>
* [BugFix] Force registration of w8a8_block_fp8_matmul_deepgemm via lazy import (vllm-project#19514) Signed-off-by: Varun Sundar Rabindranath <vsundarr@redhat.com> Co-authored-by: Varun Sundar Rabindranath <vsundarr@redhat.com>
* Add Triton Fused MoE kernel config for E=16 on B200 (vllm-project#19518) Signed-off-by: Brayden Zhong <b8zhong@uwaterloo.ca>
* [Frontend] Improve error message in tool_choice validation (vllm-project#19239) Signed-off-by: 22quinn <33176974+22quinn@users.noreply.github.com>
* [BugFix] Work-around incremental detokenization edge case error (vllm-project#19449) Signed-off-by: Nick Hill <nhill@redhat.com>
* [BugFix] Handle missing sep_token for Qwen3-Reranker in Score API (vllm-project#19522) Signed-off-by: strutive07 <strutive07@gmail.com>
* [AMD][Kernel][BugFix] fix test_rocm_compressed_tensors_w8a8 for rocm (vllm-project#19509) Signed-off-by: Randall Smith <Randall.Smith@amd.com>
* Fix typo (vllm-project#19525) Signed-off-by: 2niuhe <carlton2tang@gmail.com>
* [Security] Prevent new imports of (cloud)pickle (vllm-project#18018) Signed-off-by: Russell Bryant <rbryant@redhat.com> Co-authored-by: Aaron Pham <Aaronpham0103@gmail.com>
* [Bugfix][V1] Allow manual FlashAttention for Blackwell (vllm-project#19492) Signed-off-by: mgoin <mgoin64@gmail.com>
* [Bugfix] Respect num-gpu-blocks-override in v1 (vllm-project#19503) Signed-off-by: Jon Swenson <jmswen@gmail.com>
* [Quantization] Improve AWQ logic (vllm-project#19431) Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
* [Doc] Add V1 column to supported models list (vllm-project#19523) Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
* [V1][NixlConnector] Drop `num_blocks` check (vllm-project#19532) Signed-off-by: NickLucche <nlucches@redhat.com>
* [Perf] Vectorize static / dynamic INT8 quant kernels (vllm-project#19233) Signed-off-by: yewentao256 <zhyanwentao@126.com>
* Fix TorchAOConfig skip layers (vllm-project#19265) Signed-off-by: mobicham <hicham@mobiuslabs.com>
* [torch.compile][ROCm] Fuse quantization onto attention using a torch.compile pass (vllm-project#16756) Signed-off-by: Luka Govedič <lgovedic@redhat.com> Co-authored-by: Sage Moore <sage@neuralmagic.com>
* [doc] Make top navigation sticky (vllm-project#19540) Signed-off-by: reidliu41 <reid201711@gmail.com> Co-authored-by: reidliu41 <reid201711@gmail.com>
* [Spec Decode][Benchmark] Generalize spec decode offline benchmark to more methods and datasets (vllm-project#18847)
* [Misc] Turn MOE_DP_CHUNK_SIZE into an env var (vllm-project#19506)
* [Bugfix] Enforce contiguous input for dynamic_per_token FP8/INT8 quant (vllm-project#19452) Signed-off-by: mgoin <mgoin64@gmail.com>
* [Doc] Unify structured outputs examples (vllm-project#18196) Signed-off-by: Aaron Pham <contact@aarnphm.xyz>
* [V1] Resolve failed concurrent structured output requests (vllm-project#19565) Signed-off-by: Russell Bryant <rbryant@redhat.com>
* Revert "[Build/CI] Add tracing deps to vllm container image (vllm-project#15224)" (vllm-project#19378)
* [BugFix] : Fix Batched DeepGemm Experts (vllm-project#19515) Signed-off-by: Varun Sundar Rabindranath <vsundarr@redhat.com> Co-authored-by: Varun Sundar Rabindranath <vsundarr@redhat.com>
* [Bugfix] Fix EAGLE vocab embedding for multimodal target model (vllm-project#19570) Signed-off-by: qizixi <qizixi@meta.com>
* [Doc] uses absolute links for structured outputs (vllm-project#19582) Signed-off-by: Aaron Pham <contact@aarnphm.xyz>
* [doc] fix incorrect link (vllm-project#19586) Signed-off-by: reidliu41 <reid201711@gmail.com> Co-authored-by: reidliu41 <reid201711@gmail.com>
* [Misc] Correct broken docs link (vllm-project#19553) Signed-off-by: Zerohertz <ohg3417@gmail.com>
* [CPU] Refine default config for the CPU backend (vllm-project#19539) Signed-off-by: jiang1.li <jiang1.li@intel.com>
* [Fix] bump mistral common to support magistral (vllm-project#19533) Signed-off-by: 汪志鹏 <wangzhipeng628@gmail.com>
* [Fix] The zip function in Python 3.9 does not have the strict argument (vllm-project#19549) Signed-off-by: 汪志鹏 <wangzhipeng628@gmail.com>
* use base version for version comparison (vllm-project#19587) Signed-off-by: Boyuan Feng <boyuan@meta.com>
* [torch.compile] reorganize the cache directory to support compiling multiple models (vllm-project#19064) Signed-off-by: youkaichao <youkaichao@gmail.com>
* [BugFix] Honor `enable_caching` in connector-delayed kvcache load case (vllm-project#19435) Signed-off-by: Nick Hill <nhill@redhat.com>
* [Model] Fix minimax model cache & lm_head precision (vllm-project#19592) Signed-off-by: qingjun <qingjun@minimaxi.com>
* [Refactor] Remove unused variables in `moe_permute_unpermute_kernel.inl` (vllm-project#19573) Signed-off-by: yewentao256 <zhyanwentao@126.com>
* [doc][mkdocs] fix the duplicate Supported features sections in GPU docs (vllm-project#19606) Signed-off-by: reidliu41 <reid201711@gmail.com> Co-authored-by: reidliu41 <reid201711@gmail.com>
* [CUDA] Enable full cudagraph for FlashMLA (vllm-project#18581) Signed-off-by: luka <luka@neuralmagic.com>
* [Doc] Add troubleshooting section to k8s deployment (vllm-project#19377) Signed-off-by: Anna Pendleton <pendleton@google.com>
* [torch.compile] Use custom ops when use_inductor=False (vllm-project#19618)
* Adding "AMD: Multi-step Tests" to amdproduction. (vllm-project#19508) Signed-off-by: Yida Wu <yidawu@alumni.cmu.edu> Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>
* [BugFix] Fix DP Coordinator incorrect debug log message (vllm-project#19624) Signed-off-by: Nick Hill <nhill@redhat.com>
* [V1][Metrics] Deprecate metrics with gpu_ prefix for non GPU specific metrics. (vllm-project#18354) Signed-off-by: Saheli Bhattacharjee <saheli@krai.ai>
* [Bugfix] Fix the speculative decoding test by setting the target dtype (vllm-project#19633)
* [Misc] Modularize CLI Argument Parsing in Benchmark Scripts (vllm-project#19593) Signed-off-by: reidliu41 <reid201711@gmail.com> Co-authored-by: reidliu41 <reid201711@gmail.com>
* [Bugfix] Fix auto dtype casting for BatchFeature (vllm-project#19316) Signed-off-by: Isotr0py <2037008807@qq.com> Signed-off-by: Isotr0py <mozf@mail2.sysu.edu.cn>
* [Hardware][NVIDIA][kernel] Fp4 MOE quant kernel optimization (vllm-project#19500)
* Only build CUTLASS MoE kernels on Hopper (vllm-project#19648)
* [Bugfix] Don't attempt to use triton if no driver is active (vllm-project#19561)
* [Fix] Convert kv_transfer_config from dict to KVTransferConfig (vllm-project#19262)
* [Perf] Further tunings for SM100 FP8 CUTLASS kernel (vllm-project#19566)
* [Bugfix][2/n] Fix speculative decoding CI - Fix test_ngram_e2e_greedy_correctness (vllm-project#19644)
* [Kernel] Raise verbose error and consolidate `num_heads/num_kv_heads` divisibility check (vllm-project#19339) Signed-off-by: 22quinn <33176974+22quinn@users.noreply.github.com>
* [Benchmark] Refactor benchmark script for fp8 & int8 (vllm-project#19627) Signed-off-by: yewentao256 <zhyanwentao@126.com>
* Enable prefix caching with full cuda graphs (vllm-project#19617) Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
* [CI/Build] Fix torch nightly CI dependencies part 2 (vllm-project#19589)
* [Misc] Remove duplicate multiproc method setting for CPU platform (vllm-project#19649) Signed-off-by: Isotr0py <2037008807@qq.com>
* [MISC] Remove unused variables in C++ (vllm-project#19609) Signed-off-by: Lu Fang <lufang@fb.com>
* [Bugfix][Core] Prefix caching causes incorrect outputs due to outdated ComputedBlocksTracker (vllm-project#18957) Signed-off-by: 刘全 <quan.liu2@dbappsecurity.com.cn> Co-authored-by: 刘全 <quan.liu2@dbappsecurity.com.cn>
* [Misc][Frontend] passthrough `bad_words` (vllm-project#19564) Signed-off-by: Francesco Bertolotti <francesco.bertolotti@igenius.ai> Co-authored-by: Francesco Bertolotti <francesco.bertolotti@igenius.ai> Co-authored-by: Aaron Pham <Aaronpham0103@gmail.com>
* [Misc] Fix skipped max-model-len validation when deriving max model length from tokenizer config (vllm-project#19660) Signed-off-by: Ye (Charlotte) Qi <yeq@meta.com>
* [TPU] support attention head dim smaller than 128 (vllm-project#19620) Signed-off-by: Chengji Yao <chengjiyao@google.com> Co-authored-by: mgoin <mgoin64@gmail.com>
* [MISC] typo fix (vllm-project#19672) Signed-off-by: Andy Xie <andy.xning@gmail.com>
* [CI] Add mteb testing for rerank models (vllm-project#19344)
* [Docs] Move multiproc doc to v1 dir (vllm-project#19651) Signed-off-by: Russell Bryant <rbryant@redhat.com>
* [Kernel] GGUF MMVQ kernel for multiple input vectors (vllm-project#18754) Signed-off-by: SzymonOzog <szymon.ozog@gmail.com>
* [BugFix] Don't catch BaseException when dumping execute_model errors (vllm-project#19626) Signed-off-by: Nick Hill <nhill@redhat.com>
* [DOC] Add reasoning capability to vLLM streamlit code (vllm-project#19557)
* [Feature]: Allow for Granite MoE Hybrid models with _only_ shared experts. (vllm-project#19652) Signed-off-by: Shawn Tan <shawntan@ibm.com>
* [Bugfix] Fix TP inference for Flex attention backend (vllm-project#19657) Signed-off-by: Isotr0py <2037008807@qq.com>
* [MISC] bump huggingface_hub pkg to 0.33.0 (vllm-project#19547) Signed-off-by: Andy Xie <andy.xning@gmail.com>
* [Bugfix] fix missing 'finish_reason': null in streaming chat (vllm-project#19662) Signed-off-by: chaunceyjiang <chaunceyjiang@gmail.com>
* [Kernels] Use empty for modular MoE workspaces (vllm-project#19667) Signed-off-by: Bill Nell <bnell@redhat.com>
* [Model] Add support for MiniMaxM1ForCausalLM (shares architecture with MiniMaxText01ForCausalLM) (vllm-project#19677) Signed-off-by: QscQ <qscqesze@gmail.com>
* [V1] Change return type on get_multimodal_embeddings() (vllm-project#19446) Signed-off-by: Russell Bryant <rbryant@redhat.com>
* fix Signed-off-by: Amog Kamsetty <amogkamsetty@gmail.com>
* remove logging Signed-off-by: Amog Kamsetty <amogkamsetty@gmail.com>
---------
Signed-off-by: raushan <raushan@huggingface.co> Signed-off-by: Lu Fang <lufang@fb.com> Signed-off-by: nicklucche <nlucches@redhat.com> Signed-off-by: googs1025 <googs1025@gmail.com> Signed-off-by: simon-mo <simon.mo@hey.com> Signed-off-by: reidliu41 <reid201711@gmail.com> Signed-off-by: Varun <vsundarr@redhat.com> Signed-off-by: Yong Hoon Shin <yhshin@meta.com> Signed-off-by: mgoin <mgoin64@gmail.com> Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com> Signed-off-by: Chen Zhang <zhangch99@outlook.com> Signed-off-by: chaunceyjiang <chaunceyjiang@gmail.com> Signed-off-by: Russell Bryant <rbryant@redhat.com> Signed-off-by: Lukas Geiger <lukas.geiger94@gmail.com> Signed-off-by: calvin chen <120380290@qq.com> Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu> Signed-off-by: 汪志鹏 <wangzhipeng628@gmail.com> Signed-off-by: Siyuan Liu <lsiyuan@google.com> Signed-off-by: Seiji Eicher <seiji@anyscale.com> Signed-off-by: Isotr0py <2037008807@qq.com> Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk> Signed-off-by: 许文卿 <xwq391974@alibaba-inc.com> Signed-off-by: Jon Swenson <jmswen@gmail.com> Signed-off-by: Tyler Michael Smith <tysmith@redhat.com> Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> Signed-off-by: Nick Hill <nhill@redhat.com> Signed-off-by: Yang Wang <elainewy@meta.com> Signed-off-by: vllmellm <vllm.ellm@embeddedllm.com> Signed-off-by: 22quinn <33176974+22quinn@users.noreply.github.com> Signed-off-by: Guillaume Calmettes <gcalmettes@scaleway.com> Signed-off-by: Patrick von Platen <patrick.v.platen@gmail.com> Signed-off-by: Chiyue Wei <chiyuew@nvidia.com> Signed-off-by: Povilas Kanapickas <povilas@radix.lt> Signed-off-by: Luis Vega <2478335+vegaluisjose@users.noreply.github.com> Signed-off-by: Jerry Zhang <jerryzh168@gmail.com> Signed-off-by: Benjamin Chislett <benjamin.chislett@centml.ai> Signed-off-by: Chengji Yao <chengjiyao@google.com> Signed-off-by: Xu Song <xusong.vip@gmail.com> Signed-off-by: Aaron Pham <contact@aarnphm.xyz> Signed-off-by: Dipika Sikka <dipikasikka1@gmail.com> Signed-off-by: rzou <zou3519@gmail.com> Signed-off-by: Siqi Yan <siqi@meta.com> Signed-off-by: Jee Jee Li <pandaleefree@gmail.com> Signed-off-by: Nishidha Panpaliya <nishidha.panpaliya@partner.ibm.com> Signed-off-by: Md. Shafi Hussain <Md.Shafi.Hussain@ibm.com> Signed-off-by: npanpaliya <nishidha.panpaliya@partner.ibm.com> Signed-off-by: Chenyaaang <chenyangli@google.com> Signed-off-by: Alexei V.
Ivanov <alexei.ivanov@amd.com> Signed-off-by: ElizaWszola <ewszola@redhat.com> Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com> Signed-off-by: Qiliang Cui <derrhein@gmail.com> Signed-off-by: Aaruni Aggarwal <aaruniagg@gmail.com> Signed-off-by: drisspg <drisspguessous@gmail.com> Signed-off-by: Lifan Shen <lifans@meta.com> Signed-off-by: pramkuma <Pramendra.Kumar@amd.com> Signed-off-by: luka <luka@neuralmagic.com> Signed-off-by: Richard Zou <zou3519@gmail.com> Signed-off-by: Xu Wenqing <xuwq1993@qq.com> Signed-off-by: Akash Kaothalkar <akash.kaothalkar@ibm.com> Signed-off-by: yZhen <yZhen@fb.com> Signed-off-by: KsuParkhamchuk <k.parkhamchuk@gmail.com> Signed-off-by: cr7258 <chengzw258@163.com> Signed-off-by: Conroy Cheers <conroy@corncheese.org> Signed-off-by: windsonsea <haifeng.yao@daocloud.io> Signed-off-by: Yinghai Lu <yinghai@thinkingmachines.ai> Signed-off-by: Isotr0py <mozf@mail2.sysu.edu.cn> Signed-off-by: Kyle Sayers <kylesayrs@gmail.com> Signed-off-by: liusiqian <liusiqian@tal.com> Signed-off-by: Pavani Majety <pmajety@nvidia.com> Signed-off-by: Ye (Charlotte) Qi <yeq@meta.com> Signed-off-by: Tianyu Guo <guoty9@mail2.sysu.edu.cn> Signed-off-by: Xiongfei Wei <isaacwxf23@gmail.com> Signed-off-by: wangli <wangli858794774@gmail.com> Signed-off-by: Anna Pendleton <pendleton@google.com> Signed-off-by: Tsai, Louie <louie.tsai@intel.com> Signed-off-by: Yunqiu Guo <guorachel@meta.com> Signed-off-by: jiang.li <jiang1.li@intel.com> Signed-off-by: youkaichao <youkaichao@gmail.com> Signed-off-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com> Signed-off-by: py-andy-c <pychen1017@gmail.com> Signed-off-by: niu_he <carlton2tang@gmail.com> Signed-off-by: Junhao Li <junhao@ubicloud.com> Signed-off-by: artetaout <lulala341@gmail.com> Signed-off-by: ximing.wxm <ximing.wxm@antgroup.com> Signed-off-by: Runzhen Wang <wangrunzhen@gmail.com> Signed-off-by: David Xia <david@davidxia.com> Signed-off-by: Bill Nell <bnell@redhat.com> Signed-off-by: Randall Smith <Randall.Smith@amd.com> Signed-off-by: Andy Xie <andy.xning@gmail.com> Signed-off-by: Varun Sundar Rabindranath <vsundarr@redhat.com> Signed-off-by: Brayden Zhong <b8zhong@uwaterloo.ca> Signed-off-by: strutive07 <strutive07@gmail.com> Signed-off-by: 2niuhe <carlton2tang@gmail.com> Signed-off-by: NickLucche <nlucches@redhat.com> Signed-off-by: yewentao256 <zhyanwentao@126.com> Signed-off-by: mobicham <hicham@mobiuslabs.com> Signed-off-by: Luka Govedič <lgovedic@redhat.com> Signed-off-by: qizixi <qizixi@meta.com> Signed-off-by: Zerohertz <ohg3417@gmail.com> Signed-off-by: jiang1.li <jiang1.li@intel.com> Signed-off-by: Boyuan Feng <boyuan@meta.com> Signed-off-by: qingjun <qingjun@minimaxi.com> Signed-off-by: Yida Wu <yidawu@alumni.cmu.edu> Signed-off-by: Saheli Bhattacharjee <saheli@krai.ai> Signed-off-by: 刘全 <quan.liu2@dbappsecurity.com.cn> Signed-off-by: Francesco Bertolotti <francesco.bertolotti@igenius.ai> Signed-off-by: SzymonOzog <szymon.ozog@gmail.com> Signed-off-by: Shawn Tan <shawntan@ibm.com> Signed-off-by: QscQ <qscqesze@gmail.com> Signed-off-by: Amog Kamsetty <amogkamsetty@gmail.com> Co-authored-by: Raushan Turganbay <raushan.turganbay@alumni.nu.edu.kz> Co-authored-by: Lu Fang <30275821+houseroad@users.noreply.github.com> Co-authored-by: Nicolò Lucchesi <nlucches@redhat.com> Co-authored-by: CYJiang <86391540+googs1025@users.noreply.github.com> Co-authored-by: Simon Mo <simon.mo@hey.com> Co-authored-by: SorenDreano <71752785+SorenDreano@users.noreply.github.com> Co-authored-by: Soren Dreano <soren@numind.ai> Co-authored-by: Reid 
<61492567+reidliu41@users.noreply.github.com> Co-authored-by: reidliu41 <reid201711@gmail.com> Co-authored-by: Varun Sundar Rabindranath <varunsundar08@gmail.com> Co-authored-by: Varun Sundar Rabindranath <vsundarr@redhat.com> Co-authored-by: Yong Hoon Shin <48474650+sarckk@users.noreply.github.com> Co-authored-by: Michael Goin <mgoin64@gmail.com> Co-authored-by: Harry Mellor <19981378+hmellor@users.noreply.github.com> Co-authored-by: Yikun Jiang <yikun@apache.org> Co-authored-by: Chen Zhang <zhangch99@outlook.com> Co-authored-by: Ekagra Ranjan <3116519+ekagra-ranjan@users.noreply.github.com> Co-authored-by: Chauncey <chaunceyjiang@gmail.com> Co-authored-by: Robert Shaw <114415538+robertgshaw2-redhat@users.noreply.github.com> Co-authored-by: Yan Ru Pei <yanrpei@gmail.com> Co-authored-by: Jiaxin Shan <seedjeffwan@gmail.com> Co-authored-by: Russell Bryant <rbryant@redhat.com> Co-authored-by: Mark McLoughlin <markmc@redhat.com> Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com> Co-authored-by: Li, Jiang <jiang1.li@intel.com> Co-authored-by: Lukas Geiger <lukas.geiger94@gmail.com> Co-authored-by: Vadim Gimpelson <156319763+vadiklyutiy@users.noreply.github.com> Co-authored-by: Calvin Chen <45745657+calvin0327@users.noreply.github.com> Co-authored-by: Kaixi Hou <kaixih@nvidia.com> Co-authored-by: Woosuk Kwon <woosuk.kwon@berkeley.edu> Co-authored-by: 汪志鹏 <wangzhipeng628@gmail.com> Co-authored-by: Siyuan Liu <lsiyuan@google.com> Co-authored-by: Seiji Eicher <58963096+eicherseiji@users.noreply.github.com> Co-authored-by: Isotr0py <mozf@mail2.sysu.edu.cn> Co-authored-by: wang.yuqi <noooop@126.com> Co-authored-by: Cyrus Leung <tlleungac@connect.ust.hk> Co-authored-by: Xu Wenqing <121550081+Xu-Wenqing@users.noreply.github.com> Co-authored-by: Lain <fusiyuan2000@hotmail.com> Co-authored-by: jmswen <jmswen@users.noreply.github.com> Co-authored-by: Tyler Michael Smith <tyler@neuralmagic.com> Co-authored-by: Kebe <mail@kebe7jun.com> Co-authored-by: Nick Hill <nhill@redhat.com> Co-authored-by: Yang Wang <elainewy@meta.com> Co-authored-by: Huy Do <huydhn@gmail.com> Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> Co-authored-by: vllmellm <vllm.ellm@embeddedllm.com> Co-authored-by: 22quinn <33176974+22quinn@users.noreply.github.com> Co-authored-by: Guillaume Calmettes <gcalmettes@scaleway.com> Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> Co-authored-by: Chiyue Wei <92623189+dubcyfor3@users.noreply.github.com> Co-authored-by: Chiyue Wei <chiyuew@nvidia.com> Co-authored-by: Povilas Kanapickas <povilas@radix.lt> Co-authored-by: Dipika Sikka <dipikasikka1@gmail.com> Co-authored-by: Luis Vega <vegaluisjose@users.noreply.github.com> Co-authored-by: Luis Vega <2478335+vegaluisjose@users.noreply.github.com> Co-authored-by: Jerry Zhang <jerryzh168@gmail.com> Co-authored-by: Benjamin Chislett <benjamin.chislett@centml.ai> Co-authored-by: Chengji Yao <chengjiyao@google.com> Co-authored-by: Xu Song <xusong.vip@gmail.com> Co-authored-by: Aaron Pham <contact@aarnphm.xyz> Co-authored-by: Jinghui Zhang <jinghuizhang0804@gmail.com> Co-authored-by: jinghui <jinghui@fb.com> Co-authored-by: Richard Zou <zou3519@users.noreply.github.com> Co-authored-by: Siqi Yan <ysq0807@hotmail.com> Co-authored-by: Siqi Yan <siqi@meta.com> Co-authored-by: Jee Jee Li <pandaleefree@gmail.com> Co-authored-by: Yu Guo <82124926+yuguo68@users.noreply.github.com> Co-authored-by: Nishidha <nishidha.panpaliya@partner.ibm.com> Co-authored-by: Md. 
Shafi Hussain <Md.Shafi.Hussain@ibm.com> Co-authored-by: Adolfo Victoria <adolfokarim@gmail.com> Co-authored-by: Adolfo Victoria <adovi@meta.com> Co-authored-by: Chenyaaang <42742451+Chenyaaang@users.noreply.github.com> Co-authored-by: Alexei-V-Ivanov-AMD <156011006+Alexei-V-Ivanov-AMD@users.noreply.github.com> Co-authored-by: ElizaWszola <ewszola@redhat.com> Co-authored-by: QiliangCui <derrhein@gmail.com> Co-authored-by: Aaruni Aggarwal <47731267+AaruniAggarwal@users.noreply.github.com> Co-authored-by: Driss Guessous <32754868+drisspg@users.noreply.github.com> Co-authored-by: Lifans <draftbks@gmail.com> Co-authored-by: pramenku <7664080+pramenku@users.noreply.github.com> Co-authored-by: Luka Govedič <ProExpertProg@users.noreply.github.com> Co-authored-by: Akash kaothalkar <61960177+Akashcodes732@users.noreply.github.com> Co-authored-by: Akash Kaothalkar <akash.kaothalkar@ibm.com> Co-authored-by: jennyyyyzhen <47012288+jennyyyyzhen@users.noreply.github.com> Co-authored-by: yZhen <yZhen@fb.com> Co-authored-by: Kseniya Parkhamchuk <43078183+KsuParkhamchuk@users.noreply.github.com> Co-authored-by: Se7en <chengzw258@163.com> Co-authored-by: Conroy Cheers <conroy@corncheese.org> Co-authored-by: Michael Yao <haifeng.yao@daocloud.io> Co-authored-by: Yinghai Lu <yinghai@thinkingmachines.ai> Co-authored-by: Kyle Sayers <kylesayrs@gmail.com> Co-authored-by: liusiqian-tal <141730978+liusiqian-tal@users.noreply.github.com> Co-authored-by: Pavani Majety <pmajety@nvidia.com> Co-authored-by: Ye (Charlotte) Qi <yeq@meta.com> Co-authored-by: Tianyu Guo <guoty9@mail2.sysu.edu.cn> Co-authored-by: XiongfeiWei <isaacwxf23@gmail.com> Co-authored-by: Li Wang <wangli858794774@gmail.com> Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> Co-authored-by: Anna Pendleton <pendleton@google.com> Co-authored-by: Louie Tsai <louie.tsai@intel.com> Co-authored-by: Li, Jiang <bigpyj64@gmail.com> Co-authored-by: Rachel Guo <35738743+YUNQIUGUO@users.noreply.github.com> Co-authored-by: youkaichao <youkaichao@gmail.com> Co-authored-by: Isotr0py <2037008807@qq.com> Co-authored-by: Gregory Shtrasberg <156009573+gshtras@users.noreply.github.com> Co-authored-by: py-andy-c <37168711+py-andy-c@users.noreply.github.com> Co-authored-by: niu_he <carlton2tang@gmail.com> Co-authored-by: Junhao Li <junhao@ubicloud.com> Co-authored-by: leopardracer <136604165+leopardracer@users.noreply.github.com> Co-authored-by: artetaout <128046886+artetaout@users.noreply.github.com> Co-authored-by: Ximingwang-09 <72070413+Ximingwang-09@users.noreply.github.com> Co-authored-by: ximing.wxm <ximing.wxm@antgroup.com> Co-authored-by: runzhen <wangrunzhen@gmail.com> Co-authored-by: David Xia <david@davidxia.com> Co-authored-by: bnellnm <49004751+bnellnm@users.noreply.github.com> Co-authored-by: rasmith <Randall.Smith@amd.com> Co-authored-by: Ning Xie <andy.xning@gmail.com> Co-authored-by: Brayden Zhong <b8zhong@uwaterloo.ca> Co-authored-by: wonjun Jang <strutive07@gmail.com> Co-authored-by: Aaron Pham <Aaronpham0103@gmail.com> Co-authored-by: Wentao Ye <44945378+yewentao256@users.noreply.github.com> Co-authored-by: mobicham <37179323+mobicham@users.noreply.github.com> Co-authored-by: Sage Moore <sage@neuralmagic.com> Co-authored-by: kourosh hakhamaneshi <31483498+kouroshHakha@users.noreply.github.com> Co-authored-by: qizixi <22851944+zixi-qi@users.noreply.github.com> Co-authored-by: Hyogeun Oh (오효근) <ohg3417@gmail.com> Co-authored-by: Boyuan Feng <fby.1994@gmail.com> Co-authored-by: qscqesze <qingjun@minimaxi.com> Co-authored-by: Concurrensee 
<yida.wu@amd.com> Co-authored-by: Saheli Bhattacharjee <47847054+sahelib25@users.noreply.github.com> Co-authored-by: jiahanc <173873397+jiahanc@users.noreply.github.com> Co-authored-by: Konrad Zawora <kzawora@habana.ai> Co-authored-by: maobaolong <baoloongmao@tencent.com> Co-authored-by: Ilya Markov <markovilya197@gmail.com> Co-authored-by: quanliu <33453350+quanliu1991@users.noreply.github.com> Co-authored-by: 刘全 <quan.liu2@dbappsecurity.com.cn> Co-authored-by: Francesco Bertolotti <f14.bertolotti@gmail.com> Co-authored-by: Francesco Bertolotti <francesco.bertolotti@igenius.ai> Co-authored-by: Szymon Ożóg <58388001+SzymonOzog@users.noreply.github.com> Co-authored-by: Navanit Dubey <98005188+Navanit-git@users.noreply.github.com> Co-authored-by: Shawn Tan <shawntan@ibm.com> Co-authored-by: qscqesze <qscqesze@gmail.com>
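For context on the NUMA-binding change referenced in the commit list above (vllm-project#17930), here is a minimal, illustrative sketch of the general technique. This is not the merged implementation: the helper names `numa_node_cpus` and `bind_rank_to_numa_node` are invented for this example, and it assumes a Linux sysfs layout where each NUMA node exposes a `cpulist` file.

```python
# Illustrative only: NUMA-aware binding of a CPU rank's OpenMP threads.
# Assumes Linux sysfs; helper names are invented for this sketch.
import glob
import os
import re


def numa_node_cpus() -> dict[int, list[int]]:
    """Parse /sys/devices/system/node/node*/cpulist into {node_id: cpu_ids}."""
    nodes: dict[int, list[int]] = {}
    for path in glob.glob("/sys/devices/system/node/node*/cpulist"):
        node_id = int(re.search(r"node(\d+)", path).group(1))
        with open(path) as f:
            cpulist = f.read().strip()  # e.g. "0-15,64-79"; ids may be non-contiguous
        cpus: list[int] = []
        for part in cpulist.split(","):
            if "-" in part:
                lo, hi = part.split("-")
                cpus.extend(range(int(lo), int(hi) + 1))
            elif part:
                cpus.append(int(part))
        nodes[node_id] = cpus
    return nodes


def bind_rank_to_numa_node(rank: int) -> None:
    """Pin the calling process, and thus its OMP threads, to one NUMA node."""
    # Drop CPU-less NUMA nodes (e.g. HBM/PMEM/CXL memory-only blocks).
    nodes = {n: c for n, c in numa_node_cpus().items() if c}
    if rank >= len(nodes):
        return  # more ranks than NUMA nodes: leave the inherited affinity alone
    cpus = nodes[sorted(nodes)[rank]]
    os.sched_setaffinity(0, set(cpus))  # spawned OMP threads inherit this mask
    os.environ.setdefault("OMP_NUM_THREADS", str(len(cpus)))
```

Parsing `cpulist` directly avoids assuming anything about how CPU ids are numbered inside a node, and setting the process affinity (rather than a hand-built id range) keeps hyperthread siblings and memory-only nodes from being mishandled.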
(vllm-project#19289) Signed-off-by: Isotr0py <2037008807@qq.com> * [Frontend] Remove unreachable code from llm.py (vllm-project#19288) Signed-off-by: KsuParkhamchuk <k.parkhamchuk@gmail.com> * [Misc] Cleanup compilation tests (vllm-project#19343) Signed-off-by: rzou <zou3519@gmail.com> * [doc] improve ci doc (vllm-project#19307) Signed-off-by: reidliu41 <reid201711@gmail.com> Co-authored-by: reidliu41 <reid201711@gmail.com> * [Doc] Fix description in the Automatic Prefix Caching design doc (vllm-project#19333) Signed-off-by: cr7258 <chengzw258@163.com> * [CI/Build] Fix LoRA test (vllm-project#19350) Signed-off-by: Jee Jee Li <pandaleefree@gmail.com> * [Fix] Allow kernel compilation for CUDA capability 8.7 (vllm-project#19328) Signed-off-by: Conroy Cheers <conroy@corncheese.org> * [CI] Introduce rules for llama auto-label (vllm-project#19323) Signed-off-by: Lu Fang <lufang@fb.com> * [Docs] Fix a bullet list in usage/security.md (vllm-project#19358) Signed-off-by: windsonsea <haifeng.yao@daocloud.io> * [full_graph] Fix query_start_loc padding (vllm-project#19321) Signed-off-by: Yinghai Lu <yinghai@thinkingmachines.ai> * [v1] Add fp32 support to v1 engine through flex attn (vllm-project#19319) Signed-off-by: Isotr0py <2037008807@qq.com> Signed-off-by: Isotr0py <mozf@mail2.sysu.edu.cn> * [Misc] Fixes and Optimizations for DeepEP + DeepGEMM combination. (vllm-project#19298) Signed-off-by: Varun <vsundarr@redhat.com> Co-authored-by: Varun <vsundarr@redhat.com> * [Bugfix][Core] Prevent token lengths exceeding `max_model_len` in V0 (vllm-project#19348) Signed-off-by: 22quinn <33176974+22quinn@users.noreply.github.com> * [Quantization] Bump compressed-tensors version (vllm-project#19295) Signed-off-by: Kyle Sayers <kylesayrs@gmail.com> * [Frontend] Make TIMEOUT_KEEP_ALIVE configurable through env var (vllm-project#18472) Signed-off-by: liusiqian <liusiqian@tal.com> * [TPU]Fix KV cache sharing tests (vllm-project#19371) * [HOT-FIX] Add `kv_sharing_target_layer_name` argument to cutlass_mla backend (vllm-project#19374) Signed-off-by: Pavani Majety <pmajety@nvidia.com> * [Misc] Fix a config typo in disable_hybrid_kv_cache_manager configuration (vllm-project#19383) Signed-off-by: Siyuan Liu <lsiyuan@google.com> * [V1] Reuse V0's memory_profiling util for gpu worker memory profiling (vllm-project#19312) Signed-off-by: Ye (Charlotte) Qi <yeq@meta.com> * [Bugfix] Fix benchmark_moe.py (vllm-project#19016) Signed-off-by: Tianyu Guo <guoty9@mail2.sysu.edu.cn> * Use xla flag to improve the quantized model performance (vllm-project#19303) Signed-off-by: Xiongfei Wei <isaacwxf23@gmail.com> * Fix docs/mkdocs/hooks/remove_announcement.py (vllm-project#19382) * [Frontend] Add tqdm_leave_pbar to control progress bar visibility (vllm-project#19357) Signed-off-by: reidliu41 <reid201711@gmail.com> Co-authored-by: reidliu41 <reid201711@gmail.com> * [Core] Use tuple for kv cache group block ids (vllm-project#19175) Signed-off-by: Nick Hill <nhill@redhat.com> * [Bugfix] Fix modelscope token passed in (vllm-project#19389) Signed-off-by: wangli <wangli858794774@gmail.com> Signed-off-by: Jee Jee Li <pandaleefree@gmail.com> Co-authored-by: Jee Jee Li <pandaleefree@gmail.com> * [Core] Batch multi modal input using pinned memory (vllm-project#19169) Signed-off-by: Lukas Geiger <lukas.geiger94@gmail.com> * Add security warning to bug report template (vllm-project#19365) Signed-off-by: Russell Bryant <rbryant@redhat.com> Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> * [Misc] refactor neuron_multimodal and 
profiling (vllm-project#19397) Signed-off-by: reidliu41 <reid201711@gmail.com> Co-authored-by: reidliu41 <reid201711@gmail.com> * Add clear documentation around the impact of debugging flag (vllm-project#19369) Signed-off-by: Anna Pendleton <pendleton@google.com> * Automatically bind CPU OMP Threads of a rank to CPU ids of a NUMA node. (vllm-project#17930) Signed-off-by: Tsai, Louie <louie.tsai@intel.com> Co-authored-by: Li, Jiang <bigpyj64@gmail.com> * Revert "[v1] Add fp32 support to v1 engine through flex attn" (vllm-project#19404) * [BugFix][FlashInfer] Fix attention backend interface mismatch with unexpected keyword `use_irope` (vllm-project#19134) Signed-off-by: Yunqiu Guo <guorachel@meta.com> * [BugFix][CPU] Fix CPU CI by ignore collecting test_pixtral (vllm-project#19411) Signed-off-by: jiang.li <jiang1.li@intel.com> * Simplify ep kernels installation (vllm-project#19412) Signed-off-by: youkaichao <youkaichao@gmail.com> * [Misc] Slight improvement of the BNB (vllm-project#19418) Signed-off-by: Jee Jee Li <pandaleefree@gmail.com> Co-authored-by: Isotr0py <2037008807@qq.com> Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> * [Docs] Note that alternative structured output backends are supported (vllm-project#19426) Signed-off-by: Russell Bryant <rbryant@redhat.com> * [ROCm][V1] Adding ROCm to the list of plaforms using V1 by default (vllm-project#19440) Signed-off-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com> * [Model] use AutoWeightsLoader for commandr (vllm-project#19399) Signed-off-by: py-andy-c <pychen1017@gmail.com> * Add H20-3e fused MoE kernel tuning configs for Qwen3-235B-A22B-FP8 (vllm-project#19401) Signed-off-by: 许文卿 <xwq391974@alibaba-inc.com> * [BugFix] Allow use_cudagraph to work with dynamic VLLM_USE_V1 (vllm-project#19390) Signed-off-by: rzou <zou3519@gmail.com> * [New Model]: Support Qwen3 Embedding & Reranker (vllm-project#19260) * [BugFix] Fix docker build cpu-dev image error (vllm-project#19394) Signed-off-by: niu_he <carlton2tang@gmail.com> * Fix test_max_model_len in tests/entrypoints/llm/test_generate.py (vllm-project#19451) Signed-off-by: Lu Fang <lufang@fb.com> * [CI] Disable failing GGUF model test (vllm-project#19454) Signed-off-by: mgoin <mgoin64@gmail.com> * [Misc] Remove unused `MultiModalHasher.hash_prompt_mm_data` (vllm-project#19422) Signed-off-by: Lukas Geiger <lukas.geiger94@gmail.com> * Add fused MOE config for Qwen3 30B A3B on B200 (vllm-project#19455) Signed-off-by: Junhao Li <junhao@ubicloud.com> * Fix Typo in Documentation and Function Name (vllm-project#19442) * [ROCm] Add rules to automatically label ROCm related PRs (vllm-project#19405) Signed-off-by: Lu Fang <lufang@fb.com> * [Kernel] Support deep_gemm for linear methods (vllm-project#19085) Signed-off-by: artetaout <lulala341@gmail.com> * [Doc] Update V1 User Guide for Hardware and Models (vllm-project#19474) Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk> * [Doc] Fix quantization link titles (vllm-project#19478) Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk> * [Doc] Support "important" and "announcement" admonitions (vllm-project#19479) Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk> * [Misc] Reduce warning message introduced in env_override (vllm-project#19476) Signed-off-by: Lu Fang <lufang@fb.com> * Support non-string values in JSON keys from CLI (vllm-project#19471) Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk> * Add cache to cuda get_device_capability (vllm-project#19436) Signed-off-by: mgoin 
<mgoin64@gmail.com> * Fix some typo (vllm-project#19475) Signed-off-by: ximing.wxm <ximing.wxm@antgroup.com> Co-authored-by: ximing.wxm <ximing.wxm@antgroup.com> * Support no privileged mode on CPU for docker and kubernetes deployments (vllm-project#19241) Signed-off-by: Tsai, Louie <louie.tsai@intel.com> * [Bugfix] Update the example code, make it work with the latest lmcache (vllm-project#19453) Signed-off-by: Runzhen Wang <wangrunzhen@gmail.com> * [CI] Update FlashInfer to 0.2.6.post1 (vllm-project#19297) Signed-off-by: mgoin <mgoin64@gmail.com> * [doc] fix "Other AI accelerators" getting started page (vllm-project#19457) Signed-off-by: David Xia <david@davidxia.com> * [Misc] Fix misleading ROCm warning (vllm-project#19486) Signed-off-by: Jee Jee Li <pandaleefree@gmail.com> * [Docs] Remove WIP features in V1 guide (vllm-project#19498) Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu> * [Kernels] Add activation chunking logic to FusedMoEModularKernel (vllm-project#19168) Signed-off-by: Bill Nell <bnell@redhat.com> * [AMD] [Quantization] Add override flag for attention dtype instead of using kv_cache_dtype trigger (vllm-project#17331) Signed-off-by: Randall Smith <Randall.Smith@amd.com> * [UX] Add Feedback During CUDAGraph Capture (vllm-project#19501) Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> * [CI/Build] Fix torch nightly CI dependencies (vllm-project#19505) Signed-off-by: Richard Zou <zou3519@gmail.com> * [CI] change spell checker from codespell to typos (vllm-project#18711) Signed-off-by: Andy Xie <andy.xning@gmail.com> * [BugFix] Force registration of w8a8_block_fp8_matmul_deepgemm via lazy import (vllm-project#19514) Signed-off-by: Varun Sundar Rabindranath <vsundarr@redhat.com> Co-authored-by: Varun Sundar Rabindranath <vsundarr@redhat.com> * Add Triton Fused MoE kernel config for E=16 on B200 (vllm-project#19518) Signed-off-by: Brayden Zhong <b8zhong@uwaterloo.ca> * [Frontend] Improve error message in tool_choice validation (vllm-project#19239) Signed-off-by: 22quinn <33176974+22quinn@users.noreply.github.com> * [BugFix] Work-around incremental detokenization edge case error (vllm-project#19449) Signed-off-by: Nick Hill <nhill@redhat.com> * [BugFix] Handle missing sep_token for Qwen3-Reranker in Score API (vllm-project#19522) Signed-off-by: strutive07 <strutive07@gmail.com> * [AMD][Kernel][BugFix] fix test_rocm_compressed_tensors_w8a8 for rocm (vllm-project#19509) Signed-off-by: Randall Smith <Randall.Smith@amd.com> * Fix typo (vllm-project#19525) Signed-off-by: 2niuhe <carlton2tang@gmail.com> * [Security] Prevent new imports of (cloud)pickle (vllm-project#18018) Signed-off-by: Russell Bryant <rbryant@redhat.com> Co-authored-by: Aaron Pham <Aaronpham0103@gmail.com> * [Bugfix][V1] Allow manual FlashAttention for Blackwell (vllm-project#19492) Signed-off-by: mgoin <mgoin64@gmail.com> * [Bugfix] Respect num-gpu-blocks-override in v1 (vllm-project#19503) Signed-off-by: Jon Swenson <jmswen@gmail.com> * [Quantization] Improve AWQ logic (vllm-project#19431) Signed-off-by: Jee Jee Li <pandaleefree@gmail.com> * [Doc] Add V1 column to supported models list (vllm-project#19523) Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk> * [V1][NixlConnector] Drop `num_blocks` check (vllm-project#19532) Signed-off-by: NickLucche <nlucches@redhat.com> * [Perf] Vectorize static / dynamic INT8 quant kernels (vllm-project#19233) Signed-off-by: yewentao256 <zhyanwentao@126.com> * Fix TorchAOConfig skip layers (vllm-project#19265) Signed-off-by: mobicham <hicham@mobiuslabs.com> * 
[torch.compile][ROCm] Fuse quantization onto attention using a torch.compile pass (vllm-project#16756) Signed-off-by: Luka Govedič <lgovedic@redhat.com> Co-authored-by: Sage Moore <sage@neuralmagic.com> * [doc] Make top navigation sticky (vllm-project#19540) Signed-off-by: reidliu41 <reid201711@gmail.com> Co-authored-by: reidliu41 <reid201711@gmail.com> * [Spec Decode][Benchmark] Generalize spec decode offline benchmark to more methods and datasets (vllm-project#18847) * [Misc] Turn MOE_DP_CHUNK_SIZE into an env var (vllm-project#19506) * [Bugfix] Enforce contiguous input for dynamic_per_token FP8/INT8 quant (vllm-project#19452) Signed-off-by: mgoin <mgoin64@gmail.com> * [Doc] Unify structured outputs examples (vllm-project#18196) Signed-off-by: Aaron Pham <contact@aarnphm.xyz> * [V1] Resolve failed concurrent structured output requests (vllm-project#19565) Signed-off-by: Russell Bryant <rbryant@redhat.com> * Revert "[Build/CI] Add tracing deps to vllm container image (vllm-project#15224)" (vllm-project#19378) * [BugFix] : Fix Batched DeepGemm Experts (vllm-project#19515) Signed-off-by: Varun Sundar Rabindranath <vsundarr@redhat.com> Co-authored-by: Varun Sundar Rabindranath <vsundarr@redhat.com> * [Bugfix] Fix EAGLE vocab embedding for multimodal target model (vllm-project#19570) Signed-off-by: qizixi <qizixi@meta.com> * [Doc] uses absolute links for structured outputs (vllm-project#19582) Signed-off-by: Aaron Pham <contact@aarnphm.xyz> * [doc] fix incorrect link (vllm-project#19586) Signed-off-by: reidliu41 <reid201711@gmail.com> Co-authored-by: reidliu41 <reid201711@gmail.com> * [Misc] Correct broken docs link (vllm-project#19553) Signed-off-by: Zerohertz <ohg3417@gmail.com> * [CPU] Refine default config for the CPU backend (vllm-project#19539) Signed-off-by: jiang1.li <jiang1.li@intel.com> * [Fix] bump mistral common to support magistral (vllm-project#19533) Signed-off-by: 汪志鹏 <wangzhipeng628@gmail.com> * [Fix] The zip function in Python 3.9 does not have the strict argument (vllm-project#19549) Signed-off-by: 汪志鹏 <wangzhipeng628@gmail.com> * use base version for version comparison (vllm-project#19587) Signed-off-by: Boyuan Feng <boyuan@meta.com> * [torch.compile] reorganize the cache directory to support compiling multiple models (vllm-project#19064) Signed-off-by: youkaichao <youkaichao@gmail.com> * [BugFix] Honor `enable_caching` in connector-delayed kvcache load case (vllm-project#19435) Signed-off-by: Nick Hill <nhill@redhat.com> * [Model] Fix minimax model cache & lm_head precision (vllm-project#19592) Signed-off-by: qingjun <qingjun@minimaxi.com> * [Refactor] Remove unused variables in `moe_permute_unpermute_kernel.inl` (vllm-project#19573) Signed-off-by: yewentao256 <zhyanwentao@126.com> * [doc][mkdocs] fix the duplicate Supported features sections in GPU docs (vllm-project#19606) Signed-off-by: reidliu41 <reid201711@gmail.com> Co-authored-by: reidliu41 <reid201711@gmail.com> * [CUDA] Enable full cudagraph for FlashMLA (vllm-project#18581) Signed-off-by: luka <luka@neuralmagic.com> * [Doc] Add troubleshooting section to k8s deployment (vllm-project#19377) Signed-off-by: Anna Pendleton <pendleton@google.com> * [torch.compile] Use custom ops when use_inductor=False (vllm-project#19618) * Adding "AMD: Multi-step Tests" to amdproduction. 
(vllm-project#19508) Signed-off-by: Yida Wu <yidawu@alumni.cmu.edu> Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com> * [BugFix] Fix DP Coordinator incorrect debug log message (vllm-project#19624) Signed-off-by: Nick Hill <nhill@redhat.com> * [V1][Metrics] Deprecate metrics with gpu_ prefix for non GPU specific metrics. (vllm-project#18354) Signed-off-by: Saheli Bhattacharjee <saheli@krai.ai> * [Bugfix] Fix the speculative decoding test by setting the target dtype (vllm-project#19633) * [Misc] Modularize CLI Argument Parsing in Benchmark Scripts (vllm-project#19593) Signed-off-by: reidliu41 <reid201711@gmail.com> Co-authored-by: reidliu41 <reid201711@gmail.com> * [Bugfix] Fix auto dtype casting for BatchFeature (vllm-project#19316) Signed-off-by: Isotr0py <2037008807@qq.com> Signed-off-by: Isotr0py <mozf@mail2.sysu.edu.cn> * [Hardware][NVIDIA][kernel] Fp4 MOE quant kernel optimization (vllm-project#19500) * Only build CUTLASS MoE kernels on Hopper (vllm-project#19648) * [Bugfix] Don't attempt to use triton if no driver is active (vllm-project#19561) * [Fix] Convert kv_transfer_config from dict to KVTransferConfig (vllm-project#19262) * [Perf] Further tunings for SM100 FP8 CUTLASS kernel (vllm-project#19566) * [Bugfix][2/n] Fix speculative decoding CI - Fix test_ngram_e2e_greedy_correctness (vllm-project#19644) * [Kernel] Raise verbose error and consolidate `num_heads/num_kv_heads` divisibility check (vllm-project#19339) Signed-off-by: 22quinn <33176974+22quinn@users.noreply.github.com> * [Benchmark] Refactor benchmark script for fp8 & int8 (vllm-project#19627) Signed-off-by: yewentao256 <zhyanwentao@126.com> * Enable prefix caching with full cuda graphs (vllm-project#19617) Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu> * [CI/Build] Fix torch nightly CI dependencies part 2 (vllm-project#19589) * [Misc] Remove duplicate multiproc method setting for CPU platform (vllm-project#19649) Signed-off-by: Isotr0py <2037008807@qq.com> * [MISC] Remove unused variableds in C++ (vllm-project#19609) Signed-off-by: Lu Fang <lufang@fb.com> * [Bugfix][Core] Prefix caching causes incorrect outputs due to outdated ComputedBlocksTracker (vllm-project#18957) Signed-off-by: 刘全 <quan.liu2@dbappsecurity.com.cn> Co-authored-by: 刘全 <quan.liu2@dbappsecurity.com.cn> * [Misc][Frontend] passthrough `bad_words` (vllm-project#19564) Signed-off-by: Francesco Bertolotti <francesco.bertolotti@igenius.ai> Co-authored-by: Francesco Bertolotti <francesco.bertolotti@igenius.ai> Co-authored-by: Aaron Pham <Aaronpham0103@gmail.com> * [Misc] Fix skipped max-model-len validation when deriving max model length from tokenizer config (vllm-project#19660) Signed-off-by: Ye (Charlotte) Qi <yeq@meta.com> * [TPU] support attention head dim smaller than 128 (vllm-project#19620) Signed-off-by: Chengji Yao <chengjiyao@google.com> Co-authored-by: mgoin <mgoin64@gmail.com> * [MISC] typo fix (vllm-project#19672) Signed-off-by: Andy Xie <andy.xning@gmail.com> * [CI] Add mteb testing for rerank models (vllm-project#19344) * [Docs] Move multiproc doc to v1 dir (vllm-project#19651) Signed-off-by: Russell Bryant <rbryant@redhat.com> * [Kernel] GGUF MMVQ kernel for multiple input vectors (vllm-project#18754) Signed-off-by: SzymonOzog <szymon.ozog@gmail.com> * [BugFix] Don't catch BaseException when dumping execute_model errors (vllm-project#19626) Signed-off-by: Nick Hill <nhill@redhat.com> * [DOC] Add reasoning capability to vLLM streamlit 
code (vllm-project#19557) * [Feature]:Allow for Granite MoE Hybrid models with _only_ shared experts. (vllm-project#19652) Signed-off-by: Shawn Tan <shawntan@ibm.com> * [Bugfix] Fix TP inference for Flex attention backend (vllm-project#19657) Signed-off-by: Isotr0py <2037008807@qq.com> * [MISC] bump huggingface_hub pkg to 0.33.0 (vllm-project#19547) Signed-off-by: Andy Xie <andy.xning@gmail.com> * [Bugfix] fix missing 'finish_reason': null in streaming chat (vllm-project#19662) Signed-off-by: chaunceyjiang <chaunceyjiang@gmail.com> * [Kernels] Use empty for modular MoE workspaces (vllm-project#19667) Signed-off-by: Bill Nell <bnell@redhat.com> * [Model] Add support for MiniMaxM1ForCausalLM (shares architecture with MiniMaxText01ForCausalLM) (vllm-project#19677) Signed-off-by: QscQ <qscqesze@gmail.com> * [V1] Change return type on get_multimodal_embeddings() (vllm-project#19446) Signed-off-by: Russell Bryant <rbryant@redhat.com> * fix Signed-off-by: Amog Kamsetty <amogkamsetty@gmail.com> --------- Signed-off-by: raushan <raushan@huggingface.co> Signed-off-by: Lu Fang <lufang@fb.com> Signed-off-by: nicklucche <nlucches@redhat.com> Signed-off-by: googs1025 <googs1025@gmail.com> Signed-off-by: simon-mo <simon.mo@hey.com> Signed-off-by: reidliu41 <reid201711@gmail.com> Signed-off-by: Varun <vsundarr@redhat.com> Signed-off-by: Yong Hoon Shin <yhshin@meta.com> Signed-off-by: mgoin <mgoin64@gmail.com> Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com> Signed-off-by: Chen Zhang <zhangch99@outlook.com> Signed-off-by: chaunceyjiang <chaunceyjiang@gmail.com> Signed-off-by: Russell Bryant <rbryant@redhat.com> Signed-off-by: Lukas Geiger <lukas.geiger94@gmail.com> Signed-off-by: calvin chen <120380290@qq.com> Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu> Signed-off-by: 汪志鹏 <wangzhipeng628@gmail.com> Signed-off-by: Siyuan Liu <lsiyuan@google.com> Signed-off-by: Seiji Eicher <seiji@anyscale.com> Signed-off-by: Isotr0py <2037008807@qq.com> Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk> Signed-off-by: 许文卿 <xwq391974@alibaba-inc.com> Signed-off-by: Jon Swenson <jmswen@gmail.com> Signed-off-by: Tyler Michael Smith <tysmith@redhat.com> Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com> Signed-off-by: Nick Hill <nhill@redhat.com> Signed-off-by: Yang Wang <elainewy@meta.com> Signed-off-by: vllmellm <vllm.ellm@embeddedllm.com> Signed-off-by: 22quinn <33176974+22quinn@users.noreply.github.com> Signed-off-by: Guillaume Calmettes <gcalmettes@scaleway.com> Signed-off-by: Patrick von Platen <patrick.v.platen@gmail.com> Signed-off-by: Chiyue Wei <chiyuew@nvidia.com> Signed-off-by: Povilas Kanapickas <povilas@radix.lt> Signed-off-by: Luis Vega <2478335+vegaluisjose@users.noreply.github.com> Signed-off-by: Jerry Zhang <jerryzh168@gmail.com> Signed-off-by: Benjamin Chislett <benjamin.chislett@centml.ai> Signed-off-by: Chengji Yao <chengjiyao@google.com> Signed-off-by: Xu Song <xusong.vip@gmail.com> Signed-off-by: Aaron Pham <contact@aarnphm.xyz> Signed-off-by: Dipika Sikka <dipikasikka1@gmail.com> Signed-off-by: rzou <zou3519@gmail.com> Signed-off-by: Siqi Yan <siqi@meta.com> Signed-off-by: Jee Jee Li <pandaleefree@gmail.com> Signed-off-by: Nishidha Panpaliya <nishidha.panpaliya@partner.ibm.com> Signed-off-by: Md. Shafi Hussain <Md.Shafi.Hussain@ibm.com> Signed-off-by: npanpaliya <nishidha.panpaliya@partner.ibm.com> Signed-off-by: Chenyaaang <chenyangli@google.com> Signed-off-by: Alexei V. 
Ivanov <alexei.ivanov@amd.com> Signed-off-by: ElizaWszola <ewszola@redhat.com> Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com> Signed-off-by: Qiliang Cui <derrhein@gmail.com> Signed-off-by: Aaruni Aggarwal <aaruniagg@gmail.com> Signed-off-by: drisspg <drisspguessous@gmail.com> Signed-off-by: Lifan Shen <lifans@meta.com> Signed-off-by: pramkuma <Pramendra.Kumar@amd.com> Signed-off-by: luka <luka@neuralmagic.com> Signed-off-by: Richard Zou <zou3519@gmail.com> Signed-off-by: Xu Wenqing <xuwq1993@qq.com> Signed-off-by: Akash Kaothalkar <akash.kaothalkar@ibm.com> Signed-off-by: yZhen <yZhen@fb.com> Signed-off-by: KsuParkhamchuk <k.parkhamchuk@gmail.com> Signed-off-by: cr7258 <chengzw258@163.com> Signed-off-by: Conroy Cheers <conroy@corncheese.org> Signed-off-by: windsonsea <haifeng.yao@daocloud.io> Signed-off-by: Yinghai Lu <yinghai@thinkingmachines.ai> Signed-off-by: Isotr0py <mozf@mail2.sysu.edu.cn> Signed-off-by: Kyle Sayers <kylesayrs@gmail.com> Signed-off-by: liusiqian <liusiqian@tal.com> Signed-off-by: Pavani Majety <pmajety@nvidia.com> Signed-off-by: Ye (Charlotte) Qi <yeq@meta.com> Signed-off-by: Tianyu Guo <guoty9@mail2.sysu.edu.cn> Signed-off-by: Xiongfei Wei <isaacwxf23@gmail.com> Signed-off-by: wangli <wangli858794774@gmail.com> Signed-off-by: Anna Pendleton <pendleton@google.com> Signed-off-by: Tsai, Louie <louie.tsai@intel.com> Signed-off-by: Yunqiu Guo <guorachel@meta.com> Signed-off-by: jiang.li <jiang1.li@intel.com> Signed-off-by: youkaichao <youkaichao@gmail.com> Signed-off-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com> Signed-off-by: py-andy-c <pychen1017@gmail.com> Signed-off-by: niu_he <carlton2tang@gmail.com> Signed-off-by: Junhao Li <junhao@ubicloud.com> Signed-off-by: artetaout <lulala341@gmail.com> Signed-off-by: ximing.wxm <ximing.wxm@antgroup.com> Signed-off-by: Runzhen Wang <wangrunzhen@gmail.com> Signed-off-by: David Xia <david@davidxia.com> Signed-off-by: Bill Nell <bnell@redhat.com> Signed-off-by: Randall Smith <Randall.Smith@amd.com> Signed-off-by: Andy Xie <andy.xning@gmail.com> Signed-off-by: Varun Sundar Rabindranath <vsundarr@redhat.com> Signed-off-by: Brayden Zhong <b8zhong@uwaterloo.ca> Signed-off-by: strutive07 <strutive07@gmail.com> Signed-off-by: 2niuhe <carlton2tang@gmail.com> Signed-off-by: NickLucche <nlucches@redhat.com> Signed-off-by: yewentao256 <zhyanwentao@126.com> Signed-off-by: mobicham <hicham@mobiuslabs.com> Signed-off-by: Luka Govedič <lgovedic@redhat.com> Signed-off-by: qizixi <qizixi@meta.com> Signed-off-by: Zerohertz <ohg3417@gmail.com> Signed-off-by: jiang1.li <jiang1.li@intel.com> Signed-off-by: Boyuan Feng <boyuan@meta.com> Signed-off-by: qingjun <qingjun@minimaxi.com> Signed-off-by: Yida Wu <yidawu@alumni.cmu.edu> Signed-off-by: Saheli Bhattacharjee <saheli@krai.ai> Signed-off-by: 刘全 <quan.liu2@dbappsecurity.com.cn> Signed-off-by: Francesco Bertolotti <francesco.bertolotti@igenius.ai> Signed-off-by: SzymonOzog <szymon.ozog@gmail.com> Signed-off-by: Shawn Tan <shawntan@ibm.com> Signed-off-by: QscQ <qscqesze@gmail.com> Signed-off-by: Amog Kamsetty <amogkamsetty@gmail.com> Co-authored-by: Raushan Turganbay <raushan.turganbay@alumni.nu.edu.kz> Co-authored-by: Lu Fang <30275821+houseroad@users.noreply.github.com> Co-authored-by: Nicolò Lucchesi <nlucches@redhat.com> Co-authored-by: CYJiang <86391540+googs1025@users.noreply.github.com> Co-authored-by: Simon Mo <simon.mo@hey.com> Co-authored-by: SorenDreano <71752785+SorenDreano@users.noreply.github.com> Co-authored-by: Soren Dreano <soren@numind.ai> Co-authored-by: Reid 
<61492567+reidliu41@users.noreply.github.com> Co-authored-by: reidliu41 <reid201711@gmail.com> Co-authored-by: Varun Sundar Rabindranath <varunsundar08@gmail.com> Co-authored-by: Varun Sundar Rabindranath <vsundarr@redhat.com> Co-authored-by: Yong Hoon Shin <48474650+sarckk@users.noreply.github.com> Co-authored-by: Michael Goin <mgoin64@gmail.com> Co-authored-by: Harry Mellor <19981378+hmellor@users.noreply.github.com> Co-authored-by: Yikun Jiang <yikun@apache.org> Co-authored-by: Chen Zhang <zhangch99@outlook.com> Co-authored-by: Ekagra Ranjan <3116519+ekagra-ranjan@users.noreply.github.com> Co-authored-by: Chauncey <chaunceyjiang@gmail.com> Co-authored-by: Robert Shaw <114415538+robertgshaw2-redhat@users.noreply.github.com> Co-authored-by: Yan Ru Pei <yanrpei@gmail.com> Co-authored-by: Jiaxin Shan <seedjeffwan@gmail.com> Co-authored-by: Russell Bryant <rbryant@redhat.com> Co-authored-by: Mark McLoughlin <markmc@redhat.com> Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com> Co-authored-by: Li, Jiang <jiang1.li@intel.com> Co-authored-by: Lukas Geiger <lukas.geiger94@gmail.com> Co-authored-by: Vadim Gimpelson <156319763+vadiklyutiy@users.noreply.github.com> Co-authored-by: Calvin Chen <45745657+calvin0327@users.noreply.github.com> Co-authored-by: Kaixi Hou <kaixih@nvidia.com> Co-authored-by: Woosuk Kwon <woosuk.kwon@berkeley.edu> Co-authored-by: 汪志鹏 <wangzhipeng628@gmail.com> Co-authored-by: Siyuan Liu <lsiyuan@google.com> Co-authored-by: Seiji Eicher <58963096+eicherseiji@users.noreply.github.com> Co-authored-by: Isotr0py <mozf@mail2.sysu.edu.cn> Co-authored-by: wang.yuqi <noooop@126.com> Co-authored-by: Cyrus Leung <tlleungac@connect.ust.hk> Co-authored-by: Xu Wenqing <121550081+Xu-Wenqing@users.noreply.github.com> Co-authored-by: Lain <fusiyuan2000@hotmail.com> Co-authored-by: jmswen <jmswen@users.noreply.github.com> Co-authored-by: Tyler Michael Smith <tyler@neuralmagic.com> Co-authored-by: Kebe <mail@kebe7jun.com> Co-authored-by: Nick Hill <nhill@redhat.com> Co-authored-by: Yang Wang <elainewy@meta.com> Co-authored-by: Huy Do <huydhn@gmail.com> Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> Co-authored-by: vllmellm <vllm.ellm@embeddedllm.com> Co-authored-by: 22quinn <33176974+22quinn@users.noreply.github.com> Co-authored-by: Guillaume Calmettes <gcalmettes@scaleway.com> Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> Co-authored-by: Chiyue Wei <92623189+dubcyfor3@users.noreply.github.com> Co-authored-by: Chiyue Wei <chiyuew@nvidia.com> Co-authored-by: Povilas Kanapickas <povilas@radix.lt> Co-authored-by: Dipika Sikka <dipikasikka1@gmail.com> Co-authored-by: Luis Vega <vegaluisjose@users.noreply.github.com> Co-authored-by: Luis Vega <2478335+vegaluisjose@users.noreply.github.com> Co-authored-by: Jerry Zhang <jerryzh168@gmail.com> Co-authored-by: Benjamin Chislett <benjamin.chislett@centml.ai> Co-authored-by: Chengji Yao <chengjiyao@google.com> Co-authored-by: Xu Song <xusong.vip@gmail.com> Co-authored-by: Aaron Pham <contact@aarnphm.xyz> Co-authored-by: Jinghui Zhang <jinghuizhang0804@gmail.com> Co-authored-by: jinghui <jinghui@fb.com> Co-authored-by: Richard Zou <zou3519@users.noreply.github.com> Co-authored-by: Siqi Yan <ysq0807@hotmail.com> Co-authored-by: Siqi Yan <siqi@meta.com> Co-authored-by: Jee Jee Li <pandaleefree@gmail.com> Co-authored-by: Yu Guo <82124926+yuguo68@users.noreply.github.com> Co-authored-by: Nishidha <nishidha.panpaliya@partner.ibm.com> Co-authored-by: Md. 
Shafi Hussain <Md.Shafi.Hussain@ibm.com> Co-authored-by: Adolfo Victoria <adolfokarim@gmail.com> Co-authored-by: Adolfo Victoria <adovi@meta.com> Co-authored-by: Chenyaaang <42742451+Chenyaaang@users.noreply.github.com> Co-authored-by: Alexei-V-Ivanov-AMD <156011006+Alexei-V-Ivanov-AMD@users.noreply.github.com> Co-authored-by: ElizaWszola <ewszola@redhat.com> Co-authored-by: QiliangCui <derrhein@gmail.com> Co-authored-by: Aaruni Aggarwal <47731267+AaruniAggarwal@users.noreply.github.com> Co-authored-by: Driss Guessous <32754868+drisspg@users.noreply.github.com> Co-authored-by: Lifans <draftbks@gmail.com> Co-authored-by: pramenku <7664080+pramenku@users.noreply.github.com> Co-authored-by: Luka Govedič <ProExpertProg@users.noreply.github.com> Co-authored-by: Akash kaothalkar <61960177+Akashcodes732@users.noreply.github.com> Co-authored-by: Akash Kaothalkar <akash.kaothalkar@ibm.com> Co-authored-by: jennyyyyzhen <47012288+jennyyyyzhen@users.noreply.github.com> Co-authored-by: yZhen <yZhen@fb.com> Co-authored-by: Kseniya Parkhamchuk <43078183+KsuParkhamchuk@users.noreply.github.com> Co-authored-by: Se7en <chengzw258@163.com> Co-authored-by: Conroy Cheers <conroy@corncheese.org> Co-authored-by: Michael Yao <haifeng.yao@daocloud.io> Co-authored-by: Yinghai Lu <yinghai@thinkingmachines.ai> Co-authored-by: Kyle Sayers <kylesayrs@gmail.com> Co-authored-by: liusiqian-tal <141730978+liusiqian-tal@users.noreply.github.com> Co-authored-by: Pavani Majety <pmajety@nvidia.com> Co-authored-by: Ye (Charlotte) Qi <yeq@meta.com> Co-authored-by: Tianyu Guo <guoty9@mail2.sysu.edu.cn> Co-authored-by: XiongfeiWei <isaacwxf23@gmail.com> Co-authored-by: Li Wang <wangli858794774@gmail.com> Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> Co-authored-by: Anna Pendleton <pendleton@google.com> Co-authored-by: Louie Tsai <louie.tsai@intel.com> Co-authored-by: Li, Jiang <bigpyj64@gmail.com> Co-authored-by: Rachel Guo <35738743+YUNQIUGUO@users.noreply.github.com> Co-authored-by: youkaichao <youkaichao@gmail.com> Co-authored-by: Isotr0py <2037008807@qq.com> Co-authored-by: Gregory Shtrasberg <156009573+gshtras@users.noreply.github.com> Co-authored-by: py-andy-c <37168711+py-andy-c@users.noreply.github.com> Co-authored-by: niu_he <carlton2tang@gmail.com> Co-authored-by: Junhao Li <junhao@ubicloud.com> Co-authored-by: leopardracer <136604165+leopardracer@users.noreply.github.com> Co-authored-by: artetaout <128046886+artetaout@users.noreply.github.com> Co-authored-by: Ximingwang-09 <72070413+Ximingwang-09@users.noreply.github.com> Co-authored-by: ximing.wxm <ximing.wxm@antgroup.com> Co-authored-by: runzhen <wangrunzhen@gmail.com> Co-authored-by: David Xia <david@davidxia.com> Co-authored-by: bnellnm <49004751+bnellnm@users.noreply.github.com> Co-authored-by: rasmith <Randall.Smith@amd.com> Co-authored-by: Ning Xie <andy.xning@gmail.com> Co-authored-by: Brayden Zhong <b8zhong@uwaterloo.ca> Co-authored-by: wonjun Jang <strutive07@gmail.com> Co-authored-by: Aaron Pham <Aaronpham0103@gmail.com> Co-authored-by: Wentao Ye <44945378+yewentao256@users.noreply.github.com> Co-authored-by: mobicham <37179323+mobicham@users.noreply.github.com> Co-authored-by: Sage Moore <sage@neuralmagic.com> Co-authored-by: kourosh hakhamaneshi <31483498+kouroshHakha@users.noreply.github.com> Co-authored-by: qizixi <22851944+zixi-qi@users.noreply.github.com> Co-authored-by: Hyogeun Oh (오효근) <ohg3417@gmail.com> Co-authored-by: Boyuan Feng <fby.1994@gmail.com> Co-authored-by: qscqesze <qingjun@minimaxi.com> Co-authored-by: Concurrensee 
<yida.wu@amd.com> Co-authored-by: Saheli Bhattacharjee <47847054+sahelib25@users.noreply.github.com> Co-authored-by: jiahanc <173873397+jiahanc@users.noreply.github.com> Co-authored-by: Konrad Zawora <kzawora@habana.ai> Co-authored-by: maobaolong <baoloongmao@tencent.com> Co-authored-by: Ilya Markov <markovilya197@gmail.com> Co-authored-by: quanliu <33453350+quanliu1991@users.noreply.github.com> Co-authored-by: 刘全 <quan.liu2@dbappsecurity.com.cn> Co-authored-by: Francesco Bertolotti <f14.bertolotti@gmail.com> Co-authored-by: Francesco Bertolotti <francesco.bertolotti@igenius.ai> Co-authored-by: Szymon Ożóg <58388001+SzymonOzog@users.noreply.github.com> Co-authored-by: Navanit Dubey <98005188+Navanit-git@users.noreply.github.com> Co-authored-by: Shawn Tan <shawntan@ibm.com> Co-authored-by: qscqesze <qscqesze@gmail.com>
…e. (vllm-project#17930) Signed-off-by: Tsai, Louie <louie.tsai@intel.com> Co-authored-by: Li, Jiang <bigpyj64@gmail.com> Signed-off-by: minpeter <kali2005611@gmail.com>
[Current Implementation]
To get good performance for Tensor Parallel or Pipeline Parallel on CPU, users need to bind CPU OMP threads properly to avoid the performance degradation caused by multiple threads running on the same CPU core.
Here are the current run instructions for CPU OMP thread binding:
OMP_NUM_THREADS=32 VLLM_CPU_OMP_THREADS_BIND="0-31|32-63|64-95|96-127" python3 -m vllm.entrypoints.openai.api_server --model meta-llama/Meta-Llama-3.1-8B-Instruct --device cpu -tp=4 --distributed-executor-backend mp
[Problem Statement]
However, CPU ids can change across OSes, and different CPU SKUs have different numbers of cores.
Moreover, users might also need to check cpus_allowed_list so that threads are bound only to allowed CPU cores.
This requires users to inspect their environment first and then set the binding accordingly.
In some cases, such as cluster deployments using Kubernetes, users do not know the CPU ids before deployment, so it is hard to write deployment scripts such as k8s YAML files correctly.
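For illustration, here is a minimal Python sketch of that manual inspection, assuming the psutil and py-libnuma packages are available (an assumption; the names below are illustrative, not the exact vLLM code):

import psutil
from numa import info  # py-libnuma

# CPU ids this process is allowed to use (the cpus_allowed_list).
allowed = set(psutil.Process().cpu_affinity())

# Group the allowed CPU ids by NUMA node.
for node in range(info.get_num_configured_nodes()):
    node_cpus = [cpu for cpu in info.node_to_cpus(node) if cpu in allowed]
    print(f"NUMA node {node}: {node_cpus}")

Today, users would translate such output by hand into a VLLM_CPU_OMP_THREADS_BIND string.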
[Proposed Solution]
Therefore, we introduce a new feature that automatically binds the CPU OMP threads of each rank to the CPU ids of an allowed NUMA node, according to cpus_allowed_list.
Users no longer need to derive VLLM_CPU_OMP_THREADS_BIND from their environment; the CPU worker sets it automatically.
The new run instructions look like the one below, which also makes Tensor Parallel easier to deploy in k8s environments:
python3 -m vllm.entrypoints.openai.api_server --model meta-llama/Meta-Llama-3.1-8B-Instruct --device cpu -tp=4 --distributed-executor-backend mp
Overall, each rank is bound to one allowed NUMA node, so 4 NUMA nodes are used for tp=4 pp=1, and 2 NUMA nodes are used for tp=2 pp=1.
If the current environment allows fewer NUMA nodes than the requested world size, an error is returned.
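The mapping can be sketched as below (same psutil/py-libnuma assumptions; hyperthread-sibling filtering and VLLM_CPU_NUM_OF_RESERVED_CPU handling are omitted for brevity). The explicit per-id list avoids assuming contiguous CPU numbering inside a NUMA node:

import psutil
from numa import info  # py-libnuma


def auto_bind_cpus(rank: int, world_size: int) -> str:
    allowed = set(psutil.Process().cpu_affinity())

    # Keep only NUMA nodes that contain at least one allowed CPU,
    # so memory-only nodes without CPUs are skipped.
    nodes = []
    for node in range(info.get_num_configured_nodes()):
        cpus = [c for c in info.node_to_cpus(node) if c in allowed]
        if cpus:
            nodes.append(cpus)

    if world_size > len(nodes):
        raise RuntimeError(
            f"no auto OMP binding: world size {world_size} exceeds "
            f"the {len(nodes)} NUMA nodes with allowed CPUs")

    # One rank per NUMA node, bound via an explicit CPU id list.
    return ",".join(str(cpu) for cpu in nodes[rank])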
[Related Environment variables]
VLLM_CPU_OMP_THREADS_BIND: when set to auto, the OpenMP threads of each rank are bound to the CPU cores of one NUMA node. The default value is auto.
VLLM_CPU_NUM_OF_RESERVED_CPU: specifies the number of CPU cores per rank that are not dedicated to OpenMP threads. It only takes effect when VLLM_CPU_OMP_THREADS_BIND is set to auto. The default value is 0.
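For example, to keep automatic binding while reserving one core per rank for non-OpenMP work (an illustrative invocation, not taken from the PR):

VLLM_CPU_OMP_THREADS_BIND=auto VLLM_CPU_NUM_OF_RESERVED_CPU=1 python3 -m vllm.entrypoints.openai.api_server --model meta-llama/Meta-Llama-3.1-8B-Instruct --device cpu -tp=4 --distributed-executor-backend mp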
[More Details]

Even when the vLLM server receives no input for VLLM_CPU_OMP_THREADS_BIND and omp_cpuids is set to all, the CPU worker automatically overwrites local_omp_cpuid according to the current system NUMA configuration (see the attached diagram).
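A minimal sketch of that overwrite step, reusing the auto_bind_cpus helper from the earlier sketch (the function and variable names are illustrative, not the exact worker code):

def resolve_local_omp_cpuid(rank: int, world_size: int,
                            bind_setting: str) -> str:
    # An explicit "0-31|32-63|..." style setting is honored as before;
    # "auto" (the default) triggers NUMA-aware binding.
    if bind_setting != "auto":
        return bind_setting.split("|")[rank]
    return auto_bind_cpus(rank, world_size)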