[New Model]: Support Qwen3 Embedding & Reranker #19260


Merged

merged 28 commits on Jun 11, 2025

Conversation

Contributor

@noooop noooop commented Jun 6, 2025

Summary

  • Qwen3 Embedding
    • Qwen/Qwen3-Embedding-0.6B
    • Qwen/Qwen3-Embedding-4B
    • Qwen/Qwen3-Embedding-8B
  • Qwen3 Reranker
    • Qwen/Qwen3-Reranker-0.6B
    • Qwen/Qwen3-Reranker-4B
    • Qwen/Qwen3-Reranker-8B
    • tomaarsen/Qwen3-Reranker-0.6B-seq-cls

Usage

  • Qwen3 Embedding
vllm serve Qwen/Qwen3-Embedding-0.6B

curl

curl http://127.0.0.1:8000/v1/embeddings \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
    "input": "Follow the white rabbit.",
    "model": "Qwen/Qwen3-Embedding-0.6B",
    "encoding_format": "float"
  }'
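The same endpoint can also be called from Python. Below is a minimal sketch using only the standard library; the URL and model name mirror the serve command above, while the `embed` helper and the cosine-similarity check are our own illustration, not part of vLLM:

```python
import json
import urllib.request


def cosine(a, b):
    # Plain cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)


def embed(texts, url="http://127.0.0.1:8000/v1/embeddings",
          model="Qwen/Qwen3-Embedding-0.6B"):
    # Same request body as the curl example above.
    payload = json.dumps({
        "input": texts,
        "model": model,
        "encoding_format": "float",
    }).encode()
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)["data"]
    return [item["embedding"] for item in data]


# Example usage (requires the server started above):
#   vecs = embed(["Follow the white rabbit.", "Neo follows the white rabbit."])
#   print(cosine(vecs[0], vecs[1]))
```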
  • Qwen3 Reranker

Caution

Please use the query_template and document_template to format the query and document for better reranker results; without the templates, the results are close to random. PTAL #19344

For models that have already been converted to Qwen3ForSequenceClassification, such as tomaarsen/Qwen3-Reranker-0.6B-seq-cls:

vllm serve tomaarsen/Qwen3-Reranker-0.6B-seq-cls

/score

curl http://127.0.0.1:8000/score \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
    "text_1": "ping",
    "text_2": "pong",
    "model": "tomaarsen/Qwen3-Reranker-0.6B-seq-cls"
  }'

expected output

{"id":"score-ee337e20e932467a83792d220614a7cd","object":"list","created":1749527048,"model":"tomaarsen/Qwen3-Reranker-0.6B-seq-cls","data":[{"index":0,"object":"score","score":0.06634521484375}],"usage":{"prompt_tokens":2,"total_tokens":2,"completion_tokens":0,"prompt_tokens_details":null}}

/rerank

curl http://127.0.0.1:8000/rerank \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
    "query": "ping",
    "documents": ["pong"],
    "model": "tomaarsen/Qwen3-Reranker-0.6B-seq-cls"
  }'

expected output

{"id":"rerank-fe06b692387444b7a56e282944f285f9","model":"tomaarsen/Qwen3-Reranker-0.6B-seq-cls","usage":{"total_tokens":2},"results":[{"index":0,"document":{"text":"pong"},"relevance_score":0.06634521484375}]}

For the official model:

vllm serve Qwen/Qwen3-Reranker-0.6B --hf_overrides '{"architectures": ["Qwen3ForSequenceClassification"],"classifier_from_token": ["no", "yes"],"is_original_qwen3_reranker": true}'

Why do we need hf_overrides:
Qwen3-Reranker is a language model that performs reranking using the logits of the "no" and "yes" tokens.
vLLM converts it to Qwen3ForSequenceClassification at load time for better performance.

  • First, "architectures": ["Qwen3ForSequenceClassification"] manually routes the model to Qwen3ForSequenceClassification.
  • Then, "classifier_from_token": ["no", "yes"] extracts the lm_head vectors corresponding to those two tokens.
  • Third, the two vectors are converted into a single vector; this conversion logic is enabled by "is_original_qwen3_reranker": true.

If vllm serve starts correctly and you call it with the corresponding model name, the official model behaves exactly like a model that has already been converted to Qwen3ForSequenceClassification.
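The two-vectors-to-one-vector step works because a 2-way softmax over the "no"/"yes" logits equals a sigmoid over their difference, so the two lm_head rows can collapse into one classifier vector without changing the score. A quick numerical check of that identity (plain Python illustration, not vLLM code):

```python
import math


def p_yes_softmax(no_logit, yes_logit):
    # P("yes") from a 2-way softmax over the two token logits.
    e_no, e_yes = math.exp(no_logit), math.exp(yes_logit)
    return e_yes / (e_no + e_yes)


def p_yes_sigmoid(no_logit, yes_logit):
    # The same probability from a single logit: the difference of the rows.
    return 1.0 / (1.0 + math.exp(no_logit - yes_logit))


for no_l, yes_l in [(0.3, 1.7), (-2.0, 0.5), (4.0, 4.0)]:
    assert abs(p_yes_softmax(no_l, yes_l) - p_yes_sigmoid(no_l, yes_l)) < 1e-12
```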

/score

curl http://127.0.0.1:8000/score \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
    "text_1": "ping",
    "text_2": "pong",
    "model": "Qwen/Qwen3-Reranker-0.6B"
  }'

expected output

{"id":"score-7dbe101346ea4aeea4b85aa7971ddf8f","object":"list","created":1749527323,"model":"Qwen/Qwen3-Reranker-0.6B","data":[{"index":0,"object":"score","score":0.0673828125}],"usage":{"prompt_tokens":2,"total_tokens":2,"completion_tokens":0,"prompt_tokens_details":null}}

/rerank

curl http://127.0.0.1:8000/rerank \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
    "query": "ping",
    "documents": ["pong"],
    "model": "Qwen/Qwen3-Reranker-0.6B"
  }'

expected output

{"id":"rerank-43ddc0f96f174ae4a5eef07d51a8defd","model":"Qwen/Qwen3-Reranker-0.6B","usage":{"total_tokens":2},"results":[{"index":0,"document":{"text":"pong"},"relevance_score":0.0673828125}]}

Offline usage with query & document formatting:

from vllm import LLM

model_name = "Qwen/Qwen3-Reranker-0.6B"

# What is the difference between the official original version and one
# that has been converted into a sequence classification model?
# Qwen3-Reranker is a language model that performs reranking using the
# logits of the "no" and "yes" tokens.
# That requires computing logits over all 151669 vocabulary tokens, which is
# extremely inefficient, not to mention incompatible with the vllm score API.
# A method for converting the original model into a sequence classification
# model was proposed. See: https://huggingface.co/Qwen/Qwen3-Reranker-0.6B/discussions/3
# Models converted offline with this method are not only more efficient and
# compatible with the vllm score API, but also make the init parameters more
# concise, for example:
# model = LLM(model="tomaarsen/Qwen3-Reranker-0.6B-seq-cls", task="score")

# If you want to load the official original version, the init parameters are
# as follows.

model = LLM(
    model=model_name,
    task="score",
    hf_overrides={
        "architectures": ["Qwen3ForSequenceClassification"],
        "classifier_from_token": ["no", "yes"],
        "is_original_qwen3_reranker": True,
    },
)

# Why do we need hf_overrides for the official original version:
# vLLM converts the model to Qwen3ForSequenceClassification at load time
# for better performance.
# - First, `"architectures": ["Qwen3ForSequenceClassification"]` manually
#   routes the model to Qwen3ForSequenceClassification.
# - Then, `"classifier_from_token": ["no", "yes"]` extracts the lm_head
#   vectors corresponding to those two tokens.
# - Third, the two vectors are converted into a single vector; this
#   conversion logic is enabled by `"is_original_qwen3_reranker": True`.

# Please use the query_template and document_template to format the query and
# document for better reranker results.

prefix = '<|im_start|>system\nJudge whether the Document meets the requirements based on the Query and the Instruct provided. Note that the answer can only be "yes" or "no".<|im_end|>\n<|im_start|>user\n'
suffix = "<|im_end|>\n<|im_start|>assistant\n<think>\n\n</think>\n\n"

query_template = "{prefix}<Instruct>: {instruction}\n<Query>: {query}\n"
document_template = "<Document>: {doc}{suffix}"

if __name__ == "__main__":
    instruction = (
        "Given a web search query, retrieve relevant passages that answer the query"
    )

    queries = [
        "What is the capital of China?",
        "Explain gravity",
    ]

    documents = [
        "The capital of China is Beijing.",
        "Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun.",
    ]

    queries = [
        query_template.format(prefix=prefix, instruction=instruction, query=query)
        for query in queries
    ]
    documents = [document_template.format(doc=doc, suffix=suffix) for doc in documents]

    outputs = model.score(queries, documents)

    print([output.outputs.score for output in outputs])

requests demo with query & document formatting:

import requests

url = "http://127.0.0.1:8000/score"
MODEL_NAME = "tomaarsen/Qwen3-Reranker-0.6B-seq-cls"

# Please use the query_template and document_template to format the query and
# document for better reranker results.

prefix = '<|im_start|>system\nJudge whether the Document meets the requirements based on the Query and the Instruct provided. Note that the answer can only be "yes" or "no".<|im_end|>\n<|im_start|>user\n'
suffix = "<|im_end|>\n<|im_start|>assistant\n<think>\n\n</think>\n\n"

query_template = "{prefix}<Instruct>: {instruction}\n<Query>: {query}\n"
document_template = "<Document>: {doc}{suffix}"

instruction = (
    "Given a web search query, retrieve relevant passages that answer the query"
)

queries = [
    "What is the capital of China?",
    "Explain gravity",
]

documents = [
    "The capital of China is Beijing.",
    "Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun.",
]

queries = [
    query_template.format(prefix=prefix, instruction=instruction, query=query)
    for query in queries
]
documents = [
    document_template.format(doc=doc, suffix=suffix) for doc in documents
]

response = requests.post(url,
                         json={
                             "model": MODEL_NAME,
                             "text_1": queries,
                             "text_2": documents,
                             "truncate_prompt_tokens": -1,
                         }).json()

print(response)

expected output

{'id': 'score-14f698f021b9434482ec3d94a5757e11', 'object': 'list', 'created': 1749786173, 'model': 'tomaarsen/Qwen3-Reranker-0.6B-seq-cls', 'data': [{'index': 0, 'object': 'score', 'score': 0.99951171875}, {'index': 1, 'object': 'score', 'score': 0.99951171875}], 'usage': {'prompt_tokens': 189, 'total_tokens': 189, 'completion_tokens': 0, 'prompt_tokens_details': None}}

Legacy

For Embedding
After merging
https://huggingface.co/Qwen/Qwen3-Embedding-0.6B/discussions/2
embeddings-benchmark/mteb#2769 (comment)
Qwen3-Embedding already produces results close to SentenceTransformers.

For Reranker

  • Qwen3ForCausalLM: Qwen3 Embedding & Reranker both use the same architecture, Qwen3ForCausalLM; vllm currently has no way to let a single architecture support Embedding and Reranker at the same time.
  • SupportsCrossEncoding: For the Reranker, the biggest problem is that the score task treats a Qwen3ForCausalLM model like an embedding model, computing embeddings and cosine distance. This is definitely not what is wanted.
  • Qwen3ForSequenceClassification: Perhaps ultimately we need something like --hf-overrides '{"architectures": ["Qwen3ForSequenceClassification"]}' to get the Qwen3 Reranker to run correctly.
  • classifier_from_token: A more efficient approach is to extract token_false_id = 2152 and token_true_id = 9693 and turn scoring into a 2-class classification task rather than the current 151669-class classification task. We need a new interface (classifier_from_token) to implement this.
  • converted to a single label classification model: The 2-way classifier is actually just a 1-way head classifier. https://huggingface.co/Qwen/Qwen3-Reranker-0.6B/discussions/3
  • format_instruction: Should format_instruction be handled by users or by vllm, and where should that code live? I temporarily added a process_inputs callback for LLM.score, but the online version (OpenAI-Compatible Server) has no way to handle it. Please format the string using the method above.
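The classifier_from_token idea above can be sketched in plain Python: keep only the "no" and "yes" rows of a (toy-sized) lm_head, so scoring needs two dot products instead of one logit per vocabulary entry. The sizes and token ids below are illustrative stand-ins for the real 151669-entry vocabulary and ids 2152/9693:

```python
import math
import random

random.seed(0)

VOCAB_SIZE, HIDDEN = 1000, 16           # toy stand-ins for 151669 and the real hidden size
TOKEN_FALSE_ID, TOKEN_TRUE_ID = 21, 96  # illustrative ids

# Toy lm_head: one weight row per vocabulary token.
lm_head = [[random.gauss(0, 1) for _ in range(HIDDEN)] for _ in range(VOCAB_SIZE)]


def dot(u, v):
    return sum(a * b for a, b in zip(u, v))


# classifier_from_token: extract just the two rows we care about.
classifier = [lm_head[TOKEN_FALSE_ID], lm_head[TOKEN_TRUE_ID]]

hidden_state = [random.gauss(0, 1) for _ in range(HIDDEN)]  # last-token hidden state

# Full-vocab route: a logit per token, then pick out "no"/"yes".
full_logits = [dot(row, hidden_state) for row in lm_head]
two_logits_full = [full_logits[TOKEN_FALSE_ID], full_logits[TOKEN_TRUE_ID]]

# 2-class route: only two dot products, same result.
two_logits_small = [dot(row, hidden_state) for row in classifier]
assert two_logits_full == two_logits_small

# Relevance score = P("yes") under a softmax over the two logits.
no_logit, yes_logit = two_logits_small
score = 1.0 / (1.0 + math.exp(no_logit - yes_logit))
print(score)
```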

FIX #19229
FIX #19252
FIX #19366

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Hello @noooop, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

Summary of Changes

Hello! Gemini here, providing a summary of this pull request. This PR aims to add initial support for Qwen3 Embedding and Reranker models within vLLM. Based on the PR title and description, the primary goal is to enable these specific Qwen3 model types. The author notes that the current implementation, particularly for the Reranker task, is a "dirty fix" due to underlying architectural challenges where both Embedding and Reranker models share the same Qwen3ForCausalLM architecture in Hugging Face, which vLLM isn't currently designed to handle cleanly for these distinct tasks. The PR focuses on implementing a specific scoring mechanism for the Reranker task by leveraging the existing causal language model structure.

Highlights

  • Qwen3 Model Support: Adds initial support for Qwen3 Embedding and Reranker models.
  • Reranker Scoring Logic: Implements a specific scoring method for the Reranker task by extracting logits for predefined true/false tokens (2152 and 9693) and calculating a score based on their probabilities.
  • Cross-Encoding Interface: The Qwen3ForCausalLM class now implements the SupportsCrossEncoding interface, indicating its capability for tasks like reranking.
  • Known Limitations: The author explicitly mentions several existing issues with this approach, including the challenge of supporting both Embedding and Reranker tasks with the same underlying architecture and the current scoring method being a workaround.

Changelog

  • vllm/model_executor/models/qwen3.py
    • Added imports for pooling-related classes (LastPool, PoolingMetadata, PoolerOutput, PoolingSequenceGroupOutput) and the SupportsCrossEncoding interface.
    • Updated the Qwen3ForCausalLM class definition to inherit from SupportsCrossEncoding.
    • In the Qwen3ForCausalLM constructor, initialized a LastPool layer if the model task is set to "score".
    • Added a pooler method to Qwen3ForCausalLM which implements the custom scoring logic for the reranker task using specific token logits (2152 and 9693).


github-actions bot commented Jun 6, 2025

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge.

🚀

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This PR adds support for Qwen3 Embedding and Reranker models by introducing specific handling for the 'score' task within the Qwen3ForCausalLM architecture. The approach for reranking is acknowledged as a workaround and involves using logits of specific 'true'/'false' tokens. The changes are targeted and address the issues outlined in the PR description. However, the hardcoded token IDs are a concern for maintainability and generalizability.

Summary of Findings

  • Hardcoded Token IDs: The pooler method in vllm/model_executor/models/qwen3.py uses hardcoded token IDs (2152 for false, 9693 for true). This is a maintainability concern and should ideally be made configurable or derived from model/tokenizer configuration.
  • Clarity on Reranker Logic: The choice of LastPool in the __init__ method for the 'score' task, and how it relates to the subsequent logit-based scoring in the pooler method, could benefit from more explicit comments or documentation, especially given its characterization as a "dirty fix".

Merge Readiness

This pull request makes a good effort to support Qwen3 Reranker models within the existing Qwen3ForCausalLM architecture, acknowledging the current limitations. The approach taken is a pragmatic workaround.

However, the use of hardcoded token IDs is a significant concern that should be addressed. Ideally, these should be configurable or automatically derived. At a minimum, their origin and necessity for being hardcoded should be clearly documented in the code.

Given these points, and the author's own acknowledgement of this being a "dirty fix", I recommend addressing the hardcoded token ID issue and potentially clarifying the LastPool rationale before merging. I am unable to approve pull requests, but I suggest further discussion on these points. Other reviewers should assess the overall architectural implications of this workaround.

@DarkLight1337 DarkLight1337 self-assigned this Jun 6, 2025
@mergify mergify bot added the frontend label Jun 6, 2025
Contributor Author

noooop commented Jun 6, 2025

@DarkLight1337

quick review

@noooop noooop marked this pull request as ready for review June 6, 2025 12:17
@noooop noooop requested a review from ywang96 as a code owner June 6, 2025 12:17
Contributor Author

noooop commented Jun 6, 2025

tests/models/language/pooling/test_qwen3_reranker.py is a bit hasty; I am refactoring the score tests, and the next PR will fix it.

Contributor Author

noooop commented Jun 6, 2025

The current code is a bit hacky.

Let's wait until next week to see whether the official team can convert the model into Qwen3ForSequenceClassification format.


tomaarsen commented Jun 6, 2025

Hello!

Thank you for your useful work here @noooop. I converted the model to a sequence classification model in the meantime for me to be able to do some testing - it might come in handy for you as well: https://huggingface.co/tomaarsen/Qwen3-Reranker-0.6B-seq-cls

I can also move it to the cross-encoder organization, but I'd like to avoid that until I get models with these larger templates working more nicely with Sentence Transformers, i.e. without having to do all kinds of pre-processing manually.

  • Tom Aarsen

@lovetian1991

quick review

@zcfrank1st

fantastic!

@NiuBlibing
Contributor

Hi, does vllm serve support instruction?

Contributor Author

noooop commented Jun 13, 2025

Hi, does vllm serve support instruction?

You can manually format the instruction when calling the API. When SentenceTransformers has an automatic instruction formatting API, we will follow up.

Contributor

NiuBlibing commented Jun 13, 2025

Hi, does vllm serve support instruction?

You can manually format the instruction when calling the API. When SentenceTransformers has an automatic instruction formatting API, we will follow up.

Could you provide a curl example?

@metacryptom
Contributor

the reranker set up as described above doesn't work

start log

INFO 06-12 12:40:17 [cli_args.py:309] non-default args: {'host': '0.0.0.0', 'model': '/aimodels/embeddigns/Qwen3-Reranker-8B', 'enforce_eager': True, 'served_model_name': ['default'], 'hf_overrides': {'architectures': ['Qwen3ForSequenceClassification'], 'classifier_from_token': ['no', 'yes'], 'is_original_qwen3_reranker': True}}
INFO 06-12 12:40:17 [config.py:533] Overriding HF config with {'architectures': ['Qwen3ForSequenceClassification'], 'classifier_from_token': ['no', 'yes'], 'is_original_qwen3_reranker': True}
INFO 06-12 12:40:23 [config.py:823] This model supports multiple tasks: {'score', 'embed', 'classify', 'generate', 'reward'}. Defaulting to 'generate'.
INFO 06-12 12:40:23 [config.py:2195] Chunked prefill is enabled with max_num_batched_tokens=2048.
WARNING 06-12 12:40:23 [cuda.py:91] To see benefits of async output processing, enable CUDA graph. Since, enforce-eager is enabled, async output processor cannot be used
WARNING 06-12 12:40:25 [env_override.py:17] NCCL_CUMEM_ENABLE is set to 0, skipping override. This may increase memory overhead with cudagraph+allreduce: NVIDIA/nccl#1234
INFO 06-12 12:40:27 [init.py:244] Automatically detected platform cuda.
INFO 06-12 12:40:28 [core.py:455] Waiting for init message from front-end.
INFO 06-12 12:40:28 [core.py:70] Initializing a V1 LLM engine (v0.9.1) with config: model='/aimodels/embeddigns/Qwen3-Reranker-8B', speculative_config=None, tokenizer='/aimodels/embeddigns/Qwen3-Reranker-8B', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config={}, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=40960, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=True, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_backend=''), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None), seed=0, served_model_name=default, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=False, pooler_config=None, compilation_config={"level":0,"debug_dump_path":"","cache_dir":"","backend":"","custom_ops":[],"splitting_ops":[],"use_inductor":true,"compile_sizes":[],"inductor_compile_config":{"enable_auto_functionalized_v2":false},"inductor_passes":{},"use_cudagraph":true,"cudagraph_num_of_warmups":0,"cudagraph_capture_sizes":[],"cudagraph_copy_inputs":false,"full_cuda_graph":false,"max_capture_size":0,"local_cache_dir":null}
WARNING 06-12 12:40:29 [utils.py:2737] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x7b5ac877a7e0>
INFO 06-12 12:40:30 [parallel_state.py:1065] rank 0 in world size 1 is assigned as DP rank 0, PP rank 0, TP rank 0, EP rank 0
INFO 06-12 12:40:30 [topk_topp_sampler.py:49] Using FlashInfer for top-p & top-k sampling.
WARNING 06-12 12:40:30 [utils.py:211] Qwen3ForSequenceClassification has no vLLM implementation, falling back to Transformers implementation. Some features may not be supported and performance may not be optimal.
INFO 06-12 12:40:30 [gpu_model_runner.py:1595] Starting to load model /aimodels/embeddigns/Qwen3-Reranker-8B...
INFO 06-12 12:40:30 [gpu_model_runner.py:1600] Loading model from scratch...
INFO 06-12 12:40:30 [transformers.py:146] Using Transformers backend.
INFO 06-12 12:40:31 [cuda.py:252] Using Flash Attention backend on V1 engine.
Loading safetensors checkpoint shards: 0% Completed | 0/5 [00:00<?, ?it/s]
Loading safetensors checkpoint shards: 20% Completed | 1/5 [00:00<00:02, 1.90it/s]
Loading safetensors checkpoint shards: 40% Completed | 2/5 [00:01<00:01, 1.82it/s]
Loading safetensors checkpoint shards: 60% Completed | 3/5 [00:01<00:00, 2.57it/s]
Loading safetensors checkpoint shards: 80% Completed | 4/5 [00:01<00:00, 2.22it/s]
Loading safetensors checkpoint shards: 100% Completed | 5/5 [00:02<00:00, 2.26it/s]
Loading safetensors checkpoint shards: 100% Completed | 5/5 [00:02<00:00, 2.21it/s]

INFO 06-12 12:40:33 [default_loader.py:272] Loading weights took 2.29 seconds
INFO 06-12 12:40:34 [gpu_model_runner.py:1624] Model loading took 15.2546 GiB and 3.626486 seconds
INFO 06-12 12:40:35 [gpu_worker.py:227] Available KV cache memory: 24.13 GiB
INFO 06-12 12:40:35 [kv_cache_utils.py:715] GPU KV cache size: 175,696 tokens
INFO 06-12 12:40:35 [kv_cache_utils.py:719] Maximum concurrency for 40,960 tokens per request: 4.29x
INFO 06-12 12:40:35 [core.py:171] init engine (profile, create kv cache, warmup model) took 1.34 seconds
INFO 06-12 12:40:35 [loggers.py:137] Engine 000: vllm cache_config_info with initialization after num_gpu_blocks is: 10981
WARNING 06-12 12:40:35 [config.py:1363] Default sampling parameters have been overridden by the model's Hugging Face generation config recommended from the model creator. If this is not intended, please relaunch vLLM instance with --generation-config vllm.
INFO 06-12 12:40:35 [serving_chat.py:118] Using default chat sampling params from model: {'temperature': 0.6, 'top_k': 20, 'top_p': 0.95}
INFO 06-12 12:40:35 [serving_completion.py:66] Using default completion sampling params from model: {'temperature': 0.6, 'top_k': 20, 'top_p': 0.95}
INFO 06-12 12:40:35 [api_server.py:1349] Starting vLLM API server 0 on http://0.0.0.0:8000
INFO 06-12 12:40:35 [launcher.py:29] Available routes are:
INFO 06-12 12:40:35 [launcher.py:37] Route: /openapi.json, Methods: HEAD, GET
INFO 06-12 12:40:35 [launcher.py:37] Route: /docs, Methods: HEAD, GET
INFO 06-12 12:40:35 [launcher.py:37] Route: /docs/oauth2-redirect, Methods: HEAD, GET
INFO 06-12 12:40:35 [launcher.py:37] Route: /redoc, Methods: HEAD, GET
INFO 06-12 12:40:35 [launcher.py:37] Route: /health, Methods: GET
INFO 06-12 12:40:35 [launcher.py:37] Route: /load, Methods: GET
INFO 06-12 12:40:35 [launcher.py:37] Route: /ping, Methods: POST
INFO 06-12 12:40:35 [launcher.py:37] Route: /ping, Methods: GET
INFO 06-12 12:40:35 [launcher.py:37] Route: /tokenize, Methods: POST
INFO 06-12 12:40:35 [launcher.py:37] Route: /detokenize, Methods: POST
INFO 06-12 12:40:35 [launcher.py:37] Route: /v1/models, Methods: GET
INFO 06-12 12:40:35 [launcher.py:37] Route: /version, Methods: GET
INFO 06-12 12:40:35 [launcher.py:37] Route: /v1/chat/completions, Methods: POST
INFO 06-12 12:40:35 [launcher.py:37] Route: /v1/completions, Methods: POST
INFO 06-12 12:40:35 [launcher.py:37] Route: /v1/embeddings, Methods: POST
INFO 06-12 12:40:35 [launcher.py:37] Route: /pooling, Methods: POST
INFO 06-12 12:40:35 [launcher.py:37] Route: /classify, Methods: POST
INFO 06-12 12:40:35 [launcher.py:37] Route: /score, Methods: POST
INFO 06-12 12:40:35 [launcher.py:37] Route: /v1/score, Methods: POST
INFO 06-12 12:40:35 [launcher.py:37] Route: /v1/audio/transcriptions, Methods: POST
INFO 06-12 12:40:35 [launcher.py:37] Route: /rerank, Methods: POST
INFO 06-12 12:40:35 [launcher.py:37] Route: /v1/rerank, Methods: POST
INFO 06-12 12:40:35 [launcher.py:37] Route: /v2/rerank, Methods: POST
INFO 06-12 12:40:35 [launcher.py:37] Route: /invocations, Methods: POST
INFO 06-12 12:40:35 [launcher.py:37] Route: /metrics, Methods: GET
INFO: Started server process [1]
INFO: Waiting for application startup.
INFO: Application startup complete.

when I run

curl http://127.0.0.1:9211/rerank \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
    "query": "ping",
    "documents": ["pong"],
    "model": "default"
  }'

It got
{"object":"error","message":"The model does not support Rerank (Score) API","type":"BadRequestError","param":null,"code":400}

when I execute

curl http://127.0.0.1:9211/score \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
    "text_1": "ping",
    "text_2": "pong",
    "model": "default"
  }'

It got
{"object":"error","message":"The model does not support Score API","type":"BadRequestError","param":null,"code":400}

Contributor Author

noooop commented Jun 13, 2025

the reranker set up as described above doesn't work

This PR didn't make it into the 0.9.1 release, so you need to install the dev version.

https://docs.vllm.ai/en/latest/getting_started/installation/gpu.html#install-the-latest-code_1

Contributor Author

noooop commented Jun 13, 2025

Could you provide an example by curl?

Perhaps I should add a requests demo.

done

@TPLink32

Reranking following the above method doesn't work

This PR didn't make it into the 0.9.1 release, so you need to install the dev version.

https://docs.vllm.ai/en/latest/getting_started/installation/gpu.html#install-the-latest-code_1

pip show vllm
Name: vllm
Version: 0.9.2.dev96+gf40f763f1
Summary: A high-throughput and memory-efficient inference and serving engine for LLMs
Home-page: https://github.com/vllm-project/vllm
Author: vLLM Team

{"object":"error","message":"The model does not support Embeddings API","type":"BadRequestError","param":null,"code":400}

amogkam added a commit to character-tech/vllm that referenced this pull request Jun 16, 2025
* [doc] clarify windows support (vllm-project#19088)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [CI/Build] Remove V0 LoRA test (vllm-project#19066)

Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>

* Fix underscores in dict keys passed via CLI (vllm-project#19030)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>

* [Bugfix] disable processor cache  (vllm-project#19068)

Signed-off-by: raushan <raushan@huggingface.co>

* [Doc] Improve the Pull Request template with key components (vllm-project#19086)

Signed-off-by: Lu Fang <lufang@fb.com>

* [Misc] Add missing `_Backend` enums (vllm-project#19081)

Signed-off-by: nicklucche <nlucches@redhat.com>

* [Misc] fix: add miss best_of param validation (vllm-project#18555)

Signed-off-by: googs1025 <googs1025@gmail.com>

* [Misc] Add SPDX-FileCopyrightText  (vllm-project#19100)

Signed-off-by: simon-mo <simon.mo@hey.com>

* [Doc] Readme standardization (vllm-project#18695)

Co-authored-by: Soren Dreano <soren@numind.ai>

* [doc] update docker version (vllm-project#19074)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Kernel] DeepEP dispatch-combine kernel integration (vllm-project#18434)

Signed-off-by: Varun <vsundarr@redhat.com>
Co-authored-by: Varun Sundar Rabindranath <vsundarr@redhat.com>

* [V1] Support cross-layer KV sharing (vllm-project#18212)

Signed-off-by: Yong Hoon Shin <yhshin@meta.com>

* [Perf] Tune `scaled_fp8_quant` by increasing vectorization (vllm-project#18844)

Signed-off-by: mgoin <mgoin64@gmail.com>

* Fix interaction between `Optional` and `Annotated` in CLI typing (vllm-project#19093)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Co-authored-by: Yikun Jiang <yikun@apache.org>

* [v1] Re-init input batch for multiple kv cache groups (vllm-project#18654)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>

* [V1][Spec Decode][Ngram] 1.35x gain -> 1.95x gain on InstructCoder with prompt fix (vllm-project#18971)

* [Bugfix] get_num_blocks_to_allocate with null_block (vllm-project#19031)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>

* [Bugfix]: Fix the incompatibility issue with tool_choice 'required' when Thinking is enabled (vllm-project#19075)

Signed-off-by: chaunceyjiang <chaunceyjiang@gmail.com>

* [Bugfix][P/D] Fix Prefix Cache Bug (vllm-project#18411)

Signed-off-by: nicklucche <nlucches@redhat.com>
Co-authored-by: Robert Shaw <114415538+robertgshaw2-redhat@users.noreply.github.com>

* [Bugfix] Max concurrency estimation and check_enough_kv_cache_memory for models with sliding window layers (vllm-project#19029)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>

* feat: add data parallel rank to KVEventBatch (vllm-project#18925)

* [Misc] Fix path and python alias errors in disagg_prefill examples (vllm-project#18919)

* [Docs] Add developer doc about CI failures (vllm-project#18782)

Signed-off-by: Russell Bryant <rbryant@redhat.com>
Co-authored-by: Mark McLoughlin <markmc@redhat.com>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>

* [CPU] V1 support for the CPU backend (vllm-project#16441)

* [Core] Cast multimodal input in hf processor (vllm-project#18862)

Signed-off-by: Lukas Geiger <lukas.geiger94@gmail.com>

* [KERNEL] Sampler. CUDA kernel for applying repetition penalty (vllm-project#18437)

* [Cleanup][v1]: remove guided-decoding-backend for example (vllm-project#19059)

Signed-off-by: calvin chen <120380290@qq.com>

* [NVIDIA] Add Cutlass MLA backend (vllm-project#17625)

* [Bugfix] Fix FA3 full cuda graph correctness (vllm-project#19106)

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>

* Fix vllm-project#19130 (vllm-project#19132)

Signed-off-by: 汪志鹏 <wangzhipeng628@gmail.com>

* [TPU] Skip hanging tests (vllm-project#19115)

Signed-off-by: Siyuan Liu <lsiyuan@google.com>

* Fix ValueError: Missing value for tag key(s): model_name,engine. (vllm-project#19113)

Signed-off-by: Seiji Eicher <seiji@anyscale.com>

* [Misc] Add packages for benchmark as extra dependency (vllm-project#19089)

Signed-off-by: Isotr0py <2037008807@qq.com>

* Improve the output precision of embedding models (vllm-project#19092)

* [CI/Build][Bugfix] Ensure compatibility with transformers 4.52 (vllm-project#18678)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* Add DeepSeek-R1-0528 function call chat template (vllm-project#18874)

Signed-off-by: 许文卿 <xwq391974@alibaba-inc.com>

* Sm100 blockwise fp8 swap ab (vllm-project#18564)

* [Doc] Update V1 Guide for embedding models (vllm-project#19141)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* Allow AsyncLLMEngine.generate to target a specific DP rank (vllm-project#19102)

Signed-off-by: Jon Swenson <jmswen@gmail.com>

* [Bugfix][EP+DP] Fix internode check (vllm-project#19112)

Signed-off-by: Tyler Michael Smith <tysmith@redhat.com>

* [Perf] Tunings for SM100 FP8 CUTLASS kernel (vllm-project#18778)

Signed-off-by: mgoin <mgoin64@gmail.com>

* [TPU] Update dynamo dump file name in compilation test (vllm-project#19108)

Signed-off-by: Siyuan Liu <lsiyuan@google.com>

* [Bugfix] fix v1 cpu worker fails on macOS (vllm-project#19121)

* [Kernel] Integrate batched/masked deepgemm kernel (vllm-project#19111)

Signed-off-by: Varun <vsundarr@redhat.com>
Co-authored-by: Varun <vsundarr@redhat.com>

* [Misc] refactor: simplify EngineCoreClient.make_async_mp_client in AsyncLLM (vllm-project#18817)

Signed-off-by: googs1025 <googs1025@gmail.com>

* [P/D] Heterogeneous TP (vllm-project#18833)

Signed-off-by: nicklucche <nlucches@redhat.com>

* [doc] small fix (vllm-project#19167)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Bugfix][Nixl] Fix full prefix cache hit bug (vllm-project#18632)

Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com>
Signed-off-by: Nick Hill <nhill@redhat.com>
Co-authored-by: Nick Hill <nhill@redhat.com>

* [Bugfix] Fix port handling in make_zmq_path (vllm-project#19117)

* [Torch Nightly]add missing dependency (vllm-project#18770)

Signed-off-by: Yang Wang <elainewy@meta.com>

* Handle non-serializable objects when dumping benchmark results (vllm-project#19114)

* [BugFix][Minor] Fix full cuda graph bug when max_num_seqs < 512 (vllm-project#19171)

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>

* [Bugfix]: Fix the incompatibility issue with stream when Thinking is disabled (vllm-project#19135)

Signed-off-by: chaunceyjiang <chaunceyjiang@gmail.com>

* [Build] Annotate wheel and container path for release workflow (vllm-project#19162)

Signed-off-by: simon-mo <simon.mo@hey.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* [Misc] Remove unnecessary fallback to prefill-decode attention (vllm-project#19138)

Signed-off-by: vllmellm <vllm.ellm@embeddedllm.com>

* [Misc] Do not override NCCL_CUMEM_ENABLE if set explicitly (vllm-project#19105)

Signed-off-by: 22quinn <33176974+22quinn@users.noreply.github.com>

* [Frontend] improve vllm run-batch --help display (vllm-project#19187)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Bugfix] properly catch PIL-related errors for vision models when incorrect data urls are provided (vllm-project#19202)

Signed-off-by: Guillaume Calmettes <gcalmettes@scaleway.com>

* [mistral_common] Add v11 tokenizer (vllm-project#19193)

Signed-off-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Add H20-3e fused MoE kernel tuning configs for DeepSeek-R1/V3 (vllm-project#19205)

* [Hardware][NVIDIA] FP4 MoE kernel optimization (vllm-project#19110)

Signed-off-by: Chiyue Wei <chiyuew@nvidia.com>
Co-authored-by: Chiyue Wei <chiyuew@nvidia.com>

* [MISC][Bugfix] Use less CPU when message queue has been empty for some time (vllm-project#16226)

Signed-off-by: Povilas Kanapickas <povilas@radix.lt>

* [P/D][NixlConnector] Enable FlashInfer backend (vllm-project#19090)

* [Quantization] Skip Fp4 Test for `compressed-tensors` (vllm-project#19217)

* [V1] Use FlashInfer by default on Blackwell GPUs (vllm-project#19118)

* [Model] NemotronH support (vllm-project#18863)

Signed-off-by: Luis Vega <2478335+vegaluisjose@users.noreply.github.com>
Co-authored-by: Luis Vega <2478335+vegaluisjose@users.noreply.github.com>

* Fix AOPerModuleConfig name changes (vllm-project#18869)

Signed-off-by: Jerry Zhang <jerryzh168@gmail.com>

* [Bugfix] Fix EAGLE vocab embedding construction for Llama 70B (vllm-project#19033)

Signed-off-by: Benjamin Chislett <benjamin.chislett@centml.ai>

* [v1] Hybrid Memory Allocator (vllm-project#17996)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>

* [TPU] update torch_xla pin (vllm-project#19231)

Signed-off-by: Chengji Yao <chengjiyao@google.com>

* Support allowed_token_ids in ChatCompletionRequest (vllm-project#19143)

Signed-off-by: Xu Song <xusong.vip@gmail.com>

* [Chore] update CODEOWNERS (vllm-project#19247)

Signed-off-by: Aaron Pham <contact@aarnphm.xyz>

* [v1][P/D] Fix an edge case in kv cache schedule (vllm-project#19182)

Co-authored-by: jinghui <jinghui@fb.com>

* [TPU] fix kv cache dtype in model runner (vllm-project#19244)

Signed-off-by: Chengji Yao <chengjiyao@google.com>

* [Quantization] Bump compressed-tensors version; update NVFP4A16 test model (vllm-project#19224)

Signed-off-by: Dipika Sikka <dipikasikka1@gmail.com>

* [Docs] Improve V1 KVConnector interface documentation (vllm-project#19172)

Signed-off-by: Nick Hill <nhill@redhat.com>

* Fix CompilationConfig repr (vllm-project#19091)

Signed-off-by: rzou <zou3519@gmail.com>

* Unit Test for run_dp_sharded_vision_model (vllm-project#19103)

Signed-off-by: Siqi Yan <siqi@meta.com>
Co-authored-by: Siqi Yan <siqi@meta.com>

* [Model] Optimize nemotron_h implementation (vllm-project#19249)

Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>

* [Core] Raise when non-multi-instance DP clients target a DP rank (vllm-project#19227)

Signed-off-by: Jon Swenson <jmswen@gmail.com>

* improve logits bias (vllm-project#19041)

* Fixed ppc build when it runs on non-RHEL based linux distros (vllm-project#18422)

Signed-off-by: Nishidha Panpaliya <nishidha.panpaliya@partner.ibm.com>
Signed-off-by: Md. Shafi Hussain <Md.Shafi.Hussain@ibm.com>
Signed-off-by: npanpaliya <nishidha.panpaliya@partner.ibm.com>
Co-authored-by: Md. Shafi Hussain <Md.Shafi.Hussain@ibm.com>

* [BugFix] Fix MultiConnector test after HMA changes (vllm-project#19291)

Signed-off-by: Nick Hill <nhill@redhat.com>

* [Bugfix][Core] Update cancellation logic in `generate()` to handle Generator exits (vllm-project#19225)

Co-authored-by: Adolfo Victoria <adovi@meta.com>

* [Core] Fix abrupt request abort (vllm-project#18485)

Signed-off-by: nicklucche <nlucches@redhat.com>
Signed-off-by: Nick Hill <nhill@redhat.com>

Co-authored-by: Nick Hill <nhill@redhat.com>

* [BugFix] Fix tpu_model_runner block_id concatenation (vllm-project#19228)

Signed-off-by: Nick Hill <nhill@redhat.com>

* [Misc][Tools][Benchmark] Fix and improve auto tune script (vllm-project#19163)

Signed-off-by: Chenyaaang <chenyangli@google.com>

* [Build][ROCm] Update Dockerfile.rocm (vllm-project#19296)

Signed-off-by: Alexei V. Ivanov <alexei.ivanov@amd.com>

* [Easy][Test] Simplify test_function_tool_use with multiple parametrizes (vllm-project#19269)

Signed-off-by: Lu Fang <lufang@fb.com>

* [Kernel] Integrate CUTLASS MoE kernel with PPLX (vllm-project#18762)

Signed-off-by: ElizaWszola <ewszola@redhat.com>
Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com>
Co-authored-by: Tyler Michael Smith <tyler@neuralmagic.com>

* [TPU][Test] Add script to run benchmark on TPU for buildkite (vllm-project#19039)

Signed-off-by: Qiliang Cui <derrhein@gmail.com>

* [CI][PowerPC] Use a more appropriate way to select testcase in tests/models/language/pooling/test_embedding.py (vllm-project#19253)

Signed-off-by: Aaruni Aggarwal <aaruniagg@gmail.com>

* Add FlexAttention to V1 (vllm-project#16078)

Signed-off-by: drisspg <drisspguessous@gmail.com>

* [Misc] refactor context extension (vllm-project#19246)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [CI/Build] Improve Llama GGUF test robustness (vllm-project#19287)

Signed-off-by: Isotr0py <2037008807@qq.com>

* [Nit][Benchmark]Fix example in benchmark_serving_structured_output.py (vllm-project#19311)

Signed-off-by: Lifan Shen <lifans@meta.com>

* [AMD] Update compatible packaging version (vllm-project#19309)

Signed-off-by: pramkuma <Pramendra.Kumar@amd.com>

* [BugFix][V1] Fix memory profiling bug (vllm-project#18974)

Signed-off-by: luka <luka@neuralmagic.com>

* [Bugfix]: Fix TypeError: 'float' object cannot be interpreted as an integer (vllm-project#19283)

Signed-off-by: chaunceyjiang <chaunceyjiang@gmail.com>

* [Bugfix] Re-enable use_cudagraph in vLLM v1 (vllm-project#19299)

Signed-off-by: Richard Zou <zou3519@gmail.com>

* [Misc] Change tests/compile to use VLLM_V1 by default (vllm-project#19302)

Signed-off-by: rzou <zou3519@gmail.com>

* Add H20-3e fused MoE kernel tuning configs for Qwen3-235B-A22B (vllm-project#19315)

Signed-off-by: Xu Wenqing <xuwq1993@qq.com>

* [Hardware][POWER] Add IBM POWER11 Support to CPU Extension Detection (vllm-project#19082)

Signed-off-by: Akash Kaothalkar <akash.kaothalkar@ibm.com>
Co-authored-by: Akash Kaothalkar <akash.kaothalkar@ibm.com>

* [Quantization] Add compressed-tensors NVFP4 support (vllm-project#18312)

* [Multi Modal] Add an env var for message queue max chunk bytes  (vllm-project#19242)

Signed-off-by: yZhen <yZhen@fb.com>
Co-authored-by: yZhen <yZhen@fb.com>

* [Bugfix] model_max_length should consider max_model_len in tokenizer_config (vllm-project#19201)

* [Deprecation] Remove `inputs` arg fallback in Engine classes (vllm-project#18799)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Misc] Add documentation update reminder to PR template (vllm-project#19289)

Signed-off-by: Isotr0py <2037008807@qq.com>

* [Frontend] Remove unreachable code from llm.py (vllm-project#19288)

Signed-off-by: KsuParkhamchuk <k.parkhamchuk@gmail.com>

* [Misc] Cleanup compilation tests (vllm-project#19343)

Signed-off-by: rzou <zou3519@gmail.com>

* [doc] improve ci doc (vllm-project#19307)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Doc] Fix description in the Automatic Prefix Caching design doc (vllm-project#19333)

Signed-off-by: cr7258 <chengzw258@163.com>

* [CI/Build] Fix LoRA test (vllm-project#19350)

Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>

* [Fix] Allow kernel compilation for CUDA capability 8.7 (vllm-project#19328)

Signed-off-by: Conroy Cheers <conroy@corncheese.org>

* [CI] Introduce rules for llama auto-label (vllm-project#19323)

Signed-off-by: Lu Fang <lufang@fb.com>

* [Docs] Fix a bullet list in usage/security.md (vllm-project#19358)

Signed-off-by: windsonsea <haifeng.yao@daocloud.io>

* [full_graph] Fix query_start_loc padding (vllm-project#19321)

Signed-off-by: Yinghai Lu <yinghai@thinkingmachines.ai>

* [v1] Add fp32 support to v1 engine through flex attn (vllm-project#19319)

Signed-off-by: Isotr0py <2037008807@qq.com>
Signed-off-by: Isotr0py <mozf@mail2.sysu.edu.cn>

* [Misc] Fixes and Optimizations for DeepEP + DeepGEMM combination. (vllm-project#19298)

Signed-off-by: Varun <vsundarr@redhat.com>
Co-authored-by: Varun <vsundarr@redhat.com>

* [Bugfix][Core] Prevent token lengths exceeding `max_model_len` in V0 (vllm-project#19348)

Signed-off-by: 22quinn <33176974+22quinn@users.noreply.github.com>

* [Quantization] Bump compressed-tensors version (vllm-project#19295)

Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>

* [Frontend] Make TIMEOUT_KEEP_ALIVE configurable through env var (vllm-project#18472)

Signed-off-by: liusiqian <liusiqian@tal.com>

* [TPU]Fix KV cache sharing tests (vllm-project#19371)

* [HOT-FIX] Add `kv_sharing_target_layer_name` argument to cutlass_mla backend (vllm-project#19374)

Signed-off-by: Pavani Majety <pmajety@nvidia.com>

* [Misc] Fix a config typo in disable_hybrid_kv_cache_manager configuration (vllm-project#19383)

Signed-off-by: Siyuan Liu <lsiyuan@google.com>

* [V1] Reuse V0's memory_profiling util for gpu worker memory profiling (vllm-project#19312)

Signed-off-by: Ye (Charlotte) Qi <yeq@meta.com>

* [Bugfix] Fix benchmark_moe.py (vllm-project#19016)

Signed-off-by: Tianyu Guo <guoty9@mail2.sysu.edu.cn>

* Use xla flag to improve the quantized model performance (vllm-project#19303)

Signed-off-by: Xiongfei Wei <isaacwxf23@gmail.com>

* Fix docs/mkdocs/hooks/remove_announcement.py (vllm-project#19382)

* [Frontend] Add tqdm_leave_pbar to control progress bar visibility (vllm-project#19357)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Core] Use tuple for kv cache group block ids (vllm-project#19175)

Signed-off-by: Nick Hill <nhill@redhat.com>

* [Bugfix] Fix modelscope token passed in (vllm-project#19389)

Signed-off-by: wangli <wangli858794774@gmail.com>
Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
Co-authored-by: Jee Jee Li <pandaleefree@gmail.com>

* [Core] Batch multi modal input using pinned memory (vllm-project#19169)

Signed-off-by: Lukas Geiger <lukas.geiger94@gmail.com>

* Add security warning to bug report template (vllm-project#19365)

Signed-off-by: Russell Bryant <rbryant@redhat.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* [Misc] refactor neuron_multimodal and profiling (vllm-project#19397)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* Add clear documentation around the impact of debugging flag (vllm-project#19369)

Signed-off-by: Anna Pendleton <pendleton@google.com>

* Automatically bind CPU OMP Threads of a rank to CPU ids of a NUMA node. (vllm-project#17930)

Signed-off-by: Tsai, Louie <louie.tsai@intel.com>
Co-authored-by: Li, Jiang <bigpyj64@gmail.com>

* Revert "[v1] Add fp32 support to v1 engine through flex attn" (vllm-project#19404)

* [BugFix][FlashInfer] Fix attention backend interface mismatch with unexpected keyword `use_irope` (vllm-project#19134)

Signed-off-by: Yunqiu Guo <guorachel@meta.com>

* [BugFix][CPU] Fix CPU CI by ignore collecting test_pixtral (vllm-project#19411)

Signed-off-by: jiang.li <jiang1.li@intel.com>

* Simplify ep kernels installation (vllm-project#19412)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [Misc] Slight improvement of the BNB  (vllm-project#19418)

Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
Co-authored-by: Isotr0py <2037008807@qq.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* [Docs] Note that alternative structured output backends are supported (vllm-project#19426)

Signed-off-by: Russell Bryant <rbryant@redhat.com>

* [ROCm][V1] Adding ROCm to the list of platforms using V1 by default (vllm-project#19440)

Signed-off-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com>

* [Model] use AutoWeightsLoader for commandr (vllm-project#19399)

Signed-off-by: py-andy-c <pychen1017@gmail.com>

* Add H20-3e fused MoE kernel tuning configs for Qwen3-235B-A22B-FP8 (vllm-project#19401)

Signed-off-by: 许文卿 <xwq391974@alibaba-inc.com>

* [BugFix] Allow use_cudagraph to work with dynamic VLLM_USE_V1 (vllm-project#19390)

Signed-off-by: rzou <zou3519@gmail.com>

* [New Model]: Support Qwen3 Embedding & Reranker  (vllm-project#19260)

* [BugFix] Fix docker build cpu-dev image error (vllm-project#19394)

Signed-off-by: niu_he <carlton2tang@gmail.com>

* Fix test_max_model_len in tests/entrypoints/llm/test_generate.py (vllm-project#19451)

Signed-off-by: Lu Fang <lufang@fb.com>

* [CI] Disable failing GGUF model test (vllm-project#19454)

Signed-off-by: mgoin <mgoin64@gmail.com>

* [Misc] Remove unused `MultiModalHasher.hash_prompt_mm_data` (vllm-project#19422)

Signed-off-by: Lukas Geiger <lukas.geiger94@gmail.com>

* Add fused MOE config for Qwen3 30B A3B on B200 (vllm-project#19455)

Signed-off-by: Junhao Li <junhao@ubicloud.com>

* Fix Typo in Documentation and Function Name (vllm-project#19442)

* [ROCm] Add rules to automatically label ROCm related PRs (vllm-project#19405)

Signed-off-by: Lu Fang <lufang@fb.com>

* [Kernel] Support deep_gemm for linear methods (vllm-project#19085)

Signed-off-by: artetaout <lulala341@gmail.com>

* [Doc] Update V1 User Guide for Hardware and Models (vllm-project#19474)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Doc] Fix quantization link titles (vllm-project#19478)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Doc] Support "important" and "announcement" admonitions (vllm-project#19479)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Misc] Reduce warning message introduced in env_override (vllm-project#19476)

Signed-off-by: Lu Fang <lufang@fb.com>

* Support non-string values in JSON keys from CLI (vllm-project#19471)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* Add cache to cuda get_device_capability (vllm-project#19436)

Signed-off-by: mgoin <mgoin64@gmail.com>

* Fix some typo (vllm-project#19475)

Signed-off-by: ximing.wxm <ximing.wxm@antgroup.com>
Co-authored-by: ximing.wxm <ximing.wxm@antgroup.com>

* Support no privileged mode on CPU for docker and kubernetes deployments (vllm-project#19241)

Signed-off-by: Tsai, Louie <louie.tsai@intel.com>

* [Bugfix] Update the example code, make it work with the latest lmcache (vllm-project#19453)

Signed-off-by: Runzhen Wang <wangrunzhen@gmail.com>

* [CI] Update FlashInfer to 0.2.6.post1 (vllm-project#19297)

Signed-off-by: mgoin <mgoin64@gmail.com>

* [doc] fix "Other AI accelerators" getting started page (vllm-project#19457)

Signed-off-by: David Xia <david@davidxia.com>

* [Misc] Fix  misleading ROCm warning (vllm-project#19486)

Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>

* [Docs] Remove WIP features in V1 guide (vllm-project#19498)

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>

* [Kernels] Add activation chunking logic to FusedMoEModularKernel (vllm-project#19168)

Signed-off-by: Bill Nell <bnell@redhat.com>

* [AMD] [Quantization] Add override flag for attention dtype instead of using kv_cache_dtype trigger (vllm-project#17331)

Signed-off-by: Randall Smith <Randall.Smith@amd.com>

* [UX] Add Feedback During CUDAGraph Capture (vllm-project#19501)

Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com>

* [CI/Build] Fix torch nightly CI dependencies (vllm-project#19505)

Signed-off-by: Richard Zou <zou3519@gmail.com>

* [CI] change spell checker from codespell to typos (vllm-project#18711)

Signed-off-by: Andy Xie <andy.xning@gmail.com>

* [BugFix] Force registration of w8a8_block_fp8_matmul_deepgemm via lazy import (vllm-project#19514)

Signed-off-by: Varun Sundar Rabindranath <vsundarr@redhat.com>
Co-authored-by: Varun Sundar Rabindranath <vsundarr@redhat.com>

* Add Triton Fused MoE kernel config for E=16 on B200 (vllm-project#19518)

Signed-off-by: Brayden Zhong <b8zhong@uwaterloo.ca>

* [Frontend] Improve error message in tool_choice validation (vllm-project#19239)

Signed-off-by: 22quinn <33176974+22quinn@users.noreply.github.com>

* [BugFix] Work-around incremental detokenization edge case error (vllm-project#19449)

Signed-off-by: Nick Hill <nhill@redhat.com>

* [BugFix] Handle missing sep_token for Qwen3-Reranker in Score API (vllm-project#19522)

Signed-off-by: strutive07 <strutive07@gmail.com>

* [AMD][Kernel][BugFix] fix test_rocm_compressed_tensors_w8a8 for rocm (vllm-project#19509)

Signed-off-by: Randall Smith <Randall.Smith@amd.com>

* Fix typo (vllm-project#19525)

Signed-off-by: 2niuhe <carlton2tang@gmail.com>

* [Security] Prevent new imports of (cloud)pickle (vllm-project#18018)

Signed-off-by: Russell Bryant <rbryant@redhat.com>
Co-authored-by: Aaron Pham <Aaronpham0103@gmail.com>

* [Bugfix][V1] Allow manual FlashAttention for Blackwell (vllm-project#19492)

Signed-off-by: mgoin <mgoin64@gmail.com>

* [Bugfix] Respect num-gpu-blocks-override in v1 (vllm-project#19503)

Signed-off-by: Jon Swenson <jmswen@gmail.com>

* [Quantization] Improve AWQ logic (vllm-project#19431)

Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>

* [Doc] Add V1 column to supported models list (vllm-project#19523)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [V1][NixlConnector] Drop `num_blocks` check  (vllm-project#19532)

Signed-off-by: NickLucche <nlucches@redhat.com>

* [Perf] Vectorize static / dynamic INT8 quant kernels (vllm-project#19233)

Signed-off-by: yewentao256 <zhyanwentao@126.com>

* Fix TorchAOConfig skip layers (vllm-project#19265)

Signed-off-by: mobicham <hicham@mobiuslabs.com>

* [torch.compile][ROCm] Fuse quantization onto attention using a torch.compile pass (vllm-project#16756)

Signed-off-by: Luka Govedič <lgovedic@redhat.com>
Co-authored-by: Sage Moore <sage@neuralmagic.com>

* [doc] Make top navigation sticky (vllm-project#19540)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Spec Decode][Benchmark] Generalize spec decode offline benchmark to more methods and datasets (vllm-project#18847)

* [Misc] Turn MOE_DP_CHUNK_SIZE into an env var (vllm-project#19506)

* [Bugfix] Enforce contiguous input for dynamic_per_token FP8/INT8 quant (vllm-project#19452)

Signed-off-by: mgoin <mgoin64@gmail.com>

* [Doc] Unify structured outputs examples (vllm-project#18196)

Signed-off-by: Aaron Pham <contact@aarnphm.xyz>

* [V1] Resolve failed concurrent structured output requests (vllm-project#19565)

Signed-off-by: Russell Bryant <rbryant@redhat.com>

* Revert "[Build/CI] Add tracing deps to vllm container image (vllm-project#15224)" (vllm-project#19378)

* [BugFix] : Fix Batched DeepGemm Experts (vllm-project#19515)

Signed-off-by: Varun Sundar Rabindranath <vsundarr@redhat.com>
Co-authored-by: Varun Sundar Rabindranath <vsundarr@redhat.com>

* [Bugfix] Fix EAGLE vocab embedding for multimodal target model (vllm-project#19570)

Signed-off-by: qizixi <qizixi@meta.com>

* [Doc] uses absolute links for structured outputs (vllm-project#19582)

Signed-off-by: Aaron Pham <contact@aarnphm.xyz>

* [doc] fix incorrect link (vllm-project#19586)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Misc] Correct broken docs link (vllm-project#19553)

Signed-off-by: Zerohertz <ohg3417@gmail.com>

* [CPU] Refine default config for the CPU backend (vllm-project#19539)

Signed-off-by: jiang1.li <jiang1.li@intel.com>

* [Fix] bump mistral common to support magistral (vllm-project#19533)

Signed-off-by: 汪志鹏 <wangzhipeng628@gmail.com>

* [Fix] The zip function in Python 3.9 does not have the strict argument (vllm-project#19549)

Signed-off-by: 汪志鹏 <wangzhipeng628@gmail.com>

* use base version for version comparison (vllm-project#19587)

Signed-off-by: Boyuan Feng <boyuan@meta.com>

* [torch.compile] reorganize the cache directory to support compiling multiple models (vllm-project#19064)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [BugFix] Honor `enable_caching` in connector-delayed kvcache load case (vllm-project#19435)

Signed-off-by: Nick Hill <nhill@redhat.com>

* [Model] Fix minimax model cache & lm_head precision (vllm-project#19592)

Signed-off-by: qingjun <qingjun@minimaxi.com>

* [Refactor] Remove unused variables in `moe_permute_unpermute_kernel.inl` (vllm-project#19573)

Signed-off-by: yewentao256 <zhyanwentao@126.com>

* [doc][mkdocs] fix the  duplicate Supported features sections in GPU docs (vllm-project#19606)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [CUDA] Enable full cudagraph for FlashMLA (vllm-project#18581)

Signed-off-by: luka <luka@neuralmagic.com>

* [Doc] Add troubleshooting section to k8s deployment (vllm-project#19377)

Signed-off-by: Anna Pendleton <pendleton@google.com>

* [torch.compile] Use custom ops when use_inductor=False (vllm-project#19618)

* Adding "AMD: Multi-step Tests" to amdproduction. (vllm-project#19508)

Signed-off-by: Yida Wu <yidawu@alumni.cmu.edu>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>

* [BugFix] Fix DP Coordinator incorrect debug log message (vllm-project#19624)

Signed-off-by: Nick Hill <nhill@redhat.com>

* [V1][Metrics] Deprecate metrics with gpu_ prefix for non GPU specific metrics. (vllm-project#18354)

Signed-off-by: Saheli Bhattacharjee <saheli@krai.ai>

* [Bugfix] Fix the speculative decoding test by setting the target dtype (vllm-project#19633)

* [Misc] Modularize CLI Argument Parsing in Benchmark Scripts (vllm-project#19593)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Bugfix] Fix auto dtype casting for BatchFeature (vllm-project#19316)

Signed-off-by: Isotr0py <2037008807@qq.com>
Signed-off-by: Isotr0py <mozf@mail2.sysu.edu.cn>

* [Hardware][NVIDIA][kernel] Fp4 MOE quant kernel optimization (vllm-project#19500)

* Only build CUTLASS MoE kernels on Hopper (vllm-project#19648)

* [Bugfix] Don't attempt to use triton if no driver is active (vllm-project#19561)

* [Fix] Convert kv_transfer_config from dict to KVTransferConfig (vllm-project#19262)

* [Perf] Further tunings for SM100 FP8 CUTLASS kernel (vllm-project#19566)

* [Bugfix][2/n] Fix speculative decoding CI - Fix test_ngram_e2e_greedy_correctness (vllm-project#19644)

* [Kernel] Raise verbose error and consolidate `num_heads/num_kv_heads` divisibility check (vllm-project#19339)

Signed-off-by: 22quinn <33176974+22quinn@users.noreply.github.com>

* [Benchmark] Refactor benchmark script for fp8 & int8 (vllm-project#19627)

Signed-off-by: yewentao256 <zhyanwentao@126.com>

* Enable prefix caching with full cuda graphs (vllm-project#19617)

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>

* [CI/Build] Fix torch nightly CI dependencies part 2 (vllm-project#19589)

* [Misc] Remove duplicate multiproc method setting for CPU platform (vllm-project#19649)

Signed-off-by: Isotr0py <2037008807@qq.com>

* [MISC] Remove unused variables in C++ (vllm-project#19609)

Signed-off-by: Lu Fang <lufang@fb.com>

* [Bugfix][Core] Prefix caching causes incorrect outputs due to outdated ComputedBlocksTracker (vllm-project#18957)

Signed-off-by: 刘全 <quan.liu2@dbappsecurity.com.cn>
Co-authored-by: 刘全 <quan.liu2@dbappsecurity.com.cn>

* [Misc][Frontend] passthrough `bad_words` (vllm-project#19564)

Signed-off-by: Francesco Bertolotti <francesco.bertolotti@igenius.ai>
Co-authored-by: Francesco Bertolotti <francesco.bertolotti@igenius.ai>
Co-authored-by: Aaron Pham <Aaronpham0103@gmail.com>

* [Misc] Fix skipped max-model-len validation when deriving max model length from tokenizer config (vllm-project#19660)

Signed-off-by: Ye (Charlotte) Qi <yeq@meta.com>

* [TPU] support attention head dim smaller than 128 (vllm-project#19620)

Signed-off-by: Chengji Yao <chengjiyao@google.com>
Co-authored-by: mgoin <mgoin64@gmail.com>

* [MISC] typo fix (vllm-project#19672)

Signed-off-by: Andy Xie <andy.xning@gmail.com>

* [CI] Add mteb testing for rerank models (vllm-project#19344)

* [Docs] Move multiproc doc to v1 dir (vllm-project#19651)

Signed-off-by: Russell Bryant <rbryant@redhat.com>

* [Kernel] GGUF MMVQ kernel for multiple input vectors (vllm-project#18754)

Signed-off-by: SzymonOzog <szymon.ozog@gmail.com>

* [BugFix] Don't catch BaseException when dumping execute_model errors (vllm-project#19626)

Signed-off-by: Nick Hill <nhill@redhat.com>

* [DOC] Add reasoning capability to vLLM streamlit code (vllm-project#19557)

* [Feature]:Allow for Granite MoE Hybrid models with _only_ shared experts. (vllm-project#19652)

Signed-off-by: Shawn Tan <shawntan@ibm.com>

* [Bugfix] Fix TP inference for Flex attention backend (vllm-project#19657)

Signed-off-by: Isotr0py <2037008807@qq.com>

* [MISC] bump huggingface_hub pkg to 0.33.0 (vllm-project#19547)

Signed-off-by: Andy Xie <andy.xning@gmail.com>

* [Bugfix] fix missing 'finish_reason': null in streaming chat (vllm-project#19662)

Signed-off-by: chaunceyjiang <chaunceyjiang@gmail.com>

* [Kernels] Use empty for modular MoE workspaces (vllm-project#19667)

Signed-off-by: Bill Nell <bnell@redhat.com>

* [Model] Add support for MiniMaxM1ForCausalLM (shares architecture with MiniMaxText01ForCausalLM) (vllm-project#19677)

Signed-off-by: QscQ <qscqesze@gmail.com>

* [V1] Change return type on get_multimodal_embeddings() (vllm-project#19446)

Signed-off-by: Russell Bryant <rbryant@redhat.com>

Signed-off-by: Conroy Cheers <conroy@corncheese.org>
Signed-off-by: windsonsea <haifeng.yao@daocloud.io>
Signed-off-by: Yinghai Lu <yinghai@thinkingmachines.ai>
Signed-off-by: Isotr0py <mozf@mail2.sysu.edu.cn>
Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>
Signed-off-by: liusiqian <liusiqian@tal.com>
Signed-off-by: Pavani Majety <pmajety@nvidia.com>
Signed-off-by: Ye (Charlotte) Qi <yeq@meta.com>
Signed-off-by: Tianyu Guo <guoty9@mail2.sysu.edu.cn>
Signed-off-by: Xiongfei Wei <isaacwxf23@gmail.com>
Signed-off-by: wangli <wangli858794774@gmail.com>
Signed-off-by: Anna Pendleton <pendleton@google.com>
Signed-off-by: Tsai, Louie <louie.tsai@intel.com>
Signed-off-by: Yunqiu Guo <guorachel@meta.com>
Signed-off-by: jiang.li <jiang1.li@intel.com>
Signed-off-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com>
Signed-off-by: py-andy-c <pychen1017@gmail.com>
Signed-off-by: niu_he <carlton2tang@gmail.com>
Signed-off-by: Junhao Li <junhao@ubicloud.com>
Signed-off-by: artetaout <lulala341@gmail.com>
Signed-off-by: ximing.wxm <ximing.wxm@antgroup.com>
Signed-off-by: Runzhen Wang <wangrunzhen@gmail.com>
Signed-off-by: David Xia <david@davidxia.com>
Signed-off-by: Bill Nell <bnell@redhat.com>
Signed-off-by: Randall Smith <Randall.Smith@amd.com>
Signed-off-by: Andy Xie <andy.xning@gmail.com>
Signed-off-by: Varun Sundar Rabindranath <vsundarr@redhat.com>
Signed-off-by: Brayden Zhong <b8zhong@uwaterloo.ca>
Signed-off-by: strutive07 <strutive07@gmail.com>
Signed-off-by: 2niuhe <carlton2tang@gmail.com>
Signed-off-by: NickLucche <nlucches@redhat.com>
Signed-off-by: yewentao256 <zhyanwentao@126.com>
Signed-off-by: mobicham <hicham@mobiuslabs.com>
Signed-off-by: Luka Govedič <lgovedic@redhat.com>
Signed-off-by: qizixi <qizixi@meta.com>
Signed-off-by: Zerohertz <ohg3417@gmail.com>
Signed-off-by: jiang1.li <jiang1.li@intel.com>
Signed-off-by: Boyuan Feng <boyuan@meta.com>
Signed-off-by: qingjun <qingjun@minimaxi.com>
Signed-off-by: Yida Wu <yidawu@alumni.cmu.edu>
Signed-off-by: Saheli Bhattacharjee <saheli@krai.ai>
Signed-off-by: 刘全 <quan.liu2@dbappsecurity.com.cn>
Signed-off-by: Francesco Bertolotti <francesco.bertolotti@igenius.ai>
Signed-off-by: SzymonOzog <szymon.ozog@gmail.com>
Signed-off-by: Shawn Tan <shawntan@ibm.com>
Signed-off-by: QscQ <qscqesze@gmail.com>
Co-authored-by: youkaichao <youkaichao@gmail.com>
Co-authored-by: Jee Jee Li <pandaleefree@gmail.com>
Co-authored-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Co-authored-by: Raushan Turganbay <raushan.turganbay@alumni.nu.edu.kz>
Co-authored-by: Lu Fang <30275821+houseroad@users.noreply.github.com>
Co-authored-by: Nicolò Lucchesi <nlucches@redhat.com>
Co-authored-by: CYJiang <86391540+googs1025@users.noreply.github.com>
Co-authored-by: Simon Mo <simon.mo@hey.com>
Co-authored-by: SorenDreano <71752785+SorenDreano@users.noreply.github.com>
Co-authored-by: Soren Dreano <soren@numind.ai>
Co-authored-by: Reid <61492567+reidliu41@users.noreply.github.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: Varun Sundar Rabindranath <varunsundar08@gmail.com>
Co-authored-by: Varun Sundar Rabindranath <vsundarr@redhat.com>
Co-authored-by: Yong Hoon Shin <48474650+sarckk@users.noreply.github.com>
Co-authored-by: Michael Goin <mgoin64@gmail.com>
Co-authored-by: Yikun Jiang <yikun@apache.org>
Co-authored-by: Chen Zhang <zhangch99@outlook.com>
Co-authored-by: Ekagra Ranjan <3116519+ekagra-ranjan@users.noreply.github.com>
Co-authored-by: Chauncey <chaunceyjiang@gmail.com>
Co-authored-by: Robert Shaw <114415538+robertgshaw2-redhat@users.noreply.github.com>
Co-authored-by: Yan Ru Pei <yanrpei@gmail.com>
Co-authored-by: Jiaxin Shan <seedjeffwan@gmail.com>
Co-authored-by: Russell Bryant <rbryant@redhat.com>
Co-authored-by: Mark McLoughlin <markmc@redhat.com>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>
Co-authored-by: Li, Jiang <jiang1.li@intel.com>
Co-authored-by: Lukas Geiger <lukas.geiger94@gmail.com>
Co-authored-by: Vadim Gimpelson <156319763+vadiklyutiy@users.noreply.github.com>
Co-authored-by: Calvin Chen <45745657+calvin0327@users.noreply.github.com>
Co-authored-by: Kaixi Hou <kaixih@nvidia.com>
Co-authored-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Co-authored-by: 汪志鹏 <wangzhipeng628@gmail.com>
Co-authored-by: Siyuan Liu <lsiyuan@google.com>
Co-authored-by: Seiji Eicher <58963096+eicherseiji@users.noreply.github.com>
Co-authored-by: Isotr0py <mozf@mail2.sysu.edu.cn>
Co-authored-by: wang.yuqi <noooop@126.com>
Co-authored-by: Cyrus Leung <tlleungac@connect.ust.hk>
Co-authored-by: Xu Wenqing <121550081+Xu-Wenqing@users.noreply.github.com>
Co-authored-by: Lain <fusiyuan2000@hotmail.com>
Co-authored-by: jmswen <jmswen@users.noreply.github.com>
Co-authored-by: Tyler Michael Smith <tyler@neuralmagic.com>
Co-authored-by: Kebe <mail@kebe7jun.com>
Co-authored-by: Nick Hill <nhill@redhat.com>
Co-authored-by: Yang Wang <elainewy@meta.com>
Co-authored-by: Huy Do <huydhn@gmail.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: vllmellm <vllm.ellm@embeddedllm.com>
Co-authored-by: 22quinn <33176974+22quinn@users.noreply.github.com>
Co-authored-by: Guillaume Calmettes <gcalmettes@scaleway.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Chiyue Wei <92623189+dubcyfor3@users.noreply.github.com>
Co-authored-by: Chiyue Wei <chiyuew@nvidia.com>
Co-authored-by: Povilas Kanapickas <povilas@radix.lt>
Co-authored-by: Dipika Sikka <dipikasikka1@gmail.com>
Co-authored-by: Luis Vega <vegaluisjose@users.noreply.github.com>
Co-authored-by: Luis Vega <2478335+vegaluisjose@users.noreply.github.com>
Co-authored-by: Jerry Zhang <jerryzh168@gmail.com>
Co-authored-by: Benjamin Chislett <benjamin.chislett@centml.ai>
Co-authored-by: Chengji Yao <chengjiyao@google.com>
Co-authored-by: Xu Song <xusong.vip@gmail.com>
Co-authored-by: Aaron Pham <contact@aarnphm.xyz>
Co-authored-by: Jinghui Zhang <jinghuizhang0804@gmail.com>
Co-authored-by: jinghui <jinghui@fb.com>
Co-authored-by: Richard Zou <zou3519@users.noreply.github.com>
Co-authored-by: Siqi Yan <ysq0807@hotmail.com>
Co-authored-by: Siqi Yan <siqi@meta.com>
Co-authored-by: Yu Guo <82124926+yuguo68@users.noreply.github.com>
Co-authored-by: Nishidha <nishidha.panpaliya@partner.ibm.com>
Co-authored-by: Md. Shafi Hussain <Md.Shafi.Hussain@ibm.com>
Co-authored-by: Adolfo Victoria <adolfokarim@gmail.com>
Co-authored-by: Adolfo Victoria <adovi@meta.com>
Co-authored-by: Chenyaaang <42742451+Chenyaaang@users.noreply.github.com>
Co-authored-by: Alexei-V-Ivanov-AMD <156011006+Alexei-V-Ivanov-AMD@users.noreply.github.com>
Co-authored-by: ElizaWszola <ewszola@redhat.com>
Co-authored-by: QiliangCui <derrhein@gmail.com>
Co-authored-by: Aaruni Aggarwal <47731267+AaruniAggarwal@users.noreply.github.com>
Co-authored-by: Driss Guessous <32754868+drisspg@users.noreply.github.com>
Co-authored-by: Lifans <draftbks@gmail.com>
Co-authored-by: pramenku <7664080+pramenku@users.noreply.github.com>
Co-authored-by: Luka Govedič <ProExpertProg@users.noreply.github.com>
Co-authored-by: Akash kaothalkar <61960177+Akashcodes732@users.noreply.github.com>
Co-authored-by: Akash Kaothalkar <akash.kaothalkar@ibm.com>
Co-authored-by: jennyyyyzhen <47012288+jennyyyyzhen@users.noreply.github.com>
Co-authored-by: yZhen <yZhen@fb.com>
Co-authored-by: Kseniya Parkhamchuk <43078183+KsuParkhamchuk@users.noreply.github.com>
Co-authored-by: Se7en <chengzw258@163.com>
Co-authored-by: Conroy Cheers <conroy@corncheese.org>
Co-authored-by: Michael Yao <haifeng.yao@daocloud.io>
Co-authored-by: Yinghai Lu <yinghai@thinkingmachines.ai>
Co-authored-by: Kyle Sayers <kylesayrs@gmail.com>
Co-authored-by: liusiqian-tal <141730978+liusiqian-tal@users.noreply.github.com>
Co-authored-by: Pavani Majety <pmajety@nvidia.com>
Co-authored-by: Ye (Charlotte) Qi <yeq@meta.com>
Co-authored-by: Tianyu Guo <guoty9@mail2.sysu.edu.cn>
Co-authored-by: XiongfeiWei <isaacwxf23@gmail.com>
Co-authored-by: Li Wang <wangli858794774@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Anna Pendleton <pendleton@google.com>
Co-authored-by: Louie Tsai <louie.tsai@intel.com>
Co-authored-by: Li, Jiang <bigpyj64@gmail.com>
Co-authored-by: Rachel Guo <35738743+YUNQIUGUO@users.noreply.github.com>
Co-authored-by: Isotr0py <2037008807@qq.com>
Co-authored-by: Gregory Shtrasberg <156009573+gshtras@users.noreply.github.com>
Co-authored-by: py-andy-c <37168711+py-andy-c@users.noreply.github.com>
Co-authored-by: niu_he <carlton2tang@gmail.com>
Co-authored-by: Junhao Li <junhao@ubicloud.com>
Co-authored-by: leopardracer <136604165+leopardracer@users.noreply.github.com>
Co-authored-by: artetaout <128046886+artetaout@users.noreply.github.com>
Co-authored-by: Ximingwang-09 <72070413+Ximingwang-09@users.noreply.github.com>
Co-authored-by: ximing.wxm <ximing.wxm@antgroup.com>
Co-authored-by: runzhen <wangrunzhen@gmail.com>
Co-authored-by: David Xia <david@davidxia.com>
Co-authored-by: bnellnm <49004751+bnellnm@users.noreply.github.com>
Co-authored-by: rasmith <Randall.Smith@amd.com>
Co-authored-by: Ning Xie <andy.xning@gmail.com>
Co-authored-by: Brayden Zhong <b8zhong@uwaterloo.ca>
Co-authored-by: wonjun Jang <strutive07@gmail.com>
Co-authored-by: Aaron Pham <Aaronpham0103@gmail.com>
Co-authored-by: Wentao Ye <44945378+yewentao256@users.noreply.github.com>
Co-authored-by: mobicham <37179323+mobicham@users.noreply.github.com>
Co-authored-by: Sage Moore <sage@neuralmagic.com>
Co-authored-by: kourosh hakhamaneshi <31483498+kouroshHakha@users.noreply.github.com>
Co-authored-by: qizixi <22851944+zixi-qi@users.noreply.github.com>
Co-authored-by: Hyogeun Oh (오효근) <ohg3417@gmail.com>
Co-authored-by: Boyuan Feng <fby.1994@gmail.com>
Co-authored-by: qscqesze <qingjun@minimaxi.com>
Co-authored-by: Concurrensee <yida.wu@amd.com>
Co-authored-by: Saheli Bhattacharjee <47847054+sahelib25@users.noreply.github.com>
Co-authored-by: jiahanc <173873397+jiahanc@users.noreply.github.com>
Co-authored-by: Konrad Zawora <kzawora@habana.ai>
Co-authored-by: maobaolong <baoloongmao@tencent.com>
Co-authored-by: Ilya Markov <markovilya197@gmail.com>
Co-authored-by: quanliu <33453350+quanliu1991@users.noreply.github.com>
Co-authored-by: 刘全 <quan.liu2@dbappsecurity.com.cn>
Co-authored-by: Francesco Bertolotti <f14.bertolotti@gmail.com>
Co-authored-by: Francesco Bertolotti <francesco.bertolotti@igenius.ai>
Co-authored-by: Szymon Ożóg <58388001+SzymonOzog@users.noreply.github.com>
Co-authored-by: Navanit Dubey <98005188+Navanit-git@users.noreply.github.com>
Co-authored-by: Shawn Tan <shawntan@ibm.com>
Co-authored-by: qscqesze <qscqesze@gmail.com>
amogkam added a commit to character-tech/vllm that referenced this pull request Jun 16, 2025
* [Bugfix] disable processor cache  (vllm-project#19068)

Signed-off-by: raushan <raushan@huggingface.co>

* [Doc] Improve the Pull Request template with key components (vllm-project#19086)

Signed-off-by: Lu Fang <lufang@fb.com>

* [Misc] Add missing `_Backend` enums (vllm-project#19081)

Signed-off-by: nicklucche <nlucches@redhat.com>

* [Misc] fix: add miss best_of param validation (vllm-project#18555)

Signed-off-by: googs1025 <googs1025@gmail.com>

* [Misc] Add SPDX-FileCopyrightText (vllm-project#19100)

Signed-off-by: simon-mo <simon.mo@hey.com>

* [Doc] Readme standardization (vllm-project#18695)

Co-authored-by: Soren Dreano <soren@numind.ai>

* [doc] update docker version (vllm-project#19074)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Kernel] DeepEP dispatch-combine kernel integration (vllm-project#18434)

Signed-off-by: Varun <vsundarr@redhat.com>
Co-authored-by: Varun Sundar Rabindranath <vsundarr@redhat.com>

* [V1] Support cross-layer KV sharing (vllm-project#18212)

Signed-off-by: Yong Hoon Shin <yhshin@meta.com>

* [Perf] Tune `scaled_fp8_quant` by increasing vectorization (vllm-project#18844)

Signed-off-by: mgoin <mgoin64@gmail.com>

* Fix interaction between `Optional` and `Annotated` in CLI typing (vllm-project#19093)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Co-authored-by: Yikun Jiang <yikun@apache.org>

* [v1] Re-init input batch for multiple kv cache groups (vllm-project#18654)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>

* [V1][Spec Decode][Ngram] 1.35x gain -> 1.95x gain on InstructCoder with prompt fix (vllm-project#18971)

* [Bugfix] get_num_blocks_to_allocate with null_block (vllm-project#19031)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>

* [Bugfix]: Fix the incompatibility issue with tool_choice 'required' when Thinking is enabled (vllm-project#19075)

Signed-off-by: chaunceyjiang <chaunceyjiang@gmail.com>

* [Bugfix][P/D] Fix Prefix Cache Bug (vllm-project#18411)

Signed-off-by: nicklucche <nlucches@redhat.com>
Co-authored-by: Robert Shaw <114415538+robertgshaw2-redhat@users.noreply.github.com>

* [Bugfix] Max concurrency estimation and check_enough_kv_cache_memory for models with sliding window layers (vllm-project#19029)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>

* feat: add data parallel rank to KVEventBatch (vllm-project#18925)

* [Misc] Fix path and python alias errors in disagg_prefill examples (vllm-project#18919)

* [Docs] Add developer doc about CI failures (vllm-project#18782)

Signed-off-by: Russell Bryant <rbryant@redhat.com>
Co-authored-by: Mark McLoughlin <markmc@redhat.com>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>

* [CPU] V1 support for the CPU backend (vllm-project#16441)

* [Core] Cast multimodal input in hf processor (vllm-project#18862)

Signed-off-by: Lukas Geiger <lukas.geiger94@gmail.com>

* [KERNEL] Sampler. CUDA kernel for applying repetition penalty (vllm-project#18437)

* [Cleanup][v1]: remove guided-decoding-backend for example (vllm-project#19059)

Signed-off-by: calvin chen <120380290@qq.com>

* [NVIDIA] Add Cutlass MLA backend (vllm-project#17625)

* [Bugfix] Fix FA3 full cuda graph correctness (vllm-project#19106)

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>

* Fix vllm-project#19130 (vllm-project#19132)

Signed-off-by: 汪志鹏 <wangzhipeng628@gmail.com>

* [TPU] Skip hanging tests (vllm-project#19115)

Signed-off-by: Siyuan Liu <lsiyuan@google.com>

* Fix ValueError: Missing value for tag key(s): model_name,engine. (vllm-project#19113)

Signed-off-by: Seiji Eicher <seiji@anyscale.com>

* [Misc] Add packages for benchmark as extra dependency (vllm-project#19089)

Signed-off-by: Isotr0py <2037008807@qq.com>

* Improve the output precision of embedding models (vllm-project#19092)

* [CI/Build][Bugfix] Ensure compatibility with transformers 4.52 (vllm-project#18678)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* Add DeepSeek-R1-0528 function call chat template (vllm-project#18874)

Signed-off-by: 许文卿 <xwq391974@alibaba-inc.com>

* Sm100 blockwise fp8 swap ab (vllm-project#18564)

* [Doc] Update V1 Guide for embedding models (vllm-project#19141)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* Allow AsyncLLMEngine.generate to target a specific DP rank (vllm-project#19102)

Signed-off-by: Jon Swenson <jmswen@gmail.com>

* [Bugfix][EP+DP] Fix internode check (vllm-project#19112)

Signed-off-by: Tyler Michael Smith <tysmith@redhat.com>

* [Perf] Tunings for SM100 FP8 CUTLASS kernel (vllm-project#18778)

Signed-off-by: mgoin <mgoin64@gmail.com>

* [TPU] Update dynamo dump file name in compilation test (vllm-project#19108)

Signed-off-by: Siyuan Liu <lsiyuan@google.com>

* [Bugfix] fix v1 cpu worker fails on macOS (vllm-project#19121)

* [Kernel] Integrate batched/masked deepgemm kernel (vllm-project#19111)

Signed-off-by: Varun <vsundarr@redhat.com>
Co-authored-by: Varun <vsundarr@redhat.com>

* [Misc] refactor: simplify EngineCoreClient.make_async_mp_client in AsyncLLM (vllm-project#18817)

Signed-off-by: googs1025 <googs1025@gmail.com>

* [P/D] Heterogeneous TP (vllm-project#18833)

Signed-off-by: nicklucche <nlucches@redhat.com>

* [doc] small fix (vllm-project#19167)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Bugfix][Nixl] Fix full prefix cache hit bug (vllm-project#18632)

Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com>
Signed-off-by: Nick Hill <nhill@redhat.com>
Co-authored-by: Nick Hill <nhill@redhat.com>

* [Bugfix] Fix port handling in make_zmq_path (vllm-project#19117)

* [Torch Nightly] add missing dependency (vllm-project#18770)

Signed-off-by: Yang Wang <elainewy@meta.com>

* Handle non-serializable objects when dumping benchmark results (vllm-project#19114)

* [BugFix][Minor] Fix full cuda graph bug when max_num_seqs < 512 (vllm-project#19171)

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>

* [Bugfix]: Fix the incompatibility issue with stream when Thinking is disabled (vllm-project#19135)

Signed-off-by: chaunceyjiang <chaunceyjiang@gmail.com>

* [Build] Annotate wheel and container path for release workflow (vllm-project#19162)

Signed-off-by: simon-mo <simon.mo@hey.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* [Misc] Remove unnecessary fallback to prefill-decode attention (vllm-project#19138)

Signed-off-by: vllmellm <vllm.ellm@embeddedllm.com>

* [Misc] Do not override NCCL_CUMEM_ENABLE if set explicitly (vllm-project#19105)

Signed-off-by: 22quinn <33176974+22quinn@users.noreply.github.com>

* [Frontend] improve vllm run-batch --help display (vllm-project#19187)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Bugfix] properly catch PIL-related errors for vision models when incorrect data urls are provided (vllm-project#19202)

Signed-off-by: Guillaume Calmettes <gcalmettes@scaleway.com>

* [mistral_common] Add v11 tokenizer (vllm-project#19193)

Signed-off-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Add H20-3e fused MoE kernel tuning configs for DeepSeek-R1/V3 (vllm-project#19205)

* [Hardware][NVIDIA] FP4 MoE kernel optimization (vllm-project#19110)

Signed-off-by: Chiyue Wei <chiyuew@nvidia.com>
Co-authored-by: Chiyue Wei <chiyuew@nvidia.com>

* [MISC][Bugfix] Use less CPU when message queue has been empty for some time (vllm-project#16226)

Signed-off-by: Povilas Kanapickas <povilas@radix.lt>

* [P/D][NixlConnector] Enable FlashInfer backend (vllm-project#19090)

* [Quantization] Skip Fp4 Test for `compressed-tensors` (vllm-project#19217)

* [V1] Use FlashInfer by default on Blackwell GPUs (vllm-project#19118)

* [Model] NemotronH support (vllm-project#18863)

Signed-off-by: Luis Vega <2478335+vegaluisjose@users.noreply.github.com>
Co-authored-by: Luis Vega <2478335+vegaluisjose@users.noreply.github.com>

* Fix AOPerModuleConfig name changes (vllm-project#18869)

Signed-off-by: Jerry Zhang <jerryzh168@gmail.com>

* [Bugfix] Fix EAGLE vocab embedding construction for Llama 70B (vllm-project#19033)

Signed-off-by: Benjamin Chislett <benjamin.chislett@centml.ai>

* [v1] Hybrid Memory Allocator (vllm-project#17996)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>

* [TPU] update torch_xla pin (vllm-project#19231)

Signed-off-by: Chengji Yao <chengjiyao@google.com>

* Support allowed_token_ids in ChatCompletionRequest (vllm-project#19143)

Signed-off-by: Xu Song <xusong.vip@gmail.com>

* [Chore] update CODEOWNERS (vllm-project#19247)

Signed-off-by: Aaron Pham <contact@aarnphm.xyz>

* [v1][P/D] Fix an edge case in kv cache schedule (vllm-project#19182)

Co-authored-by: jinghui <jinghui@fb.com>

* [TPU] fix kv cache dtype in model runner (vllm-project#19244)

Signed-off-by: Chengji Yao <chengjiyao@google.com>

* [Quantization] Bump compressed-tensors version; update NVFP4A16 test model (vllm-project#19224)

Signed-off-by: Dipika Sikka <dipikasikka1@gmail.com>

* [Docs] Improve V1 KVConnector interface documentation (vllm-project#19172)

Signed-off-by: Nick Hill <nhill@redhat.com>

* Fix CompilationConfig repr (vllm-project#19091)

Signed-off-by: rzou <zou3519@gmail.com>

* Unit Test for run_dp_sharded_vision_model (vllm-project#19103)

Signed-off-by: Siqi Yan <siqi@meta.com>
Co-authored-by: Siqi Yan <siqi@meta.com>

* [Model] Optimize nemotron_h implementation (vllm-project#19249)

Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>

* [Core] Raise when non-multi-instance DP clients target a DP rank (vllm-project#19227)

Signed-off-by: Jon Swenson <jmswen@gmail.com>

* improve logits bias (vllm-project#19041)

* Fixed ppc build when it runs on non-RHEL based linux distros (vllm-project#18422)

Signed-off-by: Nishidha Panpaliya <nishidha.panpaliya@partner.ibm.com>
Signed-off-by: Md. Shafi Hussain <Md.Shafi.Hussain@ibm.com>
Signed-off-by: npanpaliya <nishidha.panpaliya@partner.ibm.com>
Co-authored-by: Md. Shafi Hussain <Md.Shafi.Hussain@ibm.com>

* [BugFix] Fix MultiConnector test after HMA changes (vllm-project#19291)

Signed-off-by: Nick Hill <nhill@redhat.com>

* [Bugfix][Core] Update cancellation logic in `generate()` to handle Generator exits (vllm-project#19225)

Co-authored-by: Adolfo Victoria <adovi@meta.com>

* [Core] Fix abrupt request abort (vllm-project#18485)

Signed-off-by: nicklucche <nlucches@redhat.com>
Signed-off-by: Nick Hill <nhill@redhat.com>

Co-authored-by: Nick Hill <nhill@redhat.com>

* [BugFix] Fix tpu_model_runner block_id concatenation (vllm-project#19228)

Signed-off-by: Nick Hill <nhill@redhat.com>

* [Misc][Tools][Benchmark] Fix and improve auto tune script (vllm-project#19163)

Signed-off-by: Chenyaaang <chenyangli@google.com>

* [Build][ROCm] Update Dockerfile.rocm (vllm-project#19296)

Signed-off-by: Alexei V. Ivanov <alexei.ivanov@amd.com>

* [Easy][Test] Simplify test_function_tool_use with multiple parametrizes (vllm-project#19269)

Signed-off-by: Lu Fang <lufang@fb.com>

* [Kernel] Integrate CUTLASS MoE kernel with PPLX (vllm-project#18762)

Signed-off-by: ElizaWszola <ewszola@redhat.com>
Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com>
Co-authored-by: Tyler Michael Smith <tyler@neuralmagic.com>

* [TPU][Test] Add script to run benchmark on TPU for buildkite (vllm-project#19039)

Signed-off-by: Qiliang Cui <derrhein@gmail.com>

* [CI][PowerPC] Use a more appropriate way to select testcase in tests/models/language/pooling/test_embedding.py (vllm-project#19253)

Signed-off-by: Aaruni Aggarwal <aaruniagg@gmail.com>

* Add FlexAttention to V1 (vllm-project#16078)

Signed-off-by: drisspg <drisspguessous@gmail.com>

* [Misc] refactor context extension (vllm-project#19246)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [CI/Build] Improve Llama GGUF test robustness (vllm-project#19287)

Signed-off-by: Isotr0py <2037008807@qq.com>

* [Nit][Benchmark]Fix example in benchmark_serving_structured_output.py (vllm-project#19311)

Signed-off-by: Lifan Shen <lifans@meta.com>

* [AMD] Update compatible packaging version (vllm-project#19309)

Signed-off-by: pramkuma <Pramendra.Kumar@amd.com>

* [BugFix][V1] Fix memory profiling bug (vllm-project#18974)

Signed-off-by: luka <luka@neuralmagic.com>

* [Bugfix]: Fix TypeError: 'float' object cannot be interpreted as an integer (vllm-project#19283)

Signed-off-by: chaunceyjiang <chaunceyjiang@gmail.com>

* [Bugfix] Re-enable use_cudagraph in vLLM v1 (vllm-project#19299)

Signed-off-by: Richard Zou <zou3519@gmail.com>

* [Misc] Change tests/compile to use VLLM_V1 by default (vllm-project#19302)

Signed-off-by: rzou <zou3519@gmail.com>

* Add H20-3e fused MoE kernel tuning configs for Qwen3-235B-A22B (vllm-project#19315)

Signed-off-by: Xu Wenqing <xuwq1993@qq.com>

* [Hardware][POWER] Add IBM POWER11 Support to CPU Extension Detection (vllm-project#19082)

Signed-off-by: Akash Kaothalkar <akash.kaothalkar@ibm.com>
Co-authored-by: Akash Kaothalkar <akash.kaothalkar@ibm.com>

* [Quantization] Add compressed-tensors NVFP4 support (vllm-project#18312)

* [Multi Modal] Add an env var for message queue max chunk bytes (vllm-project#19242)

Signed-off-by: yZhen <yZhen@fb.com>
Co-authored-by: yZhen <yZhen@fb.com>

* [Bugfix] model_max_length should consider max_model_len in tokenizer_config (vllm-project#19201)

* [Deprecation] Remove `inputs` arg fallback in Engine classes (vllm-project#18799)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Misc] Add documentation update reminder to PR template (vllm-project#19289)

Signed-off-by: Isotr0py <2037008807@qq.com>

* [Frontend] Remove unreachable code from llm.py (vllm-project#19288)

Signed-off-by: KsuParkhamchuk <k.parkhamchuk@gmail.com>

* [Misc] Cleanup compilation tests (vllm-project#19343)

Signed-off-by: rzou <zou3519@gmail.com>

* [doc] improve ci doc (vllm-project#19307)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Doc] Fix description in the Automatic Prefix Caching design doc (vllm-project#19333)

Signed-off-by: cr7258 <chengzw258@163.com>

* [CI/Build] Fix LoRA test (vllm-project#19350)

Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>

* [Fix] Allow kernel compilation for CUDA capability 8.7 (vllm-project#19328)

Signed-off-by: Conroy Cheers <conroy@corncheese.org>

* [CI] Introduce rules for llama auto-label (vllm-project#19323)

Signed-off-by: Lu Fang <lufang@fb.com>

* [Docs] Fix a bullet list in usage/security.md (vllm-project#19358)

Signed-off-by: windsonsea <haifeng.yao@daocloud.io>

* [full_graph] Fix query_start_loc padding (vllm-project#19321)

Signed-off-by: Yinghai Lu <yinghai@thinkingmachines.ai>

* [v1] Add fp32 support to v1 engine through flex attn (vllm-project#19319)

Signed-off-by: Isotr0py <2037008807@qq.com>
Signed-off-by: Isotr0py <mozf@mail2.sysu.edu.cn>

* [Misc] Fixes and Optimizations for DeepEP + DeepGEMM combination. (vllm-project#19298)

Signed-off-by: Varun <vsundarr@redhat.com>
Co-authored-by: Varun <vsundarr@redhat.com>

* [Bugfix][Core] Prevent token lengths exceeding `max_model_len` in V0 (vllm-project#19348)

Signed-off-by: 22quinn <33176974+22quinn@users.noreply.github.com>

* [Quantization] Bump compressed-tensors version (vllm-project#19295)

Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>

* [Frontend] Make TIMEOUT_KEEP_ALIVE configurable through env var (vllm-project#18472)

Signed-off-by: liusiqian <liusiqian@tal.com>

* [TPU] Fix KV cache sharing tests (vllm-project#19371)

* [HOT-FIX] Add `kv_sharing_target_layer_name` argument to cutlass_mla backend (vllm-project#19374)

Signed-off-by: Pavani Majety <pmajety@nvidia.com>

* [Misc] Fix a config typo in disable_hybrid_kv_cache_manager configuration (vllm-project#19383)

Signed-off-by: Siyuan Liu <lsiyuan@google.com>

* [V1] Reuse V0's memory_profiling util for gpu worker memory profiling (vllm-project#19312)

Signed-off-by: Ye (Charlotte) Qi <yeq@meta.com>

* [Bugfix] Fix benchmark_moe.py (vllm-project#19016)

Signed-off-by: Tianyu Guo <guoty9@mail2.sysu.edu.cn>

* Use xla flag to improve the quantized model performance (vllm-project#19303)

Signed-off-by: Xiongfei Wei <isaacwxf23@gmail.com>

* Fix docs/mkdocs/hooks/remove_announcement.py (vllm-project#19382)

* [Frontend] Add tqdm_leave_pbar to control progress bar visibility (vllm-project#19357)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Core] Use tuple for kv cache group block ids (vllm-project#19175)

Signed-off-by: Nick Hill <nhill@redhat.com>

* [Bugfix] Fix modelscope token passed in (vllm-project#19389)

Signed-off-by: wangli <wangli858794774@gmail.com>
Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
Co-authored-by: Jee Jee Li <pandaleefree@gmail.com>

* [Core] Batch multi modal input using pinned memory (vllm-project#19169)

Signed-off-by: Lukas Geiger <lukas.geiger94@gmail.com>

* Add security warning to bug report template (vllm-project#19365)

Signed-off-by: Russell Bryant <rbryant@redhat.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* [Misc] refactor neuron_multimodal and profiling (vllm-project#19397)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* Add clear documentation around the impact of debugging flag (vllm-project#19369)

Signed-off-by: Anna Pendleton <pendleton@google.com>

* Automatically bind CPU OMP Threads of a rank to CPU ids of a NUMA node. (vllm-project#17930)

Signed-off-by: Tsai, Louie <louie.tsai@intel.com>
Co-authored-by: Li, Jiang <bigpyj64@gmail.com>

* Revert "[v1] Add fp32 support to v1 engine through flex attn" (vllm-project#19404)

* [BugFix][FlashInfer] Fix attention backend interface mismatch with unexpected keyword `use_irope` (vllm-project#19134)

Signed-off-by: Yunqiu Guo <guorachel@meta.com>

* [BugFix][CPU] Fix CPU CI by ignore collecting test_pixtral (vllm-project#19411)

Signed-off-by: jiang.li <jiang1.li@intel.com>

* Simplify ep kernels installation (vllm-project#19412)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [Misc] Slight improvement of the BNB (vllm-project#19418)

Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
Co-authored-by: Isotr0py <2037008807@qq.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* [Docs] Note that alternative structured output backends are supported (vllm-project#19426)

Signed-off-by: Russell Bryant <rbryant@redhat.com>

* [ROCm][V1] Adding ROCm to the list of platforms using V1 by default (vllm-project#19440)

Signed-off-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com>

* [Model] use AutoWeightsLoader for commandr (vllm-project#19399)

Signed-off-by: py-andy-c <pychen1017@gmail.com>

* Add H20-3e fused MoE kernel tuning configs for Qwen3-235B-A22B-FP8 (vllm-project#19401)

Signed-off-by: 许文卿 <xwq391974@alibaba-inc.com>

* [BugFix] Allow use_cudagraph to work with dynamic VLLM_USE_V1 (vllm-project#19390)

Signed-off-by: rzou <zou3519@gmail.com>

* [New Model]: Support Qwen3 Embedding & Reranker (vllm-project#19260)

* [BugFix] Fix docker build cpu-dev image error (vllm-project#19394)

Signed-off-by: niu_he <carlton2tang@gmail.com>

* Fix test_max_model_len in tests/entrypoints/llm/test_generate.py (vllm-project#19451)

Signed-off-by: Lu Fang <lufang@fb.com>

* [CI] Disable failing GGUF model test (vllm-project#19454)

Signed-off-by: mgoin <mgoin64@gmail.com>

* [Misc] Remove unused `MultiModalHasher.hash_prompt_mm_data` (vllm-project#19422)

Signed-off-by: Lukas Geiger <lukas.geiger94@gmail.com>

* Add fused MOE config for Qwen3 30B A3B on B200 (vllm-project#19455)

Signed-off-by: Junhao Li <junhao@ubicloud.com>

* Fix Typo in Documentation and Function Name (vllm-project#19442)

* [ROCm] Add rules to automatically label ROCm related PRs (vllm-project#19405)

Signed-off-by: Lu Fang <lufang@fb.com>

* [Kernel] Support deep_gemm for linear methods (vllm-project#19085)

Signed-off-by: artetaout <lulala341@gmail.com>

* [Doc] Update V1 User Guide for Hardware and Models (vllm-project#19474)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Doc] Fix quantization link titles (vllm-project#19478)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Doc] Support "important" and "announcement" admonitions (vllm-project#19479)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Misc] Reduce warning message introduced in env_override (vllm-project#19476)

Signed-off-by: Lu Fang <lufang@fb.com>

* Support non-string values in JSON keys from CLI (vllm-project#19471)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* Add cache to cuda get_device_capability (vllm-project#19436)

Signed-off-by: mgoin <mgoin64@gmail.com>

* Fix some typo (vllm-project#19475)

Signed-off-by: ximing.wxm <ximing.wxm@antgroup.com>
Co-authored-by: ximing.wxm <ximing.wxm@antgroup.com>

* Support no privileged mode on CPU for docker and kubernetes deployments (vllm-project#19241)

Signed-off-by: Tsai, Louie <louie.tsai@intel.com>

* [Bugfix] Update the example code, make it work with the latest lmcache (vllm-project#19453)

Signed-off-by: Runzhen Wang <wangrunzhen@gmail.com>

* [CI] Update FlashInfer to 0.2.6.post1 (vllm-project#19297)

Signed-off-by: mgoin <mgoin64@gmail.com>

* [doc] fix "Other AI accelerators" getting started page (vllm-project#19457)

Signed-off-by: David Xia <david@davidxia.com>

* [Misc] Fix misleading ROCm warning (vllm-project#19486)

Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>

* [Docs] Remove WIP features in V1 guide (vllm-project#19498)

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>

* [Kernels] Add activation chunking logic to FusedMoEModularKernel (vllm-project#19168)

Signed-off-by: Bill Nell <bnell@redhat.com>

* [AMD] [Quantization] Add override flag for attention dtype instead of using kv_cache_dtype trigger (vllm-project#17331)

Signed-off-by: Randall Smith <Randall.Smith@amd.com>

* [UX] Add Feedback During CUDAGraph Capture (vllm-project#19501)

Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com>

* [CI/Build] Fix torch nightly CI dependencies (vllm-project#19505)

Signed-off-by: Richard Zou <zou3519@gmail.com>

* [CI] change spell checker from codespell to typos (vllm-project#18711)

Signed-off-by: Andy Xie <andy.xning@gmail.com>

* [BugFix] Force registration of w8a8_block_fp8_matmul_deepgemm via lazy import (vllm-project#19514)

Signed-off-by: Varun Sundar Rabindranath <vsundarr@redhat.com>
Co-authored-by: Varun Sundar Rabindranath <vsundarr@redhat.com>

* Add Triton Fused MoE kernel config for E=16 on B200 (vllm-project#19518)

Signed-off-by: Brayden Zhong <b8zhong@uwaterloo.ca>

* [Frontend] Improve error message in tool_choice validation (vllm-project#19239)

Signed-off-by: 22quinn <33176974+22quinn@users.noreply.github.com>

* [BugFix] Work-around incremental detokenization edge case error (vllm-project#19449)

Signed-off-by: Nick Hill <nhill@redhat.com>

* [BugFix] Handle missing sep_token for Qwen3-Reranker in Score API (vllm-project#19522)

Signed-off-by: strutive07 <strutive07@gmail.com>

* [AMD][Kernel][BugFix] fix test_rocm_compressed_tensors_w8a8 for rocm (vllm-project#19509)

Signed-off-by: Randall Smith <Randall.Smith@amd.com>

* Fix typo (vllm-project#19525)

Signed-off-by: 2niuhe <carlton2tang@gmail.com>

* [Security] Prevent new imports of (cloud)pickle (vllm-project#18018)

Signed-off-by: Russell Bryant <rbryant@redhat.com>
Co-authored-by: Aaron Pham <Aaronpham0103@gmail.com>

* [Bugfix][V1] Allow manual FlashAttention for Blackwell (vllm-project#19492)

Signed-off-by: mgoin <mgoin64@gmail.com>

* [Bugfix] Respect num-gpu-blocks-override in v1 (vllm-project#19503)

Signed-off-by: Jon Swenson <jmswen@gmail.com>

* [Quantization] Improve AWQ logic (vllm-project#19431)

Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>

* [Doc] Add V1 column to supported models list (vllm-project#19523)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [V1][NixlConnector] Drop `num_blocks` check  (vllm-project#19532)

Signed-off-by: NickLucche <nlucches@redhat.com>

* [Perf] Vectorize static / dynamic INT8 quant kernels (vllm-project#19233)

Signed-off-by: yewentao256 <zhyanwentao@126.com>

* Fix TorchAOConfig skip layers (vllm-project#19265)

Signed-off-by: mobicham <hicham@mobiuslabs.com>

* [torch.compile][ROCm] Fuse quantization onto attention using a torch.compile pass (vllm-project#16756)

Signed-off-by: Luka Govedič <lgovedic@redhat.com>
Co-authored-by: Sage Moore <sage@neuralmagic.com>

* [doc] Make top navigation sticky (vllm-project#19540)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Spec Decode][Benchmark] Generalize spec decode offline benchmark to more methods and datasets (vllm-project#18847)

* [Misc] Turn MOE_DP_CHUNK_SIZE into an env var (vllm-project#19506)

* [Bugfix] Enforce contiguous input for dynamic_per_token FP8/INT8 quant (vllm-project#19452)

Signed-off-by: mgoin <mgoin64@gmail.com>

* [Doc] Unify structured outputs examples (vllm-project#18196)

Signed-off-by: Aaron Pham <contact@aarnphm.xyz>

* [V1] Resolve failed concurrent structured output requests (vllm-project#19565)

Signed-off-by: Russell Bryant <rbryant@redhat.com>

* Revert "[Build/CI] Add tracing deps to vllm container image (vllm-project#15224)" (vllm-project#19378)

* [BugFix] : Fix Batched DeepGemm Experts (vllm-project#19515)

Signed-off-by: Varun Sundar Rabindranath <vsundarr@redhat.com>
Co-authored-by: Varun Sundar Rabindranath <vsundarr@redhat.com>

* [Bugfix] Fix EAGLE vocab embedding for multimodal target model (vllm-project#19570)

Signed-off-by: qizixi <qizixi@meta.com>

* [Doc] uses absolute links for structured outputs (vllm-project#19582)

Signed-off-by: Aaron Pham <contact@aarnphm.xyz>

* [doc] fix incorrect link (vllm-project#19586)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Misc] Correct broken docs link (vllm-project#19553)

Signed-off-by: Zerohertz <ohg3417@gmail.com>

* [CPU] Refine default config for the CPU backend (vllm-project#19539)

Signed-off-by: jiang1.li <jiang1.li@intel.com>

* [Fix] bump mistral common to support magistral (vllm-project#19533)

Signed-off-by: 汪志鹏 <wangzhipeng628@gmail.com>

* [Fix] The zip function in Python 3.9 does not have the strict argument (vllm-project#19549)

Signed-off-by: 汪志鹏 <wangzhipeng628@gmail.com>

* use base version for version comparison (vllm-project#19587)

Signed-off-by: Boyuan Feng <boyuan@meta.com>

* [torch.compile] reorganize the cache directory to support compiling multiple models (vllm-project#19064)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [BugFix] Honor `enable_caching` in connector-delayed kvcache load case (vllm-project#19435)

Signed-off-by: Nick Hill <nhill@redhat.com>

* [Model] Fix minimax model cache & lm_head precision (vllm-project#19592)

Signed-off-by: qingjun <qingjun@minimaxi.com>

* [Refactor] Remove unused variables in `moe_permute_unpermute_kernel.inl` (vllm-project#19573)

Signed-off-by: yewentao256 <zhyanwentao@126.com>

* [doc][mkdocs] fix the duplicate Supported features sections in GPU docs (vllm-project#19606)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [CUDA] Enable full cudagraph for FlashMLA (vllm-project#18581)

Signed-off-by: luka <luka@neuralmagic.com>

* [Doc] Add troubleshooting section to k8s deployment (vllm-project#19377)

Signed-off-by: Anna Pendleton <pendleton@google.com>

* [torch.compile] Use custom ops when use_inductor=False (vllm-project#19618)

* Adding "AMD: Multi-step Tests" to amdproduction. (vllm-project#19508)

Signed-off-by: Yida Wu <yidawu@alumni.cmu.edu>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>

* [BugFix] Fix DP Coordinator incorrect debug log message (vllm-project#19624)

Signed-off-by: Nick Hill <nhill@redhat.com>

* [V1][Metrics] Deprecate metrics with gpu_ prefix for non GPU specific metrics. (vllm-project#18354)

Signed-off-by: Saheli Bhattacharjee <saheli@krai.ai>

* [Bugfix] Fix the speculative decoding test by setting the target dtype (vllm-project#19633)

* [Misc] Modularize CLI Argument Parsing in Benchmark Scripts (vllm-project#19593)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Bugfix] Fix auto dtype casting for BatchFeature (vllm-project#19316)

Signed-off-by: Isotr0py <2037008807@qq.com>
Signed-off-by: Isotr0py <mozf@mail2.sysu.edu.cn>

* [Hardware][NVIDIA][kernel] Fp4 MOE quant kernel optimization (vllm-project#19500)

* Only build CUTLASS MoE kernels on Hopper (vllm-project#19648)

* [Bugfix] Don't attempt to use triton if no driver is active (vllm-project#19561)

* [Fix] Convert kv_transfer_config from dict to KVTransferConfig (vllm-project#19262)

* [Perf] Further tunings for SM100 FP8 CUTLASS kernel (vllm-project#19566)

* [Bugfix][2/n] Fix speculative decoding CI - Fix test_ngram_e2e_greedy_correctness (vllm-project#19644)

* [Kernel] Raise verbose error and consolidate `num_heads/num_kv_heads` divisibility check (vllm-project#19339)

Signed-off-by: 22quinn <33176974+22quinn@users.noreply.github.com>

* [Benchmark] Refactor benchmark script for fp8 & int8 (vllm-project#19627)

Signed-off-by: yewentao256 <zhyanwentao@126.com>

* Enable prefix caching with full cuda graphs (vllm-project#19617)

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>

* [CI/Build] Fix torch nightly CI dependencies part 2 (vllm-project#19589)

* [Misc] Remove duplicate multiproc method setting for CPU platform (vllm-project#19649)

Signed-off-by: Isotr0py <2037008807@qq.com>

* [MISC] Remove unused variables in C++ (vllm-project#19609)

Signed-off-by: Lu Fang <lufang@fb.com>

* [Bugfix][Core] Prefix caching causes incorrect outputs due to outdated ComputedBlocksTracker (vllm-project#18957)

Signed-off-by: 刘全 <quan.liu2@dbappsecurity.com.cn>
Co-authored-by: 刘全 <quan.liu2@dbappsecurity.com.cn>

* [Misc][Frontend] passthrough `bad_words` (vllm-project#19564)

Signed-off-by: Francesco Bertolotti <francesco.bertolotti@igenius.ai>
Co-authored-by: Francesco Bertolotti <francesco.bertolotti@igenius.ai>
Co-authored-by: Aaron Pham <Aaronpham0103@gmail.com>

* [Misc] Fix skipped max-model-len validation when deriving max model length from tokenizer config (vllm-project#19660)

Signed-off-by: Ye (Charlotte) Qi <yeq@meta.com>

* [TPU] support attention head dim smaller than 128 (vllm-project#19620)

Signed-off-by: Chengji Yao <chengjiyao@google.com>
Co-authored-by: mgoin <mgoin64@gmail.com>

* [MISC] typo fix (vllm-project#19672)

Signed-off-by: Andy Xie <andy.xning@gmail.com>

* [CI] Add mteb testing for rerank models (vllm-project#19344)

* [Docs] Move multiproc doc to v1 dir (vllm-project#19651)

Signed-off-by: Russell Bryant <rbryant@redhat.com>

* [Kernel] GGUF MMVQ kernel for multiple input vectors (vllm-project#18754)

Signed-off-by: SzymonOzog <szymon.ozog@gmail.com>

* [BugFix] Don't catch BaseException when dumping execute_model errors (vllm-project#19626)

Signed-off-by: Nick Hill <nhill@redhat.com>

* [DOC] Add reasoning capability to vLLM streamlit code (vllm-project#19557)

* [Feature]: Allow for Granite MoE Hybrid models with _only_ shared experts. (vllm-project#19652)

Signed-off-by: Shawn Tan <shawntan@ibm.com>

* [Bugfix] Fix TP inference for Flex attention backend (vllm-project#19657)

Signed-off-by: Isotr0py <2037008807@qq.com>

* [MISC] bump huggingface_hub pkg to 0.33.0 (vllm-project#19547)

Signed-off-by: Andy Xie <andy.xning@gmail.com>

* [Bugfix] fix missing 'finish_reason': null in streaming chat (vllm-project#19662)

Signed-off-by: chaunceyjiang <chaunceyjiang@gmail.com>

* [Kernels] Use empty for modular MoE workspaces (vllm-project#19667)

Signed-off-by: Bill Nell <bnell@redhat.com>

* [Model] Add support for MiniMaxM1ForCausalLM (shares architecture with MiniMaxText01ForCausalLM) (vllm-project#19677)

Signed-off-by: QscQ <qscqesze@gmail.com>

* [V1] Change return type on get_multimodal_embeddings() (vllm-project#19446)

Signed-off-by: Russell Bryant <rbryant@redhat.com>

* fix

Signed-off-by: Amog Kamsetty <amogkamsetty@gmail.com>

* remove logging

Signed-off-by: Amog Kamsetty <amogkamsetty@gmail.com>

---------

Signed-off-by: raushan <raushan@huggingface.co>
Signed-off-by: Lu Fang <lufang@fb.com>
Signed-off-by: nicklucche <nlucches@redhat.com>
Signed-off-by: googs1025 <googs1025@gmail.com>
Signed-off-by: simon-mo <simon.mo@hey.com>
Signed-off-by: reidliu41 <reid201711@gmail.com>
Signed-off-by: Varun <vsundarr@redhat.com>
Signed-off-by: Yong Hoon Shin <yhshin@meta.com>
Signed-off-by: mgoin <mgoin64@gmail.com>
Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Signed-off-by: Chen Zhang <zhangch99@outlook.com>
Signed-off-by: chaunceyjiang <chaunceyjiang@gmail.com>
Signed-off-by: Russell Bryant <rbryant@redhat.com>
Signed-off-by: Lukas Geiger <lukas.geiger94@gmail.com>
Signed-off-by: calvin chen <120380290@qq.com>
Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Signed-off-by: 汪志鹏 <wangzhipeng628@gmail.com>
Signed-off-by: Siyuan Liu <lsiyuan@google.com>
Signed-off-by: Seiji Eicher <seiji@anyscale.com>
Signed-off-by: Isotr0py <2037008807@qq.com>
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
Signed-off-by: 许文卿 <xwq391974@alibaba-inc.com>
Signed-off-by: Jon Swenson <jmswen@gmail.com>
Signed-off-by: Tyler Michael Smith <tysmith@redhat.com>
Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com>
Signed-off-by: Nick Hill <nhill@redhat.com>
Signed-off-by: Yang Wang <elainewy@meta.com>
Signed-off-by: vllmellm <vllm.ellm@embeddedllm.com>
Signed-off-by: 22quinn <33176974+22quinn@users.noreply.github.com>
Signed-off-by: Guillaume Calmettes <gcalmettes@scaleway.com>
Signed-off-by: Patrick von Platen <patrick.v.platen@gmail.com>
Signed-off-by: Chiyue Wei <chiyuew@nvidia.com>
Signed-off-by: Povilas Kanapickas <povilas@radix.lt>
Signed-off-by: Luis Vega <2478335+vegaluisjose@users.noreply.github.com>
Signed-off-by: Jerry Zhang <jerryzh168@gmail.com>
Signed-off-by: Benjamin Chislett <benjamin.chislett@centml.ai>
Signed-off-by: Chengji Yao <chengjiyao@google.com>
Signed-off-by: Xu Song <xusong.vip@gmail.com>
Signed-off-by: Aaron Pham <contact@aarnphm.xyz>
Signed-off-by: Dipika Sikka <dipikasikka1@gmail.com>
Signed-off-by: rzou <zou3519@gmail.com>
Signed-off-by: Siqi Yan <siqi@meta.com>
Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
Signed-off-by: Nishidha Panpaliya <nishidha.panpaliya@partner.ibm.com>
Signed-off-by: Md. Shafi Hussain <Md.Shafi.Hussain@ibm.com>
Signed-off-by: npanpaliya <nishidha.panpaliya@partner.ibm.com>
Signed-off-by: Chenyaaang <chenyangli@google.com>
Signed-off-by: Alexei V. Ivanov <alexei.ivanov@amd.com>
Signed-off-by: ElizaWszola <ewszola@redhat.com>
Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com>
Signed-off-by: Qiliang Cui <derrhein@gmail.com>
Signed-off-by: Aaruni Aggarwal <aaruniagg@gmail.com>
Signed-off-by: drisspg <drisspguessous@gmail.com>
Signed-off-by: Lifan Shen <lifans@meta.com>
Signed-off-by: pramkuma <Pramendra.Kumar@amd.com>
Signed-off-by: luka <luka@neuralmagic.com>
Signed-off-by: Richard Zou <zou3519@gmail.com>
Signed-off-by: Xu Wenqing <xuwq1993@qq.com>
Signed-off-by: Akash Kaothalkar <akash.kaothalkar@ibm.com>
Signed-off-by: yZhen <yZhen@fb.com>
Signed-off-by: KsuParkhamchuk <k.parkhamchuk@gmail.com>
Signed-off-by: cr7258 <chengzw258@163.com>
Signed-off-by: Conroy Cheers <conroy@corncheese.org>
Signed-off-by: windsonsea <haifeng.yao@daocloud.io>
Signed-off-by: Yinghai Lu <yinghai@thinkingmachines.ai>
Signed-off-by: Isotr0py <mozf@mail2.sysu.edu.cn>
Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>
Signed-off-by: liusiqian <liusiqian@tal.com>
Signed-off-by: Pavani Majety <pmajety@nvidia.com>
Signed-off-by: Ye (Charlotte) Qi <yeq@meta.com>
Signed-off-by: Tianyu Guo <guoty9@mail2.sysu.edu.cn>
Signed-off-by: Xiongfei Wei <isaacwxf23@gmail.com>
Signed-off-by: wangli <wangli858794774@gmail.com>
Signed-off-by: Anna Pendleton <pendleton@google.com>
Signed-off-by: Tsai, Louie <louie.tsai@intel.com>
Signed-off-by: Yunqiu Guo <guorachel@meta.com>
Signed-off-by: jiang.li <jiang1.li@intel.com>
Signed-off-by: youkaichao <youkaichao@gmail.com>
Signed-off-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com>
Signed-off-by: py-andy-c <pychen1017@gmail.com>
Signed-off-by: niu_he <carlton2tang@gmail.com>
Signed-off-by: Junhao Li <junhao@ubicloud.com>
Signed-off-by: artetaout <lulala341@gmail.com>
Signed-off-by: ximing.wxm <ximing.wxm@antgroup.com>
Signed-off-by: Runzhen Wang <wangrunzhen@gmail.com>
Signed-off-by: David Xia <david@davidxia.com>
Signed-off-by: Bill Nell <bnell@redhat.com>
Signed-off-by: Randall Smith <Randall.Smith@amd.com>
Signed-off-by: Andy Xie <andy.xning@gmail.com>
Signed-off-by: Varun Sundar Rabindranath <vsundarr@redhat.com>
Signed-off-by: Brayden Zhong <b8zhong@uwaterloo.ca>
Signed-off-by: strutive07 <strutive07@gmail.com>
Signed-off-by: 2niuhe <carlton2tang@gmail.com>
Signed-off-by: NickLucche <nlucches@redhat.com>
Signed-off-by: yewentao256 <zhyanwentao@126.com>
Signed-off-by: mobicham <hicham@mobiuslabs.com>
Signed-off-by: Luka Govedič <lgovedic@redhat.com>
Signed-off-by: qizixi <qizixi@meta.com>
Signed-off-by: Zerohertz <ohg3417@gmail.com>
Signed-off-by: jiang1.li <jiang1.li@intel.com>
Signed-off-by: Boyuan Feng <boyuan@meta.com>
Signed-off-by: qingjun <qingjun@minimaxi.com>
Signed-off-by: Yida Wu <yidawu@alumni.cmu.edu>
Signed-off-by: Saheli Bhattacharjee <saheli@krai.ai>
Signed-off-by: 刘全 <quan.liu2@dbappsecurity.com.cn>
Signed-off-by: Francesco Bertolotti <francesco.bertolotti@igenius.ai>
Signed-off-by: SzymonOzog <szymon.ozog@gmail.com>
Signed-off-by: Shawn Tan <shawntan@ibm.com>
Signed-off-by: QscQ <qscqesze@gmail.com>
Signed-off-by: Amog Kamsetty <amogkamsetty@gmail.com>
Co-authored-by: Raushan Turganbay <raushan.turganbay@alumni.nu.edu.kz>
Co-authored-by: Lu Fang <30275821+houseroad@users.noreply.github.com>
Co-authored-by: Nicolò Lucchesi <nlucches@redhat.com>
Co-authored-by: CYJiang <86391540+googs1025@users.noreply.github.com>
Co-authored-by: Simon Mo <simon.mo@hey.com>
Co-authored-by: SorenDreano <71752785+SorenDreano@users.noreply.github.com>
Co-authored-by: Soren Dreano <soren@numind.ai>
Co-authored-by: Reid <61492567+reidliu41@users.noreply.github.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: Varun Sundar Rabindranath <varunsundar08@gmail.com>
Co-authored-by: Varun Sundar Rabindranath <vsundarr@redhat.com>
Co-authored-by: Yong Hoon Shin <48474650+sarckk@users.noreply.github.com>
Co-authored-by: Michael Goin <mgoin64@gmail.com>
Co-authored-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Co-authored-by: Yikun Jiang <yikun@apache.org>
Co-authored-by: Chen Zhang <zhangch99@outlook.com>
Co-authored-by: Ekagra Ranjan <3116519+ekagra-ranjan@users.noreply.github.com>
Co-authored-by: Chauncey <chaunceyjiang@gmail.com>
Co-authored-by: Robert Shaw <114415538+robertgshaw2-redhat@users.noreply.github.com>
Co-authored-by: Yan Ru Pei <yanrpei@gmail.com>
Co-authored-by: Jiaxin Shan <seedjeffwan@gmail.com>
Co-authored-by: Russell Bryant <rbryant@redhat.com>
Co-authored-by: Mark McLoughlin <markmc@redhat.com>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>
Co-authored-by: Li, Jiang <jiang1.li@intel.com>
Co-authored-by: Lukas Geiger <lukas.geiger94@gmail.com>
Co-authored-by: Vadim Gimpelson <156319763+vadiklyutiy@users.noreply.github.com>
Co-authored-by: Calvin Chen <45745657+calvin0327@users.noreply.github.com>
Co-authored-by: Kaixi Hou <kaixih@nvidia.com>
Co-authored-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Co-authored-by: 汪志鹏 <wangzhipeng628@gmail.com>
Co-authored-by: Siyuan Liu <lsiyuan@google.com>
Co-authored-by: Seiji Eicher <58963096+eicherseiji@users.noreply.github.com>
Co-authored-by: Isotr0py <mozf@mail2.sysu.edu.cn>
Co-authored-by: wang.yuqi <noooop@126.com>
Co-authored-by: Cyrus Leung <tlleungac@connect.ust.hk>
Co-authored-by: Xu Wenqing <121550081+Xu-Wenqing@users.noreply.github.com>
Co-authored-by: Lain <fusiyuan2000@hotmail.com>
Co-authored-by: jmswen <jmswen@users.noreply.github.com>
Co-authored-by: Tyler Michael Smith <tyler@neuralmagic.com>
Co-authored-by: Kebe <mail@kebe7jun.com>
Co-authored-by: Nick Hill <nhill@redhat.com>
Co-authored-by: Yang Wang <elainewy@meta.com>
Co-authored-by: Huy Do <huydhn@gmail.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: vllmellm <vllm.ellm@embeddedllm.com>
Co-authored-by: 22quinn <33176974+22quinn@users.noreply.github.com>
Co-authored-by: Guillaume Calmettes <gcalmettes@scaleway.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Chiyue Wei <92623189+dubcyfor3@users.noreply.github.com>
Co-authored-by: Chiyue Wei <chiyuew@nvidia.com>
Co-authored-by: Povilas Kanapickas <povilas@radix.lt>
Co-authored-by: Dipika Sikka <dipikasikka1@gmail.com>
Co-authored-by: Luis Vega <vegaluisjose@users.noreply.github.com>
Co-authored-by: Luis Vega <2478335+vegaluisjose@users.noreply.github.com>
Co-authored-by: Jerry Zhang <jerryzh168@gmail.com>
Co-authored-by: Benjamin Chislett <benjamin.chislett@centml.ai>
Co-authored-by: Chengji Yao <chengjiyao@google.com>
Co-authored-by: Xu Song <xusong.vip@gmail.com>
Co-authored-by: Aaron Pham <contact@aarnphm.xyz>
Co-authored-by: Jinghui Zhang <jinghuizhang0804@gmail.com>
Co-authored-by: jinghui <jinghui@fb.com>
Co-authored-by: Richard Zou <zou3519@users.noreply.github.com>
Co-authored-by: Siqi Yan <ysq0807@hotmail.com>
Co-authored-by: Siqi Yan <siqi@meta.com>
Co-authored-by: Jee Jee Li <pandaleefree@gmail.com>
Co-authored-by: Yu Guo <82124926+yuguo68@users.noreply.github.com>
Co-authored-by: Nishidha <nishidha.panpaliya@partner.ibm.com>
Co-authored-by: Md. Shafi Hussain <Md.Shafi.Hussain@ibm.com>
Co-authored-by: Adolfo Victoria <adolfokarim@gmail.com>
Co-authored-by: Adolfo Victoria <adovi@meta.com>
Co-authored-by: Chenyaaang <42742451+Chenyaaang@users.noreply.github.com>
Co-authored-by: Alexei-V-Ivanov-AMD <156011006+Alexei-V-Ivanov-AMD@users.noreply.github.com>
Co-authored-by: ElizaWszola <ewszola@redhat.com>
Co-authored-by: QiliangCui <derrhein@gmail.com>
Co-authored-by: Aaruni Aggarwal <47731267+AaruniAggarwal@users.noreply.github.com>
Co-authored-by: Driss Guessous <32754868+drisspg@users.noreply.github.com>
Co-authored-by: Lifans <draftbks@gmail.com>
Co-authored-by: pramenku <7664080+pramenku@users.noreply.github.com>
Co-authored-by: Luka Govedič <ProExpertProg@users.noreply.github.com>
Co-authored-by: Akash kaothalkar <61960177+Akashcodes732@users.noreply.github.com>
Co-authored-by: Akash Kaothalkar <akash.kaothalkar@ibm.com>
Co-authored-by: jennyyyyzhen <47012288+jennyyyyzhen@users.noreply.github.com>
Co-authored-by: yZhen <yZhen@fb.com>
Co-authored-by: Kseniya Parkhamchuk <43078183+KsuParkhamchuk@users.noreply.github.com>
Co-authored-by: Se7en <chengzw258@163.com>
Co-authored-by: Conroy Cheers <conroy@corncheese.org>
Co-authored-by: Michael Yao <haifeng.yao@daocloud.io>
Co-authored-by: Yinghai Lu <yinghai@thinkingmachines.ai>
Co-authored-by: Kyle Sayers <kylesayrs@gmail.com>
Co-authored-by: liusiqian-tal <141730978+liusiqian-tal@users.noreply.github.com>
Co-authored-by: Pavani Majety <pmajety@nvidia.com>
Co-authored-by: Ye (Charlotte) Qi <yeq@meta.com>
Co-authored-by: Tianyu Guo <guoty9@mail2.sysu.edu.cn>
Co-authored-by: XiongfeiWei <isaacwxf23@gmail.com>
Co-authored-by: Li Wang <wangli858794774@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Anna Pendleton <pendleton@google.com>
Co-authored-by: Louie Tsai <louie.tsai@intel.com>
Co-authored-by: Li, Jiang <bigpyj64@gmail.com>
Co-authored-by: Rachel Guo <35738743+YUNQIUGUO@users.noreply.github.com>
Co-authored-by: youkaichao <youkaichao@gmail.com>
Co-authored-by: Isotr0py <2037008807@qq.com>
Co-authored-by: Gregory Shtrasberg <156009573+gshtras@users.noreply.github.com>
Co-authored-by: py-andy-c <37168711+py-andy-c@users.noreply.github.com>
Co-authored-by: niu_he <carlton2tang@gmail.com>
Co-authored-by: Junhao Li <junhao@ubicloud.com>
Co-authored-by: leopardracer <136604165+leopardracer@users.noreply.github.com>
Co-authored-by: artetaout <128046886+artetaout@users.noreply.github.com>
Co-authored-by: Ximingwang-09 <72070413+Ximingwang-09@users.noreply.github.com>
Co-authored-by: ximing.wxm <ximing.wxm@antgroup.com>
Co-authored-by: runzhen <wangrunzhen@gmail.com>
Co-authored-by: David Xia <david@davidxia.com>
Co-authored-by: bnellnm <49004751+bnellnm@users.noreply.github.com>
Co-authored-by: rasmith <Randall.Smith@amd.com>
Co-authored-by: Ning Xie <andy.xning@gmail.com>
Co-authored-by: Brayden Zhong <b8zhong@uwaterloo.ca>
Co-authored-by: wonjun Jang <strutive07@gmail.com>
Co-authored-by: Aaron Pham <Aaronpham0103@gmail.com>
Co-authored-by: Wentao Ye <44945378+yewentao256@users.noreply.github.com>
Co-authored-by: mobicham <37179323+mobicham@users.noreply.github.com>
Co-authored-by: Sage Moore <sage@neuralmagic.com>
Co-authored-by: kourosh hakhamaneshi <31483498+kouroshHakha@users.noreply.github.com>
Co-authored-by: qizixi <22851944+zixi-qi@users.noreply.github.com>
Co-authored-by: Hyogeun Oh (오효근) <ohg3417@gmail.com>
Co-authored-by: Boyuan Feng <fby.1994@gmail.com>
Co-authored-by: qscqesze <qingjun@minimaxi.com>
Co-authored-by: Concurrensee <yida.wu@amd.com>
Co-authored-by: Saheli Bhattacharjee <47847054+sahelib25@users.noreply.github.com>
Co-authored-by: jiahanc <173873397+jiahanc@users.noreply.github.com>
Co-authored-by: Konrad Zawora <kzawora@habana.ai>
Co-authored-by: maobaolong <baoloongmao@tencent.com>
Co-authored-by: Ilya Markov <markovilya197@gmail.com>
Co-authored-by: quanliu <33453350+quanliu1991@users.noreply.github.com>
Co-authored-by: 刘全 <quan.liu2@dbappsecurity.com.cn>
Co-authored-by: Francesco Bertolotti <f14.bertolotti@gmail.com>
Co-authored-by: Francesco Bertolotti <francesco.bertolotti@igenius.ai>
Co-authored-by: Szymon Ożóg <58388001+SzymonOzog@users.noreply.github.com>
Co-authored-by: Navanit Dubey <98005188+Navanit-git@users.noreply.github.com>
Co-authored-by: Shawn Tan <shawntan@ibm.com>
Co-authored-by: qscqesze <qscqesze@gmail.com>
noooop commented Jun 16, 2025

> Reranking following the method above does not work.

This PR missed the 0.9.1 release, so you need to install a development build:
https://docs.vllm.ai/en/latest/getting_started/installation/gpu.html#install-the-latest-code_1

```
pip show vllm
Name: vllm
Version: 0.9.2.dev96+gf40f763f1
Summary: A high-throughput and memory-efficient inference and serving engine for LLMs
Home-page: https://github.com/vllm-project/vllm
Author: vLLM Team
```

{"object":"error","message":"The model does not support Embeddings API","type":"BadRequestError","param":null,"code":400}

try

uv pip install -U vllm --extra-index-url https://wheels.vllm.ai/nightly --torch-backend=auto
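Since the "does not support Embeddings API" error above came from a build that predates this PR, the quickest sanity check after reinstalling is to confirm the reported version is newer than the 0.9.1 release. The helper below is a minimal illustrative sketch (not part of vLLM) that applies that check to a PEP 440-style version string such as `0.9.2.dev96+gf40f763f1`:

```python
def includes_qwen3_support(version: str) -> bool:
    """Return True if a vLLM version string is newer than the 0.9.1 release.

    Hypothetical helper for illustration only: this PR missed 0.9.1, so
    Qwen3 embedding/reranker support first appears in 0.9.2 dev builds.
    """
    base = version.split("+")[0]  # drop the local label, e.g. "+gf40f763f1"
    nums = []
    for part in base.split("."):
        if part.isdigit():
            nums.append(int(part))
        else:
            break  # stop at pre/dev segments like "dev96"
    while len(nums) < 3:
        nums.append(0)  # pad short versions like "0.9"
    return tuple(nums[:3]) > (0, 9, 1)


print(includes_qwen3_support("0.9.2.dev96+gf40f763f1"))  # True
print(includes_qwen3_support("0.9.1"))                   # False
```

If this returns `False` for your installed version, the `/v1/embeddings` request will keep failing regardless of the model name.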

amogkam added a commit to character-tech/vllm that referenced this pull request Jun 16, 2025
* [Bugfix] disable processor cache  (vllm-project#19068)

Signed-off-by: raushan <raushan@huggingface.co>

* [Doc] Improve the Pull Request template with key components (vllm-project#19086)

Signed-off-by: Lu Fang <lufang@fb.com>

* [Misc] Add missing `_Backend` enums (vllm-project#19081)

Signed-off-by: nicklucche <nlucches@redhat.com>

* [Misc] fix: add miss best_of param validation (vllm-project#18555)

Signed-off-by: googs1025 <googs1025@gmail.com>

* [Misc] Add SPDX-FileCopyrightText  (vllm-project#19100)

Signed-off-by: simon-mo <simon.mo@hey.com>

* [Doc] Readme standardization (vllm-project#18695)

Co-authored-by: Soren Dreano <soren@numind.ai>

* [doc] update docker version (vllm-project#19074)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Kernel] DeepEP dispatch-combine kernel integration (vllm-project#18434)

Signed-off-by: Varun <vsundarr@redhat.com>
Co-authored-by: Varun Sundar Rabindranath <vsundarr@redhat.com>

* [V1] Support cross-layer KV sharing (vllm-project#18212)

Signed-off-by: Yong Hoon Shin <yhshin@meta.com>

* [Perf] Tune `scaled_fp8_quant` by increasing vectorization (vllm-project#18844)

Signed-off-by: mgoin <mgoin64@gmail.com>

* Fix interaction between `Optional` and `Annotated` in CLI typing (vllm-project#19093)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Co-authored-by: Yikun Jiang <yikun@apache.org>

* [v1] Re-init input batch for multiple kv cache groups (vllm-project#18654)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>

* [V1][Spec Decode][Ngram] 1.35x gain -> 1.95x gain on InstructCoder with prompt fix (vllm-project#18971)

* [Bugfix] get_num_blocks_to_allocate with null_block (vllm-project#19031)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>

* [Bugfix]: Fix the incompatibility issue with tool_choice 'required' when Thinking is enabled (vllm-project#19075)

Signed-off-by: chaunceyjiang <chaunceyjiang@gmail.com>

* [Bugfix][P/D] Fix Prefix Cache Bug (vllm-project#18411)

Signed-off-by: nicklucche <nlucches@redhat.com>
Co-authored-by: Robert Shaw <114415538+robertgshaw2-redhat@users.noreply.github.com>

* [Bugfix] Max concurrency estimation and check_enough_kv_cache_memory for models with sliding window layers (vllm-project#19029)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>

* feat: add data parallel rank to KVEventBatch (vllm-project#18925)

* [Misc] Fix path and python alias errors in disagg_prefill examples (vllm-project#18919)

* [Docs] Add developer doc about CI failures (vllm-project#18782)

Signed-off-by: Russell Bryant <rbryant@redhat.com>
Co-authored-by: Mark McLoughlin <markmc@redhat.com>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>

* [CPU] V1 support for the CPU backend (vllm-project#16441)

* [Core] Cast multimodal input in hf processor (vllm-project#18862)

Signed-off-by: Lukas Geiger <lukas.geiger94@gmail.com>

* [KERNEL] Sampler. CUDA kernel for applying repetition penalty (vllm-project#18437)

* [Cleanup][v1]: remove guided-decoding-backend for example (vllm-project#19059)

Signed-off-by: calvin chen <120380290@qq.com>

* [NVIDIA] Add Cutlass MLA backend (vllm-project#17625)

* [Bugfix] Fix FA3 full cuda graph correctness (vllm-project#19106)

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>

* Fix vllm-project#19130 (vllm-project#19132)

Signed-off-by: 汪志鹏 <wangzhipeng628@gmail.com>

* [TPU] Skip hanging tests (vllm-project#19115)

Signed-off-by: Siyuan Liu <lsiyuan@google.com>

* Fix ValueError: Missing value for tag key(s): model_name,engine. (vllm-project#19113)

Signed-off-by: Seiji Eicher <seiji@anyscale.com>

* [Misc] Add packages for benchmark as extra dependency (vllm-project#19089)

Signed-off-by: Isotr0py <2037008807@qq.com>

* Improve the output precision of embedding models (vllm-project#19092)

* [CI/Build][Bugfix] Ensure compatibility with transformers 4.52 (vllm-project#18678)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* Add DeepSeek-R1-0528 function call chat template (vllm-project#18874)

Signed-off-by: 许文卿 <xwq391974@alibaba-inc.com>

* Sm100 blockwise fp8 swap ab (vllm-project#18564)

* [Doc] Update V1 Guide for embedding models (vllm-project#19141)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* Allow AsyncLLMEngine.generate to target a specific DP rank (vllm-project#19102)

Signed-off-by: Jon Swenson <jmswen@gmail.com>

* [Bugfix][EP+DP] Fix internode check (vllm-project#19112)

Signed-off-by: Tyler Michael Smith <tysmith@redhat.com>

* [Perf] Tunings for SM100 FP8 CUTLASS kernel (vllm-project#18778)

Signed-off-by: mgoin <mgoin64@gmail.com>

* [TPU] Update dynamo dump file name in compilation test (vllm-project#19108)

Signed-off-by: Siyuan Liu <lsiyuan@google.com>

* [Bugfix] fix v1 cpu worker fails on macOS (vllm-project#19121)

* [Kernel] Integrate batched/masked deepgemm kernel (vllm-project#19111)

Signed-off-by: Varun <vsundarr@redhat.com>
Co-authored-by: Varun <vsundarr@redhat.com>

* [Misc] refactor: simplify EngineCoreClient.make_async_mp_client in AsyncLLM (vllm-project#18817)

Signed-off-by: googs1025 <googs1025@gmail.com>

* [P/D] Heterogeneous TP (vllm-project#18833)

Signed-off-by: nicklucche <nlucches@redhat.com>

* [doc] small fix (vllm-project#19167)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Bugfix][Nixl] Fix full prefix cache hit bug (vllm-project#18632)

Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com>
Signed-off-by: Nick Hill <nhill@redhat.com>
Co-authored-by: Nick Hill <nhill@redhat.com>

* [Bugfix] Fix port handling in make_zmq_path (vllm-project#19117)

* [Torch Nightly]add missing dependency (vllm-project#18770)

Signed-off-by: Yang Wang <elainewy@meta.com>

* Handle non-serializable objects when dumping benchmark results (vllm-project#19114)

* [BugFix][Minor] Fix full cuda graph bug when max_num_seqs < 512 (vllm-project#19171)

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>

* [Bugfix]: Fix the incompatibility issue with stream when Thinking is disabled (vllm-project#19135)

Signed-off-by: chaunceyjiang <chaunceyjiang@gmail.com>

* [Build] Annotate wheel and container path for release workflow (vllm-project#19162)

Signed-off-by: simon-mo <simon.mo@hey.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* [Misc] Remove unnecessary fallback to prefill-decode attention (vllm-project#19138)

Signed-off-by: vllmellm <vllm.ellm@embeddedllm.com>

* [Misc] Do not override NCCL_CUMEM_ENABLE if set explicitly (vllm-project#19105)

Signed-off-by: 22quinn <33176974+22quinn@users.noreply.github.com>

* [Frontend] improve vllm run-batch --help display (vllm-project#19187)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Bugfix] properly catch PIL-related errors for vision models when incorrect data urls are provided (vllm-project#19202)

Signed-off-by: Guillaume Calmettes <gcalmettes@scaleway.com>

* [mistral_common] Add v11 tokenizer (vllm-project#19193)

Signed-off-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Add H20-3e fused MoE kernel tuning configs for DeepSeek-R1/V3 (vllm-project#19205)

* [Hardware][NVIDIA] FP4 MoE kernel optimization (vllm-project#19110)

Signed-off-by: Chiyue Wei <chiyuew@nvidia.com>
Co-authored-by: Chiyue Wei <chiyuew@nvidia.com>

* [MISC][Bugfix] Use less CPU when message queue has been empty for some time (vllm-project#16226)

Signed-off-by: Povilas Kanapickas <povilas@radix.lt>

* [P/D][NixlConnector] Enable FlashInfer backend (vllm-project#19090)

* [Quantization] Skip Fp4 Test for `compressed-tensors` (vllm-project#19217)

* [V1] Use FlashInfer by default on Blackwell GPUs (vllm-project#19118)

* [Model] NemotronH support (vllm-project#18863)

Signed-off-by: Luis Vega <2478335+vegaluisjose@users.noreply.github.com>
Co-authored-by: Luis Vega <2478335+vegaluisjose@users.noreply.github.com>

* Fix AOPerModuleConfig name changes (vllm-project#18869)

Signed-off-by: Jerry Zhang <jerryzh168@gmail.com>

* [Bugfix] Fix EAGLE vocab embedding construction for Llama 70B (vllm-project#19033)

Signed-off-by: Benjamin Chislett <benjamin.chislett@centml.ai>

* [v1] Hybrid Memory Allocator (vllm-project#17996)

Signed-off-by: Chen Zhang <zhangch99@outlook.com>

* [TPU] update torch_xla pin (vllm-project#19231)

Signed-off-by: Chengji Yao <chengjiyao@google.com>

* Support allowed_token_ids in ChatCompletionRequest (vllm-project#19143)

Signed-off-by: Xu Song <xusong.vip@gmail.com>

* [Chore] update CODEOWNERS (vllm-project#19247)

Signed-off-by: Aaron Pham <contact@aarnphm.xyz>

* [v1][P/D] Fix an edge case in kv cache schedule (vllm-project#19182)

Co-authored-by: jinghui <jinghui@fb.com>

* [TPU] fix kv cache dtype in model runner (vllm-project#19244)

Signed-off-by: Chengji Yao <chengjiyao@google.com>

* [Quantization] Bump compressed-tensors version; update NVFP4A16 test model (vllm-project#19224)

Signed-off-by: Dipika Sikka <dipikasikka1@gmail.com>

* [Docs] Improve V1 KVConnector interface documentation (vllm-project#19172)

Signed-off-by: Nick Hill <nhill@redhat.com>

* Fix CompilationConfig repr (vllm-project#19091)

Signed-off-by: rzou <zou3519@gmail.com>

* Unit Test for run_dp_sharded_vision_model (vllm-project#19103)

Signed-off-by: Siqi Yan <siqi@meta.com>
Co-authored-by: Siqi Yan <siqi@meta.com>

* [Model] Optimize nemotron_h implementation (vllm-project#19249)

Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>

* [Core] Raise when non-multi-instance DP clients target a DP rank (vllm-project#19227)

Signed-off-by: Jon Swenson <jmswen@gmail.com>

* improve logits bias (vllm-project#19041)

* Fixed ppc build when it runs on non-RHEL based linux distros (vllm-project#18422)

Signed-off-by: Nishidha Panpaliya <nishidha.panpaliya@partner.ibm.com>
Signed-off-by: Md. Shafi Hussain <Md.Shafi.Hussain@ibm.com>
Signed-off-by: npanpaliya <nishidha.panpaliya@partner.ibm.com>
Co-authored-by: Md. Shafi Hussain <Md.Shafi.Hussain@ibm.com>

* [BugFix] Fix MultiConnector test after HMA changes (vllm-project#19291)

Signed-off-by: Nick Hill <nhill@redhat.com>

* [Bugfix][Core] Update cancellation logic in `generate()` to handle Generator exits (vllm-project#19225)

Co-authored-by: Adolfo Victoria <adovi@meta.com>

* [Core] Fix abrupt request abort (vllm-project#18485)

Signed-off-by: nicklucche <nlucches@redhat.com>
Signed-off-by: Nick Hill <nhill@redhat.com>

Co-authored-by: Nick Hill <nhill@redhat.com>

* [BugFix] Fix tpu_model_runner block_id concatenation (vllm-project#19228)

Signed-off-by: Nick Hill <nhill@redhat.com>

* [Misc][Tools][Benchmark] Fix and improve auto tune script (vllm-project#19163)

Signed-off-by: Chenyaaang <chenyangli@google.com>

* [Build][ROCm] Update Dockerfile.rocm (vllm-project#19296)

Signed-off-by: Alexei V. Ivanov <alexei.ivanov@amd.com>

* [Easy][Test] Simplify test_function_tool_use with multiple parametrizes (vllm-project#19269)

Signed-off-by: Lu Fang <lufang@fb.com>

* [Kernel] Integrate CUTLASS MoE kernel with PPLX (vllm-project#18762)

Signed-off-by: ElizaWszola <ewszola@redhat.com>
Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com>
Co-authored-by: Tyler Michael Smith <tyler@neuralmagic.com>

* [TPU][Test] Add script to run benchmark on TPU for buildkite (vllm-project#19039)

Signed-off-by: Qiliang Cui <derrhein@gmail.com>

* [CI][PowerPC] Use a more appropriate way to select testcase in tests/models/language/pooling/test_embedding.py (vllm-project#19253)

Signed-off-by: Aaruni Aggarwal <aaruniagg@gmail.com>

* Add FlexAttention to V1 (vllm-project#16078)

Signed-off-by: drisspg <drisspguessous@gmail.com>

* [Misc] refactor context extension (vllm-project#19246)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [CI/Build] Improve Llama GGUF test robustness (vllm-project#19287)

Signed-off-by: Isotr0py <2037008807@qq.com>

* [Nit][Benchmark]Fix example in benchmark_serving_structured_output.py (vllm-project#19311)

Signed-off-by: Lifan Shen <lifans@meta.com>

* [AMD] Update compatible packaging version (vllm-project#19309)

Signed-off-by: pramkuma <Pramendra.Kumar@amd.com>

* [BugFix][V1] Fix memory profiling bug (vllm-project#18974)

Signed-off-by: luka <luka@neuralmagic.com>

* [Bugfix]: Fix TypeError: 'float' object cannot be interpreted as an integer (vllm-project#19283)

Signed-off-by: chaunceyjiang <chaunceyjiang@gmail.com>

* [Bugfix] Re-enable use_cudagraph in vLLM v1 (vllm-project#19299)

Signed-off-by: Richard Zou <zou3519@gmail.com>

* [Misc] Change tests/compile to use VLLM_V1 by default (vllm-project#19302)

Signed-off-by: rzou <zou3519@gmail.com>

* Add H20-3e fused MoE kernel tuning configs for Qwen3-235B-A22B (vllm-project#19315)

Signed-off-by: Xu Wenqing <xuwq1993@qq.com>

* [Hardware][POWER] Add IBM POWER11 Support to CPU Extension Detection (vllm-project#19082)

Signed-off-by: Akash Kaothalkar <akash.kaothalkar@ibm.com>
Co-authored-by: Akash Kaothalkar <akash.kaothalkar@ibm.com>

* [Quantization] Add compressed-tensors NVFP4 support (vllm-project#18312)

* [Multi Modal] Add an env var for message queue max chunk bytes (vllm-project#19242)

Signed-off-by: yZhen <yZhen@fb.com>
Co-authored-by: yZhen <yZhen@fb.com>

* [Bugfix] model_max_length should consider max_model_len in tokenizer_config (vllm-project#19201)

* [Deprecation] Remove `inputs` arg fallback in Engine classes (vllm-project#18799)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Misc] Add documentation update reminder to PR template (vllm-project#19289)

Signed-off-by: Isotr0py <2037008807@qq.com>

* [Frontend] Remove unreachable code from llm.py (vllm-project#19288)

Signed-off-by: KsuParkhamchuk <k.parkhamchuk@gmail.com>

* [Misc] Cleanup compilation tests (vllm-project#19343)

Signed-off-by: rzou <zou3519@gmail.com>

* [doc] improve ci doc (vllm-project#19307)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Doc] Fix description in the Automatic Prefix Caching design doc (vllm-project#19333)

Signed-off-by: cr7258 <chengzw258@163.com>

* [CI/Build] Fix LoRA test (vllm-project#19350)

Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>

* [Fix] Allow kernel compilation for CUDA capability 8.7 (vllm-project#19328)

Signed-off-by: Conroy Cheers <conroy@corncheese.org>

* [CI] Introduce rules for llama auto-label (vllm-project#19323)

Signed-off-by: Lu Fang <lufang@fb.com>

* [Docs] Fix a bullet list in usage/security.md (vllm-project#19358)

Signed-off-by: windsonsea <haifeng.yao@daocloud.io>

* [full_graph] Fix query_start_loc padding (vllm-project#19321)

Signed-off-by: Yinghai Lu <yinghai@thinkingmachines.ai>

* [v1] Add fp32 support to v1 engine through flex attn (vllm-project#19319)

Signed-off-by: Isotr0py <2037008807@qq.com>
Signed-off-by: Isotr0py <mozf@mail2.sysu.edu.cn>

* [Misc] Fixes and Optimizations for DeepEP + DeepGEMM combination. (vllm-project#19298)

Signed-off-by: Varun <vsundarr@redhat.com>
Co-authored-by: Varun <vsundarr@redhat.com>

* [Bugfix][Core] Prevent token lengths exceeding `max_model_len` in V0 (vllm-project#19348)

Signed-off-by: 22quinn <33176974+22quinn@users.noreply.github.com>

* [Quantization] Bump compressed-tensors version (vllm-project#19295)

Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>

* [Frontend] Make TIMEOUT_KEEP_ALIVE configurable through env var (vllm-project#18472)

Signed-off-by: liusiqian <liusiqian@tal.com>

* [TPU]Fix KV cache sharing tests (vllm-project#19371)

* [HOT-FIX] Add `kv_sharing_target_layer_name` argument to cutlass_mla backend (vllm-project#19374)

Signed-off-by: Pavani Majety <pmajety@nvidia.com>

* [Misc] Fix a config typo in disable_hybrid_kv_cache_manager configuration (vllm-project#19383)

Signed-off-by: Siyuan Liu <lsiyuan@google.com>

* [V1] Reuse V0's memory_profiling util for gpu worker memory profiling (vllm-project#19312)

Signed-off-by: Ye (Charlotte) Qi <yeq@meta.com>

* [Bugfix] Fix benchmark_moe.py (vllm-project#19016)

Signed-off-by: Tianyu Guo <guoty9@mail2.sysu.edu.cn>

* Use xla flag to improve the quantized model performance (vllm-project#19303)

Signed-off-by: Xiongfei Wei <isaacwxf23@gmail.com>

* Fix docs/mkdocs/hooks/remove_announcement.py (vllm-project#19382)

* [Frontend] Add tqdm_leave_pbar to control progress bar visibility (vllm-project#19357)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Core] Use tuple for kv cache group block ids (vllm-project#19175)

Signed-off-by: Nick Hill <nhill@redhat.com>

* [Bugfix] Fix modelscope token passed in (vllm-project#19389)

Signed-off-by: wangli <wangli858794774@gmail.com>
Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
Co-authored-by: Jee Jee Li <pandaleefree@gmail.com>

* [Core] Batch multi modal input using pinned memory (vllm-project#19169)

Signed-off-by: Lukas Geiger <lukas.geiger94@gmail.com>

* Add security warning to bug report template (vllm-project#19365)

Signed-off-by: Russell Bryant <rbryant@redhat.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* [Misc] refactor neuron_multimodal and profiling (vllm-project#19397)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* Add clear documentation around the impact of debugging flag (vllm-project#19369)

Signed-off-by: Anna Pendleton <pendleton@google.com>

* Automatically bind CPU OMP Threads of a rank to CPU ids of a NUMA node. (vllm-project#17930)

Signed-off-by: Tsai, Louie <louie.tsai@intel.com>
Co-authored-by: Li, Jiang <bigpyj64@gmail.com>

* Revert "[v1] Add fp32 support to v1 engine through flex attn" (vllm-project#19404)

* [BugFix][FlashInfer] Fix attention backend interface mismatch with unexpected keyword `use_irope` (vllm-project#19134)

Signed-off-by: Yunqiu Guo <guorachel@meta.com>

* [BugFix][CPU] Fix CPU CI by ignore collecting test_pixtral (vllm-project#19411)

Signed-off-by: jiang.li <jiang1.li@intel.com>

* Simplify ep kernels installation (vllm-project#19412)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [Misc] Slight improvement of the BNB (vllm-project#19418)

Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
Co-authored-by: Isotr0py <2037008807@qq.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* [Docs] Note that alternative structured output backends are supported (vllm-project#19426)

Signed-off-by: Russell Bryant <rbryant@redhat.com>

* [ROCm][V1] Adding ROCm to the list of platforms using V1 by default (vllm-project#19440)

Signed-off-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com>

* [Model] use AutoWeightsLoader for commandr (vllm-project#19399)

Signed-off-by: py-andy-c <pychen1017@gmail.com>

* Add H20-3e fused MoE kernel tuning configs for Qwen3-235B-A22B-FP8 (vllm-project#19401)

Signed-off-by: 许文卿 <xwq391974@alibaba-inc.com>

* [BugFix] Allow use_cudagraph to work with dynamic VLLM_USE_V1 (vllm-project#19390)

Signed-off-by: rzou <zou3519@gmail.com>

* [New Model]: Support Qwen3 Embedding & Reranker (vllm-project#19260)

* [BugFix] Fix docker build cpu-dev image error (vllm-project#19394)

Signed-off-by: niu_he <carlton2tang@gmail.com>

* Fix test_max_model_len in tests/entrypoints/llm/test_generate.py (vllm-project#19451)

Signed-off-by: Lu Fang <lufang@fb.com>

* [CI] Disable failing GGUF model test (vllm-project#19454)

Signed-off-by: mgoin <mgoin64@gmail.com>

* [Misc] Remove unused `MultiModalHasher.hash_prompt_mm_data` (vllm-project#19422)

Signed-off-by: Lukas Geiger <lukas.geiger94@gmail.com>

* Add fused MOE config for Qwen3 30B A3B on B200 (vllm-project#19455)

Signed-off-by: Junhao Li <junhao@ubicloud.com>

* Fix Typo in Documentation and Function Name (vllm-project#19442)

* [ROCm] Add rules to automatically label ROCm related PRs (vllm-project#19405)

Signed-off-by: Lu Fang <lufang@fb.com>

* [Kernel] Support deep_gemm for linear methods (vllm-project#19085)

Signed-off-by: artetaout <lulala341@gmail.com>

* [Doc] Update V1 User Guide for Hardware and Models (vllm-project#19474)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Doc] Fix quantization link titles (vllm-project#19478)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Doc] Support "important" and "announcement" admonitions (vllm-project#19479)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [Misc] Reduce warning message introduced in env_override (vllm-project#19476)

Signed-off-by: Lu Fang <lufang@fb.com>

* Support non-string values in JSON keys from CLI (vllm-project#19471)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* Add cache to cuda get_device_capability (vllm-project#19436)

Signed-off-by: mgoin <mgoin64@gmail.com>

* Fix some typo (vllm-project#19475)

Signed-off-by: ximing.wxm <ximing.wxm@antgroup.com>
Co-authored-by: ximing.wxm <ximing.wxm@antgroup.com>

* Support no privileged mode on CPU for docker and kubernetes deployments (vllm-project#19241)

Signed-off-by: Tsai, Louie <louie.tsai@intel.com>

* [Bugfix] Update the example code, make it work with the latest lmcache (vllm-project#19453)

Signed-off-by: Runzhen Wang <wangrunzhen@gmail.com>

* [CI] Update FlashInfer to 0.2.6.post1 (vllm-project#19297)

Signed-off-by: mgoin <mgoin64@gmail.com>

* [doc] fix "Other AI accelerators" getting started page (vllm-project#19457)

Signed-off-by: David Xia <david@davidxia.com>

* [Misc] Fix  misleading ROCm warning (vllm-project#19486)

Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>

* [Docs] Remove WIP features in V1 guide (vllm-project#19498)

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>

* [Kernels] Add activation chunking logic to FusedMoEModularKernel (vllm-project#19168)

Signed-off-by: Bill Nell <bnell@redhat.com>

* [AMD] [Quantization] Add override flag for attention dtype instead of using kv_cache_dtype trigger (vllm-project#17331)

Signed-off-by: Randall Smith <Randall.Smith@amd.com>

* [UX] Add Feedback During CUDAGraph Capture (vllm-project#19501)

Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com>

* [CI/Build] Fix torch nightly CI dependencies (vllm-project#19505)

Signed-off-by: Richard Zou <zou3519@gmail.com>

* [CI] change spell checker from codespell to typos (vllm-project#18711)

Signed-off-by: Andy Xie <andy.xning@gmail.com>

* [BugFix] Force registration of w8a8_block_fp8_matmul_deepgemm via lazy import (vllm-project#19514)

Signed-off-by: Varun Sundar Rabindranath <vsundarr@redhat.com>
Co-authored-by: Varun Sundar Rabindranath <vsundarr@redhat.com>

* Add Triton Fused MoE kernel config for E=16 on B200 (vllm-project#19518)

Signed-off-by: Brayden Zhong <b8zhong@uwaterloo.ca>

* [Frontend] Improve error message in tool_choice validation (vllm-project#19239)

Signed-off-by: 22quinn <33176974+22quinn@users.noreply.github.com>

* [BugFix] Work-around incremental detokenization edge case error (vllm-project#19449)

Signed-off-by: Nick Hill <nhill@redhat.com>

* [BugFix] Handle missing sep_token for Qwen3-Reranker in Score API (vllm-project#19522)

Signed-off-by: strutive07 <strutive07@gmail.com>

* [AMD][Kernel][BugFix] fix test_rocm_compressed_tensors_w8a8 for rocm (vllm-project#19509)

Signed-off-by: Randall Smith <Randall.Smith@amd.com>

* Fix typo (vllm-project#19525)

Signed-off-by: 2niuhe <carlton2tang@gmail.com>

* [Security] Prevent new imports of (cloud)pickle (vllm-project#18018)

Signed-off-by: Russell Bryant <rbryant@redhat.com>
Co-authored-by: Aaron Pham <Aaronpham0103@gmail.com>

* [Bugfix][V1] Allow manual FlashAttention for Blackwell (vllm-project#19492)

Signed-off-by: mgoin <mgoin64@gmail.com>

* [Bugfix] Respect num-gpu-blocks-override in v1 (vllm-project#19503)

Signed-off-by: Jon Swenson <jmswen@gmail.com>

* [Quantization] Improve AWQ logic (vllm-project#19431)

Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>

* [Doc] Add V1 column to supported models list (vllm-project#19523)

Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>

* [V1][NixlConnector] Drop `num_blocks` check (vllm-project#19532)

Signed-off-by: NickLucche <nlucches@redhat.com>

* [Perf] Vectorize static / dynamic INT8 quant kernels (vllm-project#19233)

Signed-off-by: yewentao256 <zhyanwentao@126.com>

* Fix TorchAOConfig skip layers (vllm-project#19265)

Signed-off-by: mobicham <hicham@mobiuslabs.com>

* [torch.compile][ROCm] Fuse quantization onto attention using a torch.compile pass (vllm-project#16756)

Signed-off-by: Luka Govedič <lgovedic@redhat.com>
Co-authored-by: Sage Moore <sage@neuralmagic.com>

* [doc] Make top navigation sticky (vllm-project#19540)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Spec Decode][Benchmark] Generalize spec decode offline benchmark to more methods and datasets (vllm-project#18847)

* [Misc] Turn MOE_DP_CHUNK_SIZE into an env var (vllm-project#19506)

* [Bugfix] Enforce contiguous input for dynamic_per_token FP8/INT8 quant (vllm-project#19452)

Signed-off-by: mgoin <mgoin64@gmail.com>

* [Doc] Unify structured outputs examples (vllm-project#18196)

Signed-off-by: Aaron Pham <contact@aarnphm.xyz>

* [V1] Resolve failed concurrent structured output requests (vllm-project#19565)

Signed-off-by: Russell Bryant <rbryant@redhat.com>

* Revert "[Build/CI] Add tracing deps to vllm container image (vllm-project#15224)" (vllm-project#19378)

* [BugFix] : Fix Batched DeepGemm Experts (vllm-project#19515)

Signed-off-by: Varun Sundar Rabindranath <vsundarr@redhat.com>
Co-authored-by: Varun Sundar Rabindranath <vsundarr@redhat.com>

* [Bugfix] Fix EAGLE vocab embedding for multimodal target model (vllm-project#19570)

Signed-off-by: qizixi <qizixi@meta.com>

* [Doc] uses absolute links for structured outputs (vllm-project#19582)

Signed-off-by: Aaron Pham <contact@aarnphm.xyz>

* [doc] fix incorrect link (vllm-project#19586)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Misc] Correct broken docs link (vllm-project#19553)

Signed-off-by: Zerohertz <ohg3417@gmail.com>

* [CPU] Refine default config for the CPU backend (vllm-project#19539)

Signed-off-by: jiang1.li <jiang1.li@intel.com>

* [Fix] bump mistral common to support magistral (vllm-project#19533)

Signed-off-by: 汪志鹏 <wangzhipeng628@gmail.com>

* [Fix] The zip function in Python 3.9 does not have the strict argument (vllm-project#19549)

Signed-off-by: 汪志鹏 <wangzhipeng628@gmail.com>

* use base version for version comparison (vllm-project#19587)

Signed-off-by: Boyuan Feng <boyuan@meta.com>

* [torch.compile] reorganize the cache directory to support compiling multiple models (vllm-project#19064)

Signed-off-by: youkaichao <youkaichao@gmail.com>

* [BugFix] Honor `enable_caching` in connector-delayed kvcache load case (vllm-project#19435)

Signed-off-by: Nick Hill <nhill@redhat.com>

* [Model] Fix minimax model cache & lm_head precision (vllm-project#19592)

Signed-off-by: qingjun <qingjun@minimaxi.com>

* [Refactor] Remove unused variables in `moe_permute_unpermute_kernel.inl` (vllm-project#19573)

Signed-off-by: yewentao256 <zhyanwentao@126.com>

* [doc][mkdocs] fix the duplicate Supported features sections in GPU docs (vllm-project#19606)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [CUDA] Enable full cudagraph for FlashMLA (vllm-project#18581)

Signed-off-by: luka <luka@neuralmagic.com>

* [Doc] Add troubleshooting section to k8s deployment (vllm-project#19377)

Signed-off-by: Anna Pendleton <pendleton@google.com>

* [torch.compile] Use custom ops when use_inductor=False (vllm-project#19618)

* Adding "AMD: Multi-step Tests" to amdproduction. (vllm-project#19508)

Signed-off-by: Yida Wu <yidawu@alumni.cmu.edu>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>

* [BugFix] Fix DP Coordinator incorrect debug log message (vllm-project#19624)

Signed-off-by: Nick Hill <nhill@redhat.com>

* [V1][Metrics] Deprecate metrics with gpu_ prefix for non GPU specific metrics. (vllm-project#18354)

Signed-off-by: Saheli Bhattacharjee <saheli@krai.ai>

* [Bugfix] Fix the speculative decoding test by setting the target dtype (vllm-project#19633)

* [Misc] Modularize CLI Argument Parsing in Benchmark Scripts (vllm-project#19593)

Signed-off-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>

* [Bugfix] Fix auto dtype casting for BatchFeature (vllm-project#19316)

Signed-off-by: Isotr0py <2037008807@qq.com>
Signed-off-by: Isotr0py <mozf@mail2.sysu.edu.cn>

* [Hardware][NVIDIA][kernel] Fp4 MOE quant kernel optimization (vllm-project#19500)

* Only build CUTLASS MoE kernels on Hopper (vllm-project#19648)

* [Bugfix] Don't attempt to use triton if no driver is active (vllm-project#19561)

* [Fix] Convert kv_transfer_config from dict to KVTransferConfig (vllm-project#19262)

* [Perf] Further tunings for SM100 FP8 CUTLASS kernel (vllm-project#19566)

* [Bugfix][2/n] Fix speculative decoding CI - Fix test_ngram_e2e_greedy_correctness (vllm-project#19644)

* [Kernel] Raise verbose error and consolidate `num_heads/num_kv_heads` divisibility check (vllm-project#19339)

Signed-off-by: 22quinn <33176974+22quinn@users.noreply.github.com>

* [Benchmark] Refactor benchmark script for fp8 & int8 (vllm-project#19627)

Signed-off-by: yewentao256 <zhyanwentao@126.com>

* Enable prefix caching with full cuda graphs (vllm-project#19617)

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>

* [CI/Build] Fix torch nightly CI dependencies part 2 (vllm-project#19589)

* [Misc] Remove duplicate multiproc method setting for CPU platform (vllm-project#19649)

Signed-off-by: Isotr0py <2037008807@qq.com>

* [MISC] Remove unused variables in C++ (vllm-project#19609)

Signed-off-by: Lu Fang <lufang@fb.com>

* [Bugfix][Core] Prefix caching causes incorrect outputs due to outdated ComputedBlocksTracker (vllm-project#18957)

Signed-off-by: 刘全 <quan.liu2@dbappsecurity.com.cn>
Co-authored-by: 刘全 <quan.liu2@dbappsecurity.com.cn>

* [Misc][Frontend] passthrough `bad_words` (vllm-project#19564)

Signed-off-by: Francesco Bertolotti <francesco.bertolotti@igenius.ai>
Co-authored-by: Francesco Bertolotti <francesco.bertolotti@igenius.ai>
Co-authored-by: Aaron Pham <Aaronpham0103@gmail.com>

* [Misc] Fix skipped max-model-len validation when deriving max model length from tokenizer config (vllm-project#19660)

Signed-off-by: Ye (Charlotte) Qi <yeq@meta.com>

* [TPU] support attention head dim smaller than 128 (vllm-project#19620)

Signed-off-by: Chengji Yao <chengjiyao@google.com>
Co-authored-by: mgoin <mgoin64@gmail.com>

* [MISC] typo fix (vllm-project#19672)

Signed-off-by: Andy Xie <andy.xning@gmail.com>

* [CI] Add mteb testing for rerank models (vllm-project#19344)

* [Docs] Move multiproc doc to v1 dir (vllm-project#19651)

Signed-off-by: Russell Bryant <rbryant@redhat.com>

* [Kernel] GGUF MMVQ kernel for multiple input vectors (vllm-project#18754)

Signed-off-by: SzymonOzog <szymon.ozog@gmail.com>

* [BugFix] Don't catch BaseException when dumping execute_model errors (vllm-project#19626)

Signed-off-by: Nick Hill <nhill@redhat.com>

* [DOC] Add reasoning capability to vLLM streamlit code (vllm-project#19557)

* [Feature]: Allow for Granite MoE Hybrid models with _only_ shared experts. (vllm-project#19652)

Signed-off-by: Shawn Tan <shawntan@ibm.com>

* [Bugfix] Fix TP inference for Flex attention backend (vllm-project#19657)

Signed-off-by: Isotr0py <2037008807@qq.com>

* [MISC] bump huggingface_hub pkg to 0.33.0 (vllm-project#19547)

Signed-off-by: Andy Xie <andy.xning@gmail.com>

* [Bugfix] fix missing 'finish_reason': null in streaming chat (vllm-project#19662)

Signed-off-by: chaunceyjiang <chaunceyjiang@gmail.com>

* [Kernels] Use empty for modular MoE workspaces (vllm-project#19667)

Signed-off-by: Bill Nell <bnell@redhat.com>

* [Model] Add support for MiniMaxM1ForCausalLM (shares architecture with MiniMaxText01ForCausalLM) (vllm-project#19677)

Signed-off-by: QscQ <qscqesze@gmail.com>

* [V1] Change return type on get_multimodal_embeddings() (vllm-project#19446)

Signed-off-by: Russell Bryant <rbryant@redhat.com>

* fix

Signed-off-by: Amog Kamsetty <amogkamsetty@gmail.com>

---------

Signed-off-by: raushan <raushan@huggingface.co>
Signed-off-by: Lu Fang <lufang@fb.com>
Signed-off-by: nicklucche <nlucches@redhat.com>
Signed-off-by: googs1025 <googs1025@gmail.com>
Signed-off-by: simon-mo <simon.mo@hey.com>
Signed-off-by: reidliu41 <reid201711@gmail.com>
Signed-off-by: Varun <vsundarr@redhat.com>
Signed-off-by: Yong Hoon Shin <yhshin@meta.com>
Signed-off-by: mgoin <mgoin64@gmail.com>
Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Signed-off-by: Chen Zhang <zhangch99@outlook.com>
Signed-off-by: chaunceyjiang <chaunceyjiang@gmail.com>
Signed-off-by: Russell Bryant <rbryant@redhat.com>
Signed-off-by: Lukas Geiger <lukas.geiger94@gmail.com>
Signed-off-by: calvin chen <120380290@qq.com>
Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Signed-off-by: 汪志鹏 <wangzhipeng628@gmail.com>
Signed-off-by: Siyuan Liu <lsiyuan@google.com>
Signed-off-by: Seiji Eicher <seiji@anyscale.com>
Signed-off-by: Isotr0py <2037008807@qq.com>
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
Signed-off-by: 许文卿 <xwq391974@alibaba-inc.com>
Signed-off-by: Jon Swenson <jmswen@gmail.com>
Signed-off-by: Tyler Michael Smith <tysmith@redhat.com>
Signed-off-by: rshaw@neuralmagic.com <robertgshaw2@gmail.com>
Signed-off-by: Nick Hill <nhill@redhat.com>
Signed-off-by: Yang Wang <elainewy@meta.com>
Signed-off-by: vllmellm <vllm.ellm@embeddedllm.com>
Signed-off-by: 22quinn <33176974+22quinn@users.noreply.github.com>
Signed-off-by: Guillaume Calmettes <gcalmettes@scaleway.com>
Signed-off-by: Patrick von Platen <patrick.v.platen@gmail.com>
Signed-off-by: Chiyue Wei <chiyuew@nvidia.com>
Signed-off-by: Povilas Kanapickas <povilas@radix.lt>
Signed-off-by: Luis Vega <2478335+vegaluisjose@users.noreply.github.com>
Signed-off-by: Jerry Zhang <jerryzh168@gmail.com>
Signed-off-by: Benjamin Chislett <benjamin.chislett@centml.ai>
Signed-off-by: Chengji Yao <chengjiyao@google.com>
Signed-off-by: Xu Song <xusong.vip@gmail.com>
Signed-off-by: Aaron Pham <contact@aarnphm.xyz>
Signed-off-by: Dipika Sikka <dipikasikka1@gmail.com>
Signed-off-by: rzou <zou3519@gmail.com>
Signed-off-by: Siqi Yan <siqi@meta.com>
Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
Signed-off-by: Nishidha Panpaliya <nishidha.panpaliya@partner.ibm.com>
Signed-off-by: Md. Shafi Hussain <Md.Shafi.Hussain@ibm.com>
Signed-off-by: npanpaliya <nishidha.panpaliya@partner.ibm.com>
Signed-off-by: Chenyaaang <chenyangli@google.com>
Signed-off-by: Alexei V. Ivanov <alexei.ivanov@amd.com>
Signed-off-by: ElizaWszola <ewszola@redhat.com>
Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com>
Signed-off-by: Qiliang Cui <derrhein@gmail.com>
Signed-off-by: Aaruni Aggarwal <aaruniagg@gmail.com>
Signed-off-by: drisspg <drisspguessous@gmail.com>
Signed-off-by: Lifan Shen <lifans@meta.com>
Signed-off-by: pramkuma <Pramendra.Kumar@amd.com>
Signed-off-by: luka <luka@neuralmagic.com>
Signed-off-by: Richard Zou <zou3519@gmail.com>
Signed-off-by: Xu Wenqing <xuwq1993@qq.com>
Signed-off-by: Akash Kaothalkar <akash.kaothalkar@ibm.com>
Signed-off-by: yZhen <yZhen@fb.com>
Signed-off-by: KsuParkhamchuk <k.parkhamchuk@gmail.com>
Signed-off-by: cr7258 <chengzw258@163.com>
Signed-off-by: Conroy Cheers <conroy@corncheese.org>
Signed-off-by: windsonsea <haifeng.yao@daocloud.io>
Signed-off-by: Yinghai Lu <yinghai@thinkingmachines.ai>
Signed-off-by: Isotr0py <mozf@mail2.sysu.edu.cn>
Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>
Signed-off-by: liusiqian <liusiqian@tal.com>
Signed-off-by: Pavani Majety <pmajety@nvidia.com>
Signed-off-by: Ye (Charlotte) Qi <yeq@meta.com>
Signed-off-by: Tianyu Guo <guoty9@mail2.sysu.edu.cn>
Signed-off-by: Xiongfei Wei <isaacwxf23@gmail.com>
Signed-off-by: wangli <wangli858794774@gmail.com>
Signed-off-by: Anna Pendleton <pendleton@google.com>
Signed-off-by: Tsai, Louie <louie.tsai@intel.com>
Signed-off-by: Yunqiu Guo <guorachel@meta.com>
Signed-off-by: jiang.li <jiang1.li@intel.com>
Signed-off-by: youkaichao <youkaichao@gmail.com>
Signed-off-by: Gregory Shtrasberg <Gregory.Shtrasberg@amd.com>
Signed-off-by: py-andy-c <pychen1017@gmail.com>
Signed-off-by: niu_he <carlton2tang@gmail.com>
Signed-off-by: Junhao Li <junhao@ubicloud.com>
Signed-off-by: artetaout <lulala341@gmail.com>
Signed-off-by: ximing.wxm <ximing.wxm@antgroup.com>
Signed-off-by: Runzhen Wang <wangrunzhen@gmail.com>
Signed-off-by: David Xia <david@davidxia.com>
Signed-off-by: Bill Nell <bnell@redhat.com>
Signed-off-by: Randall Smith <Randall.Smith@amd.com>
Signed-off-by: Andy Xie <andy.xning@gmail.com>
Signed-off-by: Varun Sundar Rabindranath <vsundarr@redhat.com>
Signed-off-by: Brayden Zhong <b8zhong@uwaterloo.ca>
Signed-off-by: strutive07 <strutive07@gmail.com>
Signed-off-by: 2niuhe <carlton2tang@gmail.com>
Signed-off-by: NickLucche <nlucches@redhat.com>
Signed-off-by: yewentao256 <zhyanwentao@126.com>
Signed-off-by: mobicham <hicham@mobiuslabs.com>
Signed-off-by: Luka Govedič <lgovedic@redhat.com>
Signed-off-by: qizixi <qizixi@meta.com>
Signed-off-by: Zerohertz <ohg3417@gmail.com>
Signed-off-by: jiang1.li <jiang1.li@intel.com>
Signed-off-by: Boyuan Feng <boyuan@meta.com>
Signed-off-by: qingjun <qingjun@minimaxi.com>
Signed-off-by: Yida Wu <yidawu@alumni.cmu.edu>
Signed-off-by: Saheli Bhattacharjee <saheli@krai.ai>
Signed-off-by: 刘全 <quan.liu2@dbappsecurity.com.cn>
Signed-off-by: Francesco Bertolotti <francesco.bertolotti@igenius.ai>
Signed-off-by: SzymonOzog <szymon.ozog@gmail.com>
Signed-off-by: Shawn Tan <shawntan@ibm.com>
Signed-off-by: QscQ <qscqesze@gmail.com>
Signed-off-by: Amog Kamsetty <amogkamsetty@gmail.com>
Co-authored-by: Raushan Turganbay <raushan.turganbay@alumni.nu.edu.kz>
Co-authored-by: Lu Fang <30275821+houseroad@users.noreply.github.com>
Co-authored-by: Nicolò Lucchesi <nlucches@redhat.com>
Co-authored-by: CYJiang <86391540+googs1025@users.noreply.github.com>
Co-authored-by: Simon Mo <simon.mo@hey.com>
Co-authored-by: SorenDreano <71752785+SorenDreano@users.noreply.github.com>
Co-authored-by: Soren Dreano <soren@numind.ai>
Co-authored-by: Reid <61492567+reidliu41@users.noreply.github.com>
Co-authored-by: reidliu41 <reid201711@gmail.com>
Co-authored-by: Varun Sundar Rabindranath <varunsundar08@gmail.com>
Co-authored-by: Varun Sundar Rabindranath <vsundarr@redhat.com>
Co-authored-by: Yong Hoon Shin <48474650+sarckk@users.noreply.github.com>
Co-authored-by: Michael Goin <mgoin64@gmail.com>
Co-authored-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Co-authored-by: Yikun Jiang <yikun@apache.org>
Co-authored-by: Chen Zhang <zhangch99@outlook.com>
Co-authored-by: Ekagra Ranjan <3116519+ekagra-ranjan@users.noreply.github.com>
Co-authored-by: Chauncey <chaunceyjiang@gmail.com>
Co-authored-by: Robert Shaw <114415538+robertgshaw2-redhat@users.noreply.github.com>
Co-authored-by: Yan Ru Pei <yanrpei@gmail.com>
Co-authored-by: Jiaxin Shan <seedjeffwan@gmail.com>
Co-authored-by: Russell Bryant <rbryant@redhat.com>
Co-authored-by: Mark McLoughlin <markmc@redhat.com>
Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>
Co-authored-by: Li, Jiang <jiang1.li@intel.com>
Co-authored-by: Lukas Geiger <lukas.geiger94@gmail.com>
Co-authored-by: Vadim Gimpelson <156319763+vadiklyutiy@users.noreply.github.com>
Co-authored-by: Calvin Chen <45745657+calvin0327@users.noreply.github.com>
Co-authored-by: Kaixi Hou <kaixih@nvidia.com>
Co-authored-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Co-authored-by: 汪志鹏 <wangzhipeng628@gmail.com>
Co-authored-by: Siyuan Liu <lsiyuan@google.com>
Co-authored-by: Seiji Eicher <58963096+eicherseiji@users.noreply.github.com>
Co-authored-by: Isotr0py <mozf@mail2.sysu.edu.cn>
Co-authored-by: wang.yuqi <noooop@126.com>
Co-authored-by: Cyrus Leung <tlleungac@connect.ust.hk>
Co-authored-by: Xu Wenqing <121550081+Xu-Wenqing@users.noreply.github.com>
Co-authored-by: Lain <fusiyuan2000@hotmail.com>
Co-authored-by: jmswen <jmswen@users.noreply.github.com>
Co-authored-by: Tyler Michael Smith <tyler@neuralmagic.com>
Co-authored-by: Kebe <mail@kebe7jun.com>
Co-authored-by: Nick Hill <nhill@redhat.com>
Co-authored-by: Yang Wang <elainewy@meta.com>
Co-authored-by: Huy Do <huydhn@gmail.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: vllmellm <vllm.ellm@embeddedllm.com>
Co-authored-by: 22quinn <33176974+22quinn@users.noreply.github.com>
Co-authored-by: Guillaume Calmettes <gcalmettes@scaleway.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Chiyue Wei <92623189+dubcyfor3@users.noreply.github.com>
Co-authored-by: Chiyue Wei <chiyuew@nvidia.com>
Co-authored-by: Povilas Kanapickas <povilas@radix.lt>
Co-authored-by: Dipika Sikka <dipikasikka1@gmail.com>
Co-authored-by: Luis Vega <vegaluisjose@users.noreply.github.com>
Co-authored-by: Luis Vega <2478335+vegaluisjose@users.noreply.github.com>
Co-authored-by: Jerry Zhang <jerryzh168@gmail.com>
Co-authored-by: Benjamin Chislett <benjamin.chislett@centml.ai>
Co-authored-by: Chengji Yao <chengjiyao@google.com>
Co-authored-by: Xu Song <xusong.vip@gmail.com>
Co-authored-by: Aaron Pham <contact@aarnphm.xyz>
Co-authored-by: Jinghui Zhang <jinghuizhang0804@gmail.com>
Co-authored-by: jinghui <jinghui@fb.com>
Co-authored-by: Richard Zou <zou3519@users.noreply.github.com>
Co-authored-by: Siqi Yan <ysq0807@hotmail.com>
Co-authored-by: Siqi Yan <siqi@meta.com>
Co-authored-by: Jee Jee Li <pandaleefree@gmail.com>
Co-authored-by: Yu Guo <82124926+yuguo68@users.noreply.github.com>
Co-authored-by: Nishidha <nishidha.panpaliya@partner.ibm.com>
Co-authored-by: Md. Shafi Hussain <Md.Shafi.Hussain@ibm.com>
Co-authored-by: Adolfo Victoria <adolfokarim@gmail.com>
Co-authored-by: Adolfo Victoria <adovi@meta.com>
Co-authored-by: Chenyaaang <42742451+Chenyaaang@users.noreply.github.com>
Co-authored-by: Alexei-V-Ivanov-AMD <156011006+Alexei-V-Ivanov-AMD@users.noreply.github.com>
Co-authored-by: ElizaWszola <ewszola@redhat.com>
Co-authored-by: QiliangCui <derrhein@gmail.com>
Co-authored-by: Aaruni Aggarwal <47731267+AaruniAggarwal@users.noreply.github.com>
Co-authored-by: Driss Guessous <32754868+drisspg@users.noreply.github.com>
Co-authored-by: Lifans <draftbks@gmail.com>
Co-authored-by: pramenku <7664080+pramenku@users.noreply.github.com>
Co-authored-by: Luka Govedič <ProExpertProg@users.noreply.github.com>
Co-authored-by: Akash kaothalkar <61960177+Akashcodes732@users.noreply.github.com>
Co-authored-by: Akash Kaothalkar <akash.kaothalkar@ibm.com>
Co-authored-by: jennyyyyzhen <47012288+jennyyyyzhen@users.noreply.github.com>
Co-authored-by: yZhen <yZhen@fb.com>
Co-authored-by: Kseniya Parkhamchuk <43078183+KsuParkhamchuk@users.noreply.github.com>
Co-authored-by: Se7en <chengzw258@163.com>
Co-authored-by: Conroy Cheers <conroy@corncheese.org>
Co-authored-by: Michael Yao <haifeng.yao@daocloud.io>
Co-authored-by: Yinghai Lu <yinghai@thinkingmachines.ai>
Co-authored-by: Kyle Sayers <kylesayrs@gmail.com>
Co-authored-by: liusiqian-tal <141730978+liusiqian-tal@users.noreply.github.com>
Co-authored-by: Pavani Majety <pmajety@nvidia.com>
Co-authored-by: Ye (Charlotte) Qi <yeq@meta.com>
Co-authored-by: Tianyu Guo <guoty9@mail2.sysu.edu.cn>
Co-authored-by: XiongfeiWei <isaacwxf23@gmail.com>
Co-authored-by: Li Wang <wangli858794774@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Anna Pendleton <pendleton@google.com>
Co-authored-by: Louie Tsai <louie.tsai@intel.com>
Co-authored-by: Li, Jiang <bigpyj64@gmail.com>
Co-authored-by: Rachel Guo <35738743+YUNQIUGUO@users.noreply.github.com>
Co-authored-by: youkaichao <youkaichao@gmail.com>
Co-authored-by: Isotr0py <2037008807@qq.com>
Co-authored-by: Gregory Shtrasberg <156009573+gshtras@users.noreply.github.com>
Co-authored-by: py-andy-c <37168711+py-andy-c@users.noreply.github.com>
Co-authored-by: niu_he <carlton2tang@gmail.com>
Co-authored-by: Junhao Li <junhao@ubicloud.com>
Co-authored-by: leopardracer <136604165+leopardracer@users.noreply.github.com>
Co-authored-by: artetaout <128046886+artetaout@users.noreply.github.com>
Co-authored-by: Ximingwang-09 <72070413+Ximingwang-09@users.noreply.github.com>
Co-authored-by: ximing.wxm <ximing.wxm@antgroup.com>
Co-authored-by: runzhen <wangrunzhen@gmail.com>
Co-authored-by: David Xia <david@davidxia.com>
Co-authored-by: bnellnm <49004751+bnellnm@users.noreply.github.com>
Co-authored-by: rasmith <Randall.Smith@amd.com>
Co-authored-by: Ning Xie <andy.xning@gmail.com>
Co-authored-by: Brayden Zhong <b8zhong@uwaterloo.ca>
Co-authored-by: wonjun Jang <strutive07@gmail.com>
Co-authored-by: Aaron Pham <Aaronpham0103@gmail.com>
Co-authored-by: Wentao Ye <44945378+yewentao256@users.noreply.github.com>
Co-authored-by: mobicham <37179323+mobicham@users.noreply.github.com>
Co-authored-by: Sage Moore <sage@neuralmagic.com>
Co-authored-by: kourosh hakhamaneshi <31483498+kouroshHakha@users.noreply.github.com>
Co-authored-by: qizixi <22851944+zixi-qi@users.noreply.github.com>
Co-authored-by: Hyogeun Oh (오효근) <ohg3417@gmail.com>
Co-authored-by: Boyuan Feng <fby.1994@gmail.com>
Co-authored-by: qscqesze <qingjun@minimaxi.com>
Co-authored-by: Concurrensee <yida.wu@amd.com>
Co-authored-by: Saheli Bhattacharjee <47847054+sahelib25@users.noreply.github.com>
Co-authored-by: jiahanc <173873397+jiahanc@users.noreply.github.com>
Co-authored-by: Konrad Zawora <kzawora@habana.ai>
Co-authored-by: maobaolong <baoloongmao@tencent.com>
Co-authored-by: Ilya Markov <markovilya197@gmail.com>
Co-authored-by: quanliu <33453350+quanliu1991@users.noreply.github.com>
Co-authored-by: 刘全 <quan.liu2@dbappsecurity.com.cn>
Co-authored-by: Francesco Bertolotti <f14.bertolotti@gmail.com>
Co-authored-by: Francesco Bertolotti <francesco.bertolotti@igenius.ai>
Co-authored-by: Szymon Ożóg <58388001+SzymonOzog@users.noreply.github.com>
Co-authored-by: Navanit Dubey <98005188+Navanit-git@users.noreply.github.com>
Co-authored-by: Shawn Tan <shawntan@ibm.com>
Co-authored-by: qscqesze <qscqesze@gmail.com>
@TPLink32

INFO 06-17 11:00:04 [__init__.py:244] Automatically detected platform cuda.
INFO 06-17 11:00:08 [api_server.py:1287] vLLM API server version 0.9.2.dev110+g119f68394
INFO 06-17 11:00:08 [cli_args.py:309] non-default args: {'host': '0.0.0.0', 'port': 7778, 'model': 'Qwen3-Reranker-0.6B-seq-cls', 'max_model_len': 12000, 'served_model_name': ['rer'], 'gpu_memory_utilization': 0.4, 'max_num_seqs': 8}
INFO 06-17 11:00:16 [config.py:831] This model supports multiple tasks: {'embed', 'classify', 'score', 'reward'}. Defaulting to 'score'.
INFO 06-17 11:00:16 [config.py:3270] Downcasting torch.float32 to torch.float16.
INFO 06-17 11:00:16 [config.py:1444] Using max model len 12000
WARNING 06-17 11:00:16 [arg_utils.py:1665] --task score is not supported by the V1 Engine. Falling back to V0.
INFO 06-17 11:00:16 [api_server.py:265] Started engine process with PID 3851273
INFO 06-17 11:00:19 [__init__.py:244] Automatically detected platform cuda.
INFO 06-17 11:00:22 [llm_engine.py:230] Initializing a V0 LLM engine (v0.9.2.dev110+g119f68394) with config: model='Qwen3-Reranker-0.6B-seq-cls', speculative_config=None, tokenizer='Qwen3-Reranker-0.6B-seq-cls', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config={}, tokenizer_revision=None, trust_remote_code=False, dtype=torch.float16, max_seq_len=12000, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_backend=''), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None), seed=0, served_model_name=rer, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=None, chunked_prefill_enabled=False, use_async_output_proc=False, pooler_config=PoolerConfig(pooling_type=None, normalize=None, softmax=None, step_tag_id=None, returned_token_ids=None), compilation_config={"level":0,"debug_dump_path":"","cache_dir":"","backend":"","custom_ops":[],"splitting_ops":[],"use_inductor":true,"compile_sizes":[],"inductor_compile_config":{"enable_auto_functionalized_v2":false},"inductor_passes":{},"use_cudagraph":true,"cudagraph_num_of_warmups":0,"cudagraph_capture_sizes":[8,4,2,1],"cudagraph_copy_inputs":false,"full_cuda_graph":false,"max_capture_size":8,"local_cache_dir":null}, use_cached_outputs=True,
INFO 06-17 11:00:24 [cuda.py:336] Using Flash Attention backend.
[W617 11:00:34.947629619 socket.cpp:200] [c10d] The hostname of the client socket cannot be retrieved. err=-3
[W617 11:00:44.958527474 socket.cpp:200] [c10d] The hostname of the client socket cannot be retrieved. err=-3
INFO 06-17 11:00:44 [parallel_state.py:1072] rank 0 in world size 1 is assigned as DP rank 0, PP rank 0, TP rank 0, EP rank 0
INFO 06-17 11:00:44 [model_runner.py:1171] Starting to load model Qwen3-Reranker-0.6B-seq-cls...

tomaarsen/Qwen3-Reranker-0.6B-seq-cls works:
curl http://127.0.0.1:7778/rerank -H 'accept: application/json' -H 'Content-Type: application/json' -d '{
"query": "ping",
"documents": ["pong"],
"model": "rer"
}'
{"id":"rerank-18af6afa320b490681ec6daecdac909a","model":"rer","usage":{"total_tokens":2},"results":[{"index":0,"document":{"text":"pong"},"relevance_score":0.06632687151432037}]}

Qwen/Qwen3-Reranker-0.6B fails with:
{"object":"error","message":"The model does not support Rerank (Score) API","type":"BadRequestError","param":null,"code":400}

@CallmeZhangChenchen

@noooop
Hi! The speed of Qwen3-Embedding-0.6B using vLLM is very slow.

GPU 4090


vllm:0.9.1

vllm serve /models/Qwen3-Embedding-0.6B --trust-remote-code

import time

begin = time.time()
responses = client.embeddings.create(
    input=["Hi! Follow the white rabbit."],
    model=model,
)
print(time.time() - begin)

100ms


sentence_transformers

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("/models/Qwen3-Embedding-0.6B")
tensor = model.encode("Hi! Follow the white rabbit.")

33ms

@noooop
Contributor Author

noooop commented Jun 17, 2025

@noooop Hi! the speed of Qwen3-Embedding-0.6B using vllm is very slow [...]

Using batch operations might be faster; vLLM's advantage is in throughput rather than latency.
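As a sketch of the batching suggestion (assuming a vLLM server at http://127.0.0.1:8000 serving Qwen/Qwen3-Embedding-0.6B and the `openai` Python client; `embed_batch` and the sample `texts` are illustrative, not part of this PR):

```python
# Illustrative sketch: send one batched /v1/embeddings request instead of
# many single-input requests, so the per-request overhead is paid once.
texts = [
    "Follow the white rabbit.",
    "Hi! Follow the white rabbit.",
    "There is no spoon.",
]

def embed_batch(client, model, texts):
    # One HTTP round trip for all inputs instead of len(texts) round trips.
    resp = client.embeddings.create(input=texts, model=model)
    return [d.embedding for d in resp.data]

# Usage against a running server (requires `pip install openai`):
# from openai import OpenAI
# client = OpenAI(base_url="http://127.0.0.1:8000/v1", api_key="EMPTY")
# vectors = embed_batch(client, "Qwen/Qwen3-Embedding-0.6B", texts)
# assert len(vectors) == len(texts)
```

The per-item latency should drop as batch size grows, since tokenization and HTTP overhead are amortized across the batch.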

@noooop
Contributor Author

noooop commented Jun 17, 2025

Qwen/Qwen3-Reranker-0.6B {"object":"error","message":"The model does not support Rerank (Score) API","type":"BadRequestError","param":null,"code":400}

Have you tried the command below?

For the official model:

vllm serve Qwen/Qwen3-Reranker-0.6B --hf_overrides '{"architectures": ["Qwen3ForSequenceClassification"],"classifier_from_token": ["no", "yes"],"is_original_qwen3_reranker": true}'
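The same overrides can also be used with the offline `LLM` API. A hedged sketch (assumes vLLM is installed with a GPU available and that `task="score"` / `LLM.score` behave as documented for this vLLM version; the live call is left commented out):

```python
# The hf_overrides payload from the serve command above, as a Python dict.
# It remaps the original reranker onto Qwen3ForSequenceClassification and
# tells vLLM to build the classifier head from the "no"/"yes" token logits.
hf_overrides = {
    "architectures": ["Qwen3ForSequenceClassification"],
    "classifier_from_token": ["no", "yes"],
    "is_original_qwen3_reranker": True,
}

# Offline usage (requires vLLM and a GPU):
# from vllm import LLM
# llm = LLM(model="Qwen/Qwen3-Reranker-0.6B", task="score",
#           hf_overrides=hf_overrides)
# print(llm.score("ping", ["pong"])[0].outputs.score)
```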

@noob-ctrl

@noooop It seems it does not support tp_size > 1.

@noooop
Contributor Author

noooop commented Jun 17, 2025

@noooop It seems not support tp_size > 1

I can't test tp > 1 at the moment; contributions are welcome.

@TPLink32

Thanks!
New problem with embedding:
INFO 06-17 16:37:49 [engine.py:317] Added request embd-be953f7ccae14a96b5f9f8e13b16b4d7-61.
INFO 06-17 16:37:49 [engine.py:317] Added request embd-be953f7ccae14a96b5f9f8e13b16b4d7-62.
INFO 06-17 16:37:49 [engine.py:317] Added request embd-be953f7ccae14a96b5f9f8e13b16b4d7-63.
ERROR 06-17 16:37:49 [engine.py:165] AttributeError("'PlaceholderBlockSpaceManager' object has no attribute 'remove_seq_from_computed_blocks_tracker'")
ERROR 06-17 16:37:49 [engine.py:165] Traceback (most recent call last):
ERROR 06-17 16:37:49 [engine.py:165] File "/data/code/llm_test/vllm_env/lib/python3.11/site-packages/vllm/engine/multiprocessing/engine.py", line 163, in start
ERROR 06-17 16:37:49 [engine.py:165] self.run_engine_loop()
ERROR 06-17 16:37:49 [engine.py:165] File "/data/code/llm_test/vllm_env/lib/python3.11/site-packages/vllm/engine/multiprocessing/engine.py", line 226, in run_engine_loop
ERROR 06-17 16:37:49 [engine.py:165] request_outputs = self.engine_step()
ERROR 06-17 16:37:49 [engine.py:165] ^^^^^^^^^^^^^^^^^^
ERROR 06-17 16:37:49 [engine.py:165] File "/data/code/llm_test/vllm_env/lib/python3.11/site-packages/vllm/engine/multiprocessing/engine.py", line 252, in engine_step
ERROR 06-17 16:37:49 [engine.py:165] raise e
ERROR 06-17 16:37:49 [engine.py:165] File "/data/code/llm_test/vllm_env/lib/python3.11/site-packages/vllm/engine/multiprocessing/engine.py", line 235, in engine_step
ERROR 06-17 16:37:49 [engine.py:165] return self.engine.step()
ERROR 06-17 16:37:49 [engine.py:165] ^^^^^^^^^^^^^^^^^^
ERROR 06-17 16:37:49 [engine.py:165] File "/data/code/llm_test/vllm_env/lib/python3.11/site-packages/vllm/engine/llm_engine.py", line 1296, in step
ERROR 06-17 16:37:49 [engine.py:165] ) = self.scheduler[virtual_engine].schedule()
ERROR 06-17 16:37:49 [engine.py:165] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 06-17 16:37:49 [engine.py:165] File "/data/code/llm_test/vllm_env/lib/python3.11/site-packages/vllm/core/scheduler.py", line 1553, in schedule
ERROR 06-17 16:37:49 [engine.py:165] scheduler_outputs: SchedulerOutputs = self._schedule()
ERROR 06-17 16:37:49 [engine.py:165] ^^^^^^^^^^^^^^^^
ERROR 06-17 16:37:49 [engine.py:165] File "/data/code/llm_test/vllm_env/lib/python3.11/site-packages/vllm/core/scheduler.py", line 1512, in _schedule
ERROR 06-17 16:37:49 [engine.py:165] return self._schedule_default()
ERROR 06-17 16:37:49 [engine.py:165] ^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 06-17 16:37:49 [engine.py:165] File "/data/code/llm_test/vllm_env/lib/python3.11/site-packages/vllm/core/scheduler.py", line 1277, in _schedule_default
ERROR 06-17 16:37:49 [engine.py:165] prefills = self._schedule_prefills(budget,
ERROR 06-17 16:37:49 [engine.py:165] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 06-17 16:37:49 [engine.py:165] File "/data/code/llm_test/vllm_env/lib/python3.11/site-packages/vllm/core/scheduler.py", line 1195, in _schedule_prefills
ERROR 06-17 16:37:49 [engine.py:165] self.remove_seq_from_computed_blocks_tracker(
ERROR 06-17 16:37:49 [engine.py:165] File "/data/code/llm_test/vllm_env/lib/python3.11/site-packages/vllm/core/scheduler.py", line 1715, in remove_seq_from_computed_blocks_tracker
ERROR 06-17 16:37:49 [engine.py:165] self._remove_seq_from_computed_blocks_tracker(seq)
ERROR 06-17 16:37:49 [engine.py:165] File "/data/code/llm_test/vllm_env/lib/python3.11/site-packages/vllm/core/scheduler.py", line 1722, in _remove_seq_from_computed_blocks_tracker
ERROR 06-17 16:37:49 [engine.py:165] self.block_manager.remove_seq_from_computed_blocks_tracker(seq)
ERROR 06-17 16:37:49 [engine.py:165] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 06-17 16:37:49 [engine.py:165] AttributeError: 'PlaceholderBlockSpaceManager' object has no attribute 'remove_seq_from_computed_blocks_tracker'

@noooop
Contributor Author

noooop commented Jun 17, 2025

ERROR 06-17 16:37:49 [engine.py:165] AttributeError: 'PlaceholderBlockSpaceManager' object has no attribute 'remove_seq_from_computed_blocks_tracker'

fixed by #19686

@dengcao

dengcao commented Jun 20, 2025

minpeter pushed a commit to minpeter/vllm that referenced this pull request Jun 24, 2025
Signed-off-by: minpeter <kali2005611@gmail.com>
Labels: documentation, frontend, ready

Successfully merging this pull request may close these issues.

- [Feature]: support to qwen3 embedding and rerank via vllm serve command
- [Bug]: Support Qwen3 Reranker
- [Feature]: Support Qwen3 Embedding & Reranker