@@ -39,9 +39,9 @@ This living user guide outlines a few known **important changes and limitations**
 For each item, our progress towards V1 support falls into one of the following states:
 
 - **🚀 Optimized**: Nearly fully optimized, with no further work currently planned.
-- **🟢 Functional**: Fully operational, with ongoing optimizations.
-- **🚧 WIP**: Under active development.
-- **🟡 Planned**: Scheduled for future implementation (some may have open PRs/RFCs).
+- **🟢 Functional**: Fully operational, with ongoing optimizations.
+- **🚧 WIP**: Under active development.
+- **🟡 Planned**: Scheduled for future implementation (some may have open PRs/RFCs).
 - **🟠 Delayed**: Temporarily dropped in V1 but planned to be re-introduced later.
 - **🔴 Deprecated**: Not planned for V1 unless there is strong demand.
 
@@ -70,7 +70,7 @@ For each item, our progress towards V1 support falls into one of the following states:
 |-----------------------------|------------------------------------------------------------------------------------|
 | **Decoder-only Models**     | <nobr>🚀 Optimized</nobr>                                                            |
 | **Encoder-Decoder Models**  | <nobr>🟠 Delayed</nobr>                                                              |
-| **Embedding Models**        | <nobr>🚧 WIP ([PR #16188](https://github.com/vllm-project/vllm/pull/16188))</nobr>   |
+| **Embedding Models**        | <nobr>🟢 Functional</nobr>                                                           |
 | **Mamba Models**            | <nobr>🚧 WIP ([PR #19327](https://github.com/vllm-project/vllm/pull/19327))</nobr>   |
 | **Multimodal Models**       | <nobr>🟢 Functional</nobr>                                                           |
 
@@ -80,11 +80,11 @@ vLLM V1 currently excludes model architectures with the `SupportsV0Only` protocol
 
 This corresponds to the V1 column in our [list of supported models][supported-models].
 
-See below for the status of models that are still not yet supported in V1.
+See below for the status of models that are not yet supported in V1 or that have more features planned.
 
 #### Embedding Models
 
-The initial support will be provided by [PR #16188](https://github.com/vllm-project/vllm/pull/16188).
+Initial basic support is now functional.
 
 Later, we will consider using the [hidden states processor](https://github.com/vllm-project/vllm/issues/12249),
 which is based on the [global logits processor](https://github.com/vllm-project/vllm/pull/13360)
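Since this change marks embedding models as functional in V1, a quick smoke test can sanity-check the claim. This is only a sketch, not an official procedure: the model name is an arbitrary illustrative choice, and the exact flags and the default engine depend on your vLLM version.

```shell
# Force the V1 engine (already the default in recent releases) and serve
# an embedding model; the model name here is just an illustrative example.
VLLM_USE_V1=1 vllm serve intfloat/e5-mistral-7b-instruct --task embed

# In another terminal, query the OpenAI-compatible embeddings endpoint.
curl http://localhost:8000/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{"model": "intfloat/e5-mistral-7b-instruct", "input": "Hello, world!"}'
```

A successful response contains an embedding vector, which is enough to confirm the path works end to end on V1.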