Update:
- Please see [RFC]: Performance Roadmap #6801 for major items in the performance sprint.
- Please see vLLM's V1 Engine Architecture #8779 for major items in a new architecture aimed at simplicity and performance.
- We are in the feedback gathering phase for Q4 roadmap!
This document includes the features in vLLM's roadmap for Q3 2024. Please feel free to discuss and contribute, as this roadmap is shaped by the vLLM community.
Themes
As before, we categorized our roadmap into 6 broad themes:
- Broad model support: vLLM should support a wide range of transformer-based models and be kept up to date as much as possible. This includes new auto-regressive decoder models, encoder-decoder models, hybrid architectures, and models supporting multi-modal inputs.
- Excellent hardware coverage: vLLM should run on a wide range of accelerators for production AI workloads. This includes GPUs, tensor accelerators, and CPUs. We will work closely with hardware vendors to ensure vLLM gets the best performance out of each chip.
- Performance optimization: vLLM should be kept up to date with the latest performance optimization techniques. Users of vLLM can trust its performance to be competitive and strong.
- Production level engine: vLLM should be the go-to choice as a production-level serving engine, with a suite of features bridging the gap from a single forward pass to 24/7 service.
- Strong OSS product: vLLM is and will be a true community project. We want it to be a healthy project with a regular release cadence, good documentation, and a growing set of reviewers for the codebase.
- Extensible architectures: For vLLM to grow at an even faster pace, it needs good abstractions to support a wide range of scheduling policies, hardware backends, and inference optimizations. We will work on refactoring the codebase to support that.
Broad Model Support
- Support Large Models (Arctic, Nemotron4, Llama3 400B+ when released); a hedged serving sketch follows this list
- Via Pipeline Parallelism [Core] Pipeline Parallel Support #4412
- Via FP8
- New Attention Mechanisms (Jamba, Phi3-Small, etc.)
- Encoder Decoder ([Core] Cross-attention KV caching and memory-management (towards eventual encoder/decoder model support) #4837, [Kernel] Correctly invoke prefill & decode kernels for cross-attention (towards eventual encoder/decoder model support) #4888, [Core] Subclass ModelRunner to support cross-attention & encoder sequences (towards eventual encoder/decoder model support) #4942)
- Multi-Modal [RFC]: Multi-modality Support on vLLM #4194
Help wanted:
- Whisper and the audio API
- Arbitrary HF model
- Chameleon ([Model] Initial Support for Chameleon #5770)
- Multi-token prediction
- Reward model API
- Embedding Model Expansion (Bert, XLMRoberta) ([Model] Bert Embedding Model #5447)
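As referenced in the "Support Large Models" item above, here is a minimal, hedged sketch of the kind of setup that item targets, using tensor parallelism plus dynamic FP8 quantization through vLLM's offline LLM entrypoint. The model name is a placeholder (a 70B stand-in for the 400B+ targets), FP8 requires a supporting GPU, and exact argument availability varies across releases.

```python
# Hedged sketch: serve a large model sharded across GPUs with FP8 weights.
# Model name is a placeholder; argument names follow vLLM's LLM engine args
# of this period and may differ across releases.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Meta-Llama-3-70B-Instruct",  # stand-in for 400B+ models
    tensor_parallel_size=4,    # shard weights across 4 GPUs within a node
    quantization="fp8",        # dynamic FP8 quantization to cut memory use
)

outputs = llm.generate(
    ["Explain pipeline parallelism in one sentence."],
    SamplingParams(max_tokens=64),
)
print(outputs[0].outputs[0].text)
```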
Hardware Support
- A feature matrix for all the hardware that vLLM supports, and their maturity level
- Enhanced performance benchmarks across hardware platforms
- Expanding feature support on various hardware platforms
- PagedAttention and Chunked Prefill on Inferentia
- Chunked Prefill on Intel CPU/GPU
- PagedAttention on Intel Gaudi
- TP and INT8 on TPU
- Bug fixes and GEMM tuning on AMD GPUs
Performance Optimizations
- Spec Decode Optimization (tracker); a minimal configuration sketch follows this list
- APC (Automatic Prefix Caching) Optimizations
- Guided Decoding Optimizations
- API server performance
- Quantization
- FP8/INT8 quantization improvements
- Quantized MoEs
- AWQ Performance
- Fused GEMM/all-reduce
- Scheduler overhead removal
- Optimize input preparation, sampling, and output processing
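As referenced under "Spec Decode Optimization" above, the following is a minimal, hedged sketch of one speculative decoding configuration, n-gram prompt lookup, via the offline LLM entrypoint. The argument names follow the engine args vLLM exposes in this period and may change as the optimizations on this list land.

```python
# Hedged sketch: n-gram prompt-lookup speculative decoding. No draft model is
# loaded; draft tokens are proposed by matching n-grams already in the prompt.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # placeholder model
    speculative_model="[ngram]",   # use prompt lookup instead of a draft model
    num_speculative_tokens=5,      # draft tokens proposed per decoding step
    ngram_prompt_lookup_max=4,     # largest n-gram to match against the prompt
    use_v2_block_manager=True,     # required by speculative decoding at the time
)

outputs = llm.generate(
    ["The quick brown fox jumps over the lazy dog. The quick brown"],
    SamplingParams(max_tokens=32),
)
print(outputs[0].outputs[0].text)
```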
Production Features
- Chunked Prefill on by default (a sketch of enabling it explicitly follows this list)
- APC on by default
- N-gram prompt lookup spec decode on by default
- Tool use
- Request prioritization framework
Help wanted
- Support multiple models in the same server
- [Feedback wanted] Disaggregated prefill: please discuss with us your use case and in what scenario it is preferred over chunked prefill.
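Several items above propose turning existing features on by default. As noted in the first item, here is a hedged sketch of what enabling chunked prefill and automatic prefix caching explicitly looks like today with the offline LLM entrypoint; the flag names follow vLLM's current engine args, and the defaults may already differ in newer releases.

```python
# Hedged sketch: explicitly enabling chunked prefill and automatic prefix
# caching (APC), the features this section proposes to make default.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # placeholder model
    enable_chunked_prefill=True,   # split long prefills into schedulable chunks
    enable_prefix_caching=True,    # reuse KV cache across shared prompt prefixes
)

outputs = llm.generate(["Hello, my name is"], SamplingParams(max_tokens=16))
print(outputs[0].outputs[0].text)
```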
OSS Community
- Reproducible performance benchmarks on realistic workloads
- CI enhancements
- Release process: minimize breaking changes and include deprecations
Help wanted
- Documentation enhancements in general (styling, UI, explainers, tutorials, examples, etc)
Extensible Architecture
- KV cache transfer [RFC]: Implement disaggregated prefilling via KV cache transfer #5557
- Distributed execution [RFC]: A Flexible Architecture for Distributed Inference #5775
- Improvements to scheduler and memory manager supporting new attention mechanisms
- Performance enhancement for multi-modal processing
If any item you want is not on the roadmap, your suggestions and contributions are still welcome! Please feel free to comment in this thread, open a feature request, or create an RFC.