| Documentation | Blog |
vLLM is a fast and easy-to-use library for LLM inference and serving.
- [2023/06] We officially released vLLM! vLLM has powered LMSYS Vicuna and Chatbot Arena since mid-April. Check out our blog post.
Visit our documentation to get started.
- Installation: `pip install vllm`
- Quickstart
- Supported Models
vLLM comes with many powerful features that include:
- State-of-the-art performance in serving throughput
- Efficient management of attention key and value memory with PagedAttention
- Seamless integration with popular HuggingFace models
- Dynamic batching of incoming requests
- Optimized CUDA kernels
- High-throughput serving with various decoding algorithms, including parallel sampling and beam search
- Tensor parallelism support for distributed inference
- Streaming outputs
- OpenAI-compatible API server
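The PagedAttention idea above can be illustrated with a toy sketch. This is not vLLM's implementation, just a minimal model of the core trick: the KV cache is split into fixed-size blocks, and each sequence keeps a block table mapping logical token positions to physical blocks, so memory is allocated on demand rather than reserved up front for the maximum sequence length. All names here (`BlockAllocator`, `Sequence`, `BLOCK_SIZE`) are illustrative, not vLLM APIs.

```python
# Toy sketch of block-based KV-cache allocation in the spirit of
# PagedAttention (not vLLM's actual implementation).

BLOCK_SIZE = 4  # tokens per KV-cache block (illustrative value)

class BlockAllocator:
    """Pool of physical KV-cache blocks."""
    def __init__(self, num_blocks):
        self.free_blocks = list(range(num_blocks))

    def allocate(self):
        if not self.free_blocks:
            raise MemoryError("KV cache exhausted")
        return self.free_blocks.pop()

    def free(self, block):
        self.free_blocks.append(block)

class Sequence:
    """A request's block table: logical block index -> physical block."""
    def __init__(self, allocator):
        self.allocator = allocator
        self.block_table = []
        self.num_tokens = 0

    def append_token(self):
        # A new physical block is allocated only when the last one is full,
        # so memory grows with the actual output length.
        if self.num_tokens % BLOCK_SIZE == 0:
            self.block_table.append(self.allocator.allocate())
        self.num_tokens += 1

    def release(self):
        # Returning blocks to the pool lets other requests reuse them.
        for block in self.block_table:
            self.allocator.free(block)
        self.block_table.clear()
        self.num_tokens = 0

allocator = BlockAllocator(num_blocks=8)
seq = Sequence(allocator)
for _ in range(6):  # generate 6 tokens
    seq.append_token()
print(len(seq.block_table))        # 6 tokens fit in ceil(6/4) = 2 blocks
seq.release()
print(len(allocator.free_blocks))  # all 8 blocks are free again
```

Because blocks are fixed-size and freed as soon as a request finishes, fragmentation stays low and the freed memory lets the scheduler batch more concurrent requests, which is where the throughput gains come from.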
vLLM outperforms HuggingFace Transformers (HF) by up to 24x and Text Generation Inference (TGI) by up to 3.5x in terms of throughput. For details, check out our blog post.
Serving throughput when each request asks for 1 output completion.
Serving throughput when each request asks for 3 output completions.
We welcome and value any contributions and collaborations. Please check out CONTRIBUTING.md for how to get involved.