forked from vllm-project/vllm

A high-throughput and memory-efficient inference and serving engine for LLMs


wwydmanski/vllm


vLLM: Easy, Fast, and Cheap LLM Serving for Everyone

| Documentation | Blog |

vLLM is a fast and easy-to-use library for LLM inference and serving.


Getting Started

Visit our documentation to get started.
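As a brief illustration, a minimal offline-inference script might look like the sketch below. This assumes `pip install vllm` and a CUDA-capable GPU; the `facebook/opt-125m` model is only an example choice, and the import is guarded so the script degrades gracefully where vLLM is not installed:

```python
# Minimal vLLM offline-inference sketch. Requires `pip install vllm` and a
# CUDA-capable GPU; the import guard lets the script run (as a no-op) without them.
try:
    from vllm import LLM, SamplingParams
    HAVE_VLLM = True
except ImportError:
    HAVE_VLLM = False

prompts = ["Hello, my name is", "The capital of France is"]
sampling_kwargs = dict(temperature=0.8, top_p=0.95, max_tokens=64)

if HAVE_VLLM:
    # Download/load the model and generate completions for all prompts at once.
    llm = LLM(model="facebook/opt-125m")  # example model, swap in your own
    outputs = llm.generate(prompts, SamplingParams(**sampling_kwargs))
    for output in outputs:
        print(output.prompt, "->", output.outputs[0].text)
```

See the documentation for serving options such as the OpenAI-compatible API server.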

Key Features

vLLM comes with many powerful features that include:

  • State-of-the-art performance in serving throughput
  • Efficient management of attention key and value memory with PagedAttention
  • Seamless integration with popular HuggingFace models
  • Dynamic batching of incoming requests
  • Optimized CUDA kernels
  • High-throughput serving with various decoding algorithms, including parallel sampling and beam search
  • Tensor parallelism support for distributed inference
  • Streaming outputs
  • OpenAI-compatible API server
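To give intuition for the PagedAttention feature above: each sequence's KV cache is stored in fixed-size blocks tracked by a per-sequence block table, so memory is allocated on demand and reclaimed blocks are immediately reusable, much like OS virtual-memory paging. The following is an illustrative pure-Python sketch of that bookkeeping only; the class names (`BlockAllocator`, `Sequence`) are hypothetical and not vLLM's actual internals:

```python
# Illustrative sketch of PagedAttention-style KV-cache bookkeeping.
# Names here are hypothetical; vLLM's real implementation lives in CUDA/C++.

class BlockAllocator:
    """Hands out fixed-size KV-cache blocks from a shared free pool."""

    def __init__(self, num_blocks: int, block_size: int):
        self.block_size = block_size
        self.free = list(range(num_blocks))

    def allocate(self) -> int:
        if not self.free:
            raise MemoryError("KV cache exhausted; request must wait")
        return self.free.pop()

    def release(self, block_id: int) -> None:
        self.free.append(block_id)


class Sequence:
    """Maps a request's logical token positions to physical cache blocks."""

    def __init__(self, allocator: BlockAllocator):
        self.allocator = allocator
        self.block_table: list[int] = []  # logical block index -> physical block id
        self.num_tokens = 0

    def append_token(self) -> None:
        # A new block is allocated only when the last one is full, so at most
        # one partially filled block is "wasted" per sequence.
        if self.num_tokens % self.allocator.block_size == 0:
            self.block_table.append(self.allocator.allocate())
        self.num_tokens += 1

    def free(self) -> None:
        # Finished requests return every block to the pool for reuse.
        for block_id in self.block_table:
            self.allocator.release(block_id)
        self.block_table.clear()
        self.num_tokens = 0


allocator = BlockAllocator(num_blocks=8, block_size=4)
seq = Sequence(allocator)
for _ in range(6):  # 6 tokens with block_size=4 -> 2 blocks
    seq.append_token()
print(len(seq.block_table))   # 2
seq.free()
print(len(allocator.free))    # 8 (all blocks reclaimed)
```

Because blocks are allocated lazily and returned on completion, many concurrent requests can share one GPU cache pool, which is what enables the dynamic batching listed above.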

Performance

vLLM achieves up to 24x higher serving throughput than HuggingFace Transformers (HF) and up to 3.5x higher than Text Generation Inference (TGI). For details, check out our blog post.

Figure: Serving throughput when each request asks for 1 output completion.

Figure: Serving throughput when each request asks for 3 output completions.

Contributing

We welcome and value any contributions and collaborations. Please check out CONTRIBUTING.md for how to get involved.
