Large-scale LLM inference engine
Updated Aug 12, 2025 - C++
PipeInfer: Accelerating LLM Inference using Asynchronous Pipelined Speculation
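PipeInfer builds on speculative decoding: a cheap draft model proposes several tokens ahead, and the large target model verifies them, so multiple tokens can be committed per expensive forward pass. Below is a minimal, self-contained sketch of the basic greedy accept/verify loop only, not PipeInfer's asynchronous pipelined variant; `draft_next` and `target_next` are hypothetical stubs standing in for real draft and target models.

```cpp
#include <cstdio>
#include <vector>

// Hypothetical stand-ins for real models: each maps a context to the next
// token it would emit greedily. In a real engine the draft model is small
// and fast, the target model large and accurate. Here both are identical
// toy functions, so every proposal happens to be accepted.
int draft_next(const std::vector<int>& ctx)  { return (ctx.back() * 31 + 7) % 100; }
int target_next(const std::vector<int>& ctx) { return (ctx.back() * 31 + 7) % 100; }

// One synchronous round of speculative decoding:
// 1. The draft model proposes k tokens cheaply.
// 2. The target model verifies them position by position; the longest
//    matching prefix is kept, and the first mismatch is replaced by the
//    target's own token, which ends the round.
// PipeInfer's contribution is running such speculation asynchronously in a
// pipeline; that machinery is omitted here.
int speculative_step(std::vector<int>& tokens, int k) {
    std::vector<int> draft_ctx = tokens;
    std::vector<int> proposed;
    for (int i = 0; i < k; ++i) {
        int t = draft_next(draft_ctx);
        proposed.push_back(t);
        draft_ctx.push_back(t);
    }
    int accepted = 0;
    for (int i = 0; i < k; ++i) {
        int want = target_next(tokens);
        if (want == proposed[i]) {
            tokens.push_back(want);  // draft token confirmed
            ++accepted;
        } else {
            tokens.push_back(want);  // target's correction; stop verifying
            break;
        }
    }
    return accepted;
}

int main() {
    std::vector<int> tokens = {42};  // toy prompt
    for (int round = 0; round < 4; ++round) {
        int accepted = speculative_step(tokens, /*k=*/4);
        std::printf("round %d: accepted %d draft tokens, length now %zu\n",
                    round, accepted, tokens.size());
    }
}
```

When the draft agrees with the target, each round commits k tokens for a single verification pass; when it diverges, progress falls back to one target token per round, which is why acceptance rate drives the speedup.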