kvcached: Elastic KV cache for dynamic GPU sharing and efficient multi-LLM inference. (Python, updated Sep 18, 2025)
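The core idea behind an elastic KV cache is that models borrow cache memory from a shared GPU pool on demand and return it when idle, instead of each model statically reserving its maximum. Below is a minimal, hypothetical sketch of that pooling pattern; the `SharedKVBlockPool` class and its `allocate`/`release` methods are illustrative assumptions, not kvcached's actual API.

```python
# Illustrative sketch only: a minimal elastic KV-cache block pool shared by
# several models. Class/method names and the block granularity are assumptions
# made for this example, not kvcached's real interface.
from collections import defaultdict


class SharedKVBlockPool:
    """KV-cache blocks drawn from one shared pool instead of per-model
    static reservations, so idle models free capacity for busy ones."""

    def __init__(self, total_blocks: int):
        self.free_blocks = list(range(total_blocks))
        self.owned = defaultdict(list)  # model name -> block ids in use

    def allocate(self, model: str, n_blocks: int) -> list:
        # Grow a model's KV cache on demand; fail if the pool is exhausted.
        if n_blocks > len(self.free_blocks):
            raise MemoryError("KV block pool exhausted")
        blocks = [self.free_blocks.pop() for _ in range(n_blocks)]
        self.owned[model].extend(blocks)
        return blocks

    def release(self, model: str) -> None:
        # Shrink: return all of a model's blocks so other models can use them.
        self.free_blocks.extend(self.owned.pop(model, []))


pool = SharedKVBlockPool(total_blocks=1024)
pool.allocate("llama-8b", 256)   # one model grows its cache while busy
pool.allocate("qwen-7b", 128)
pool.release("llama-8b")         # it goes idle; its blocks return to the pool
print(len(pool.free_blocks))     # 896 blocks free again
```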
Boosting GPU utilization for LLM serving via dynamic spatial-temporal prefill & decode orchestration.
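Spatial-temporal prefill/decode orchestration generally means packing latency-critical decode steps and compute-heavy prefill chunks into the same GPU step under a shared budget. The sketch below shows one assumed heuristic (serve decodes first, then fill leftover budget with prefill chunks); the `Request`/`schedule_step` names and the token-budget rule are illustrative, not this repository's actual scheduler.

```python
# Toy sketch of prefill/decode interleaving under a per-step token budget.
# Names and the budget heuristic are assumptions for illustration only.
from collections import deque
from dataclasses import dataclass


@dataclass
class Request:
    rid: str
    prompt_tokens: int        # prompt tokens still to prefill
    decode_tokens_left: int   # tokens still to generate


def schedule_step(waiting: deque, running: list, token_budget: int) -> dict:
    """Build one batch: decodes first (1 token each), then spend the
    remaining budget on prefill chunks from waiting requests."""
    batch = {"decode": [], "prefill": []}
    budget = token_budget

    for req in running:                  # decode phase: latency-critical
        if budget == 0:
            break
        batch["decode"].append(req.rid)
        budget -= 1

    while waiting and budget > 0:        # prefill phase: soak up leftover compute
        req = waiting[0]
        chunk = min(req.prompt_tokens, budget)
        batch["prefill"].append((req.rid, chunk))
        req.prompt_tokens -= chunk
        budget -= chunk
        if req.prompt_tokens == 0:       # prompt fully prefilled; start decoding
            running.append(waiting.popleft())
        else:
            break                        # budget exhausted mid-prompt

    return batch


waiting = deque([Request("r1", prompt_tokens=512, decode_tokens_left=64)])
running = [Request("r0", prompt_tokens=0, decode_tokens_left=10)]
print(schedule_step(waiting, running, token_budget=256))
# {'decode': ['r0'], 'prefill': [('r1', 255)]}
```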
Unites GPUs via secure, private crypto transactions to distribute compute across decentralized nodes.