Dockerized vLLM serving for Kimi-Linear-48B-A3B (AWQ-4bit), from 128K to 1M context.
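The repository itself isn't reproduced here, but a setup like this typically boils down to vLLM's official OpenAI-compatible Docker image plus an AWQ quantization flag and a configurable context window. The sketch below is an assumption of what such an invocation looks like, not the repo's actual command: the model repo ID is a hypothetical placeholder, and the 131072-token `--max-model-len` is one example value; pushing toward 1M tokens requires correspondingly more KV-cache memory.

```sh
# Minimal sketch: serve an AWQ-4bit Kimi-Linear build with vLLM's
# OpenAI-compatible Docker image. The model ID below is a placeholder
# for whichever AWQ quant you use (assumption, not a real repo ID).
docker run --gpus all \
  -v "$HOME/.cache/huggingface:/root/.cache/huggingface" \
  -p 8000:8000 \
  vllm/vllm-openai:latest \
  --model <your-awq-quant-of-Kimi-Linear-48B-A3B> \
  --quantization awq \
  --trust-remote-code \
  --max-model-len 131072 \
  --gpu-memory-utilization 0.95
```

Once the container is up, it exposes the standard OpenAI-compatible endpoints on port 8000 (e.g. `/v1/chat/completions`), so any OpenAI-style client can talk to it.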
🚀 Explore Kimi Linear, an expressive and efficient attention architecture for natural language processing tasks.