---
layout: blog
title: "介绍 Gateway API 推理扩展"
date: 2025-06-05
slug: introducing-gateway-api-inference-extension
draft: false
author: >
  Daneyon Hansen (Solo.io),
  Kaushik Mitra (Google),
  Jiaxin Shan (Bytedance),
  Kellen Swain (Google)
translator: >
  Michael Yao (DaoCloud)
---
<!--
layout: blog
title: "Introducing Gateway API Inference Extension"
date: 2025-06-05
slug: introducing-gateway-api-inference-extension
draft: false
author: >
  Daneyon Hansen (Solo.io),
  Kaushik Mitra (Google),
  Jiaxin Shan (Bytedance),
  Kellen Swain (Google)
-->

<!--
Modern generative AI and large language model (LLM) services create unique traffic-routing challenges
on Kubernetes. Unlike typical short-lived, stateless web requests, LLM inference sessions are often
long-running, resource-intensive, and partially stateful. For example, a single GPU-backed model server
may keep multiple inference sessions active and maintain in-memory token caches.

Traditional load balancers focused on HTTP path or round-robin lack the specialized capabilities needed
for these workloads. They also don’t account for model identity or request criticality (e.g., interactive
chat vs. batch jobs). Organizations often patch together ad-hoc solutions, but a standardized approach
is missing.
-->
现代生成式 AI 和大语言模型(LLM)服务在 Kubernetes 上带来独特的流量路由挑战。
与典型的短暂、无状态的 Web 请求不同,LLM 推理会话通常是长时间运行、资源密集型的,并且是部分有状态的。
例如,单个由 GPU 支撑的模型服务器可能会保持多个推理会话处于活跃状态,并在内存中保留令牌缓存。

仅关注 HTTP 路径或轮询的传统负载均衡器缺乏处理这类工作负载所需的专门能力,
它们也无法识别模型身份或请求的关键性(例如交互式聊天与批处理任务的区别)。
各个组织往往拼凑出临时解决方案,但一直缺乏标准化的做法。

<!--
## Gateway API Inference Extension

[Gateway API Inference Extension](https://gateway-api-inference-extension.sigs.k8s.io/) was created to address
this gap by building on the existing [Gateway API](https://gateway-api.sigs.k8s.io/), adding inference-specific
routing capabilities while retaining the familiar model of Gateways and HTTPRoutes. By adding an inference
extension to your existing gateway, you effectively transform it into an **Inference Gateway**, enabling you to
self-host GenAI/LLMs with a “model-as-a-service” mindset.
-->
## Gateway API 推理扩展 {#gateway-api-inference-extension}

[Gateway API 推理扩展](https://gateway-api-inference-extension.sigs.k8s.io/)正是为了填补这一空白而创建的,
它基于已有的 [Gateway API](https://gateway-api.sigs.k8s.io/) 进行构建,
添加了特定于推理的路由能力,同时保留了 Gateway 与 HTTPRoute 的熟悉模型。
通过为现有 Gateway 添加推理扩展,你就能将其转变为一个**推理网关(Inference Gateway)**,
从而以“模型即服务”的理念自托管 GenAI/LLM 应用。

<!--
The project’s goal is to improve and standardize routing to inference workloads across the ecosystem. Key
objectives include enabling model-aware routing, supporting per-request criticalities, facilitating safe model
roll-outs, and optimizing load balancing based on real-time model metrics. By achieving these, the project aims
to reduce latency and improve accelerator (GPU) utilization for AI workloads.

## How it works

The design introduces two new Custom Resources (CRDs) with distinct responsibilities, each aligning with a
specific user persona in the AI/ML serving workflow:
-->
此项目的目标是在整个生态系统中改进并标准化对推理工作负载的路由。
关键目标包括实现模型感知路由、支持按请求设定关键性、促进安全的模型发布,
以及基于实时模型指标优化负载均衡。通过实现这些目标,此项目旨在降低延迟并提高 AI 工作负载中的加速器(GPU)利用率。

## 工作原理 {#how-it-works}

此设计引入了两个职责不同的新定制资源(CRD),每个 CRD 对应 AI/ML 服务流程中的一个特定用户角色:

<!--
{{< figure src="inference-extension-resource-model.png" alt="Resource Model" class="diagram-large" clicktozoom="true" >}}
-->
{{< figure src="inference-extension-resource-model.png" alt="资源模型" class="diagram-large" clicktozoom="true" >}}

<!--
1. [InferencePool](https://gateway-api-inference-extension.sigs.k8s.io/api-types/inferencepool/)
   Defines a pool of pods (model servers) running on shared compute (e.g., GPU nodes). The platform admin can
   configure how these pods are deployed, scaled, and balanced. An InferencePool ensures consistent resource
   usage and enforces platform-wide policies. An InferencePool is similar to a Service but specialized for AI/ML
   serving needs and aware of the model-serving protocol.

2. [InferenceModel](https://gateway-api-inference-extension.sigs.k8s.io/api-types/inferencemodel/)
   A user-facing model endpoint managed by AI/ML owners. It maps a public name (e.g., "gpt-4-chat") to the actual
   model within an InferencePool. This lets workload owners specify which models (and optional fine-tuning) they
   want served, plus a traffic-splitting or prioritization policy.
-->
1. [InferencePool](https://gateway-api-inference-extension.sigs.k8s.io/api-types/inferencepool/)
   定义了一组在共享计算资源(如 GPU 节点)上运行的 Pod(模型服务器)。
   平台管理员可以配置这些 Pod 的部署、扩缩容和负载均衡方式。
   InferencePool 确保资源使用的一致性,并执行平台范围的策略。
   InferencePool 类似于 Service,但专为 AI/ML 推理服务定制,能够感知模型服务协议。

2. [InferenceModel](https://gateway-api-inference-extension.sigs.k8s.io/api-types/inferencemodel/)
   是由 AI/ML 拥有者管理的、面向用户的模型端点。
   它将一个对外公开的名称(如 "gpt-4-chat")映射到 InferencePool 内的实际模型。
   这使得工作负载拥有者可以指定要提供的模型(以及可选的微调版本),并配置流量拆分或优先级策略。

<!--
In summary, the InferenceModel API lets AI/ML owners manage what is served, while the InferencePool lets platform
operators manage where and how it’s served.
-->
简而言之,InferenceModel API 让 AI/ML 拥有者管理“提供什么服务”,而
InferencePool 则让平台运维人员管理“在哪儿以及如何提供服务”。

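为了更直观地展示这两个 API 的分工,下面给出一个最小的示意性清单(并非本文原文内容):
字段名基于撰写本文时的 `inference.networking.x-k8s.io/v1alpha2` API,
不同版本可能有所差异,请以[项目文档](https://gateway-api-inference-extension.sigs.k8s.io/)为准;
其中的资源名称和标签均为假设的示例值。

```yaml
# 平台管理员管理的 InferencePool:选择一组模型服务器 Pod,
# 并通过 extensionRef 指向端点选择扩展(EPP)的 Service。
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferencePool
metadata:
  name: vllm-llama2-pool          # 假设的名称
spec:
  selector:
    app: vllm-llama2              # 匹配模型服务器 Pod 的标签
  targetPortNumber: 8000          # 模型服务器监听的端口
  extensionRef:
    name: vllm-llama2-epp         # 假设的端点选择扩展(EPP)Service 名称
---
# AI/ML 拥有者管理的 InferenceModel:把对外公开的模型名映射到池内的实际模型,
# 并声明请求关键性以及(可选的)流量拆分策略。
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferenceModel
metadata:
  name: gpt-4-chat                # 假设的名称
spec:
  modelName: gpt-4-chat           # 客户端请求中使用的公开模型名
  criticality: Critical           # 请求关键性(例如 Critical、Standard、Sheddable)
  poolRef:
    name: vllm-llama2-pool        # 引用上面的 InferencePool
  targetModels:                   # 可选:按权重在不同微调版本之间拆分流量
  - name: llama2-chat-base
    weight: 80
  - name: llama2-chat-lora-v2
    weight: 20
```

这样的职责划分意味着平台团队与模型团队可以各自独立地演进自己负责的资源。
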
<!--
## Request flow

The flow of a request builds on the Gateway API model (Gateways and HTTPRoutes) with one or more extra inference-aware
steps (extensions) in the middle. Here’s a high-level example of the request flow with the
[Endpoint Selection Extension (ESE)](https://gateway-api-inference-extension.sigs.k8s.io/#endpoint-selection-extension):
-->
## 请求流程 {#request-flow}

请求的处理流程基于 Gateway API 模型(Gateway 和 HTTPRoute),并在其中插入一个或多个推理感知的步骤(扩展)。
下面以[端点选择扩展(Endpoint Selection Extension,ESE)](https://gateway-api-inference-extension.sigs.k8s.io/#endpoint-selection-extension)
为例,从较高层面展示请求流程:

<!--
{{< figure src="inference-extension-request-flow.png" alt="Request Flow" class="diagram-large" clicktozoom="true" >}}
-->
{{< figure src="inference-extension-request-flow.png" alt="请求流程" class="diagram-large" clicktozoom="true" >}}

<!--
1. **Gateway Routing**
   A client sends a request (e.g., an HTTP POST to /completions). The Gateway (like Envoy) examines the HTTPRoute
   and identifies the matching InferencePool backend.

2. **Endpoint Selection**
   Instead of simply forwarding to any available pod, the Gateway consults an inference-specific routing extension—
   the Endpoint Selection Extension—to pick the best of the available pods. This extension examines live pod metrics
   (queue lengths, memory usage, loaded adapters) to choose the ideal pod for the request.
-->
1. **Gateway 路由**

   客户端发送请求(例如向 `/completions` 发起 HTTP POST)。
   Gateway(如 Envoy)会检查 HTTPRoute,并识别出匹配的 InferencePool 后端
   (可参见本节末尾的 HTTPRoute 示例)。

2. **端点选择**

   Gateway 不会简单地将请求转发到任意可用的 Pod,
   而是调用一个特定于推理的路由扩展(端点选择扩展)从多个可用 Pod 中选出最优者。
   此扩展根据实时 Pod 指标(如队列长度、内存使用量、加载的适配器等)来选择最适合请求的 Pod。

<!--
3. **Inference-Aware Scheduling**
   The chosen pod is the one that can handle the request with the lowest latency or highest efficiency, given the
   user’s criticality or resource needs. The Gateway then forwards traffic to that specific pod.
-->
3. **推理感知调度**

   在满足用户的关键性或资源需求的前提下,所选 Pod 是能以最低延迟或最高效率处理该请求的 Pod。
   随后 Gateway 将流量转发到这个特定的 Pod。

<!--
{{< figure src="inference-extension-epp-scheduling.png" alt="Endpoint Extension Scheduling" class="diagram-large" clicktozoom="true" >}}
-->
{{< figure src="inference-extension-epp-scheduling.png" alt="端点扩展调度" class="diagram-large" clicktozoom="true" >}}

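作为参考,下面给出与第 1 步对应的一个示意性 HTTPRoute(并非本文原文内容,仅为草稿示例):
它展示了如何把发往推理网关的请求交给 InferencePool 后端处理。
其中的 Gateway 名称、路径和 InferencePool 名称均为假设值,
`backendRefs` 引用 InferencePool 的具体写法请以项目指南为准。

```yaml
# 示意:将发往推理网关的请求路由到 InferencePool(各名称均为假设值)
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: llm-route
spec:
  parentRefs:
  - name: inference-gateway      # 假设已存在的 Gateway
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /                 # 例如 /completions 等推理请求路径
    backendRefs:
    - group: inference.networking.x-k8s.io
      kind: InferencePool        # 后端不是 Service,而是 InferencePool
      name: vllm-llama2-pool     # 引用前文示例中的 InferencePool
```

对客户端而言,这仍是一条普通的 HTTP 路由;端点选择等推理相关的逻辑由网关及其扩展在后台完成。
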
<!--
This extra step provides a smarter, model-aware routing mechanism that still feels like a normal single request to
the client. Additionally, the design is extensible—any Inference Gateway can be enhanced with additional inference-specific
extensions to handle new routing strategies, advanced scheduling logic, or specialized hardware needs. As the project
continues to grow, contributors are encouraged to develop new extensions that are fully compatible with the same underlying
Gateway API model, further expanding the possibilities for efficient and intelligent GenAI/LLM routing.
-->
这个额外步骤提供了一种更智能的模型感知路由机制,但对于客户端来说感觉就像一个普通的请求。
此外,此设计具有良好的可扩展性,任何推理网关都可以通过添加更多推理特定的扩展来处理新的路由策略、高级调度逻辑或特定硬件需求。
随着此项目的持续发展,欢迎社区贡献者开发与底层 Gateway API 模型完全兼容的新扩展,进一步拓展高效、智能的 GenAI/LLM 路由能力。

<!--
## Benchmarks

We evaluated this extension against a standard Kubernetes Service for a [vLLM](https://docs.vllm.ai/en/latest/)‐based model
serving deployment. The test environment consisted of multiple H100 (80 GB) GPU pods running vLLM ([version 1](https://blog.vllm.ai/2025/01/27/v1-alpha-release.html))
on a Kubernetes cluster, with 10 Llama2 model replicas. The [Latency Profile Generator (LPG)](https://github.com/AI-Hypercomputer/inference-benchmark)
tool was used to generate traffic and measure throughput, latency, and other metrics. The
[ShareGPT](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/resolve/main/ShareGPT_V3_unfiltered_cleaned_split.json)
dataset served as the workload, and traffic was ramped from 100 Queries per Second (QPS) up to 1000 QPS.
-->
## 基准测试 {#benchmarks}

针对基于 [vLLM](https://docs.vllm.ai/en/latest/) 的模型服务部署,
我们将此扩展与标准 Kubernetes Service 进行了对比评估。
测试环境是在 Kubernetes 集群中运行 vLLM([v1](https://blog.vllm.ai/2025/01/27/v1-alpha-release.html))
的多个 H100(80 GB)GPU Pod,并部署了 10 个 Llama2 模型副本。
本次测试使用 [Latency Profile Generator (LPG)](https://github.com/AI-Hypercomputer/inference-benchmark)
工具生成流量,并测量吞吐量、延迟等指标。工作负载采用
[ShareGPT](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/resolve/main/ShareGPT_V3_unfiltered_cleaned_split.json)
数据集,流量从每秒 100 个查询(QPS)逐步增加到 1000 QPS。

<!--
### Key results

{{< figure src="inference-extension-benchmark.png" alt="Endpoint Extension Scheduling" class="diagram-large" clicktozoom="true" >}}
-->
### 主要结果 {#key-results}

{{< figure src="inference-extension-benchmark.png" alt="端点扩展调度" class="diagram-large" clicktozoom="true" >}}

<!--
- **Comparable Throughput**: Throughout the tested QPS range, the ESE delivered throughput roughly on par with a standard
  Kubernetes Service.
-->
- **吞吐量相当**:在整个测试的 QPS 范围内,ESE 达到的吞吐量基本与标准 Kubernetes Service 持平。

<!--
- **Lower Latency**:
  - **Per‐Output‐Token Latency**: The ESE showed significantly lower p90 latency at higher QPS (500+), indicating that
    its model-aware routing decisions reduce queueing and resource contention as GPU memory approaches saturation.
  - **Overall p90 Latency**: Similar trends emerged, with the ESE reducing end‐to‐end tail latencies compared to the
    baseline, particularly as traffic increased beyond 400–500 QPS.
-->
- **延迟更低**:
  - **每个输出令牌的延迟**:在较高 QPS(500 以上)时,ESE 的 p90 延迟明显更低,
    这表明在 GPU 显存接近饱和时,其模型感知的路由决策减少了排队等待和资源争用。
  - **整体 p90 延迟**:趋势类似,与基线相比,ESE 降低了端到端尾部延迟,在流量超过 400–500 QPS 时尤为明显。

<!--
These results suggest that this extension's model‐aware routing significantly reduced latency for GPU‐backed LLM
workloads. By dynamically selecting the least‐loaded or best‐performing model server, it avoids hotspots that can
appear when using traditional load balancing methods for large, long‐running inference requests.

## Roadmap

As the Gateway API Inference Extension heads toward GA, planned features include:
-->
这些结果表明,此扩展的模型感知路由显著降低了 GPU 支撑的 LLM 负载的延迟。
此扩展通过动态选择负载最轻或性能最优的模型服务器,避免了传统负载均衡方法在处理较大的、长时间运行的推理请求时会出现的热点问题。

## 路线图 {#roadmap}

随着 Gateway API 推理扩展迈向 GA(正式发布),计划中的特性包括:

<!--
1. **Prefix-cache aware load balancing** for remote caches
2. **LoRA adapter pipelines** for automated rollout
3. **Fairness and priority** between workloads in the same criticality band
4. **HPA support** for scaling based on aggregate, per-model metrics
5. **Support for large multi-modal inputs/outputs**
6. **Additional model types** (e.g., diffusion models)
7. **Heterogeneous accelerators** (serving on multiple accelerator types with latency- and cost-aware load balancing)
8. **Disaggregated serving** for independently scaling pools
-->
1. **前缀缓存感知的负载均衡**,以支持远程缓存
2. **LoRA 适配器流水线**,以实现自动化上线
3. 同一关键性等级内各工作负载之间的**公平性和优先级**
4. **HPA 支持**,基于按模型聚合的指标进行扩缩容
5. **支持大规模多模态输入/输出**
6. **更多模型类型**(如扩散模型)
7. **异构加速器**(在多种加速器类型上提供服务,并具备延迟和成本感知的负载均衡)
8. **解耦式服务(Disaggregated Serving)**,以便独立扩缩资源池

<!--
## Summary

By aligning model serving with Kubernetes-native tooling, Gateway API Inference Extension aims to simplify
and standardize how AI/ML traffic is routed. With model-aware routing, criticality-based prioritization, and
more, it helps ops teams deliver the right LLM services to the right users—smoothly and efficiently.
-->
## 总结 {#summary}

通过将模型服务对齐到 Kubernetes 原生工具链,Gateway API 推理扩展致力于简化并标准化 AI/ML 流量的路由方式。
此扩展引入模型感知路由、基于关键性的优先级等能力,帮助运维团队平滑高效地将合适的 LLM 服务交付给合适的用户。

<!--
**Ready to learn more?** Visit the [project docs](https://gateway-api-inference-extension.sigs.k8s.io/) to dive deeper,
give an Inference Gateway extension a try with a few [simple steps](https://gateway-api-inference-extension.sigs.k8s.io/guides/),
and [get involved](https://gateway-api-inference-extension.sigs.k8s.io/contributing/) if you’re interested in
contributing to the project!
-->
**想进一步了解?**
你可以参阅[项目文档](https://gateway-api-inference-extension.sigs.k8s.io/)深入学习,
并按照[简单几步](https://gateway-api-inference-extension.sigs.k8s.io/guides/)试用推理网关扩展。
如果你有兴趣为此项目作贡献,欢迎[参与其中](https://gateway-api-inference-extension.sigs.k8s.io/contributing/)!