Commit 1a4713f

nicolexinkfswain authored and committed

Add initial implementer's guide (kubernetes-sigs#635)

* Add initial implementer's guide
* Add line break to fix the list formatting
* Add line break to fix the list formatting
* Address code review comments
* Fix formatting for conformance tests

1 parent 9b4a01e commit 1a4713f

1 file changed: site-src/guides/implementers.md (111 additions, 1 deletion)

# Implementer's Guide

This guide is intended for developers looking to implement support for the InferencePool custom resource within their Gateway API controller. It outlines how InferencePool fits into the existing resource model, discusses implementation options, explains how to interact with extensions, and provides guidance on testing.

## InferencePool as a Gateway Backend

Before we dive into the implementation, let’s recap how an InferencePool works.

<img src="/images/inference-overview.svg" alt="Overview of API integration" class="center" width="1000" />

**InferencePool** represents a set of Inference-focused Pods and an extension that will be used to route to them. The InferencePool introduces a new type of backend within the Gateway API resource model. Instead of targeting Services, a Gateway can route traffic to an InferencePool. This InferencePool then becomes responsible for intelligent routing to the underlying model server pods based on the associated InferenceModel configurations.

Here is an example of how to route traffic to an InferencePool using an HTTPRoute:

```
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: llm-route
spec:
  parentRefs:
  - group: gateway.networking.k8s.io
    kind: Gateway
    name: inference-gateway
  rules:
  - backendRefs:
    - group: inference.networking.x-k8s.io
      kind: InferencePool
      name: base-model
    matches:
    - path:
        type: PathPrefix
        value: /
```

Note that `rules.backendRefs` describes which InferencePool should receive the forwarded traffic when the path matches the corresponding path prefix. This is very similar to how a Gateway is configured with an HTTPRoute that directs traffic to a Service (a way to select Pods and specify a port). The InferencePool provides an abstraction over a set of compute resources (model server pods) and allows the controller to implement specialized routing strategies for these inference workloads.

## Building the Gateway controller

The general idea of implementing a Gateway controller supporting the InferencePool involves two major steps:

1. Tracking the endpoints for InferencePool backends
2. Calling out to an extension to make intelligent routing decisions

### Endpoint Tracking

Consider a simple inference pool like this:

```
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferencePool
metadata:
  name: vllm-llama3-8b-instruct
spec:
  targetPortNumber: 8000
  selector:
    app: vllm-llama3-8b-instruct
  extensionRef:
    name: vllm-llama3-8b-instruct-epp
```

There are mainly two options for how to treat the InferencePool in your controller.

**Option 1: Shadow Service Creation**

If your Gateway controller already handles Services as backends, you can choose to create a headless Service that mirrors the endpoints defined by the InferencePool, like this:

```
apiVersion: v1
kind: Service
metadata:
  name: vllm-llama3-8b-instruct-shadow-service
spec:
  ports:
  - port: 54321
    protocol: TCP
    targetPort: 8000
  selector:
    app: vllm-llama3-8b-instruct
  type: ClusterIP
  clusterIP: None
```

The Gateway controller would then treat this shadow Service just like any other backend Service it routes traffic to.

This approach likely allows you to leverage the existing service discovery, health-check infrastructure, and load-balancing mechanisms that your controller already supports. However, it comes with the overhead of managing additional Service objects, which may increase the latency of Gateway reconciliation.
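
For illustration, here is a minimal Go sketch of deriving such a shadow Service from an InferencePool, assuming the controller has already read the pool's `selector` and `targetPortNumber` from its spec. The helper and its naming convention are hypothetical, and owner references, updates, and garbage collection are elided:

```
package controller

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// shadowServiceFor builds a headless Service that mirrors the endpoints
// selected by an InferencePool (hypothetical helper for Option 1).
func shadowServiceFor(poolName, namespace string, selector map[string]string, targetPort int32) *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{
			// Illustrative naming convention; pick one that avoids collisions.
			Name:      poolName + "-shadow-service",
			Namespace: namespace,
		},
		Spec: corev1.ServiceSpec{
			// Headless: no cluster IP, so the Service resolves to pod IPs.
			ClusterIP: corev1.ClusterIPNone,
			Selector:  selector,
			Ports: []corev1.ServicePort{{
				Port:       54321, // arbitrary; only targetPort must match the pool
				Protocol:   corev1.ProtocolTCP,
				TargetPort: intstr.FromInt(int(targetPort)),
			}},
		},
	}
}
```
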
**Option 2: Tracking InferencePool Endpoints Separately**

You can also choose to directly select and monitor the endpoints belonging to the InferencePool. For the simple InferencePool example above, the controller would use the label selector `app: vllm-llama3-8b-instruct` to discover the matching pods and obtain their endpoints (i.e., IP and port number). It would then need to monitor these pods for health and availability.

With this approach, you can tailor the endpoint tracking and routing logic specifically to the characteristics and requirements of your InferencePool.
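
As a sketch of this option, the following snippet resolves an InferencePool's endpoints with a one-off pod list against the pool's `selector` and `targetPortNumber`; a real controller would typically watch pods through shared informers instead. The `endpoint` struct and function names are illustrative only:

```
package controller

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/client-go/kubernetes"
)

// endpoint is an illustrative IP:port pair, not part of any API.
type endpoint struct {
	IP   string
	Port int32
}

// resolveEndpoints returns the endpoints behind an InferencePool, using its
// spec.selector and spec.targetPortNumber.
func resolveEndpoints(ctx context.Context, c kubernetes.Interface, namespace string,
	selector map[string]string, targetPort int32) ([]endpoint, error) {

	pods, err := c.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{
		LabelSelector: labels.SelectorFromSet(labels.Set(selector)).String(),
	})
	if err != nil {
		return nil, err
	}
	var eps []endpoint
	for _, p := range pods.Items {
		// Only route to pods that have an IP and report Ready.
		if p.Status.PodIP == "" || !isReady(&p) {
			continue
		}
		eps = append(eps, endpoint{IP: p.Status.PodIP, Port: targetPort})
	}
	return eps, nil
}

func isReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}
```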

### Callout Extension

The [Endpoint Picker](https://github.com/kubernetes-sigs/gateway-api-inference-extension/tree/main/pkg/epp), or EPP, is a core component of the inference extension. The primary interaction for routing requests is defined between the proxy (e.g., Envoy) and the EPP using the Envoy [external processing service protocol](https://www.envoyproxy.io/docs/envoy/latest/api-v3/service/ext_proc/v3/external_processor.proto). See the [Endpoint Picker Protocol](https://github.com/kubernetes-sigs/gateway-api-inference-extension/tree/main/docs/proposals/004-endpoint-picker-protocol) for more information.
#### How to Callout to EPP

For each HTTP request, the proxy CAN communicate the subset of endpoints the EPP MUST pick from by setting the `x-gateway-destination-endpoint-subset` key in the filter metadata field of the ext-proc request. If this key is set, the EPP must select from this endpoint list. If the list is empty or no endpoints are eligible, it should return a 503 error. If the key isn't set, the EPP selects from the endpoints defined by the InferencePool selector.
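
To make this concrete, here is a sketch of how an EPP might read that subset from the ext-proc request using the go-control-plane types. The metadata namespace and the value encoding (a list of "ip:port" strings) are assumptions in this sketch; consult the Endpoint Picker Protocol for the authoritative layout:

```
package epp

import (
	extprocv3 "github.com/envoyproxy/go-control-plane/envoy/service/ext_proc/v3"
)

const subsetKey = "x-gateway-destination-endpoint-subset"

// endpointSubset returns the endpoints the proxy allows the EPP to pick from.
// A nil result means no subset was provided (pick from the full InferencePool);
// a non-nil empty result means a subset was provided but no endpoints are
// eligible, in which case the EPP should respond with a 503.
func endpointSubset(req *extprocv3.ProcessingRequest) []string {
	md := req.GetMetadataContext()
	if md == nil {
		return nil
	}
	// Scan each filter-metadata namespace for the subset key; which namespace
	// carries it is proxy-configuration specific (an assumption in this sketch).
	for _, st := range md.GetFilterMetadata() {
		if v, ok := st.GetFields()[subsetKey]; ok {
			subset := []string{}
			for _, e := range v.GetListValue().GetValues() {
				subset = append(subset, e.GetStringValue())
			}
			return subset
		}
	}
	return nil
}
```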

#### Response from the extension

The EPP communicates the chosen endpoint to the proxy via the `x-gateway-destination-endpoint` HTTP header and the `dynamic_metadata` field of the ext-proc response; the header and metadata values must match. Failure to communicate the endpoint using both methods results in a 503 error if no endpoints are ready, or a 429 error if the request should be dropped. In addition to the chosen endpoint, a single fallback endpoint CAN be set using the key `x-gateway-destination-endpoint-fallback` in the same metadata namespace as the one used for `x-gateway-destination-endpoint`.
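
As an illustration, a minimal sketch of such a reply for the request-headers phase might look like the following. The `envoy.lb` metadata namespace is an assumption borrowed from common Envoy load-balancing conventions; verify the namespace your proxy expects against the Endpoint Picker Protocol:

```
package epp

import (
	corev3 "github.com/envoyproxy/go-control-plane/envoy/config/core/v3"
	extprocv3 "github.com/envoyproxy/go-control-plane/envoy/service/ext_proc/v3"
	"google.golang.org/protobuf/types/known/structpb"
)

const endpointKey = "x-gateway-destination-endpoint"

// pickResponse wraps the chosen "ip:port" endpoint in a ProcessingResponse,
// setting both the HTTP header and the dynamic metadata so the two match.
func pickResponse(endpoint string) (*extprocv3.ProcessingResponse, error) {
	// "envoy.lb" is an assumed metadata namespace for this sketch.
	md, err := structpb.NewStruct(map[string]any{
		"envoy.lb": map[string]any{endpointKey: endpoint},
	})
	if err != nil {
		return nil, err
	}
	return &extprocv3.ProcessingResponse{
		Response: &extprocv3.ProcessingResponse_RequestHeaders{
			RequestHeaders: &extprocv3.HeadersResponse{
				Response: &extprocv3.CommonResponse{
					HeaderMutation: &extprocv3.HeaderMutation{
						SetHeaders: []*corev3.HeaderValueOption{{
							Header: &corev3.HeaderValue{
								Key:      endpointKey,
								RawValue: []byte(endpoint),
							},
						}},
					},
				},
			},
		},
		DynamicMetadata: md,
	}, nil
}
```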

## Testing Tips

Here are some tips for testing your controller end-to-end:

- **Focus on Key Scenarios**: Cover common scenarios like creating, updating, and deleting InferencePool resources, as well as different routing rules that target InferencePool backends.
- **Verify Routing Behaviors**: Design more complex routing scenarios and verify that requests are correctly routed to the appropriate model server pods within the InferencePool based on the InferenceModel configuration.
- **Test Error Handling**: Verify that the controller correctly handles scenarios like unsupported model names or resource constraints (if criticality-based shedding is implemented). Test state transitions (for example, a constant request load while the Pods behind the EPP or behind the InferencePool are being replaced) to ensure that the system is resilient to failures and can automatically recover by redirecting traffic to healthy Pods.
- **Use the Reference EPP Implementation + Echoserver**: You can use the [reference EPP implementation](https://github.com/kubernetes-sigs/gateway-api-inference-extension/tree/main/pkg/epp) to test your controller end-to-end. Instead of a full-fledged model server, a simple mock server (like the [echoserver](https://github.com/kubernetes-sigs/ingress-controller-conformance/tree/master/images/echoserver)) can be very useful for verifying that the correct pod received the request; see the test sketch after this list.
- **Performance Test**: Run end-to-end [benchmarks](https://gateway-api-inference-extension.sigs.k8s.io/performance/benchmark/) to make sure that your inference gateway can achieve the desired latency targets.
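
As an example of the echoserver approach, here is a hedged end-to-end smoke test sketch; the gateway URL, request shape, and the way the serving pod is recovered from the echoed response are all assumptions about your test environment:

```
package e2e

import (
	"io"
	"net/http"
	"strings"
	"testing"
)

func TestRoutesToInferencePool(t *testing.T) {
	// Hypothetical gateway address and request body for this sketch.
	resp, err := http.Post(
		"http://inference-gateway.example.com/v1/completions",
		"application/json",
		strings.NewReader(`{"model": "base-model", "prompt": "hello"}`),
	)
	if err != nil {
		t.Fatalf("request through gateway failed: %v", err)
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		t.Fatalf("expected 200, got %d", resp.StatusCode)
	}
	body, _ := io.ReadAll(resp.Body)
	// The echoserver reflects details about the pod that served the request;
	// here we assume the serving pod's name appears in the response body.
	if !strings.Contains(string(body), "vllm-llama3-8b-instruct") {
		t.Errorf("response did not come from the expected pool: %s", body)
	}
}
```
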
### Conformance Tests

A set of conformance tests will be developed soon to help verify that a controller is working as expected. This guide will be updated once we have more information. Stay tuned!
