@@ -17,7 +17,7 @@ This quickstart guide is intended for engineers familiar with k8s and model serv

Deploy a sample vLLM deployment with the proper protocol to work with the LLM Instance Gateway.

```bash
kubectl create secret generic hf-token --from-literal=token=$HF_TOKEN # Your Hugging Face Token with access to Llama2
- kubectl apply -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/pkg/manifests/vllm/deployment.yaml
+ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/gateway-api-inference-extension/refs/tags/v0.1.0-rc.1/pkg/manifests/vllm/deployment.yaml
```
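Before moving on, it can help to confirm the model server actually came up; a minimal sketch, assuming only standard kubectl (the deployment name below is a placeholder, read the real one from the `deployment.yaml` you just applied):

```bash
# Wait until the vLLM deployment reports Available; model servers can take
# several minutes to pull images and load weights.
# <vllm-deployment-name> is a placeholder, not a name from this repo.
kubectl wait --for=condition=Available --timeout=600s deployment/<vllm-deployment-name>
kubectl get pods -o wide
```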

1. **Install the Inference Extension CRDs:**
@@ -31,22 +31,22 @@ This quickstart guide is intended for engineers familiar with k8s and model serv

Deploy the sample InferenceModel, which is configured to load balance traffic between the `tweet-summary-0` and `tweet-summary-1`
[LoRA adapters](https://docs.vllm.ai/en/latest/features/lora.html) of the sample model server.

```bash
- kubectl apply -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/pkg/manifests/inferencemodel.yaml
+ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/gateway-api-inference-extension/refs/tags/v0.1.0-rc.1/pkg/manifests/inferencemodel.yaml
```

1. **Update Envoy Gateway Config to enable Patch Policy**

Our custom LLM Gateway ext-proc is patched into the existing Envoy gateway via `EnvoyPatchPolicy`. To enable this feature, we must extend the Envoy Gateway config map. To do this, simply run:
```bash
- kubectl apply -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/pkg/manifests/gateway/enable_patch_policy.yaml
+ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/gateway-api-inference-extension/refs/tags/v0.1.0-rc.1/pkg/manifests/gateway/enable_patch_policy.yaml
kubectl rollout restart deployment envoy-gateway -n envoy-gateway-system
```
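To confirm the feature flag took effect after the restart, you can inspect the config map; a sketch assuming the Envoy Gateway defaults (config map `envoy-gateway-config` in namespace `envoy-gateway-system`, flag under `extensionApis`) — adjust the names if your install differs:

```bash
# Check that EnvoyPatchPolicy support is enabled in the Envoy Gateway config.
# Names assume a default Envoy Gateway install.
kubectl get configmap envoy-gateway-config -n envoy-gateway-system -o yaml \
  | grep -A1 extensionApis
```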
Additionally, if you would like to enable the admin interface, you can uncomment the admin lines and run this again.

1. **Deploy Gateway**

```bash
- kubectl apply -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/pkg/manifests/gateway/gateway.yaml
+ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/gateway-api-inference-extension/refs/tags/v0.1.0-rc.1/pkg/manifests/gateway/gateway.yaml
```
> **_NOTE:_** This file couples together the gateway infra and the HTTPRoute infra for a convenient, quick startup. Creating additional/different InferencePools on the same gateway will require an additional set of: `Backend`, `HTTPRoute`, the resources included in the `./manifests/gateway/ext-proc.yaml` file, and an additional `./manifests/gateway/patch_policy.yaml` file. ***Should you choose to experiment, familiarity with xDS and Envoy is very useful.***
@@ -60,14 +60,14 @@ This quickstart guide is intended for engineers familiar with k8s and model serv

1. **Deploy the Inference Extension and InferencePool**

```bash
- kubectl apply -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/pkg/manifests/ext_proc.yaml
+ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/gateway-api-inference-extension/refs/tags/v0.1.0-rc.1/pkg/manifests/ext_proc.yaml
```
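A quick sanity check that the ext-proc came up; a sketch assuming only standard kubectl (the deployment name is a placeholder, read the real one from `ext_proc.yaml`):

```bash
# Confirm the inference extension (ext-proc) pod is Running.
kubectl get pods
# Tail its logs; <ext-proc-deployment-name> is a placeholder, not a name from this repo.
kubectl logs deployment/<ext-proc-deployment-name> --tail=20
```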

1. **Deploy Envoy Gateway Custom Policies**

```bash
- kubectl apply -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/pkg/manifests/gateway/extension_policy.yaml
- kubectl apply -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/pkg/manifests/gateway/patch_policy.yaml
+ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/gateway-api-inference-extension/refs/tags/v0.1.0-rc.1/pkg/manifests/gateway/extension_policy.yaml
+ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/gateway-api-inference-extension/refs/tags/v0.1.0-rc.1/pkg/manifests/gateway/patch_policy.yaml
```
> **_NOTE:_** This is also per InferencePool, and will need to be configured to support the new pool should you wish to experiment further.
@@ -76,7 +76,7 @@ This quickstart guide is intended for engineers familiar with k8s and model serv

For high-traffic benchmarking you can apply this manifest to avoid any defaults that can cause timeouts/errors.

```bash
- kubectl apply -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/pkg/manifests/gateway/traffic_policy.yaml
+ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/gateway-api-inference-extension/refs/tags/v0.1.0-rc.1/pkg/manifests/gateway/traffic_policy.yaml
```

1. **Try it out**
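A sketch of a first request through the gateway, with loudly labeled assumptions: the gateway name is a placeholder (list yours with `kubectl get gateway`), port 8081 is a guess, and `tweet-summary` is inferred from the LoRA adapter names above — check the applied `inferencemodel.yaml` for the real model name:

```bash
# Fetch the gateway address; <gateway-name> is a placeholder.
IP=$(kubectl get gateway/<gateway-name> -o jsonpath='{.status.addresses[0].value}')
# Send a completion request; port and model name are assumptions.
curl -i "$IP":8081/v1/completions -H 'Content-Type: application/json' -d '{
  "model": "tweet-summary",
  "prompt": "Summarize the following tweet: ...",
  "max_tokens": 100
}'
```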