diff --git a/site-src/guides/index.md b/site-src/guides/index.md
index b7b31000a..8bcee6e24 100644
--- a/site-src/guides/index.md
+++ b/site-src/guides/index.md
@@ -3,7 +3,7 @@
 This quickstart guide is intended for engineers familiar with k8s and model servers (vLLM in this instance). The goal of this guide is to get a first, single InferencePool up and running!
 
 ## **Prerequisites**
- - Envoy Gateway [v1.2.1](https://gateway.envoyproxy.io/docs/install/install-yaml/#install-with-yaml) or higher
+ - Envoy Gateway [v1.3.0](https://gateway.envoyproxy.io/docs/install/install-yaml/#install-with-yaml) or higher
  - A cluster with:
    - Support for services of type `LoadBalancer`. (This can be validated by ensuring your Envoy Gateway is up and running). For example, with Kind, you can follow [these steps](https://kind.sigs.k8s.io/docs/user/loadbalancer).
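
For anyone verifying the bumped prerequisite locally, a minimal sketch of installing Envoy Gateway v1.3.0 and spot-checking `LoadBalancer` support (this assumes the YAML install path from the linked Envoy Gateway page; the release URL, namespace, and deployment name below follow Envoy Gateway's defaults and are not part of this change):

```bash
# Install Envoy Gateway v1.3.0 from its release manifest (YAML install path).
kubectl apply --server-side -f https://github.com/envoyproxy/gateway/releases/download/v1.3.0/install.yaml

# Wait for the Envoy Gateway controller to become available.
kubectl wait --timeout=5m -n envoy-gateway-system deployment/envoy-gateway --for=condition=Available

# Spot-check LoadBalancer support: Services of type LoadBalancer should receive an external
# address (on Kind this requires the load-balancer setup from the linked Kind guide).
kubectl get svc --all-namespaces -o wide | grep LoadBalancer
```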