Commit 08e2175

Configure the vllm deployment with best practices for startup
We want to recommend best practices for deployments of model servers under an InferencePool. Use the need to gracefully drain without client-visible errors during rollout ("hitless" updates) to annotate the YAML with strong opinions on best practices. This configuration was experimentally verified against the GKE Inference Gateway, whose drain times should be longer than those required by other gateways.
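Not part of the commit, but a rough sketch of how the "hitless" claim can be checked: the bash below sends steady traffic through the gateway while the Deployment restarts, then reports any client-visible errors. GATEWAY_URL and DEPLOYMENT are placeholders for your environment, and the /v1/completions request shape assumes the OpenAI-compatible API that vLLM serves.

# Sketch only: verify a rollout is "hitless" by sending steady traffic while the
# Deployment restarts and counting non-200 responses. GATEWAY_URL and DEPLOYMENT
# are placeholders; the model id matches this manifest.
set -euo pipefail

GATEWAY_URL="${GATEWAY_URL:?set to the gateway base URL, e.g. http://<gateway-ip>}"
DEPLOYMENT="${DEPLOYMENT:?set to the vLLM Deployment name}"
MODEL="${MODEL:-meta-llama/Llama-2-7b-hf}"

# Trigger a rolling update; it proceeds asynchronously while we send traffic.
kubectl rollout restart "deployment/${DEPLOYMENT}"

errors=0
total=0
end=$((SECONDS + 300))   # probe for ~5 minutes, long enough to cover the drain window
while (( SECONDS < end )); do
  code="$(curl -s -o /dev/null -w '%{http_code}' \
    -H 'Content-Type: application/json' \
    -d "{\"model\":\"${MODEL}\",\"prompt\":\"ping\",\"max_tokens\":8}" \
    "${GATEWAY_URL}/v1/completions")" || code="000"
  total=$((total + 1))
  [[ "${code}" == "200" ]] || errors=$((errors + 1))
done

echo "requests: ${total}, non-200 responses: ${errors}"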
1 parent 03d8584 commit 08e2175

File tree

1 file changed: +150 additions, -8 deletions


config/manifests/vllm/gpu-deployment.yaml

Lines changed: 150 additions & 8 deletions
@@ -46,26 +46,111 @@ spec:
         - containerPort: 8000
           name: http
           protocol: TCP
+        lifecycle:
+          preStop:
+            # vLLM stops accepting connections when it receives SIGTERM, so we need to sleep
+            # to give upstream gateways a chance to take us out of rotation. The time we wait
+            # is dependent on the time it takes for all upstreams to completely remove us from
+            # rotation. Older or simpler load balancers might take upwards of 30s, but we expect
+            # our deployment to run behind a modern gateway like Envoy which is designed to
+            # probe for readiness aggressively.
+            sleep:
+              # Upstream gateway probers for health should be set on a low period, such as 5s,
+              # and the shorter we can tighten that bound the faster that we release
+              # accelerators during controlled shutdowns. However, we should expect variance,
+              # as load balancers may have internal delays, and we don't want to drop requests
+              # normally, so we're often aiming to set this value to a p99 propagation latency
+              # of readiness -> load balancer taking backend out of rotation, not the average.
+              #
+              # This value is generally stable and must often be experimentally determined
+              # for a given load balancer and health check period. We set the value here to
+              # the highest value we observe on a supported load balancer, and we recommend
+              # tuning this value down and verifying no requests are dropped.
+              #
+              # If this value is updated, be sure to update terminationGracePeriodSeconds.
+              #
+              seconds: 30
+            #
+            # IMPORTANT: preStop.sleep is beta as of Kubernetes 1.30 - for older versions
+            # replace with this exec action.
+            #exec:
+            #  command:
+            #  - /usr/bin/sleep
+            #  - 30
         livenessProbe:
-          failureThreshold: 240
           httpGet:
             path: /health
             port: http
             scheme: HTTP
-          initialDelaySeconds: 5
-          periodSeconds: 5
+          # vLLM's health check is simple, so we can more aggressively probe it. Liveness
+          # check endpoints should always be suitable for aggressive probing.
+          periodSeconds: 1
           successThreshold: 1
+          # vLLM has a very simple health implementation, which means that any failure is
+          # likely significant. However, any liveness-triggered restart requires the very
+          # large core model to be reloaded, and so we should bias towards ensuring the
+          # server is definitely unhealthy vs immediately restarting. Use 5 attempts as
+          # evidence of a serious problem.
+          failureThreshold: 5
           timeoutSeconds: 1
         readinessProbe:
-          failureThreshold: 600
           httpGet:
             path: /health
             port: http
             scheme: HTTP
-          initialDelaySeconds: 5
-          periodSeconds: 5
+          # vLLM's health check is simple, so we can more aggressively probe it. Readiness
+          # check endpoints should always be suitable for aggressive probing, but may be
+          # slightly more expensive than liveness probes.
+          periodSeconds: 1
           successThreshold: 1
+          # vLLM has a very simple health implementation, which means that any failure is
+          # likely significant.
+          failureThreshold: 1
           timeoutSeconds: 1
+        # We set a startup probe so that we don't begin directing traffic to this instance
+        # until the model is loaded.
+        startupProbe:
+          # Failure threshold is when we believe startup will not happen at all, and is set
+          # to the maximum possible time we believe loading a model will take. In our
+          # default configuration we are downloading a model from HuggingFace, which may
+          # take a long time, then the model must load into the accelerator. We choose
+          # 10 minutes as a reasonable maximum startup time before giving up and attempting
+          # to restart the pod.
+          #
+          # IMPORTANT: If the core model takes more than 10 minutes to load, pods will crash
+          # loop forever. Be sure to set this appropriately.
+          failureThreshold: 600
+          # Set delay to start low so that if the base model changes to something smaller
+          # or an optimization is deployed, we don't wait unnecessarily.
+          initialDelaySeconds: 2
+          # As a startup probe, this stops running once startup succeeds, so we can more
+          # aggressively probe even a moderately complex startup - this is a very important workload.
+          periodSeconds: 1
+          exec:
+            # Verify that our core model is loaded before we consider startup successful.
+            # /health starts returning true very early in vLLM startup, but we want to
+            # only consider ourselves as started up once the model has been loaded.
+            #
+            # vLLM should implement a readiness check that is only true once the model
+            # can begin serving, and then this can be switched to an httpGet probe.
+            # https://github.com/kubernetes-sigs/gateway-api-inference-extension/issues/558
+            command:
+            - /bin/bash
+            - -c
+            - |
+              set -eu
+              if ! models="$( curl -q http://0.0.0.0:8000/v1/models )"; then
+                echo "server not responding"
+                exit 1
+              fi
+              if ! echo "${models}" | grep -q "$1"; then
+                echo "model not found"
+                exit 1
+              fi
+              echo "ok"
+            - ''
+            - '"id":"meta-llama/Llama-2-7b-hf"'
         resources:
           limits:
             nvidia.com/gpu: 1
@@ -92,8 +177,65 @@ spec:
         - name: config-volume
           mountPath: /config
       restartPolicy: Always
-      schedulerName: default-scheduler
-      terminationGracePeriodSeconds: 30
+
+      # Generally, the termination grace period needs to last longer than the slowest request
+      # we expect to serve plus any extra time spent waiting for load balancers to take the
+      # model server out of rotation.
+      #
+      # An easy starting point is the p99 or max request latency measured for your workload,
+      # although LLM request latencies vary significantly if clients send longer inputs or
+      # trigger longer outputs. Since steady state p99 will be higher than the latency
+      # to drain a server, you may wish to reduce this value slightly, either experimentally
+      # or via the calculation below.
+      #
+      # For most models you can derive an upper bound for the maximum drain latency as
+      # follows:
+      #
+      # 1. Identify the maximum context length the model was trained on, or the maximum
+      #    allowed length of output tokens configured on vLLM (llama2-7b was trained to
+      #    4k context length, while llama3-8b was trained to 128k).
+      # 2. Output tokens are the more compute intensive to calculate and the accelerator
+      #    will have a maximum concurrency (batch size) - the time per output token at
+      #    maximum batch with no prompt tokens being processed is the slowest an output
+      #    token can be generated (for this model it would be about 100ms TPOT at a max
+      #    batch size around 50)
+      # 3. Calculate the worst case request duration if a request starts immediately
+      #    before the server stops accepting new connections - generally when it receives
+      #    SIGTERM (for this model that is about 4096 / 10 ~ 40s)
+      # 4. Any requests still generating prompt tokens will delay when those output tokens
+      #    start, and prompt token generation is roughly 6x faster than compute-bound
+      #    output token generation, so add 20% to the time from above (40s + 16s ~ 55s)
+      #
+      # Thus we think it will take us at worst about 55s to complete the longest possible
+      # request the model is likely to receive at maximum concurrency (highest latency)
+      # once requests stop being sent.
+      #
+      # NOTE: This number will be lower than steady state p99 latency since we stop receiving
+      #       new requests which require continuous prompt token computation.
+      # NOTE: The max timeout for backend connections from gateway to model servers should
+      #       be configured based on steady state p99 latency, not drain p99 latency.
+      #
+      # 5. Add the time the pod spends in its preStop hook so that the load balancers have
+      #    stopped sending us new requests (55s + 30s ~ 85s)
+      #
+      # Because termination grace period controls when the Kubelet forcibly terminates a
+      # stuck or hung process (a possibility due to a GPU crash), there is operational safety
+      # in keeping the value roughly proportional to the time to finish serving. There is also
+      # value in adding a bit of extra time to deal with unexpectedly long workloads.
+      #
+      # 6. Add a 50% safety buffer to this time since the operational impact should be low
+      #    (85s * 1.5 ~ 130s)
+      #
+      # One additional source of drain latency is that some workloads may run close to
+      # saturation and have queued requests on each server. Since traffic in excess of the
+      # max sustainable QPS will result in timeouts as the queues grow, we assume that failure
+      # to drain in time due to excess queues at the time of shutdown is an expected failure
+      # mode of server overload. If your workload occasionally experiences high queue depths
+      # due to periodic traffic, consider increasing the safety margin above to account for
+      # time to drain queued requests.
+      terminationGracePeriodSeconds: 130
+
       volumes:
       - name: data
         emptyDir: {}