Commit dfc6542 (parent 37d79b7)

Update README with examples for opaque config parameters

Signed-off-by: Kevin Klues <[email protected]>

1 file changed: README.md (+48, -18 lines)
@@ -225,10 +225,10 @@ metadata:
 ```
 
 Next, deploy four example apps that demonstrate how `ResourceClaim`s,
-`ResourceClaimTemplate`s, and custom `ClaimParameter` objects can be used to
-request access to resources in various ways:
+`ResourceClaimTemplate`s, and custom `GpuConfig` objects can be used to
+select and configure resources in various ways:
 ```bash
-kubectl apply --filename=demo/gpu-test{1,2,3,4}.yaml
+kubectl apply --filename=demo/gpu-test{1,2,3,4,5}.yaml
 ```
 
 And verify that they are coming up successfully:
@@ -242,10 +242,11 @@ gpu-test2 pod0 0/2 Pending 0 2s
 gpu-test3 pod0 0/1 ContainerCreating 0 2s
 gpu-test3 pod1 0/1 ContainerCreating 0 2s
 gpu-test4 pod0 0/1 Pending 0 2s
+gpu-test5 pod0 0/4 Pending 0 2s
 ...
 ```
 
-Use your favorite editor to look through each of the `gpu-test{1,2,3,4}.yaml`
+Use your favorite editor to look through each of the `gpu-test{1,2,3,4,5}.yaml`
 files and see what they are doing. The semantics of each match the figure
 below:
 
@@ -254,12 +255,16 @@ below:
 Then dump the logs of each app to verify that GPUs were allocated to them
 according to these semantics:
 ```bash
-for example in $(seq 1 4); do \
+for example in $(seq 1 5); do \
   echo "gpu-test${example}:"
   for pod in $(kubectl get pod -n gpu-test${example} --output=jsonpath='{.items[*].metadata.name}'); do \
     for ctr in $(kubectl get pod -n gpu-test${example} ${pod} -o jsonpath='{.spec.containers[*].name}'); do \
       echo "${pod} ${ctr}:"
-      kubectl logs -n gpu-test${example} ${pod} -c ${ctr}| grep GPU_DEVICE
+      if [ "${example}" -lt 3 ]; then
+        kubectl logs -n gpu-test${example} ${pod} -c ${ctr}| grep -E "GPU_DEVICE_[0-9]+="
+      else
+        kubectl logs -n gpu-test${example} ${pod} -c ${ctr}| grep -E "GPU_DEVICE_[0-9]+"
+      fi
     done
   done
   echo ""
@@ -270,43 +275,67 @@ This should produce output similar to the following:
 ```bash
 gpu-test1:
 pod0 ctr0:
-declare -x GPU_DEVICE_0="gpu-e7b42cb1-4fd8-91b2-bc77-352a0c1f5747"
+declare -x GPU_DEVICE_0="gpu-ee3e4b55-fcda-44b8-0605-64b7a9967744"
 pod1 ctr0:
-declare -x GPU_DEVICE_0="gpu-f11773a1-5bfb-e48b-3d98-1beb5baaf08e"
+declare -x GPU_DEVICE_0="gpu-9ede7e32-5825-a11b-fa3d-bab6d47e0243"
 
 gpu-test2:
 pod0 ctr0:
+declare -x GPU_DEVICE_0="gpu-e7b42cb1-4fd8-91b2-bc77-352a0c1f5747"
+declare -x GPU_DEVICE_1="gpu-f11773a1-5bfb-e48b-3d98-1beb5baaf08e"
+
+gpu-test3:
+pod0 ctr0:
 declare -x GPU_DEVICE_0="gpu-0159f35e-99ee-b2b5-74f1-9d18df3f22ac"
+declare -x GPU_DEVICE_0_SHARING_STRATEGY="TimeSlicing"
+declare -x GPU_DEVICE_0_TIMESLICE_INTERVAL="Default"
 pod0 ctr1:
 declare -x GPU_DEVICE_0="gpu-0159f35e-99ee-b2b5-74f1-9d18df3f22ac"
+declare -x GPU_DEVICE_0_SHARING_STRATEGY="TimeSlicing"
+declare -x GPU_DEVICE_0_TIMESLICE_INTERVAL="Default"
 
-gpu-test3:
+gpu-test4:
 pod0 ctr0:
 declare -x GPU_DEVICE_0="gpu-657bd2e7-f5c2-a7f2-fbaa-0d1cdc32f81b"
+declare -x GPU_DEVICE_0_SHARING_STRATEGY="TimeSlicing"
+declare -x GPU_DEVICE_0_TIMESLICE_INTERVAL="Default"
 pod1 ctr0:
 declare -x GPU_DEVICE_0="gpu-657bd2e7-f5c2-a7f2-fbaa-0d1cdc32f81b"
+declare -x GPU_DEVICE_0_SHARING_STRATEGY="TimeSlicing"
+declare -x GPU_DEVICE_0_TIMESLICE_INTERVAL="Default"
 
-gpu-test4:
-pod0 ctr0:
+gpu-test5:
+pod0 ts-ctr0:
 declare -x GPU_DEVICE_0="gpu-18db0e85-99e9-c746-8531-ffeb86328b39"
+declare -x GPU_DEVICE_0_SHARING_STRATEGY="TimeSlicing"
+declare -x GPU_DEVICE_0_TIMESLICE_INTERVAL="Long"
+pod0 ts-ctr1:
+declare -x GPU_DEVICE_0="gpu-18db0e85-99e9-c746-8531-ffeb86328b39"
+declare -x GPU_DEVICE_0_SHARING_STRATEGY="TimeSlicing"
+declare -x GPU_DEVICE_0_TIMESLICE_INTERVAL="Long"
+pod0 sp-ctr0:
+declare -x GPU_DEVICE_1="gpu-93d37703-997c-c46f-a531-755e3e0dc2ac"
+declare -x GPU_DEVICE_1_PARTITION_COUNT="10"
+declare -x GPU_DEVICE_1_SHARING_STRATEGY="SpacePartitioning"
+pod0 sp-ctr1:
 declare -x GPU_DEVICE_1="gpu-93d37703-997c-c46f-a531-755e3e0dc2ac"
-declare -x GPU_DEVICE_2="gpu-ee3e4b55-fcda-44b8-0605-64b7a9967744"
-declare -x GPU_DEVICE_3="gpu-9ede7e32-5825-a11b-fa3d-bab6d47e0243"
+declare -x GPU_DEVICE_1_PARTITION_COUNT="10"
+declare -x GPU_DEVICE_1_SHARING_STRATEGY="SpacePartitioning"
 ```
 
 In this example resource driver, no "actual" GPUs are made available to any
 containers. Instead, a set of environment variables are set in each container
 to indicate which GPUs *would* have been injected into them by a real resource
-driver.
+driver and how they *would* have been configured.
 
-You can use the UUIDs of the GPUs set in these environment variables to verify
-that they were handed out in a way consistent with the semantics shown in the
-figure above.
+You can use the UUIDs of the GPUs as well as the GPU sharing settings set in
+these environment variables to verify that they were handed out in a way
+consistent with the semantics shown in the figure above.
 
 Once you have verified everything is running correctly, delete all of the
 example apps:
 ```bash
-kubectl delete --wait=false --filename=demo/gpu-test{1,2,3,4}.yaml
+kubectl delete --wait=false --filename=demo/gpu-test{1,2,3,4,5}.yaml
 ```
 
 And wait for them to terminate:
@@ -320,6 +349,7 @@ gpu-test2 pod0 2/2 Terminating 0 31m
 gpu-test3 pod0 1/1 Terminating 0 31m
 gpu-test3 pod1 1/1 Terminating 0 31m
 gpu-test4 pod0 1/1 Terminating 0 31m
+gpu-test5 pod0 4/4 Terminating 0 31m
 ...
 ```
 
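An aside on the log-dumping loop this commit introduces: the trailing `=` in `grep -E "GPU_DEVICE_[0-9]+="` restricts the match to the bare `GPU_DEVICE_<n>` variables, while the variant without it also matches the per-device config variables such as `GPU_DEVICE_0_SHARING_STRATEGY`. A minimal sketch of the difference, using made-up log lines in the style of the demo driver's output:

```shell
# Two hypothetical lines in the style of the example driver's container logs.
logs='declare -x GPU_DEVICE_0="gpu-e7b42cb1-4fd8-91b2-bc77-352a0c1f5747"
declare -x GPU_DEVICE_0_SHARING_STRATEGY="TimeSlicing"'

# With the trailing "=", only the bare device variable matches.
echo "$logs" | grep -cE 'GPU_DEVICE_[0-9]+='    # 1 matching line

# Without it, the sharing/config variables match as well.
echo "$logs" | grep -cE 'GPU_DEVICE_[0-9]+'     # 2 matching lines
```

This is why the loop branches on `${example} -lt 3`: gpu-test1 and gpu-test2 set no sharing config, so the stricter pattern suffices there.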