
Commit 782e917

update gpu section in README

1 parent 646ca60 commit 782e917

File tree

1 file changed: +49 −1 lines


README.md

Lines changed: 49 additions & 1 deletion
@@ -93,11 +93,59 @@ When passing through GPUs to the inner container, you may end up using associate
Envbox will detect these mounts and pass them inside the inner container it creates, so that GPU-aware tools run inside the inner container can still utilize these libraries.

Here's an example Docker command to run a GPU-enabled workload in Envbox. Note the following:

1) The NVIDIA container runtime must be installed on the host (`--runtime=nvidia`).
2) `CODER_ADD_GPU=true` must be set to enable GPU-specific functionality.
3) When `CODER_ADD_GPU` is set, you must also set `CODER_USR_LIB_DIR` to the location where the relevant host directory has been mounted inside the outer container. In the example below, the directory `/usr/lib/x86_64-linux-gnu` on the underlying host is mounted into the container at `/var/coder/usr/lib`, and we set `CODER_USR_LIB_DIR=/var/coder/usr/lib`. The exact location inside the container is not important **as long as it does not overwrite any pre-existing directories containing system libraries**.

> Note: this step is required in case user workloads need libraries from the underlying host that are not added in by the container runtime.
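If you're unsure which host directory contains the NVIDIA userspace libraries on your machine, one way to check is to query the dynamic linker cache (a sketch, assuming `ldconfig` is available on the host's PATH):

```shell
# Print shared libraries known to the dynamic linker and filter for NVIDIA
# userspace libraries; the directory portion of the matched paths is the one
# to bind-mount into the outer container and point CODER_USR_LIB_DIR at.
ldconfig -p | grep -iE 'libnvidia|libcuda' || echo "no NVIDIA libraries found"
```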
```shell
docker run -it --rm \
  --runtime=nvidia \
  --gpus=all \
  --name=envbox-gpu-test \
  -v /tmp/envbox/docker:/var/lib/coder/docker \
  -v /tmp/envbox/containers:/var/lib/coder/containers \
  -v /tmp/envbox/sysbox:/var/lib/sysbox \
  -v /tmp/envbox/docker:/var/lib/docker \
  -v /usr/src:/usr/src:ro \
  -v /lib/modules:/lib/modules:ro \
  -v /usr/lib/x86_64-linux-gnu:/var/coder/usr/lib \
  --privileged \
  -e CODER_INNER_IMAGE=nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda10.2 \
  -e CODER_INNER_USERNAME=root \
  -e CODER_ADD_GPU=true \
  -e CODER_USR_LIB_DIR=/var/coder/usr/lib \
  envbox:latest /envbox docker
```
To validate GPU functionality, you can run the following commands:

1) To validate that the container runtime correctly passed the required GPU tools into the outer container, run:

```shell
docker exec -it envbox-gpu-test nvidia-smi
```

2) To validate the same inside the inner container, run:

```shell
docker exec -it envbox-gpu-test docker exec -it workspace_cvm nvidia-smi
```

3) To validate that the sample CUDA application inside the container runs correctly, run:

```shell
docker exec -it envbox-gpu-test docker exec -it workspace_cvm /tmp/vectorAdd
```
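The three checks above can also be wrapped in one small helper that stops at the first failure (a sketch; the container names `envbox-gpu-test` and `workspace_cvm` come from the example above, and the `DOCKER` variable is a hypothetical override that lets you dry-run the commands with `echo` instead of a real Docker daemon):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Run the three GPU validation checks in order, exiting on the first failure.
# DOCKER defaults to the real docker binary; override it (e.g. DOCKER=echo)
# to print the commands instead of executing them.
run_gpu_checks() {
  local docker_bin="${DOCKER:-docker}"
  "$docker_bin" exec envbox-gpu-test nvidia-smi
  "$docker_bin" exec envbox-gpu-test docker exec workspace_cvm nvidia-smi
  "$docker_bin" exec envbox-gpu-test docker exec workspace_cvm /tmp/vectorAdd
  echo "all GPU checks passed"
}

# Dry run: print the commands rather than executing them.
DOCKER=echo run_gpu_checks
```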
## Hacking

Here's a simple one-liner to run the `codercom/enterprise-minimal:ubuntu` image in Envbox using Docker:

```shell
docker run -it --rm \
  -v /tmp/envbox/docker:/var/lib/coder/docker \
  -v /tmp/envbox/containers:/var/lib/coder/containers \
