README.md: 49 additions & 1 deletion
@@ -93,11 +93,59 @@ When passing through GPUs to the inner container, you may end up using associate
Envbox will detect these mounts and pass them into the inner container it creates, so that GPU-aware tools running inside the inner container can still utilize these libraries.
Here's an example Docker command to run a GPU-enabled workload in Envbox. Note the following:
1) The NVIDIA container runtime must be installed on the host (`--runtime=nvidia`).
2) `CODER_ADD_GPU=true` must be set to enable GPU-specific functionality.
3) When `CODER_ADD_GPU` is set, you must also set `CODER_USR_LIB_DIR` to the location where the relevant host directory has been mounted inside the outer container. In the example below, this directory is `/usr/lib/x86_64-linux-gnu` on the underlying host. It is mounted into the container at `/var/coder/usr/lib`, and we set `CODER_USR_LIB_DIR=/var/coder/usr/lib` accordingly. The actual location inside the container is not important **as long as it does not overwrite any pre-existing directories containing system libraries**.
> Note: this step is required in case user workloads need libraries from the underlying host that are not added by the container runtime.
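
Below is a minimal sketch of what such a command could look like, tying the three points above together. The GPU-specific pieces (`--runtime=nvidia`, `CODER_ADD_GPU`, the bind mount, and `CODER_USR_LIB_DIR`) come straight from this section; the image name, the `--privileged` flag, and `CODER_INNER_IMAGE` are assumptions here, so consult the rest of this README for the authoritative invocation.

```bash
# Sketch only: image name, --privileged, and CODER_INNER_IMAGE are assumed,
# not taken from this section. The GPU-related flags mirror points 1-3 above:
#   --runtime=nvidia            -> NVIDIA container runtime on the host (1)
#   CODER_ADD_GPU=true          -> enable GPU-specific functionality (2)
#   -v ...:/var/coder/usr/lib   -> mount host libraries into the outer container (3)
#   CODER_USR_LIB_DIR           -> tell envbox where those libraries were mounted (3)
docker run -it --rm \
  --privileged \
  --runtime=nvidia \
  --gpus=all \
  -v /usr/lib/x86_64-linux-gnu:/var/coder/usr/lib \
  -e CODER_ADD_GPU=true \
  -e CODER_USR_LIB_DIR=/var/coder/usr/lib \
  -e CODER_INNER_IMAGE=ubuntu:22.04 \
  ghcr.io/coder/envbox:latest
```

With a setup like this, GPU-aware tooling such as `nvidia-smi` run from inside the inner container should be able to see both the passed-through devices and the host libraries mounted under `CODER_USR_LIB_DIR`.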