<6> the name of the volume mount, backed by a `PersistentVolumeClaim` that must already exist
<7> the path on the volume mount: this is referenced in the `sparkConf` section where the extra class path is defined for the driver and executors
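For reference, the two callouts above can be tied together in a manifest fragment like the following sketch. Every name and path here (the volume name, the claim name, and the mount path) is a placeholder for illustration, not a value defined by this guide:

[source,yaml]
----
spec:
  sparkConf:
    # the extra class path points at the mount path chosen below
    spark.driver.extraClassPath: "/dependencies/jars/*"
    spark.executor.extraClassPath: "/dependencies/jars/*"
  volumes:
    - name: job-deps                   # volume backed by a PersistentVolumeClaim
      persistentVolumeClaim:
        claimName: job-deps-pvc        # the claim must already exist
  driver:
    volumeMounts:
      - name: job-deps
        mountPath: /dependencies/jars  # path referenced in sparkConf above
  executor:
    volumeMounts:
      - name: job-deps
        mountPath: /dependencies/jars
----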

=== JVM (Scala): externally located artifact and dataset

[source,yaml]
----
The CRD fields that can be defined by the user are listed below:
|Volume mount path
|`spec.driver.nodeSelector`
|A dictionary of labels to use for node selection when scheduling the driver. N.B. this assumes there are no implicit node dependencies (e.g. a `PVC` or `VolumeMount`) defined elsewhere.
|`spec.executor.cores`
|Number of cores for each executor
|Volume mount path
|`spec.executor.nodeSelector`
|A dictionary of labels to use for node selection when scheduling the executors. N.B. this assumes there are no implicit node dependencies (e.g. a `PVC` or `VolumeMount`) defined elsewhere.