
[Merged by Bors] - Resources limits #147

Closed · 12 commits

2 changes: 2 additions & 0 deletions CHANGELOG.md
@@ -7,8 +7,10 @@ All notable changes to this project will be documented in this file.
### Changed

- Bumped image to `3.3.0-stackable0.2.0` in tests and docs ([#145])
- BREAKING: use resource limit struct instead of passing spark configuration arguments ([#147])

[#145]: https://github.com/stackabletech/spark-k8s-operator/pull/145
[#147]: https://github.com/stackabletech/spark-k8s-operator/pull/147

## [0.5.0] - 2022-09-06

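For readers migrating across this breaking change, a minimal before/after sketch assembled from the example manifests updated in this pull request (the concrete values are illustrative):

[source,yaml]
----
# Before #147: Spark-style settings per role
driver:
  cores: 1
  coreLimit: "1200m"
  memory: "512m"

# After #147: the common Stackable resources struct
driver:
  resources:
    cpu:
      min: "1"
      max: "1500m"
    memory:
      limit: "1Gi"
----
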
12 changes: 6 additions & 6 deletions Cargo.lock

Some generated files are not rendered by default.

242 changes: 223 additions & 19 deletions deploy/crd/sparkapplication.crd.yaml

Large diffs are not rendered by default.

242 changes: 223 additions & 19 deletions deploy/helm/spark-k8s-operator/crds/crds.yaml

Large diffs are not rendered by default.

242 changes: 223 additions & 19 deletions deploy/manifests/crds.yaml

Large diffs are not rendered by default.
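
Since the three CRD diffs above are not rendered, here is a rough, hypothetical sketch of the shape of the added per-role `resources` schema, inferred from the examples in this pull request rather than copied from the generated CRD:

[source,yaml]
----
# Hypothetical OpenAPI fragment for one role (job, driver or executor);
# the generated CRD will contain additional metadata such as descriptions.
resources:
  type: object
  properties:
    cpu:
      type: object
      properties:
        min:
          type: string  # Kubernetes Quantity, e.g. "1" or "500m"
        max:
          type: string  # e.g. "1500m"
    memory:
      type: object
      properties:
        limit:
          type: string  # e.g. "1Gi"
----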

6 changes: 0 additions & 6 deletions docs/modules/ROOT/examples/example-encapsulated.yaml
@@ -9,11 +9,5 @@ spec:
mode: cluster
mainClass: org.apache.spark.examples.SparkPi
mainApplicationFile: /stackable/spark/examples/jars/spark-examples_2.12-3.3.0.jar # <2>
driver:
cores: 1
coreLimit: "1200m"
memory: "512m"
executor:
cores: 1
instances: 3
memory: "512m"
5 changes: 0 additions & 5 deletions docs/modules/ROOT/examples/example-sparkapp-configmap.yaml
@@ -19,16 +19,11 @@ spec:
sparkConf:
"spark.hadoop.fs.s3a.aws.credentials.provider": "org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider"
driver:
cores: 1
coreLimit: "1200m"
memory: "512m"
volumeMounts:
- name: cm-job-arguments # <6>
mountPath: /arguments # <7>
executor:
cores: 1
instances: 3
memory: "512m"
volumeMounts:
- name: cm-job-arguments # <6>
mountPath: /arguments # <7>
@@ -23,16 +23,11 @@ spec:
persistentVolumeClaim:
claimName: pvc-ksv
driver:
cores: 1
coreLimit: "1200m"
memory: "512m"
volumeMounts:
- name: job-deps
mountPath: /dependencies # <6>
executor:
cores: 1
instances: 3
memory: "512m"
volumeMounts:
- name: job-deps
mountPath: /dependencies # <6>
24 changes: 19 additions & 5 deletions docs/modules/ROOT/examples/example-sparkapp-image.yaml
@@ -17,11 +17,25 @@ spec:
- tabulate==0.8.9 # <4>
sparkConf: # <5>
"spark.hadoop.fs.s3a.aws.credentials.provider": "org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider"
job:
resources:
cpu:
min: "1"
max: "1"
memory:
limit: "1Gi"
driver:
cores: 1
coreLimit: "1200m"
memory: "512m"
resources:
cpu:
min: "1"
max: "1500m"
memory:
limit: "1Gi"
executor:
cores: 1
instances: 3
memory: "512m"
resources:
cpu:
min: "1"
max: "4"
memory:
limit: "2Gi"
5 changes: 0 additions & 5 deletions docs/modules/ROOT/examples/example-sparkapp-pvc.yaml
@@ -21,16 +21,11 @@ spec:
persistentVolumeClaim:
claimName: pvc-ksv
driver:
cores: 1
coreLimit: "1200m"
memory: "512m"
volumeMounts:
- name: job-deps
mountPath: /dependencies # <5>
executor:
cores: 1
instances: 3
memory: "512m"
volumeMounts:
- name: job-deps
mountPath: /dependencies # <5>
6 changes: 0 additions & 6 deletions docs/modules/ROOT/examples/example-sparkapp-s3-private.yaml
@@ -23,11 +23,5 @@ spec:
spark.hadoop.fs.s3a.aws.credentials.provider: "org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider" # <6>
spark.driver.extraClassPath: "/dependencies/jars/hadoop-aws-3.2.0.jar:/dependencies/jars/aws-java-sdk-bundle-1.11.375.jar"
spark.executor.extraClassPath: "/dependencies/jars/hadoop-aws-3.2.0.jar:/dependencies/jars/aws-java-sdk-bundle-1.11.375.jar"
driver:
cores: 1
coreLimit: "1200m"
memory: "512m"
executor:
cores: 1
instances: 3
memory: "512m"
54 changes: 42 additions & 12 deletions docs/modules/ROOT/pages/usage.adoc
@@ -144,6 +144,42 @@ spec:

This has the advantage that bucket configuration can be shared across `SparkApplication`s, reducing the cost of updating these details.

== Resource Requests

// The "nightly" version is needed because the "include" directive searches for
// files in the "stable" version by default.
// TODO: remove the "nightly" version after the next platform release (current: 22.09)
include::nightly@home:concepts:stackable_resource_requests.adoc[]

If no resources are configured explicitly, the operator uses the following defaults:

[source,yaml]
----
job:
  resources:
    cpu:
      min: "50m"
      max: "100m"
    memory:
      limit: "1Gi"
driver:
  resources:
    cpu:
      min: "1"
      max: "2"
    memory:
      limit: "2Gi"
executor:
  resources:
    cpu:
      min: "1"
      max: "4"
    memory:
      limit: "4Gi"
----
WARNING: The default values are _most likely_ not sufficient to run a production cluster. Please adapt them to your requirements.

For more details on Kubernetes CPU limits, see https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/[Assign CPU Resources to Containers and Pods].
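
As a usage sketch, the new struct is set per role in a `SparkApplication`. The resource values below mirror the updated `example-sparkapp-image.yaml`, while the API version and metadata are assumed for illustration:

[source,yaml]
----
apiVersion: spark.stackable.tech/v1alpha1  # assumed group/version
kind: SparkApplication
metadata:
  name: spark-pi  # placeholder name
spec:
  mode: cluster
  sparkImage: docker.stackable.tech/stackable/pyspark-k8s:3.3.0-stackable0.2.0
  mainApplicationFile: local:///stackable/spark/examples/src/main/python/pi.py
  job:
    resources:
      cpu:
        min: "1"
        max: "1"
      memory:
        limit: "1Gi"
  driver:
    resources:
      cpu:
        min: "1"
        max: "1500m"
      memory:
        limit: "1Gi"
  executor:
    instances: 3
    resources:
      cpu:
        min: "1"
        max: "4"
      memory:
        limit: "2Gi"
----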

== CRD argument coverage

Below are listed the CRD fields that can be defined by the user:
@@ -214,14 +250,11 @@ Below are listed the CRD fields that can be defined by the user:
|`spec.volumes.persistentVolumeClaim.claimName`
|The persistent volume claim backing the volume

|`spec.driver.cores`
|Number of cores used by the driver (only in cluster mode)
|`spec.job.resources`
|Resources specification for the initiating Job

|`spec.driver.coreLimit`
|Total cores for all executors

|`spec.driver.memory`
|Specified memory for the driver
|`spec.driver.resources`
|Resources specification for the driver Pod

|`spec.driver.volumeMounts`
|A list of mounted volumes for the driver
@@ -235,15 +268,12 @@ Below are listed the CRD fields that can be defined by the user:
|`spec.driver.nodeSelector`
|A dictionary of labels to use for node selection when scheduling the driver N.B. this assumes there are no implicit node dependencies (e.g. `PVC`, `VolumeMount`) defined elsewhere.

|`spec.executor.cores`
|Number of cores for each executor
|`spec.executor.resources`
|Resources specification for the executor Pods

|`spec.executor.instances`
|Number of executor instances launched for this job

|`spec.executor.memory`
|Memory specified for executor

|`spec.executor.volumeMounts`
|A list of mounted volumes for each executor

6 changes: 0 additions & 6 deletions docs/modules/getting_started/examples/code/getting_started.sh
@@ -56,14 +56,8 @@ spec:
sparkImage: docker.stackable.tech/stackable/pyspark-k8s:3.3.0-stackable0.2.0
mode: cluster
mainApplicationFile: local:///stackable/spark/examples/src/main/python/pi.py
driver:
cores: 1
coreLimit: "1200m"
memory: "512m"
executor:
cores: 1
instances: 3
memory: "512m"
EOF
# end::install-sparkapp[]

5 changes: 0 additions & 5 deletions examples/ny-tlc-report-external-dependencies.yaml
@@ -33,16 +33,11 @@ spec:
persistentVolumeClaim:
claimName: pvc-ksv
driver:
cores: 1
coreLimit: "1200m"
memory: "512m"
volumeMounts:
- name: job-deps
mountPath: /dependencies
executor:
cores: 1
instances: 3
memory: "512m"
volumeMounts:
- name: job-deps
mountPath: /dependencies
6 changes: 0 additions & 6 deletions examples/ny-tlc-report-image.yaml
@@ -27,11 +27,5 @@ spec:
accessStyle: Path
sparkConf:
spark.hadoop.fs.s3a.aws.credentials.provider: "org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider"
driver:
cores: 1
coreLimit: "1200m"
memory: "512m"
executor:
cores: 1
instances: 3
memory: "512m"
5 changes: 0 additions & 5 deletions examples/ny-tlc-report.yaml
@@ -34,16 +34,11 @@ spec:
sparkConf:
spark.hadoop.fs.s3a.aws.credentials.provider: "org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider"
driver:
cores: 1
coreLimit: "1200m"
memory: "512m"
volumeMounts:
- name: cm-job-arguments
mountPath: /arguments
executor:
cores: 1
instances: 3
memory: "512m"
volumeMounts:
- name: cm-job-arguments
mountPath: /arguments