Implement resource limits and requests for Spark pods #128
Comments
I left this in the refinement column because I vaguely remember someone saying that this is already implemented for Spark and might just need a different implementation? I'm not sure, to be honest, and I might be wrong.
Cores, core limit, and memory can be specified for driver and executor pods (but not for the initiating job): see e.g. https://github.com/stackabletech/spark-k8s-operator/blob/main/examples/ny-tlc-report-image.yaml#L36-L42
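For context, the driver and executor sections of the linked example look roughly like this (a sketch reconstructed from the comment above; field names such as `coreLimit` are assumed from the description of cores, core limit, and memory, and may differ in the current CRD):

```yaml
# Illustrative excerpt in the spirit of examples/ny-tlc-report-image.yaml;
# values are made up, field names are assumptions based on the comment.
driver:
  cores: 1           # CPU request for the driver pod
  coreLimit: "1200m" # CPU limit for the driver pod
  memory: "512m"     # memory for the driver
executor:
  cores: 1
  instances: 3
  memory: "512m"
```

These fields are handed to Spark, which sizes the driver and executor pods itself.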
Thank you! I need some more input though, I'm afraid :) Is there still anything to be done to close this ticket, or can it already be closed as done?
@razvan what do you think?
Having thought about this, I think it is worthwhile to implement this for the job Pod (the implementation would follow the standard pattern of introducing a struct that is defined in the CRD and passed through to the `ContainerBuilder`), but to leave the driver/executor management to Spark (as is currently done).
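A minimal sketch of what that could look like from the user's side, assuming the common Stackable resources struct (CPU min/max, memory limit) is reused; the `resources` field name and its exact placement here are assumptions, not the final CRD design:

```yaml
apiVersion: spark.stackable.tech/v1alpha1
kind: SparkApplication
metadata:
  name: spark-example
spec:
  # Hypothetical field for the initiating job Pod only;
  # driver/executor resources remain managed via Spark as today.
  resources:
    cpu:
      min: "100m"
      max: "400m"
    memory:
      limit: "512Mi"
```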
Great, thank you!
Have updated the issue description accordingly. |
Sorry for having missed this, and yes, I agree with @adwk67 regarding the initiating job. I'm not sure it's a good idea to make these limits configurable by the user.
All other boxes have been ticked, except:
In other products we're introducing a common resource limit configuration. This ticket is about evaluating whether it makes sense to use it in this operator too; if it does, the implementation is also part of this ticket.
Part of this epic: stackabletech/issues#241
Update
The spark-k8s operator hands management of the driver and executor pods off to Spark itself, and there are CRD fields for this purpose that are used to construct the relevant `spark-submit` arguments. This will remain unchanged. Currently there are no resources specified for the initiating job Pod, so this ticket will cover that. Specifically, since the operator does not follow the role pattern of the other products, the struct will be a top-level field (directly under `.spec`): the resources defined here will be passed on to the `ContainerBuilder` used for the job.
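Assuming the common struct maps CPU min/max to request/limit and applies the memory limit to both request and limit, the `ContainerBuilder` would render something like the following on the job Pod's container (an illustrative sketch with hypothetical values, not the verified output):

```yaml
# Hypothetical resources section of the job container,
# derived from the .spec values sketched above.
resources:
  requests:
    cpu: "100m"
    memory: "512Mi"
  limits:
    cpu: "400m"
    memory: "512Mi"
```

Acceptance criteria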
- `usage.adoc` with product specific information and link to common shared resources concept
- `usage.adoc`