[Merged by Bors] - Add pull policy #75

Closed · 14 commits
6 changes: 6 additions & 0 deletions CHANGELOG.md
@@ -4,6 +4,12 @@ All notable changes to this project will be documented in this file.

## [Unreleased]

### Added

- Added new fields to govern image pull policy ([#75])

[#75]: https://github.com/stackabletech/spark-k8s-operator/pull/75

### Changed

- Updated examples ([#71])
1 change: 1 addition & 0 deletions Cargo.lock

Generated file; diff not rendered.

17 changes: 17 additions & 0 deletions deploy/crd/sparkapplication.crd.yaml
@@ -356,6 +356,23 @@ spec:
sparkImage:
nullable: true
type: string
sparkImagePullPolicy:
enum:
- Always
- IfNotPresent
- Never
nullable: true
type: string
sparkImagePullSecrets:
items:
description: LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace.
properties:
name:
description: "Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names"
type: string
type: object
nullable: true
type: array
stopped:
nullable: true
type: boolean
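For orientation, a `SparkApplication` spec fragment that exercises the two new fields might look like the sketch below; the image tag, policy value and secret name are illustrative and simply mirror values used elsewhere in this PR.

spec:
  sparkImage: docker.stackable.tech/stackable/spark-k8s:3.2.1-hadoop3.2-stackable0.4.0
  sparkImagePullPolicy: IfNotPresent   # optional; one of Always, IfNotPresent, Never
  sparkImagePullSecrets:               # optional list of LocalObjectReference
    - name: myregistrykey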
17 changes: 17 additions & 0 deletions deploy/helm/spark-k8s-operator/crds/crds.yaml
@@ -358,6 +358,23 @@ spec:
sparkImage:
nullable: true
type: string
sparkImagePullPolicy:
enum:
- Always
- IfNotPresent
- Never
nullable: true
type: string
sparkImagePullSecrets:
items:
description: LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace.
properties:
name:
description: "Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names"
type: string
type: object
nullable: true
type: array
stopped:
nullable: true
type: boolean
17 changes: 17 additions & 0 deletions deploy/manifests/crds.yaml
@@ -359,6 +359,23 @@ spec:
sparkImage:
nullable: true
type: string
sparkImagePullPolicy:
enum:
- Always
- IfNotPresent
- Never
nullable: true
type: string
sparkImagePullSecrets:
items:
description: LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace.
properties:
name:
description: "Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names"
type: string
type: object
nullable: true
type: array
stopped:
nullable: true
type: boolean
6 changes: 6 additions & 0 deletions docs/modules/ROOT/pages/usage.adoc
@@ -199,6 +199,12 @@ Below are listed the CRD fields that can be defined by the user:
|`spec.sparkImage`
| Spark image which will be deployed to driver and executor pods, and which must contain the Spark environment needed by the job, e.g. `docker.stackable.tech/stackable/spark-k8s:3.2.1-hadoop3.2-stackable0.4.0`

|`spec.sparkImagePullPolicy`
| Optional enum (one of `Always`, `IfNotPresent` or `Never`) that determines the pull policy of the Spark job image.

|`spec.sparkImagePullSecrets`
| An optional list of references to secrets in the same namespace, used for pulling any of the images needed by a `SparkApplication` resource. Each reference has a single property (`name`) that must refer to a valid secret.

|`spec.mainApplicationFile`
|The actual application file that will be called by `spark-submit`

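The secrets referenced by `spec.sparkImagePullSecrets` are ordinary docker-registry secrets living in the same namespace as the `SparkApplication`. A minimal sketch of such a secret is shown below; the name matches the one used in the unit test added in this PR, and the credential data is a placeholder.

apiVersion: v1
kind: Secret
metadata:
  name: myregistrykey
type: kubernetes.io/dockerconfigjson
data:
  # base64-encoded Docker config containing the registry credentials (placeholder)
  .dockerconfigjson: <base64-encoded docker config>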
2 changes: 2 additions & 0 deletions examples/ny-tlc-report-external-dependencies.yaml
@@ -7,6 +7,8 @@ metadata:
spec:
version: "1.0"
sparkImage: docker.stackable.tech/stackable/pyspark-k8s:3.2.1-hadoop3.2-python39-stackable0.1.0
# Always | IfNotPresent | Never
sparkImagePullPolicy: IfNotPresent
mode: cluster
mainApplicationFile: s3a://my-bucket/ny-tlc-report.py
args:
1 change: 1 addition & 0 deletions examples/ny-tlc-report-image.yaml
@@ -9,6 +9,7 @@ spec:
# everything under /jobs will be copied to /stackable/spark/jobs
image: docker.stackable.tech/stackable/ny-tlc-report:0.1.0
sparkImage: docker.stackable.tech/stackable/pyspark-k8s:3.2.1-hadoop3.2-python39-stackable0.1.0
sparkImagePullPolicy: Always
mode: cluster
mainApplicationFile: local:///stackable/spark/jobs/ny_tlc_report.py
args:
1 change: 1 addition & 0 deletions rust/crd/Cargo.toml
@@ -15,3 +15,4 @@ serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
serde_yaml = "0.8"
snafu = "0.7"
strum = { version = "0.24", features = ["derive"] }
93 changes: 92 additions & 1 deletion rust/crd/src/lib.rs
@@ -5,7 +5,7 @@ pub mod constants;
use constants::*;
use stackable_operator::commons::s3::{InlinedS3BucketSpec, S3BucketDef};
use stackable_operator::k8s_openapi::api::core::v1::{
EnvVar, EnvVarSource, SecretKeySelector, Volume, VolumeMount,
EnvVar, EnvVarSource, LocalObjectReference, SecretKeySelector, Volume, VolumeMount,
};

use std::collections::{BTreeMap, HashMap};
@@ -20,6 +20,7 @@ use stackable_operator::{
role_utils::CommonConfiguration,
schemars::{self, JsonSchema},
};
use strum::{Display, EnumString};

#[derive(Snafu, Debug)]
pub enum Error {
@@ -68,6 +69,10 @@ pub struct SparkApplicationSpec {
#[serde(default, skip_serializing_if = "Option::is_none")]
pub spark_image: Option<String>,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub spark_image_pull_policy: Option<ImagePullPolicy>,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub spark_image_pull_secrets: Option<Vec<LocalObjectReference>>,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub driver: Option<DriverConfig>,
#[serde(default, skip_serializing_if = "Option::is_none")]
pub executor: Option<ExecutorConfig>,
@@ -89,6 +94,13 @@ pub struct SparkApplicationSpec {
pub env: Option<Vec<EnvVar>>,
}

#[derive(Clone, Debug, Deserialize, Eq, JsonSchema, PartialEq, Serialize, Display, EnumString)]
pub enum ImagePullPolicy {
Always,
IfNotPresent,
Never,
}

#[derive(Clone, Debug, Default, Deserialize, JsonSchema, PartialEq, Serialize)]
#[serde(rename_all = "camelCase")]
pub struct JobDependencies {
Expand Down Expand Up @@ -123,6 +135,14 @@ impl SparkApplication {
self.spec.image.as_deref()
}

pub fn spark_image_pull_policy(&self) -> Option<ImagePullPolicy> {
self.spec.spark_image_pull_policy.clone()
}

pub fn spark_image_pull_secrets(&self) -> Option<Vec<LocalObjectReference>> {
self.spec.spark_image_pull_secrets.clone()
}

pub fn version(&self) -> Option<&str> {
self.spec.version.as_deref()
}
@@ -380,7 +400,10 @@ pub struct CommandStatus {

#[cfg(test)]
mod tests {
use crate::ImagePullPolicy;
use crate::LocalObjectReference;
use crate::SparkApplication;
use std::str::FromStr;

#[test]
fn test_spark_examples_s3() {
@@ -542,4 +565,72 @@ spec:
assert!(spark_application.spec.main_class.is_none());
assert!(spark_application.spec.image.is_none());
}

#[test]
fn test_image_actions() {
let spark_application = serde_yaml::from_str::<SparkApplication>(
r#"
---
apiVersion: spark.stackable.tech/v1alpha1
kind: SparkApplication
metadata:
name: spark-pi-local
namespace: default
spec:
version: "1.0"
sparkImage: docker.stackable.tech/stackable/spark-k8s:3.2.1-hadoop3.2-stackable0.4.0
sparkImagePullPolicy: Always
sparkImagePullSecrets:
- name: myregistrykey
mode: cluster
mainClass: org.apache.spark.examples.SparkPi
mainApplicationFile: local:///stackable/spark/examples/jars/spark-examples_2.12-3.2.1.jar
sparkConf:
spark.kubernetes.node.selector.node: "2"
driver:
cores: 1
coreLimit: "1200m"
memory: "512m"
executor:
cores: 1
instances: 1
memory: "512m"
"#,
)
.unwrap();

assert_eq!(
Some(vec![LocalObjectReference {
name: Some("myregistrykey".to_string())
}]),
spark_application.spark_image_pull_secrets()
);
assert_eq!(
Some(ImagePullPolicy::Always),
spark_application.spark_image_pull_policy()
);
}

#[test]
fn test_image_pull_policy_ser() {
assert_eq!("Never", ImagePullPolicy::Never.to_string());
assert_eq!("Always", ImagePullPolicy::Always.to_string());
assert_eq!("IfNotPresent", ImagePullPolicy::IfNotPresent.to_string());
}

#[test]
fn test_image_pull_policy_de() {
assert_eq!(
ImagePullPolicy::Always,
ImagePullPolicy::from_str("Always").unwrap()
);
assert_eq!(
ImagePullPolicy::Never,
ImagePullPolicy::from_str("Never").unwrap()
);
assert_eq!(
ImagePullPolicy::IfNotPresent,
ImagePullPolicy::from_str("IfNotPresent").unwrap()
);
}
}