This repository was archived by the owner on Mar 16, 2021. It is now read-only.

Added Kustomize for controller, sidecar & CRDs #9

Closed
80 changes: 80 additions & 0 deletions kustomization.yaml
@@ -0,0 +1,80 @@
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

images:
  # Controller
  - name: objectstorage-controller
    newName: quay.io/containerobjectstorage/objectstorage-controller
    newTag: latest
  # Sidecar
  - name: sample-driver
    newName: quay.io/containerobjectstorage/sample-driver
    newTag: latest
  - name: object-storage-sidecar
    newName: quay.io/containerobjectstorage/object-storage-sidecar
    newTag: latest

resources:
  # CRDs
  - https://raw.githubusercontent.com/kubernetes-sigs/container-object-storage-interface-api/master/crds/objectstorage.k8s.io_bucketaccessclasses.yaml
  - https://raw.githubusercontent.com/kubernetes-sigs/container-object-storage-interface-api/master/crds/objectstorage.k8s.io_bucketaccesses.yaml
  - https://raw.githubusercontent.com/kubernetes-sigs/container-object-storage-interface-api/master/crds/objectstorage.k8s.io_bucketaccessrequests.yaml
  - https://raw.githubusercontent.com/kubernetes-sigs/container-object-storage-interface-api/master/crds/objectstorage.k8s.io_bucketclasses.yaml
  - https://raw.githubusercontent.com/kubernetes-sigs/container-object-storage-interface-api/master/crds/objectstorage.k8s.io_bucketrequests.yaml
  - https://raw.githubusercontent.com/kubernetes-sigs/container-object-storage-interface-api/master/crds/objectstorage.k8s.io_buckets.yaml
  # Controller
  - manifests/ns.yaml
  - manifests/sa.yaml
  - manifests/rbac.yaml
  - manifests/deployment.yaml
  # Sidecar
  - https://raw.githubusercontent.com/container-object-storage-interface/cosi-provisioner-sidecar/master/examples/object-storage-sidecar.yaml

The cosi-controller-manager does not in any way require the sample-provisioner to be running in the cluster, so its deployment should not include it, at least not in the base layer. This is a case where an overlay that does deploy the whole stack, including the sample-provisioner, could make sense for testing/demo purposes!
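
For illustration only (not part of this diff), a minimal sketch of such a demo overlay, assuming a hypothetical overlays/demo/ directory sitting next to this base kustomization:

# overlays/demo/kustomization.yaml (hypothetical, for testing/demo clusters only)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  # the base layer: CRDs + controller only
  - ../..
  # layer the sample provisioner on top for demo purposes
  - https://raw.githubusercontent.com/container-object-storage-interface/cosi-provisioner-sidecar/master/examples/object-storage-sidecar.yaml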


patches:
  # CRDs
  - target:
      kind: CustomResourceDefinition
    patch: |-
      - op: add
        path: /metadata/annotations
        value:
          controller-gen.kubebuilder.io/version: (devel)
          api-approved.kubernetes.io: https://github.com/kubernetes-sigs/container-object-storage-interface-api/pull/2
  # Controller
  - target:
      kind: Deployment
      name: objectstorage-controller
    patch: |-
      - op: replace
        path: /spec/template/spec/containers/0/imagePullPolicy
        value: IfNotPresent
  # Sidecar
  - target:
      kind: Deployment
      name: object-storage-provisioner
    patch: |-
      - op: replace
        path: /spec/template/spec/containers/0/imagePullPolicy
        value: IfNotPresent

On one hand, the image overrides specify the latest image tags, but then we'd use IfNotPresent as the imagePullPolicy, which seems odd (in general, you want latest to be Always).
K8s will automatically default to Always for :latest images and IfNotPresent for others, so there's no reason (I believe) to have these patches. Let's go with the default, and if site-specific overrides are required, they can be done in an overlay on this base.
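
If a site really does need a pinned pull policy, that patch could live in an overlay rather than in this base; a rough sketch (directory layout assumed, not part of this PR):

# overlays/site/kustomization.yaml (hypothetical)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../..
patches:
  - target:
      kind: Deployment
      name: objectstorage-controller
    patch: |-
      - op: replace
        path: /spec/template/spec/containers/0/imagePullPolicy
        value: IfNotPresent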

      - op: replace
        path: /spec/template/spec/containers/1/imagePullPolicy
        value: IfNotPresent
      - op: replace
        path: /metadata
        value:
          name: object-storage-provisioner
          labels:
            app: object-storage-provisioner

This could be coded in the manifest directly, no reason to do it as a patch.

Also, consider adopting https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/ instead.
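
Sketched out, the metadata could be written directly into the provisioner manifest with the recommended common labels (values here are illustrative, not taken from this PR):

metadata:
  name: object-storage-provisioner
  labels:
    app.kubernetes.io/part-of: cosi
    app.kubernetes.io/name: object-storage-provisioner
    app.kubernetes.io/component: provisioner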

          namespace: objectstorage-provisioner-ns

Let's not handle namespaces like this but rely on kubectl apply -n ... and the like. See earlier comment.
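
For example, roughly: kubectl create namespace objectstorage-system once, then kubectl apply -k . -n objectstorage-system (or kustomize build . | kubectl apply -n objectstorage-system -f -), assuming the manifests themselves no longer hard-code a namespace. The exact invocation is a deployment-time choice, not something this base needs to fix.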

  - target:
      kind: Secret
      name: object-storage-provisioner
    patch: |-
      - op: replace
        path: /metadata
        value:
          name: object-storage-provisioner
          labels:
            app: object-storage-provisioner

Same comment as above.

          namespace: objectstorage-provisioner-ns

Same comment as above.

24 changes: 24 additions & 0 deletions manifests/deployment.yaml
@@ -0,0 +1,24 @@
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: objectstorage-controller

Objects can get prefixed through a kustomization if desired, so no real need to do so here. Plain controller should do.
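
For reference, the prefixing the comment refers to would be a one-liner in the root kustomization.yaml (illustrative):

namePrefix: objectstorage-

With that in place, a manifest named plainly controller would be emitted as objectstorage-controller.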

  namespace: objectstorage-system

See earlier comments.

spec:
  replicas: 1

Does the controller have some kind of leader election? If not, I think it's easier to simply deploy this as a StatefulSet of a single replica.
Keep in mind a Deployment with replicas: 1 does not guarantee only a single replica is running at any point in time.
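
A sketch of the suggested alternative (only relevant if there were no leader election, which a reply further down says there is; values copied from this Deployment):

kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: objectstorage-controller
spec:
  replicas: 1
  serviceName: objectstorage-controller   # StatefulSets require a serviceName; a headless Service would normally accompany this
  selector:
    matchLabels:
      app: objectstorage-controller
  template:
    metadata:
      labels:
        app: objectstorage-controller
    spec:
      serviceAccountName: objectstorage-controller-sa
      containers:
        - name: objectstorage-controller
          image: quay.io/containerobjectstorage/objectstorage-controller:latest

Unlike a Deployment's rolling update, a StatefulSet replaces its single pod only after the old one has terminated, which is what gives the at-most-one-replica behaviour the comment is after.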

  strategy:
    rollingUpdate:

Unless leader election is implemented, this could cause trouble 😃

Contributor: We have leader election

      maxUnavailable: 0
      maxSurge: 1
  selector:
    matchLabels:
      app: objectstorage-controller

Let's be a bit more specific, e.g.:

app.kubernetes.io/part-of: cosi
app.kubernetes.io/name: cosi-controller-manager
app.kubernetes.io/component: controller

Then, have an app.kubernetes.io/instance in the root Kustomization, e.g. set to cosi-controller-manager (as well).

Again, see https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/#labels.
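
Sketch of that instance label in the root kustomization.yaml (value illustrative):

commonLabels:
  app.kubernetes.io/instance: cosi-controller-manager

kustomize folds commonLabels into selectors and pod template labels as well, so setting the instance once there keeps selectors and labels in sync.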

  template:
    metadata:
      labels:
        app: objectstorage-controller

See above. Here, have an app.kubernetes.io/version as well.
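
For example (illustrative, not part of the diff), the template labels here could carry:

app.kubernetes.io/part-of: cosi
app.kubernetes.io/name: cosi-controller-manager
app.kubernetes.io/component: controller
app.kubernetes.io/version: v0.0.1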

    spec:
      serviceAccountName: objectstorage-controller-sa

See earlier comment about prefix handling.

      containers:
        - name: objectstorage-controller
          image: quay.io/containerobjectstorage/objectstorage-controller:latest
5 changes: 5 additions & 0 deletions manifests/ns.yaml
@@ -0,0 +1,5 @@
---

No need, see above.

apiVersion: v1
kind: Namespace
metadata:
  name: objectstorage-system
55 changes: 55 additions & 0 deletions manifests/rbac.yaml
@@ -0,0 +1,55 @@
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: objectstorage-controller-role
rules:
  - apiGroups: ["objectstorage.k8s.io"]
    resources: ["bucketrequests", "bucketaccessrequests"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["objectstorage.k8s.io"]
    resources: ["buckets", "bucketaccess"]
    verbs: ["get", "list", "watch", "update", "create", "delete"]
  - apiGroups: ["objectstorage.k8s.io"]
    resources: ["bucketclass", "bucketaccessclass"]
    verbs: ["get", "list"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: objectstorage-controller:system
subjects:
  - kind: ServiceAccount
    name: objectstorage-controller-sa
    namespace: objectstorage-system
roleRef:
  kind: ClusterRole
  name: objectstorage-controller-role
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: objectstorage-controller
  namespace: objectstorage-system
rules:
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["get", "watch", "list", "delete", "update", "create"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: objectstorage-controller
  namespace: objectstorage-system
subjects:
  - kind: ServiceAccount
    name: objectstorage-controller-sa
    namespace: objectstorage-system
roleRef:
  kind: Role
  name: objectstorage-controller
  apiGroup: rbac.authorization.k8s.io
6 changes: 6 additions & 0 deletions manifests/sa.yaml
@@ -0,0 +1,6 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: objectstorage-controller-sa
  namespace: objectstorage-system