✨ Generate the manifests by calling the kubeadm #216
Conversation
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: charleszheng44. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
force-pushed from 2949cf0 to 07575c2
This is amazing so far, 1 comment for now, and I want to pull this down and experiment with it tomorrow. 👍
Dockerfile (outdated)

```diff
 ENTRYPOINT [ "/start.sh", "/workspace/manager" ]

 # Use distroless as minimal base image to package the manager binary
 # Refer to https://github.com/GoogleContainerTools/distroless for more details
-FROM gcr.io/distroless/static:nonroot
+# FROM gcr.io/distroless/static:nonroot
+FROM ubuntu
```
If we need to switch this, can we use the Debian image from the distroless repo? https://github.com/GoogleContainerTools/distroless#docker
force-pushed from 07575c2 to 66c731a
force-pushed from 66c731a to 966bca1
```go
	case kubeadm.APIServer:
		ret[kubeadm.APIServer] = completeKASPodSpec(pod, clusterName)
	case kubeadm.ControllerManager:
		ret[kubeadm.APIServer] = completeKCMPodSpec(pod, clusterName)
```
Suggested change:

```diff
-		ret[kubeadm.APIServer] = completeKCMPodSpec(pod, clusterName)
+		ret[kubeadm.ControllerManager] = completeKCMPodSpec(pod, clusterName)
```
```go
	case kubeadm.ControllerManager:
		ret[kubeadm.APIServer] = completeKCMPodSpec(pod, clusterName)
	case kubeadm.Etcd:
		ret[kubeadm.APIServer] = completeEtcdPodSpec(pod, clusterName)
```
Suggested change:

```diff
-		ret[kubeadm.APIServer] = completeEtcdPodSpec(pod, clusterName)
+		ret[kubeadm.Etcd] = completeEtcdPodSpec(pod, clusterName)
```
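Taken together, the two suggestions above would leave the switch looking like this. A sketch reusing the identifiers from the diff; the name of the variable being switched on is an assumption:

```go
	switch componentName { // switch variable name assumed for illustration
	case kubeadm.APIServer:
		ret[kubeadm.APIServer] = completeKASPodSpec(pod, clusterName)
	case kubeadm.ControllerManager:
		ret[kubeadm.ControllerManager] = completeKCMPodSpec(pod, clusterName)
	case kubeadm.Etcd:
		ret[kubeadm.Etcd] = completeEtcdPodSpec(pod, clusterName)
	}
```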
force-pushed from 966bca1 to 19f4262
Thanks @charleszheng44 for the effort! I'm trying out this PR; however, the etcd statefulset has a DNS resolution issue, so it doesn't come up. Not sure if you've seen this?
@jzhoucliqr Thanks for finding the bug. I am working on it and will let you know once it is fixed 😃
I think the issue is that in the statefulset we use the component name as the service name, which ends with -nestedetcd, but the actual Service object's name ends with -etcd.
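A minimal illustration of that mismatch (not the PR's actual code; the function and naming scheme are assumptions): the StatefulSet's `.spec.serviceName` must equal the governing headless Service's `metadata.name`, otherwise the per-pod DNS records are never created and the etcd members cannot resolve each other.

```go
package etcd

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// etcdObjects sketches the fix: wire the StatefulSet to the headless
// Service by its actual name rather than by the component name.
func etcdObjects(clusterName string) (*corev1.Service, *appsv1.StatefulSet) {
	// Headless service actually created for etcd; its name ends with "-etcd".
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: clusterName + "-etcd"},
		Spec:       corev1.ServiceSpec{ClusterIP: corev1.ClusterIPNone},
	}
	sts := &appsv1.StatefulSet{
		Spec: appsv1.StatefulSetSpec{
			// Using the component name (clusterName + "-nestedetcd") here
			// is the bug; it must match svc.Name exactly.
			ServiceName: svc.Name,
		},
	}
	return svc, sts
}
```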
```go
const (
	KubeadmExecPath  = "/kubeadm"
	KASManifestsPath = "/etc/kubernetes/manifests/kube-apiserver.yaml"
)
```
By using the same default file locations, could it cause conflicts, for example when multiple clusters are created at the same time?
Yes. That could be a problem. kubeadm does not support generating static pod manifests in a directory other than /etc/kubernetes/manifests. We should figure out a way to solve it.
@christopherhein @Fei-Guo do you have any suggestions?
Can we spawn a new pod for each cluster, generate the manifests in that pod, and then terminate it after they have been created as ConfigMaps?
> Can we spawn a new pod for each cluster, generate the manifests in that pod, and then terminate it after they have been created as ConfigMaps?

Yes. That's also the only way I can see. Let's try to merge this PR first, and I will create a new issue/PR to fix it?
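A rough sketch of the second half of that idea, assuming controller-runtime's client and an illustrative naming scheme (the function and ConfigMap names are not from the PR): a short-lived per-cluster pod runs kubeadm, then the generated manifest files are stored as ConfigMaps before the pod exits.

```go
package manifests

import (
	"context"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// storeManifestAsConfigMap reads a generated static pod manifest and
// persists it in a per-cluster ConfigMap, so the generation pod can
// terminate without losing its output.
func storeManifestAsConfigMap(ctx context.Context, c client.Client,
	namespace, clusterName, component, manifestPath string) error {
	data, err := os.ReadFile(manifestPath)
	if err != nil {
		return err
	}
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{
			// A per-cluster name avoids clashes between concurrent clusters.
			Name:      clusterName + "-" + component + "-manifest",
			Namespace: namespace,
		},
		Data: map[string]string{"manifest.yaml": string(data)},
	}
	return c.Create(ctx, cm)
}
```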
I think that might work. The only other idea I have is tracking which cluster is currently being templated and "locking" other reconciliations from processing until the current cluster's bring-up finishes.
Then we need to add a big lock in the reconcile loop, or reduce the ncp_controller's concurrency to one, to ensure the ncp_controller is not creating more than one nested control plane at a time. Will that affect performance?
A big lock might be a good short-term solution, with a path override option proposed to the kubeadm folks as the longer-term fix?
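A minimal sketch of that short-term "big lock", assuming a controller-runtime based setup; the reconciler type, API type, and import path below are placeholders, not the PR's actual code:

```go
package controllers

import (
	"context"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/controller"

	// Placeholder import standing in for this provider's API package.
	controlplanev1 "example.com/placeholder/api/v1"
)

// NestedControlPlaneReconciler is a placeholder name for the PR's reconciler.
type NestedControlPlaneReconciler struct{}

func (r *NestedControlPlaneReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// Real reconcile logic (templating manifests, etc.) would go here.
	return ctrl.Result{}, nil
}

func (r *NestedControlPlaneReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&controlplanev1.NestedControlPlane{}).
		// Serialize reconciles: at most one nested control plane is being
		// templated at a time, so no two clusters can write to the shared
		// /etc/kubernetes/manifests path concurrently.
		WithOptions(controller.Options{MaxConcurrentReconciles: 1}).
		Complete(r)
}
```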
As mentioned to @charleszheng44 on Slack, the kubeadm commands used to generate the manifests accept --rootfs, so you can pass e.g. ./ to generate the manifest tree under ./. This requires running kubeadm as root there, obviously. I don't see kubeadm allowing a configurable /etc/kubernetes/manifests any time soon.
That's right 😅 Let me try it out. Thanks
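A hedged sketch of what that workaround could look like from the controller side; the function name, paths, and the specific phase subcommand are assumptions, not the PR's final code:

```go
package manifests

import (
	"fmt"
	"os/exec"
	"path/filepath"
)

// generateManifests shells out to the kubeadm binary with --rootfs so the
// static pod manifests are written under a per-cluster directory rather
// than the shared /etc/kubernetes/manifests. Requires running as root.
func generateManifests(kubeadmPath, clusterRoot, kubeadmConfig string) (string, error) {
	cmd := exec.Command(kubeadmPath,
		"init", "phase", "control-plane", "all",
		"--rootfs", clusterRoot,
		"--config", kubeadmConfig,
	)
	if out, err := cmd.CombinedOutput(); err != nil {
		return "", fmt.Errorf("kubeadm failed: %v: %s", err, out)
	}
	// With --rootfs, kubeadm treats clusterRoot as "/".
	return filepath.Join(clusterRoot, "etc", "kubernetes", "manifests"), nil
}
```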
force-pushed from 19f4262 to 9f0f5b7
@jzhoucliqr It should work now. Would you try it out?
force-pushed from 1e9600e to da2067d
Thanks. Will try it out.

Changes work fine. Thanks @charleszheng44
force-pushed from e71f308 to 21e37b5
force-pushed from 21e37b5 to 00d69ef
/lgtm
What this PR does / why we need it:
Generate the ncp manifests using the `kubeadm` executable.