Conversation
(force-pushed from 13cdd9f to f8a37d1)
@jichenjc I was using a dev build and found that the patch succeeds; am I missing anything?

```
root@xanthene1:~/go/src/github.com/kubernetes-sigs/cluster-api-provider-nested# kubectl patch --type=merge nestedcluster nestedcluster-sample -p '{"spec":{"controlPlaneEndpoint":{"port":6444}}}'
nestedcluster.infrastructure.cluster.x-k8s.io/nestedcluster-sample patched
root@xanthene1:~/go/src/github.com/kubernetes-sigs/cluster-api-provider-nested# kubectl get nestedclusters.infrastructure.cluster.x-k8s.io -A -oyaml
apiVersion: v1
items:
- apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
  kind: NestedCluster
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"infrastructure.cluster.x-k8s.io/v1alpha4","kind":"NestedCluster","metadata":{"annotations":{},"name":"nestedcluster-sample","namespace":"default"},"spec":{"controlPlaneEndpoint":{"host":"localhost","port":6443}}}
    creationTimestamp: "2021-07-23T03:52:17Z"
    generation: 2
    labels:
      cluster.x-k8s.io/cluster-name: cluster-sample
    managedFields:
    - apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            .: {}
            f:kubectl.kubernetes.io/last-applied-configuration: {}
        f:spec:
          .: {}
          f:controlPlaneEndpoint:
            .: {}
            f:host: {}
            f:port: {}
      manager: kubectl
      operation: Update
      time: "2021-07-23T03:52:17Z"
    - apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:labels:
            .: {}
            f:cluster.x-k8s.io/cluster-name: {}
          f:ownerReferences:
            .: {}
            k:{"uid":"ece6ff0b-be59-456d-ac1d-b5387c0dba64"}:
              .: {}
              f:apiVersion: {}
              f:blockOwnerDeletion: {}
              f:controller: {}
              f:kind: {}
              f:name: {}
              f:uid: {}
        f:status:
          .: {}
          f:ready: {}
      manager: manager
      operation: Update
      time: "2021-07-23T03:55:44Z"
    name: nestedcluster-sample
    namespace: default
    ownerReferences:
    - apiVersion: cluster.x-k8s.io/v1alpha4
      blockOwnerDeletion: true
      controller: true
      kind: Cluster
      name: cluster-sample
      uid: ece6ff0b-be59-456d-ac1d-b5387c0dba64
    resourceVersion: "167373"
    uid: f5db2a16-be0d-42a6-9b43-7c42e592d83b
  spec:
    controlPlaneEndpoint:
      host: localhost
      port: 6444
  status:
    ready: true
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
root@xanthene1:~/go/src/github.com/kubernetes-sigs/cluster-api-provider-nested# kubectl patch --type=merge nestedcluster nestedcluster-sample -p '{"spec":{"controlPlaneEndpoint":{"port":6443}}}'
nestedcluster.infrastructure.cluster.x-k8s.io/nestedcluster-sample patched
root@xanthene1:~/go/src/github.com/kubernetes-sigs/cluster-api-provider-nested# kubectl get nestedclusters.infrastructure.cluster.x-k8s.io -A -oyaml
apiVersion: v1
items:
- apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
  kind: NestedCluster
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"infrastructure.cluster.x-k8s.io/v1alpha4","kind":"NestedCluster","metadata":{"annotations":{},"name":"nestedcluster-sample","namespace":"default"},"spec":{"controlPlaneEndpoint":{"host":"localhost","port":6443}}}
    creationTimestamp: "2021-07-23T03:52:17Z"
    generation: 3
    labels:
      cluster.x-k8s.io/cluster-name: cluster-sample
    managedFields:
    - apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            .: {}
            f:kubectl.kubernetes.io/last-applied-configuration: {}
        f:spec:
          .: {}
          f:controlPlaneEndpoint:
            .: {}
            f:host: {}
            f:port: {}
      manager: kubectl
      operation: Update
      time: "2021-07-23T03:52:17Z"
    - apiVersion: infrastructure.cluster.x-k8s.io/v1alpha4
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:labels:
            .: {}
            f:cluster.x-k8s.io/cluster-name: {}
          f:ownerReferences:
            .: {}
            k:{"uid":"ece6ff0b-be59-456d-ac1d-b5387c0dba64"}:
              .: {}
              f:apiVersion: {}
              f:blockOwnerDeletion: {}
              f:controller: {}
              f:kind: {}
              f:name: {}
              f:uid: {}
        f:status:
          .: {}
          f:ready: {}
      manager: manager
      operation: Update
      time: "2021-07-23T03:55:44Z"
    name: nestedcluster-sample
    namespace: default
    ownerReferences:
    - apiVersion: cluster.x-k8s.io/v1alpha4
      blockOwnerDeletion: true
      controller: true
      kind: Cluster
      name: cluster-sample
      uid: ece6ff0b-be59-456d-ac1d-b5387c0dba64
    resourceVersion: "167479"
    uid: f5db2a16-be0d-42a6-9b43-7c42e592d83b
  spec:
    controlPlaneEndpoint:
      host: localhost
      port: 6443
  status:
    ready: true
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
```
@gyliu513 thanks! I forgot to add the new file config/webhook/manifests.yaml into git.
@jichenjc my question here is: even without your fix, I did not get any error when patching the NestedCluster.
@gyliu513 because my fix added a ValidatingWebhookConfiguration that does the validation.
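For context on what such a validating webhook enforces, here is a minimal stdlib-only sketch of the kind of immutability check involved. The type and function names are illustrative, not the provider's actual implementation; in a real kubebuilder setup this logic would live in a `ValidateUpdate` method wired up by `SetupWebhookWithManager`.

```go
package main

import "fmt"

// APIEndpoint mirrors the shape of a control-plane endpoint.
// Illustrative only; not the provider's actual type.
type APIEndpoint struct {
	Host string
	Port int32
}

// validateEndpointUpdate sketches the check a validating webhook for
// NestedCluster could perform on update: any change to the control
// plane endpoint after creation is rejected.
func validateEndpointUpdate(oldEP, newEP APIEndpoint) error {
	if oldEP != newEP {
		return fmt.Errorf("spec.controlPlaneEndpoint is immutable: %v -> %v", oldEP, newEP)
	}
	return nil
}

func main() {
	current := APIEndpoint{Host: "localhost", Port: 6443}

	// Patching the port, as in the kubectl session above, would be denied.
	if err := validateEndpointUpdate(current, APIEndpoint{Host: "localhost", Port: 6444}); err != nil {
		fmt.Println("denied:", err)
	}

	// A no-op update passes.
	if err := validateEndpointUpdate(current, current); err == nil {
		fmt.Println("allowed")
	}
}
```

Without the ValidatingWebhookConfiguration registered, the API server never calls this check, which is why the patch above succeeded on a build that lacked the manifest.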
Looking really good so far. Are we able to write any automated tests for this? Also, one nit about the main.go changes.
main.go (Outdated)
```go
func setupWebhooks(mgr ctrl.Manager) {
	if err := (&infrastructurev1.NestedCluster{}).SetupWebhookWithManager(mgr); err != nil {
		setupLog.Error(err, "unable to create webhook", "webhook", "NestedCluster")
		os.Exit(1)
	}
}
```
Can we have this `if` block defined above in `main()`, where `setupWebhooks(mgr)` is? The Go tag below, `// +kubebuilder:scaffold:builder`, should actually auto-inject this for you if you use `kubebuilder new ...`, so keeping the configuration in the same place means that whenever anyone makes changes, they always get the same experience.
Not sure I fully understand the request; anyway, I updated per my understanding.
Just to reduce turnaround cycles on this: all I'm thinking is that, instead of adding a new func for this, you put the `if` block back in the `main()` func where you are calling it. The reasoning is that the Go comment tag is what kubebuilder uses to inject code, and one of the pieces of code it usually injects is this same exact `if` block.
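For readers following along, the requested layout looks roughly like this. This is only a sketch of a kubebuilder-style `main.go`; the identifiers (`infrastructurev1`, `setupLog`, `mgr`) follow the snippets in this thread, and the rest of `main()` is elided:

```go
func main() {
	// ... flag parsing and manager setup elided ...

	// Webhook registration sits directly above the scaffold marker,
	// which is where kubebuilder itself would inject this block.
	if err := (&infrastructurev1.NestedCluster{}).SetupWebhookWithManager(mgr); err != nil {
		setupLog.Error(err, "unable to create webhook", "webhook", "NestedCluster")
		os.Exit(1)
	}
	// +kubebuilder:scaffold:builder

	// ... mgr.Start(...) elided ...
}
```

Keeping the block adjacent to the `// +kubebuilder:scaffold:builder` marker means future `kubebuilder` scaffolding lands in the expected place.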
OK, I think I understand the right thing now. I updated with an additional test; please help check again, thanks!
@christopherhein
(force-pushed from 64ae4b9 to 3565983)
```go
if err := (&infrastructurev1.NestedCluster{}).SetupWebhookWithManager(mgr); err != nil {
	setupLog.Error(err, "unable to create webhook", "webhook", "NestedCluster")
	os.Exit(1)
}
```
Sorry to be pedantic about this, but can you move it above `// +kubebuilder:scaffold:builder`?
Other than this, this PR looks and works great. Once this is done we can lgtm and approve it.
Thanks for the comments, just updated; appreciate the review!
/lgtm
/approve
🎉
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: christopherhein, jichenjc. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
What this PR does / why we need it:
Enables a set of manifests and webhook configuration so that the webhook service runs on port 9443 of capn-controller-manager. In case we edit the nestedcluster, e.g. its control plane endpoint, we will get the following error:
Which issue(s) this PR fixes (optional, in `fixes #<issue_number>(, fixes #<issue_number>, ...)` format; will close the issue(s) when the PR gets merged): part of #184