<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
**Table of Contents** *generated with [DocToc](https://github.com/thlorenz/doctoc)*

- [Development Quick Start](#development-quick-start)
  - [Prerequisites](#prerequisites)
  - [Create `kind` cluster](#create-kind-cluster)
  - [Install `cert-manager`](#install-cert-manager)
  - [Clone CAPI and Deploy Dev release](#clone-capi-and-deploy-dev-release)
  - [Clone CAPN](#clone-capn)
  - [Create Docker Images, Manifests and Load Images](#create-docker-images-manifests-and-load-images)
  - [Deploy CAPN](#deploy-capn)
  - [Apply Sample Tenant Cluster](#apply-sample-tenant-cluster)
  - [Get `KUBECONFIG`](#get-kubeconfig)
  - [Port Forward](#port-forward)
  - [Connect to Cluster](#connect-to-cluster)
  - [Connect to the Cluster! :tada:](#connect-to-the-cluster-tada)
  - [Clean Up](#clean-up)

<!-- END doctoc generated TOC please keep comment here to allow auto update -->

## Development Quick Start

This tutorial describes how to create a nested control plane from source code for development. CAPN should work with any standard Kubernetes cluster out of the box, but for demo purposes this tutorial uses a `kind` cluster as both the management cluster and the nested workload cluster.

### Prerequisites

Please install the latest versions of [kind](https://kind.sigs.k8s.io/docs/user/quick-start/#installation) and [kubectl](https://kubernetes.io/docs/tasks/tools/).

### Create `kind` cluster

```console
kind create cluster --name=capn
```
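
Optionally, verify the cluster is up before continuing; `kind-capn` is the kubeconfig context name `kind` creates by default for a cluster called `capn`:

```console
kubectl cluster-info --context kind-capn
kubectl get nodes
```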

### Install `cert-manager`

Cert Manager is a soft dependency of the Cluster API components; it allows the mutating and validating webhooks to be deployed automatically. For more detailed instructions, see [Cert Manager Installation](https://cert-manager.io/docs/installation/kubernetes/#installing-with-regular-manifests).

```console
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.3.1/cert-manager.yaml
```

After Cert Manager is installed, you will see a new `cert-manager` namespace with the Cert Manager pods running in it.

```console
# kubectl get ns
NAME                 STATUS   AGE
cert-manager         Active   27s
default              Active   71s
kube-node-lease      Active   73s
kube-public          Active   73s
kube-system          Active   73s
local-path-storage   Active   68s
```

```console
# kubectl get po -n cert-manager
NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-7dd5854bb4-8jp4b              1/1     Running   0          32s
cert-manager-cainjector-64c949654c-bhdzz   1/1     Running   0          32s
cert-manager-webhook-6b57b9b886-5cbnp      1/1     Running   0          32s
```
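
The CAPI manifests applied in the next step include cert-manager `Certificate` and `Issuer` resources, so it can help to wait until the cert-manager deployments are fully available before continuing. A minimal sketch (the timeout value is arbitrary):

```console
kubectl wait --for=condition=Available --timeout=300s -n cert-manager deployment --all
```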

### Clone CAPI and Deploy Dev release

As a Cluster API (CAPI) provider, CAPN requires the core CAPI components to be set up. We need to deploy an unreleased version of CAPI for `v1alpha4` API support.

```console
git clone git@github.com:kubernetes-sigs/cluster-api.git
cd cluster-api
make release-manifests
# change feature flags on core
sed -i'' -e 's@- --feature-gates=.*@- --feature-gates=MachinePool=false,ClusterResourceSet=true@' out/core-components.yaml
kubectl apply -f out/core-components.yaml
cd ..
```

This deploys the Cluster API core controller in the `capi-system` namespace.

```console
# kubectl get ns
NAME                 STATUS   AGE
capi-system          Active   52s
cert-manager         Active   2m47s
default              Active   3m31s
kube-node-lease      Active   3m33s
kube-public          Active   3m33s
kube-system          Active   3m33s
local-path-storage   Active   3m28s
```

```console
# kubectl get po -n capi-system
NAME                                       READY   STATUS    RESTARTS   AGE
capi-controller-manager-5b74fcc774-wpxn7   1/1     Running   0          64s
```

### Clone CAPN

```console
git clone https://github.com/kubernetes-sigs/cluster-api-provider-nested
cd cluster-api-provider-nested
```

### Create Docker Images, Manifests and Load Images

```console
PULL_POLICY=Never TAG=dev make docker-build release-manifests
kind load docker-image gcr.io/cluster-api-nested-controller-amd64:dev --name=capn
kind load docker-image gcr.io/nested-controlplane-controller-amd64:dev --name=capn
```
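
To confirm the images were loaded into the `kind` node, you can list the images inside the node container. This is an optional check; `capn-control-plane` is the node container name `kind` uses for a cluster named `capn`:

```console
docker exec capn-control-plane crictl images | grep -E 'cluster-api-nested-controller|nested-controlplane-controller'
```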

### Deploy CAPN

Next, we will deploy the CAPN-related CRDs and controllers.

```console
kubectl apply -f out/cluster-api-provider-nested-components.yaml
```

This deploys two controllers:
- the Cluster API Nested controller, in the `capn-system` namespace.
- the Cluster API Nested Control Plane controller, in the `capn-nested-control-plane-system` namespace.

```console
# kubectl get ns
NAME                               STATUS   AGE
capi-system                        Active   17m
capn-nested-control-plane-system   Active   5s
capn-system                        Active   5s
cert-manager                       Active   19m
default                            Active   19m
kube-node-lease                    Active   19m
kube-public                        Active   19m
kube-system                        Active   19m
local-path-storage                 Active   19m
```

```console
# kubectl get po -n capn-nested-control-plane-system
NAME                                                            READY   STATUS    RESTARTS   AGE
capn-nested-control-plane-controller-manager-8865cdc4f-787h5   2/2     Running   0          36s
```

```console
# kubectl get po -n capn-system
NAME                                       READY   STATUS    RESTARTS   AGE
capn-controller-manager-6fb7bdd57d-7v77s   2/2     Running   0          50s
```

### Apply Sample Tenant Cluster

```console
kubectl apply -f config/samples/
```

If your cluster stays in the `Provisioning` phase and all of the tenant cluster components (APIServer, Controller Manager, and etcd) keep failing due to the Docker Hub image pull rate limit, you will see output like the following:

```console
# kubectl get cluster
NAME             PHASE
cluster-sample   Provisioning
```

```console
# kubectl get po
NAME                                  READY   STATUS             RESTARTS   AGE
cluster-sample-apiserver-0            0/1     ImagePullBackOff   0          6m39s
cluster-sample-controller-manager-0   0/1     ImagePullBackOff   0          6m56s
cluster-sample-etcd-0                 0/1     ImagePullBackOff   0          6m45s
```

```console
# kubectl describe pod cluster-sample-apiserver-0
Name:         cluster-sample-apiserver-0
Namespace:    default
Priority:     0
Node:         capn-control-plane/172.18.0.2
Start Time:   Mon, 19 Jul 2021 23:53:45 -0700
Labels:       component-name=nestedapiserver-sample
              controller-revision-hash=cluster-sample-apiserver-57bbbd9b49
              statefulset.kubernetes.io/pod-name=cluster-sample-apiserver-0
...
Events:
  Type     Reason     Age                    From                          Message
  ----     ------     ----                   ----                          -------
  Normal   Scheduled  6m47s                  default-scheduler             Successfully assigned default/cluster-sample-apiserver-0 to capn-control-plane
  Normal   Pulling    5m8s (x4 over 6m47s)   kubelet, capn-control-plane   Pulling image "virtualcluster/apiserver-v1.16.2"
  Warning  Failed     5m5s (x4 over 6m44s)   kubelet, capn-control-plane   Failed to pull image "virtualcluster/apiserver-v1.16.2": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/virtualcluster/apiserver-v1.16.2:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/virtualcluster/apiserver-v1.16.2/manifests/sha256:81fc8bb510b07535525413b725aed05765b56961c1f4ed28b92ba30acd65f6fb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     5m5s (x4 over 6m44s)   kubelet, capn-control-plane   Error: ErrImagePull
  Warning  Failed     4m53s (x6 over 6m44s)  kubelet, capn-control-plane   Error: ImagePullBackOff
  Normal   BackOff    106s (x19 over 6m44s)  kubelet, capn-control-plane   Back-off pulling image "virtualcluster/apiserver-v1.16.2"
```

To work around this, load the required images into the `kind` cluster:

```console
kind load docker-image docker.io/virtualcluster/apiserver-v1.16.2:latest --name=capn
kind load docker-image docker.io/virtualcluster/controller-manager-v1.16.2:latest --name=capn
kind load docker-image docker.io/virtualcluster/etcd-v3.4.0:latest --name=capn
```
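
Note that `kind load docker-image` copies images from your local Docker daemon, so if they are not already present locally you may need to pull them first (ideally while logged in to Docker Hub, to avoid hitting the same rate limit); a sketch:

```console
docker pull docker.io/virtualcluster/apiserver-v1.16.2:latest
docker pull docker.io/virtualcluster/controller-manager-v1.16.2:latest
docker pull docker.io/virtualcluster/etcd-v3.4.0:latest
```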

List the StatefulSets that make up the tenant cluster and update their `imagePullPolicy` to `Never` so the pre-loaded images are used (one way to do this is sketched after the listing below).

```console
# kubectl get sts
NAME                                READY   AGE
cluster-sample-apiserver            0/1     15m
cluster-sample-controller-manager   0/1     15m
cluster-sample-etcd                 0/1     15m
```
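
One way to flip the pull policy is with `kubectl patch`; this sketch assumes each StatefulSet has a single container at index `0`, which is worth verifying first:

```console
for sts in cluster-sample-apiserver cluster-sample-controller-manager cluster-sample-etcd; do
  kubectl patch statefulset "${sts}" --type=json \
    -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/imagePullPolicy", "value": "Never"}]'
done
```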

Delete the pods of the StatefulSets above so they are recreated with the updated pull policy:

```console
# kubectl delete po cluster-sample-apiserver-0 cluster-sample-controller-manager-0 cluster-sample-etcd-0
```
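
Once the pods are recreated with the locally loaded images, they should reach `Running` and the cluster should eventually leave the `Provisioning` phase; a quick check:

```console
kubectl get po
kubectl get cluster
```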

### Get `KUBECONFIG`

We will use the `clusterctl` command-line tool to generate the `KUBECONFIG`, which will be used to access the nested control plane later.

```console
cd cluster-api
make clusterctl
./bin/clusterctl get kubeconfig cluster-sample > ../kubeconfig
cd ..
```
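
Alternatively, if you prefer not to build `clusterctl`, the same kubeconfig can be read from the secret that Cluster API creates for the cluster; by CAPI convention it is named `<cluster-name>-kubeconfig` with the data stored under the `value` key:

```console
kubectl get secret cluster-sample-kubeconfig -o jsonpath='{.data.value}' | base64 -d > kubeconfig
```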

### Port Forward

To access the nested control plane, you will need to `port-forward` the apiserver service in a separate shell.

```console
kubectl port-forward svc/cluster-sample-apiserver 6443:6443
```

### Connect to Cluster

To use the `KUBECONFIG` created by `clusterctl` without modification, we first need a host record for the apiserver service name. To do this, add the following line to `/etc/hosts`:

```
127.0.0.1 cluster-sample-apiserver
```
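
For example, assuming you are comfortable appending to `/etc/hosts` with `sudo`:

```console
echo '127.0.0.1 cluster-sample-apiserver' | sudo tee -a /etc/hosts
```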

### Connect to the Cluster! :tada:

```shell
kubectl --kubeconfig kubeconfig get all -A
```

### Clean Up

```shell
kind delete cluster --name=capn
```