docs/dev-quickstart.md

<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
**Table of Contents** *generated with [DocToc](https://github.com/thlorenz/doctoc)*

- [Development Quick Start](#development-quick-start)
  - [Prerequisites](#prerequisites)
  - [Create `kind` cluster](#create-kind-cluster)
  - [Install `cert-manager`](#install-cert-manager)
  - [Clone CAPI and Deploy Dev release](#clone-capi-and-deploy-dev-release)
  - [Clone CAPN](#clone-capn)
  - [Create Docker Images, Manifests and Load Images](#create-docker-images-manifests-and-load-images)
  - [Deploy CAPN](#deploy-capn)
  - [Apply Sample Tenant Cluster](#apply-sample-tenant-cluster)
  - [Get `KUBECONFIG`](#get-kubeconfig)
  - [Port Forward](#port-forward)
  - [Connect to Cluster](#connect-to-cluster)
  - [Connect to the Cluster! :tada:](#connect-to-the-cluster-tada)
  - [Clean Up](#clean-up)

<!-- END doctoc generated TOC please keep comment here to allow auto update -->
## Development Quick Start

This tutorial shows how to create a nested controlplane from source code for development. CAPN should work with any standard Kubernetes cluster out of the box, but for demo purposes this tutorial uses a `kind` cluster as both the management cluster and the nested workload cluster.

### Prerequisites

Please install the latest versions of [kind](https://kind.sigs.k8s.io/docs/user/quick-start/#installation) and [kubectl](https://kubernetes.io/docs/tasks/tools/).

### Create `kind` cluster

```console
kind create cluster --name=capn
```
### Install `cert-manager`

Cert Manager is a soft dependency of the Cluster API components; it enables the mutating and validating webhooks to be deployed automatically. For more detailed instructions, see [Cert Manager Installation](https://cert-manager.io/docs/installation/kubernetes/#installing-with-regular-manifests).

```console
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.3.1/cert-manager.yaml
```

After cert-manager is installed, you will see a new namespace `cert-manager`, with the cert-manager pods running in it.

```console
# kubectl get ns
NAME                 STATUS   AGE
cert-manager         Active   27s
default              Active   71s
kube-node-lease      Active   73s
kube-public          Active   73s
kube-system          Active   73s
local-path-storage   Active   68s
```

```console
# kubectl get po -n cert-manager
NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-7dd5854bb4-8jp4b              1/1     Running   0          32s
cert-manager-cainjector-64c949654c-bhdzz   1/1     Running   0          32s
cert-manager-webhook-6b57b9b886-5cbnp      1/1     Running   0          32s
```
### Clone CAPI and Deploy Dev release

As a Cluster API (CAPI) provider, CAPN requires the core CAPI components to be set up. We need to deploy the unreleased version of CAPI for `v1alpha4` API support.

```console
git clone git@github.com:kubernetes-sigs/cluster-api.git
cd cluster-api
make release-manifests
# change feature flags on core
sed -i'' -e 's@- --feature-gates=.*@- --feature-gates=MachinePool=false,ClusterResourceSet=true@' out/core-components.yaml
kubectl apply -f out/core-components.yaml
cd ..
```

This deploys the Cluster API core controller under the namespace `capi-system`.

```console
# kubectl get ns
NAME                 STATUS   AGE
capi-system          Active   52s
cert-manager         Active   2m47s
default              Active   3m31s
kube-node-lease      Active   3m33s
kube-public          Active   3m33s
kube-system          Active   3m33s
local-path-storage   Active   3m28s
```

```console
# kubectl get po -n capi-system
NAME                                       READY   STATUS    RESTARTS   AGE
capi-controller-manager-5b74fcc774-wpxn7   1/1     Running   0          64s
```
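The `sed` command in the deploy step above rewrites the controller's `--feature-gates` flag in place before the manifest is applied. A minimal, self-contained demo of the same substitution on a one-line sample (the `/tmp` file path is hypothetical, standing in for `out/core-components.yaml`):

```shell
# Hypothetical one-line sample standing in for the real manifest:
printf '        - --feature-gates=MachinePool=true,ClusterResourceSet=false\n' > /tmp/core-components-sample.yaml
# Same substitution as the quick start: replace whatever gates are set with the ones CAPN needs.
sed -i'' -e 's@- --feature-gates=.*@- --feature-gates=MachinePool=false,ClusterResourceSet=true@' /tmp/core-components-sample.yaml
cat /tmp/core-components-sample.yaml
```

Because the pattern starts matching at `- --feature-gates=`, the leading indentation of the manifest line is preserved while the gate list is replaced.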
### Clone CAPN

```console
git clone https://github.com/kubernetes-sigs/cluster-api-provider-nested
cd cluster-api-provider-nested
```
### Create Docker Images, Manifests and Load Images

```console
PULL_POLICY=Never TAG=dev make docker-build release-manifests
kind load docker-image gcr.io/cluster-api-nested-controller-amd64:dev --name=capn
kind load docker-image gcr.io/nested-controlplane-controller-amd64:dev --name=capn
```
### Deploy CAPN

Next, we will deploy the CAPN CRDs and controllers.

```console
kubectl apply -f out/cluster-api-provider-nested-components.yaml
```

This deploys two controllers:

- the Cluster API Nested controller, under the namespace `capn-system`;
- the Cluster API Nested Control Plane controller, under the namespace `capn-nested-control-plane-system`.

```console
# kubectl get ns
NAME                               STATUS   AGE
capi-system                        Active   17m
capn-nested-control-plane-system   Active   5s
capn-system                        Active   5s
cert-manager                       Active   19m
default                            Active   19m
kube-node-lease                    Active   19m
kube-public                        Active   19m
kube-system                        Active   19m
local-path-storage                 Active   19m
```

```console
# kubectl get po -n capn-nested-control-plane-system
NAME                                                           READY   STATUS    RESTARTS   AGE
capn-nested-control-plane-controller-manager-8865cdc4f-787h5   2/2     Running   0          36s
```

```console
# kubectl get po -n capn-system
NAME                                       READY   STATUS    RESTARTS   AGE
capn-controller-manager-6fb7bdd57d-7v77s   2/2     Running   0          50s
```
### Apply Sample Tenant Cluster

```console
kubectl apply -f config/samples/
```

After the cluster is created, you will see it reach the `Provisioned` phase, and all the pods for the tenant cluster, including the apiserver, controller manager, and etcd, will be running.

```console
# kubectl get cluster
NAME             PHASE
cluster-sample   Provisioned
```

```console
# kubectl get pods
NAME                                  READY   STATUS    RESTARTS   AGE
cluster-sample-apiserver-0            1/1     Running   0          160m
cluster-sample-controller-manager-0   1/1     Running   1          160m
cluster-sample-etcd-0                 1/1     Running   0          6h4m
```
If your cluster stays in the `Provisioning` phase and the tenant cluster components (apiserver, controller manager, and etcd) keep crashing because of the Docker Hub image pull rate limit, it will look like this:

```console
# kubectl get cluster
NAME             PHASE
cluster-sample   Provisioning
```

```console
# kubectl get po
NAME                                  READY   STATUS             RESTARTS   AGE
cluster-sample-apiserver-0            0/1     ImagePullBackOff   0          6m39s
cluster-sample-controller-manager-0   0/1     ImagePullBackOff   0          6m56s
cluster-sample-etcd-0                 0/1     ImagePullBackOff   0          6m45s
```

```console
# kubectl describe pod cluster-sample-apiserver-0
Name:         cluster-sample-apiserver-0
Namespace:    default
Priority:     0
Node:         capn-control-plane/172.18.0.2
Start Time:   Mon, 19 Jul 2021 23:53:45 -0700
Labels:       component-name=nestedapiserver-sample
              controller-revision-hash=cluster-sample-apiserver-57bbbd9b49
              statefulset.kubernetes.io/pod-name=cluster-sample-apiserver-0
...
Events:
  Type     Reason     Age                    From                          Message
  ----     ------     ----                   ----                          -------
  Normal   Scheduled  6m47s                  default-scheduler             Successfully assigned default/cluster-sample-apiserver-0 to capn-control-plane
  Normal   Pulling    5m8s (x4 over 6m47s)   kubelet, capn-control-plane   Pulling image "virtualcluster/apiserver-v1.16.2"
  Warning  Failed     5m5s (x4 over 6m44s)   kubelet, capn-control-plane   Failed to pull image "virtualcluster/apiserver-v1.16.2": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/virtualcluster/apiserver-v1.16.2:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/virtualcluster/apiserver-v1.16.2/manifests/sha256:81fc8bb510b07535525413b725aed05765b56961c1f4ed28b92ba30acd65f6fb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     5m5s (x4 over 6m44s)   kubelet, capn-control-plane   Error: ErrImagePull
  Warning  Failed     4m53s (x6 over 6m44s)  kubelet, capn-control-plane   Error: ImagePullBackOff
  Normal   BackOff    106s (x19 over 6m44s)  kubelet, capn-control-plane   Back-off pulling image "virtualcluster/apiserver-v1.16.2"
```

Work around it by preloading the images into the `kind` cluster:

```console
kind load docker-image docker.io/virtualcluster/apiserver-v1.16.2:latest --name=capn
kind load docker-image docker.io/virtualcluster/controller-manager-v1.16.2:latest --name=capn
kind load docker-image docker.io/virtualcluster/etcd-v3.4.0:latest --name=capn
```

Then get the StatefulSets for the tenant cluster and update their `imagePullPolicy` to `Never`.

```console
# kubectl get sts
NAME                                READY   AGE
cluster-sample-apiserver            0/1     15m
cluster-sample-controller-manager   0/1     15m
cluster-sample-etcd                 0/1     15m
```

Delete the pods of the above StatefulSets so they are recreated with the preloaded images:

```console
# kubectl delete po cluster-sample-apiserver-0 cluster-sample-controller-manager-0 cluster-sample-etcd-0 --force --grace-period=0
```

After these steps finish, the cluster will be provisioned and all the pods for the tenant cluster, including the apiserver, controller manager, and etcd, will be running.
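The preload-and-patch workaround can also be scripted. The sketch below only prints the commands (drop the `echo` prefixes to run them against your cluster); the `kubectl patch` invocation is one assumed way to set `imagePullPolicy: Never`, and it assumes the relevant image is on the first container of each pod template, which may not hold for your StatefulSets:

```shell
# Image list taken from the pull-failure events above.
images='
docker.io/virtualcluster/apiserver-v1.16.2:latest
docker.io/virtualcluster/controller-manager-v1.16.2:latest
docker.io/virtualcluster/etcd-v3.4.0:latest
'

for img in $images; do
  # Dry run: remove `echo` to actually load the image into the kind cluster.
  echo kind load docker-image "$img" --name=capn
done

for sts in cluster-sample-apiserver cluster-sample-controller-manager cluster-sample-etcd; do
  # Dry run: remove `echo` to patch imagePullPolicy on the first container.
  echo kubectl patch sts "$sts" --type=json \
    -p '[{"op":"replace","path":"/spec/template/spec/containers/0/imagePullPolicy","value":"Never"}]'
done
```

After patching, delete the pods as shown above so the StatefulSets recreate them with the preloaded images.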
### Get `KUBECONFIG`

We will use the `clusterctl` command-line tool to generate the `KUBECONFIG`, which will be used to access the nested controlplane later.

```console
cd cluster-api
make clusterctl
./bin/clusterctl get kubeconfig cluster-sample > ../kubeconfig
cd ..
```
### Port Forward

To access the nested controlplane, in a separate shell, you will need to `port-forward` the apiserver service.

```console
kubectl port-forward svc/cluster-sample-apiserver 6443:6443
```
### Connect to Cluster

To use the `KUBECONFIG` created by `clusterctl` without modification, we first need to set up a host record for the apiserver service name. To do this, add the following line to `/etc/hosts`:

```
127.0.0.1 cluster-sample-apiserver
```
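If you script this step, the append can be made idempotent so repeated runs don't duplicate the record. A small sketch, demonstrated on a temporary file (point `HOSTS_FILE` at `/etc/hosts`, typically with `sudo`, for real use):

```shell
# Demo file standing in for /etc/hosts (hypothetical path).
HOSTS_FILE=/tmp/hosts.demo
printf '127.0.0.1 localhost\n' > "$HOSTS_FILE"

ENTRY='127.0.0.1 cluster-sample-apiserver'
# Append only if the record is not already present; re-running is a no-op.
grep -qF "$ENTRY" "$HOSTS_FILE" || echo "$ENTRY" >> "$HOSTS_FILE"
grep -qF "$ENTRY" "$HOSTS_FILE" || echo "$ENTRY" >> "$HOSTS_FILE"
```

The second `grep || echo` line simulates a re-run: the entry is found, so nothing is appended again.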
### Connect to the Cluster! :tada:

```shell
kubectl --kubeconfig kubeconfig get all -A
```
### Clean Up

```shell
kind delete cluster --name=capn
```
