@@ -37,43 +37,18 @@ You can just update the image in the webhook Deployment on an existing KIND cluster:
 make KIND_CLUSTER_NAME=<> dev.update-webhook-image-on-kind
 ```
 
-If creating an AWS cluster using the example files, you will also need to create a secret with your AWS credentials:
+Generate a cluster definition from the file specified in the `--from` flag
+and apply the generated resource to actually create the cluster in the API.
+For example, the following command will create a Docker cluster with Cilium CNI applied via the Helm addon provider:
 
 ```shell
-kubectl apply --server-side -f - <<EOF
-apiVersion: v1
-kind: Secret
-metadata:
-  name: "aws-quick-start-creds"
-  namespace: capa-system
-stringData:
-  AccessKeyID: ${AWS_ACCESS_KEY_ID}
-  SecretAccessKey: ${AWS_SECRET_ACCESS_KEY}
-  SessionToken: ${AWS_SESSION_TOKEN}
-EOF
+export CLUSTER_NAME=docker-cluster-cilium-helm-addon
+export CLUSTER_FILE=examples/capi-quick-start/docker-cluster-cilium-helm-addon.yaml
 ```
 
-If you are using an `AWS_PROFILE` to log in use the following:
-
-```shell
-kubectl apply --server-side -f - <<EOF
-apiVersion: v1
-kind: Secret
-metadata:
-  name: "aws-quick-start-creds"
-  namespace: capa-system
-stringData:
-  AccessKeyID: $(aws configure get aws_access_key_id)
-  SecretAccessKey: $(aws configure get aws_secret_access_key)
-  SessionToken: $(aws configure get aws_session_token)
-EOF
-```
-
-To create an example cluster:
-
 ```shell
-clusterctl generate cluster docker-quick-start-helm-addon-cilium \
-  --from examples/capi-quick-start/docker-cluster-cilium-helm-addon.yaml \
+clusterctl generate cluster ${CLUSTER_NAME} \
+  --from ${CLUSTER_FILE} \
   --kubernetes-version v1.29.1 \
   --worker-machine-count 1 | \
   kubectl apply --server-side -f -
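The two exported variables are ordinary shell parameters, so the generate step can be previewed before anything is applied. A minimal sketch, using the example values from above (no management cluster is contacted here):

```shell
# Values copied from the example above.
export CLUSTER_NAME=docker-cluster-cilium-helm-addon
export CLUSTER_FILE=examples/capi-quick-start/docker-cluster-cilium-helm-addon.yaml

# Print the clusterctl invocation that the snippet above would run,
# without piping it into kubectl.
echo "clusterctl generate cluster ${CLUSTER_NAME} --from ${CLUSTER_FILE}"
```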
@@ -82,36 +57,36 @@ clusterctl generate cluster docker-quick-start-helm-addon-cilium \
 Wait until control plane is ready:
 
 ```shell
-kubectl wait clusters/docker-quick-start-helm-addon-cilium --for=condition=ControlPlaneInitialized --timeout=5m
+kubectl wait clusters/${CLUSTER_NAME} --for=condition=ControlPlaneInitialized --timeout=5m
 ```
 
 To get the kubeconfig for the new cluster, run:
 
 ```shell
-clusterctl get kubeconfig docker-quick-start-helm-addon-cilium > docker-kubeconfig
+clusterctl get kubeconfig ${CLUSTER_NAME} > ${CLUSTER_NAME}.conf
 ```
 
 If you are not on Linux, you will also need to fix the generated kubeconfig's `server` address; run:
 
 ```shell
-kubectl config set-cluster docker-quick-start-helm-addon-cilium \
-  --kubeconfig docker-kubeconfig \
-  --server=https://$(docker container port docker-quick-start-helm-addon-cilium-lb 6443/tcp)
+kubectl config set-cluster ${CLUSTER_NAME} \
+  --kubeconfig ${CLUSTER_NAME}.conf \
+  --server=https://$(docker container port ${CLUSTER_NAME}-lb 6443/tcp)
 ```
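The `--server` value above is simply the load balancer container's published host port wrapped in a URL. A sketch with a stand-in value for the `docker container port` output (the port number here is hypothetical; a real run prints whatever port Docker dynamically published):

```shell
# Stand-in for: docker container port ${CLUSTER_NAME}-lb 6443/tcp
# (hypothetical output; no container is actually queried here)
lb_endpoint="0.0.0.0:43091"

# The kubeconfig server field then becomes:
server="https://${lb_endpoint}"
echo "${server}"
```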
 
 Wait until all nodes are ready (this indicates that CNI has been deployed successfully):
 
 ```shell
-kubectl --kubeconfig docker-kubeconfig wait nodes --all --for=condition=Ready --timeout=5m
+kubectl --kubeconfig ${CLUSTER_NAME}.conf wait nodes --all --for=condition=Ready --timeout=5m
 ```
 
 Show that Cilium is running successfully on the workload cluster:
 
 ```shell
-kubectl --kubeconfig docker-kubeconfig get daemonsets -n kube-system cilium
+kubectl --kubeconfig ${CLUSTER_NAME}.conf get daemonsets -n kube-system cilium
 ```
 
-Deploy kube-vip to provide service load-balancer:
+Deploy kube-vip to provide service load-balancer functionality for Docker clusters:
 
 ```shell
 helm repo add --force-update kube-vip https://kube-vip.github.io/helm-charts
@@ -122,29 +97,29 @@ kubectl create configmap \
   --namespace kube-system kubevip \
   --from-literal "range-global=${kind_subnet_prefix}100.0-${kind_subnet_prefix}100.20" \
   --dry-run=client -oyaml |
-  kubectl --kubeconfig docker-kubeconfig apply --server-side -n kube-system -f -
+  kubectl --kubeconfig ${CLUSTER_NAME}.conf apply --server-side -n kube-system -f -
 
 helm upgrade kube-vip-cloud-provider kube-vip/kube-vip-cloud-provider --version 0.2.2 \
   --install \
   --wait --wait-for-jobs \
   --namespace kube-system \
-  --kubeconfig docker-kubeconfig \
+  --kubeconfig ${CLUSTER_NAME}.conf \
   --set-string=image.tag=v0.0.6
 
 helm upgrade kube-vip kube-vip/kube-vip --version 0.4.2 \
   --install \
   --wait --wait-for-jobs \
   --namespace kube-system \
-  --kubeconfig docker-kubeconfig \
+  --kubeconfig ${CLUSTER_NAME}.conf \
   --set-string=image.tag=v0.6.0
 ```
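The `range-global` literal in the configmap above is built from `${kind_subnet_prefix}`, which is assumed to already hold the leading octets (with trailing dot) of the KIND docker network's subnet. A sketch with a hypothetical prefix, showing how the expansion produces the IP range handed to kube-vip:

```shell
# Hypothetical prefix; in the dev workflow this would be derived from
# the "kind" docker network's subnet (e.g. 172.18.0.0/16 -> "172.18.").
kind_subnet_prefix="172.18."

range="range-global=${kind_subnet_prefix}100.0-${kind_subnet_prefix}100.20"
echo "${range}"
# → range-global=172.18.100.0-172.18.100.20
```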
 
 Deploy traefik as a LB service:
 
 ```shell
-helm --kubeconfig docker-kubeconfig repo add traefik https://helm.traefik.io/traefik
+helm --kubeconfig ${CLUSTER_NAME}.conf repo add traefik https://helm.traefik.io/traefik
 helm repo update &>/dev/null
-helm --kubeconfig docker-kubeconfig upgrade --install traefik traefik/traefik \
+helm --kubeconfig ${CLUSTER_NAME}.conf upgrade --install traefik traefik/traefik \
   --version v10.9.1 \
   --wait --wait-for-jobs \
   --set ports.web.hostPort=80 \
@@ -155,13 +130,13 @@ helm --kubeconfig docker-kubeconfig upgrade --install traefik traefik/traefik \
 Watch for the traefik LB service to get an external address:
 
 ```shell
-watch -n 0.5 kubectl --kubeconfig docker-kubeconfig get service/traefik
+watch -n 0.5 kubectl --kubeconfig ${CLUSTER_NAME}.conf get service/traefik
 ```
 
 To delete the workload cluster, run:
 
 ```shell
-kubectl delete cluster docker-quick-start-helm-addon-cilium
+kubectl delete cluster ${CLUSTER_NAME}
 ```
 
 Notice that the traefik service is deleted before the cluster itself is finally deleted.