This repository was archived by the owner on Apr 11, 2024. It is now read-only.

Commit d49bd37

docs: simplify running examples in README (nutanix-cloud-native#422)

I was trying to create AWS example clusters and it was pretty cumbersome. Changing the README in an attempt to make it easier to use all example files.

1 parent 6dbe249 · commit d49bd37

File tree

5 files changed: +22 −78 lines changed


README.md

Lines changed: 22 additions & 47 deletions

@@ -37,43 +37,18 @@ You can just update the image in the webhook Deployment on an existing KIND clus
 make KIND_CLUSTER_NAME=<> dev.update-webhook-image-on-kind
 ```
 
-If creating an AWS cluster using the example files, you will also need to create a secret with your AWS credentials:
+Generate a cluster definition from the file specified in the `--from` flag
+and apply the generated resource to actually create the cluster in the API.
+For example, the following command will create a Docker cluster with Cilium CNI applied via the Helm addon provider:
 
 ```shell
-kubectl apply --server-side -f - <<EOF
-apiVersion: v1
-kind: Secret
-metadata:
-  name: "aws-quick-start-creds"
-  namespace: capa-system
-stringData:
-  AccessKeyID: ${AWS_ACCESS_KEY_ID}
-  SecretAccessKey: ${AWS_SECRET_ACCESS_KEY}
-  SessionToken: ${AWS_SESSION_TOKEN}
-EOF
+export CLUSTER_NAME=docker-cluster-cilium-helm-addon
+export CLUSTER_FILE=examples/capi-quick-start/docker-cluster-cilium-helm-addon.yaml
 ```
 
-If you are using an `AWS_PROFILE` to log in use the following:
-
-```shell
-kubectl apply --server-side -f - <<EOF
-apiVersion: v1
-kind: Secret
-metadata:
-  name: "aws-quick-start-creds"
-  namespace: capa-system
-stringData:
-  AccessKeyID: $(aws configure get aws_access_key_id)
-  SecretAccessKey: $(aws configure get aws_secret_access_key)
-  SessionToken: $(aws configure get aws_session_token)
-EOF
-```
-
-To create an example cluster:
-
 ```shell
-clusterctl generate cluster docker-quick-start-helm-addon-cilium \
-  --from examples/capi-quick-start/docker-cluster-cilium-helm-addon.yaml \
+clusterctl generate cluster ${CLUSTER_NAME} \
+  --from ${CLUSTER_FILE} \
   --kubernetes-version v1.29.1 \
   --worker-machine-count 1 | \
   kubectl apply --server-side -f -
@@ -82,36 +57,36 @@ clusterctl generate cluster docker-quick-start-helm-addon-cilium \
 Wait until control plane is ready:
 
 ```shell
-kubectl wait clusters/docker-quick-start-helm-addon-cilium --for=condition=ControlPlaneInitialized --timeout=5m
+kubectl wait clusters/${CLUSTER_NAME} --for=condition=ControlPlaneInitialized --timeout=5m
 ```
 
 To get the kubeconfig for the new cluster, run:
 
 ```shell
-clusterctl get kubeconfig docker-quick-start-helm-addon-cilium > docker-kubeconfig
+clusterctl get kubeconfig ${CLUSTER_NAME} > ${CLUSTER_NAME}.conf
 ```
 
 If you are not on Linux, you will also need to fix the generated kubeconfig's `server`, run:
 
 ```shell
-kubectl config set-cluster docker-quick-start-helm-addon-cilium \
-  --kubeconfig docker-kubeconfig \
-  --server=https://$(docker container port docker-quick-start-helm-addon-cilium-lb 6443/tcp)
+kubectl config set-cluster ${CLUSTER_NAME} \
+  --kubeconfig ${CLUSTER_NAME}.conf \
+  --server=https://$(docker container port ${CLUSTER_NAME}-lb 6443/tcp)
 ```
 
 Wait until all nodes are ready (this indicates that CNI has been deployed successfully):
 
 ```shell
-kubectl --kubeconfig docker-kubeconfig wait nodes --all --for=condition=Ready --timeout=5m
+kubectl --kubeconfig ${CLUSTER_NAME}.conf wait nodes --all --for=condition=Ready --timeout=5m
 ```
 
 Show that Cilium is running successfully on the workload cluster:
 
 ```shell
-kubectl --kubeconfig docker-kubeconfig get daemonsets -n kube-system cilium
+kubectl --kubeconfig ${CLUSTER_NAME}.conf get daemonsets -n kube-system cilium
 ```
 
-Deploy kube-vip to provide service load-balancer:
+Deploy kube-vip to provide service load-balancer functionality for Docker clusters:
 
 ```shell
 helm repo add --force-update kube-vip https://kube-vip.github.io/helm-charts
@@ -122,29 +97,29 @@ kubectl create configmap \
   --namespace kube-system kubevip \
   --from-literal "range-global=${kind_subnet_prefix}100.0-${kind_subnet_prefix}100.20" \
   --dry-run=client -oyaml |
-  kubectl --kubeconfig docker-kubeconfig apply --server-side -n kube-system -f -
+  kubectl --kubeconfig ${CLUSTER_NAME}.conf apply --server-side -n kube-system -f -
 
 helm upgrade kube-vip-cloud-provider kube-vip/kube-vip-cloud-provider --version 0.2.2 \
   --install \
   --wait --wait-for-jobs \
   --namespace kube-system \
-  --kubeconfig docker-kubeconfig \
+  --kubeconfig ${CLUSTER_NAME}.conf \
   --set-string=image.tag=v0.0.6
 
 helm upgrade kube-vip kube-vip/kube-vip --version 0.4.2 \
   --install \
   --wait --wait-for-jobs \
   --namespace kube-system \
-  --kubeconfig docker-kubeconfig \
+  --kubeconfig ${CLUSTER_NAME}.conf \
   --set-string=image.tag=v0.6.0
 ```
 
 Deploy traefik as a LB service:
 
 ```shell
-helm --kubeconfig docker-kubeconfig repo add traefik https://helm.traefik.io/traefik
+helm --kubeconfig ${CLUSTER_NAME}.conf repo add traefik https://helm.traefik.io/traefik
 helm repo update &>/dev/null
-helm --kubeconfig docker-kubeconfig upgrade --install traefik traefik/traefik \
+helm --kubeconfig ${CLUSTER_NAME}.conf upgrade --install traefik traefik/traefik \
   --version v10.9.1 \
   --wait --wait-for-jobs \
   --set ports.web.hostPort=80 \
@@ -155,13 +130,13 @@ helm --kubeconfig docker-kubeconfig upgrade --install traefik traefik/traefik \
 Watch for traefik LB service to get an external address:
 
 ```shell
-watch -n 0.5 kubectl --kubeconfig docker-kubeconfig get service/traefik
+watch -n 0.5 kubectl --kubeconfig ${CLUSTER_NAME}.conf get service/traefik
 ```
 
 To delete the workload cluster, run:
 
 ```shell
-kubectl delete cluster docker-quick-start-helm-addon-cilium
+kubectl delete cluster ${CLUSTER_NAME}
 ```
 
 Notice that the traefik service is deleted before the cluster is actually finally deleted.
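Every derived name in the updated README follows from the single `CLUSTER_NAME` variable. Here is a minimal sketch of how those substitutions expand, using the values from this commit's Docker/Cilium example (nothing is created; this only demonstrates the naming scheme):

```shell
# How the parameterized names in the updated README expand. The values
# mirror this commit's Docker/Cilium example; no cluster is touched here.
CLUSTER_NAME=docker-cluster-cilium-helm-addon
CLUSTER_FILE="examples/capi-quick-start/${CLUSTER_NAME}.yaml"
KUBECONFIG_FILE="${CLUSTER_NAME}.conf"   # replaces the hard-coded docker-kubeconfig
LB_CONTAINER="${CLUSTER_NAME}-lb"        # name of the KIND load-balancer container
echo "${CLUSTER_FILE}"
echo "${KUBECONFIG_FILE}"
echo "${LB_CONTAINER}"
```

Switching to another example cluster then only requires changing `CLUSTER_NAME` and `CLUSTER_FILE`; every later command picks up the new names.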

examples/capi-quick-start/aws-cluster-identity.yaml

Lines changed: 0 additions & 11 deletions
This file was deleted.

hack/examples/bases/aws/AWSClusterStaticIdentity.yaml

Lines changed: 0 additions & 13 deletions
This file was deleted.

hack/examples/bases/aws/kustomization.yaml.tmpl

Lines changed: 0 additions & 1 deletion

@@ -9,7 +9,6 @@ resources:
 - ./calico/helm-addon
 - ./cilium/crs
 - ./cilium/helm-addon
-- AWSClusterStaticIdentity.yaml
 
 namePrefix: aws-
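For reference, after this one-line deletion the visible tail of the `resources` list in the AWS bases kustomization reads as follows (reconstructed from the unchanged context lines of the hunk; entries earlier in the list fall outside the hunk and are omitted):

```yaml
resources:
# (earlier resource entries outside the hunk omitted)
- ./calico/helm-addon
- ./cilium/crs
- ./cilium/helm-addon

namePrefix: aws-
```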

hack/examples/sync.sh

Lines changed: 0 additions & 6 deletions

@@ -19,7 +19,6 @@ mkdir -p "${EXAMPLE_CLUSTERCLASSES_DIR}"
 readonly EXAMPLE_CLUSTERS_DIR=examples/capi-quick-start
 mkdir -p "${EXAMPLE_CLUSTERS_DIR}"
 
-mkdir -p examples/capi-quick-start
 # Sync ClusterClasses (including Templates) and Clusters to separate files.
 kustomize build ./hack/examples |
   tee \
@@ -95,11 +94,6 @@ kustomize build ./hack/examples |
     and .spec.topology.variables[0].value.addons.cni.strategy == "HelmAddon"
     )' >"${EXAMPLE_CLUSTERS_DIR}/aws-cluster-cilium-helm-addon.yaml"
   ) \
-  >(
-    gojq --yaml-input --yaml-output 'select(.metadata.labels["cluster.x-k8s.io/provider"] == "aws"
-    and .kind == "AWSClusterStaticIdentity"
-    )' >"${EXAMPLE_CLUSTERS_DIR}/aws-cluster-identity.yaml"
-  ) \
   >/dev/null
 
 #shellcheck disable=SC2016
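The deleted process-substitution branch was one of several `tee` outputs, each running a `gojq` filter over the kustomize stream. Its selection predicate, provider label `aws` and kind `AWSClusterStaticIdentity`, can be sketched in plain shell over made-up (provider, kind) pairs, since `gojq` itself may not be installed:

```shell
# Approximates only the predicate of the removed gojq filter: keep
# documents where provider == "aws" AND kind == "AWSClusterStaticIdentity".
# The CSV stream below is hypothetical sample data, not repo content.
matches=0
while IFS=, read -r provider kind; do
  if [ "${provider}" = "aws" ] && [ "${kind}" = "AWSClusterStaticIdentity" ]; then
    matches=$((matches + 1))
  fi
done <<'EOF'
aws,Cluster
aws,AWSClusterStaticIdentity
docker,AWSClusterStaticIdentity
EOF
echo "${matches}"
```

With the branch removed, sync.sh simply no longer writes an `aws-cluster-identity.yaml` example, consistent with the deletion of the identity manifests elsewhere in this commit.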
