docs: simplify running examples in README #422

Merged 2 commits on Mar 7, 2024.
69 changes: 22 additions & 47 deletions README.md
@@ -37,43 +37,18 @@ You can just update the image in the webhook Deployment on an existing KIND cluster
 make KIND_CLUSTER_NAME=<> dev.update-webhook-image-on-kind
 ```
 
-If creating an AWS cluster using the example files, you will also need to create a secret with your AWS credentials:
+Generate a cluster definition from the file specified in the `--from` flag
+and apply the generated resource to actually create the cluster in the API.
+For example, the following command will create a Docker cluster with Cilium CNI applied via the Helm addon provider:
 
 ```shell
-kubectl apply --server-side -f - <<EOF
-apiVersion: v1
-kind: Secret
-metadata:
-  name: "aws-quick-start-creds"
-  namespace: capa-system
-stringData:
-  AccessKeyID: ${AWS_ACCESS_KEY_ID}
-  SecretAccessKey: ${AWS_SECRET_ACCESS_KEY}
-  SessionToken: ${AWS_SESSION_TOKEN}
-EOF
+export CLUSTER_NAME=docker-cluster-cilium-helm-addon
+export CLUSTER_FILE=examples/capi-quick-start/docker-cluster-cilium-helm-addon.yaml
 ```
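
The two exported variables drive every command that follows, so the walkthrough can be retargeted by pointing them at a different generated example. A hedged sketch (the calico file name is an assumption; check `examples/capi-quick-start/` for what `hack/examples/sync.sh` actually emits):

```shell
# Hypothetical alternative target; verify the file exists before using it.
export CLUSTER_NAME=docker-cluster-calico-helm-addon
export CLUSTER_FILE=examples/capi-quick-start/docker-cluster-calico-helm-addon.yaml
```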

-If you are using an `AWS_PROFILE` to log in use the following:
-
-```shell
-kubectl apply --server-side -f - <<EOF
-apiVersion: v1
-kind: Secret
-metadata:
-  name: "aws-quick-start-creds"
-  namespace: capa-system
-stringData:
-  AccessKeyID: $(aws configure get aws_access_key_id)
-  SecretAccessKey: $(aws configure get aws_secret_access_key)
-  SessionToken: $(aws configure get aws_session_token)
-EOF
-```
-
 To create an example cluster:
 
 ```shell
-clusterctl generate cluster docker-quick-start-helm-addon-cilium \
-  --from examples/capi-quick-start/docker-cluster-cilium-helm-addon.yaml \
+clusterctl generate cluster ${CLUSTER_NAME} \
+  --from ${CLUSTER_FILE} \
   --kubernetes-version v1.29.1 \
   --worker-machine-count 1 | \
   kubectl apply --server-side -f -
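
Because `clusterctl generate cluster` only writes manifests to stdout, the pipeline above can be split to review the output before anything reaches the API server. A minimal sketch using the same flags:

```shell
# Render to a file, validate server-side without persisting, then apply.
clusterctl generate cluster ${CLUSTER_NAME} \
  --from ${CLUSTER_FILE} \
  --kubernetes-version v1.29.1 \
  --worker-machine-count 1 >"${CLUSTER_NAME}.yaml"
kubectl apply --server-side --dry-run=server -f "${CLUSTER_NAME}.yaml"
kubectl apply --server-side -f "${CLUSTER_NAME}.yaml"
```
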
@@ -82,36 +57,36 @@ clusterctl generate cluster docker-quick-start-helm-addon-cilium \
 
 Wait until control plane is ready:
 
 ```shell
-kubectl wait clusters/docker-quick-start-helm-addon-cilium --for=condition=ControlPlaneInitialized --timeout=5m
+kubectl wait clusters/${CLUSTER_NAME} --for=condition=ControlPlaneInitialized --timeout=5m
 ```
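
If the wait times out, the Cluster API objects usually explain why. A sketch for inspecting progress (assuming the cluster was created in the default namespace, as above):

```shell
# Summarize readiness conditions for the cluster and its machines.
clusterctl describe cluster ${CLUSTER_NAME}
kubectl get machines
```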

 To get the kubeconfig for the new cluster, run:
 
 ```shell
-clusterctl get kubeconfig docker-quick-start-helm-addon-cilium > docker-kubeconfig
+clusterctl get kubeconfig ${CLUSTER_NAME} > ${CLUSTER_NAME}.conf
 ```
 
 If you are not on Linux, you will also need to fix the generated kubeconfig's `server`, run:
 
 ```shell
-kubectl config set-cluster docker-quick-start-helm-addon-cilium \
-  --kubeconfig docker-kubeconfig \
-  --server=https://$(docker container port docker-quick-start-helm-addon-cilium-lb 6443/tcp)
+kubectl config set-cluster ${CLUSTER_NAME} \
+  --kubeconfig ${CLUSTER_NAME}.conf \
+  --server=https://$(docker container port ${CLUSTER_NAME}-lb 6443/tcp)
 ```
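
On macOS and Windows the load balancer container's IP is typically not routable from the host, which is why the `server` field has to point at the published port instead. A quick sanity check that the rewritten kubeconfig works:

```shell
# The workload cluster's API server should answer on the rewritten address.
kubectl --kubeconfig ${CLUSTER_NAME}.conf cluster-info
```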

 Wait until all nodes are ready (this indicates that CNI has been deployed successfully):
 
 ```shell
-kubectl --kubeconfig docker-kubeconfig wait nodes --all --for=condition=Ready --timeout=5m
+kubectl --kubeconfig ${CLUSTER_NAME}.conf wait nodes --all --for=condition=Ready --timeout=5m
 ```
 
 Show that Cilium is running successfully on the workload cluster:
 
 ```shell
-kubectl --kubeconfig docker-kubeconfig get daemonsets -n kube-system cilium
+kubectl --kubeconfig ${CLUSTER_NAME}.conf get daemonsets -n kube-system cilium
 ```
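
The DaemonSet listing only shows desired versus ready counts; to block until the rollout has completed on every node, one option is:

```shell
# Wait for the cilium DaemonSet to finish rolling out.
kubectl --kubeconfig ${CLUSTER_NAME}.conf -n kube-system \
  rollout status daemonset/cilium --timeout=5m
```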

-Deploy kube-vip to provide service load-balancer:
+Deploy kube-vip to provide service load-balancer functionality for Docker clusters:
 
 ```shell
 helm repo add --force-update kube-vip https://kube-vip.github.io/helm-charts
@@ -122,29 +97,29 @@ kubectl create configmap \
   --namespace kube-system kubevip \
   --from-literal "range-global=${kind_subnet_prefix}100.0-${kind_subnet_prefix}100.20" \
   --dry-run=client -oyaml |
-  kubectl --kubeconfig docker-kubeconfig apply --server-side -n kube-system -f -
+  kubectl --kubeconfig ${CLUSTER_NAME}.conf apply --server-side -n kube-system -f -
 
 helm upgrade kube-vip-cloud-provider kube-vip/kube-vip-cloud-provider --version 0.2.2 \
   --install \
   --wait --wait-for-jobs \
   --namespace kube-system \
-  --kubeconfig docker-kubeconfig \
+  --kubeconfig ${CLUSTER_NAME}.conf \
   --set-string=image.tag=v0.0.6
 
 helm upgrade kube-vip kube-vip/kube-vip --version 0.4.2 \
   --install \
   --wait --wait-for-jobs \
   --namespace kube-system \
-  --kubeconfig docker-kubeconfig \
+  --kubeconfig ${CLUSTER_NAME}.conf \
   --set-string=image.tag=v0.6.0
 ```
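
The `range-global` key in the `kubevip` ConfigMap defines the address pool kube-vip-cloud-provider allocates from. A hedged smoke test (the `lb-test` Deployment name and nginx image are made up for illustration):

```shell
# A throwaway LoadBalancer Service should receive an EXTERNAL-IP from the pool.
kubectl --kubeconfig ${CLUSTER_NAME}.conf create deployment lb-test --image=nginx
kubectl --kubeconfig ${CLUSTER_NAME}.conf expose deployment lb-test --port=80 --type=LoadBalancer
kubectl --kubeconfig ${CLUSTER_NAME}.conf get service lb-test --watch
# Ctrl-C the watch, then clean up the experiment.
kubectl --kubeconfig ${CLUSTER_NAME}.conf delete service,deployment lb-test
```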

 Deploy traefik as a LB service:
 
 ```shell
-helm --kubeconfig docker-kubeconfig repo add traefik https://helm.traefik.io/traefik
+helm --kubeconfig ${CLUSTER_NAME}.conf repo add traefik https://helm.traefik.io/traefik
 helm repo update &>/dev/null
-helm --kubeconfig docker-kubeconfig upgrade --install traefik traefik/traefik \
+helm --kubeconfig ${CLUSTER_NAME}.conf upgrade --install traefik traefik/traefik \
   --version v10.9.1 \
   --wait --wait-for-jobs \
   --set ports.web.hostPort=80 \
@@ -155,13 +130,13 @@ helm --kubeconfig docker-kubeconfig upgrade --install traefik traefik/traefik \
 Watch for traefik LB service to get an external address:
 
 ```shell
-watch -n 0.5 kubectl --kubeconfig docker-kubeconfig get service/traefik
+watch -n 0.5 kubectl --kubeconfig ${CLUSTER_NAME}.conf get service/traefik
 ```
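
Once `EXTERNAL-IP` is populated, the address can be extracted with a JSONPath query. A sketch (a fresh traefik install may answer 404 until routes are configured):

```shell
# Probe the load-balanced endpoint.
EXTERNAL_IP=$(kubectl --kubeconfig ${CLUSTER_NAME}.conf get service/traefik \
  -ojsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -sS "http://${EXTERNAL_IP}/"
```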

 To delete the workload cluster, run:
 
 ```shell
-kubectl delete cluster docker-quick-start-helm-addon-cilium
+kubectl delete cluster ${CLUSTER_NAME}
 ```
 
 Notice that the traefik service is deleted before the cluster is actually finally deleted.
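
One way to observe that ordering, a sketch to run while the delete is in progress (the first watch will drop once the workload cluster's API server goes away):

```shell
# Run each watch in its own terminal: the traefik Service disappears first,
# then the Cluster object itself.
kubectl --kubeconfig ${CLUSTER_NAME}.conf get service/traefik --watch
kubectl get cluster ${CLUSTER_NAME} --watch
```
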
11 changes: 0 additions & 11 deletions examples/capi-quick-start/aws-cluster-identity.yaml

This file was deleted.

13 changes: 0 additions & 13 deletions hack/examples/bases/aws/AWSClusterStaticIdentity.yaml

This file was deleted.

1 change: 0 additions & 1 deletion hack/examples/bases/aws/kustomization.yaml.tmpl
@@ -9,7 +9,6 @@ resources:
 - ./calico/helm-addon
 - ./cilium/crs
 - ./cilium/helm-addon
-- AWSClusterStaticIdentity.yaml
 
 namePrefix: aws-

6 changes: 0 additions & 6 deletions hack/examples/sync.sh
@@ -19,7 +19,6 @@ mkdir -p "${EXAMPLE_CLUSTERCLASSES_DIR}"
 readonly EXAMPLE_CLUSTERS_DIR=examples/capi-quick-start
 mkdir -p "${EXAMPLE_CLUSTERS_DIR}"
 
-mkdir -p examples/capi-quick-start
 # Sync ClusterClasses (including Templates) and Clusters to separate files.
 kustomize build ./hack/examples |
   tee \
@@ -95,11 +94,6 @@ kustomize build ./hack/examples |
       and .spec.topology.variables[0].value.addons.cni.strategy == "HelmAddon"
     )' >"${EXAMPLE_CLUSTERS_DIR}/aws-cluster-cilium-helm-addon.yaml"
   ) \
-  >(
-    gojq --yaml-input --yaml-output 'select(.metadata.labels["cluster.x-k8s.io/provider"] == "aws"
-      and .kind == "AWSClusterStaticIdentity"
-    )' >"${EXAMPLE_CLUSTERS_DIR}/aws-cluster-identity.yaml"
-  ) \
   >/dev/null

 #shellcheck disable=SC2016
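
The deleted block above is an instance of the script's fan-out pattern: one `kustomize build` render tee'd into several `gojq` filters through process substitution, with stdout discarded. A minimal self-contained sketch of the same pattern (the `/tmp` output paths are made up):

```shell
# One render, many filtered outputs; nothing is written to stdout itself.
kustomize build ./hack/examples |
  tee \
    >(gojq --yaml-input --yaml-output 'select(.kind == "Cluster")' >/tmp/clusters.yaml) \
    >(gojq --yaml-input --yaml-output 'select(.kind == "ClusterClass")' >/tmp/clusterclasses.yaml) \
    >/dev/null
```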