Commit 35cc3cb

Update README for latest snapshot API and update snapshotter version

Signed-off-by: Grant Griffiths <[email protected]>
1 parent: b44e56a

8 files changed: +835 −397 lines

README.md

Lines changed: 10 additions & 395 deletions
Large diffs are not rendered by default.

deploy/kubernetes-1.17/hostpath/csi-hostpath-snapshotter.yaml

Lines changed: 1 addition & 1 deletion
```diff
@@ -40,7 +40,7 @@ spec:
       serviceAccount: csi-snapshotter
       containers:
         - name: csi-snapshotter
-          image: quay.io/k8scsi/csi-snapshotter:v2.0.0
+          image: quay.io/k8scsi/csi-snapshotter:v2.0.1
           args:
             - -v=5
             - --csi-address=/csi/csi.sock
```

deploy/util/deploy-hostpath.sh

Lines changed: 0 additions & 1 deletion
```diff
@@ -115,7 +115,6 @@ SNAPSHOTTER_RBAC_RELATIVE_PATH="rbac.yaml"
 if version_gt $(rbac_version "${BASE_DIR}/hostpath/csi-hostpath-snapshotter.yaml" csi-snapshotter "${UPDATE_RBAC_RULES}") "v1.255.255"; then
     SNAPSHOTTER_RBAC_RELATIVE_PATH="csi-snapshotter/rbac-csi-snapshotter.yaml"
 fi
-echo "SNAPSHOTTER_RBAC_RELATIVE_PATH $SNAPSHOTTER_RBAC_RELATIVE_PATH"

 CSI_PROVISIONER_RBAC_YAML="https://raw.githubusercontent.com/kubernetes-csi/external-provisioner/$(rbac_version "${BASE_DIR}/hostpath/csi-hostpath-provisioner.yaml" csi-provisioner false)/deploy/kubernetes/rbac.yaml"
 : ${CSI_PROVISIONER_RBAC:=https://raw.githubusercontent.com/kubernetes-csi/external-provisioner/$(rbac_version "${BASE_DIR}/hostpath/csi-hostpath-provisioner.yaml" csi-provisioner "${UPDATE_RBAC_RULES}")/deploy/kubernetes/rbac.yaml}
```
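The guard in this hunk relies on a `version_gt` helper defined elsewhere in the script. As a rough, hypothetical sketch (not the script's actual code), such a comparison can be built on GNU `sort -V`:

```shell
# Hypothetical sketch of a version_gt helper, assuming GNU sort with
# version ordering (-V): succeeds when $1 is strictly greater than $2.
version_gt() {
  test "$(printf '%s\n' "$1" "$2" | sort -V | head -n 1)" != "$1"
}

version_gt "v2.0.1" "v1.255.255" && echo "v2.0.1 is newer"   # prints: v2.0.1 is newer
```

With this ordering, any `v2.x` (or later) snapshotter RBAC version clears the `v1.255.255` sentinel and selects the new `csi-snapshotter/rbac-csi-snapshotter.yaml` path.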

docs/deploy-1.17-and-later.md

Lines changed: 271 additions & 0 deletions
## Cluster setup
For Kubernetes 1.17+, some initial cluster setup is required to install the following:
- CSI VolumeSnapshot beta CRDs (custom resource definitions)
- Snapshot controller

### Check if cluster components are already installed
Run the following commands to ensure the VolumeSnapshot CRDs have been installed:
```
$ kubectl get volumesnapshotclasses.snapshot.storage.k8s.io
$ kubectl get volumesnapshots.snapshot.storage.k8s.io
$ kubectl get volumesnapshotcontents.snapshot.storage.k8s.io
```
If any of these commands returns an error message like the following, you must install the corresponding CRD:
```
error: the server doesn't have a resource type "volumesnapshotclasses"
```
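The three checks can also be scripted. This is a convenience sketch (not part of the original docs) that prints one status line per CRD:

```shell
# Sketch: report which of the three snapshot CRDs are installed.
# Assumes kubectl is configured against the target cluster; any lookup
# failure (including no reachable cluster at all) is reported as "missing".
check_snapshot_crds() {
  for crd in volumesnapshotclasses volumesnapshots volumesnapshotcontents; do
    if kubectl get crd "${crd}.snapshot.storage.k8s.io" >/dev/null 2>&1; then
      echo "${crd}.snapshot.storage.k8s.io: installed"
    else
      echo "${crd}.snapshot.storage.k8s.io: missing"
    fi
  done
}

check_snapshot_crds
```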

Next, check whether any pods are running the snapshot-controller image:
```
$ kubectl get pods --all-namespaces -o=jsonpath='{range .items[*]}{"\n"}{range .spec.containers[*]}{.image}{", "}{end}{end}' | grep snapshot-controller
quay.io/k8scsi/snapshot-controller:v2.0.1,
```

If no pods are running the snapshot controller, follow the instructions below to create it.

__Note:__ The above command may not work for clusters running on managed Kubernetes services. In that case, the presence of all VolumeSnapshot CRDs is an indicator that your cluster is ready for hostpath deployment.

### VolumeSnapshot CRDs and snapshot controller installation
Run the following commands to install these components:
```shell
# Change to the latest supported snapshotter version
$ SNAPSHOTTER_VERSION=v2.0.1

# Apply VolumeSnapshot CRDs
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/${SNAPSHOTTER_VERSION}/config/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/${SNAPSHOTTER_VERSION}/config/crd/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/${SNAPSHOTTER_VERSION}/config/crd/snapshot.storage.k8s.io_volumesnapshots.yaml

# Create snapshot controller
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/${SNAPSHOTTER_VERSION}/deploy/kubernetes/snapshot-controller/rbac-snapshot-controller.yaml
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/${SNAPSHOTTER_VERSION}/deploy/kubernetes/snapshot-controller/setup-snapshot-controller.yaml
```

## Deployment
The easiest way to test the Hostpath driver is to run the `deploy-hostpath.sh` script for the Kubernetes version used by the cluster, as shown below for Kubernetes 1.17. This creates the deployment that is maintained specifically for that release of Kubernetes. However, other deployments may also work.

```
# deploy hostpath driver
$ deploy/kubernetes-latest/deploy-hostpath.sh
```

You should see output similar to the following printed on the terminal, showing the application of RBAC rules and the result of deploying the hostpath driver, external provisioner, external attacher, and snapshotter components. Note that the following output is from Kubernetes 1.17:

```shell
applying RBAC rules
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-provisioner/v1.5.0/deploy/kubernetes/rbac.yaml
serviceaccount/csi-provisioner created
clusterrole.rbac.authorization.k8s.io/external-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/csi-provisioner-role created
role.rbac.authorization.k8s.io/external-provisioner-cfg created
rolebinding.rbac.authorization.k8s.io/csi-provisioner-role-cfg created
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-attacher/v2.1.0/deploy/kubernetes/rbac.yaml
serviceaccount/csi-attacher created
clusterrole.rbac.authorization.k8s.io/external-attacher-runner created
clusterrolebinding.rbac.authorization.k8s.io/csi-attacher-role created
role.rbac.authorization.k8s.io/external-attacher-cfg created
rolebinding.rbac.authorization.k8s.io/csi-attacher-role-cfg created
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/v2.0.1/deploy/kubernetes/csi-snapshotter/rbac-csi-snapshotter.yaml
serviceaccount/csi-snapshotter created
clusterrole.rbac.authorization.k8s.io/external-snapshotter-runner created
clusterrolebinding.rbac.authorization.k8s.io/csi-snapshotter-role created
role.rbac.authorization.k8s.io/external-snapshotter-leaderelection created
rolebinding.rbac.authorization.k8s.io/external-snapshotter-leaderelection created
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-resizer/v0.4.0/deploy/kubernetes/rbac.yaml
serviceaccount/csi-resizer created
clusterrole.rbac.authorization.k8s.io/external-resizer-runner created
clusterrolebinding.rbac.authorization.k8s.io/csi-resizer-role created
role.rbac.authorization.k8s.io/external-resizer-cfg created
rolebinding.rbac.authorization.k8s.io/csi-resizer-role-cfg created
deploying hostpath components
   deploy/kubernetes-latest/hostpath/csi-hostpath-attacher.yaml
        using image: quay.io/k8scsi/csi-attacher:v2.1.0
service/csi-hostpath-attacher created
statefulset.apps/csi-hostpath-attacher created
   deploy/kubernetes-latest/hostpath/csi-hostpath-driverinfo.yaml
csidriver.storage.k8s.io/hostpath.csi.k8s.io created
   deploy/kubernetes-latest/hostpath/csi-hostpath-plugin.yaml
        using image: quay.io/k8scsi/csi-node-driver-registrar:v1.2.0
        using image: quay.io/k8scsi/hostpathplugin:v1.3.0
        using image: quay.io/k8scsi/livenessprobe:v1.1.0
service/csi-hostpathplugin created
statefulset.apps/csi-hostpathplugin created
   deploy/kubernetes-latest/hostpath/csi-hostpath-provisioner.yaml
        using image: quay.io/k8scsi/csi-provisioner:v1.5.0
service/csi-hostpath-provisioner created
statefulset.apps/csi-hostpath-provisioner created
   deploy/kubernetes-latest/hostpath/csi-hostpath-resizer.yaml
        using image: quay.io/k8scsi/csi-resizer:v0.4.0
service/csi-hostpath-resizer created
statefulset.apps/csi-hostpath-resizer created
   deploy/kubernetes-latest/hostpath/csi-hostpath-snapshotter.yaml
        using image: quay.io/k8scsi/csi-snapshotter:v2.0.1
service/csi-hostpath-snapshotter created
statefulset.apps/csi-hostpath-snapshotter created
   deploy/kubernetes-latest/hostpath/csi-hostpath-testing.yaml
        using image: alpine/socat:1.0.3
service/hostpath-service created
statefulset.apps/csi-hostpath-socat created
11:37:57 waiting for hostpath deployment to complete, attempt #0
11:38:07 waiting for hostpath deployment to complete, attempt #1
deploying snapshotclass based on snapshotter version
volumesnapshotclass.snapshot.storage.k8s.io/csi-hostpath-snapclass created
```
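The final `deploying snapshotclass` step creates the `csi-hostpath-snapclass` object shown at the end of the output. As a rough sketch (field values assumed from the v1beta1 snapshot API served by the v2.0.1 snapshotter; the script generates the exact manifest), the object looks approximately like this:

```yaml
# Sketch of a VolumeSnapshotClass for the hostpath driver (field values assumed).
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
  name: csi-hostpath-snapclass
driver: hostpath.csi.k8s.io   # must match the CSIDriver object created above
deletionPolicy: Delete        # assumed: delete the backing snapshot with the API object
```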

The [livenessprobe side-container](https://github.com/kubernetes-csi/livenessprobe) provided by the CSI community is deployed with the CSI driver to provide liveness checking of the CSI services.

## Run example application and validate

Next, validate the deployment. First, ensure all expected pods are running properly, including the external attacher, provisioner, snapshotter, and the actual hostpath driver plugin:

```shell
$ kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
csi-hostpath-attacher-0      1/1     Running   0          4m21s
csi-hostpath-provisioner-0   1/1     Running   0          4m19s
csi-hostpath-resizer-0       1/1     Running   0          4m19s
csi-hostpath-snapshotter-0   1/1     Running   0          4m18s
csi-hostpath-socat-0         1/1     Running   0          4m18s
csi-hostpathplugin-0         3/3     Running   0          4m20s
snapshot-controller-0        1/1     Running   0          4m37s
```

From the root directory, deploy the application pods, including a storage class, a PVC, and a pod that mounts a volume using the Hostpath driver; the manifests are in the `./examples` directory:

```shell
$ for i in ./examples/csi-storageclass.yaml ./examples/csi-pvc.yaml ./examples/csi-app.yaml; do kubectl apply -f $i; done
storageclass.storage.k8s.io/csi-hostpath-sc created
persistentvolumeclaim/csi-pvc created
pod/my-csi-app created
```

Let's validate that the components are deployed:

```shell
$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS      REASON   AGE
pvc-ad827273-8d08-430b-9d5a-e60e05a2bc3e   1Gi        RWO            Delete           Bound    default/csi-pvc   csi-hostpath-sc            45s

$ kubectl get pvc
NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
csi-pvc   Bound    pvc-ad827273-8d08-430b-9d5a-e60e05a2bc3e   1Gi        RWO            csi-hostpath-sc   94s
```
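With `csi-pvc` bound, the beta snapshot API installed during cluster setup can be exercised against it. A minimal, hypothetical manifest (the snapshot name is made up; the class name matches the one the deploy script creates) would look like:

```yaml
# Hypothetical VolumeSnapshot request against the csi-pvc claim.
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: csi-pvc-snapshot          # hypothetical name
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass
  source:
    persistentVolumeClaimName: csi-pvc
```

Applying such a manifest should eventually produce a bound VolumeSnapshotContent object and set `readyToUse: true` in the snapshot's status.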

Finally, inspect the application pod `my-csi-app`, which mounts a Hostpath volume:

```shell
$ kubectl describe pods/my-csi-app
Name:         my-csi-app
Namespace:    default
Priority:     0
Node:         csi-prow-worker/172.17.0.2
Start Time:   Mon, 09 Mar 2020 14:38:05 -0700
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"my-csi-app","namespace":"default"},"spec":{"containers":[{"command":[...
Status:       Running
IP:           10.244.2.52
IPs:
  IP:  10.244.2.52
Containers:
  my-frontend:
    Container ID:  containerd://bf82f1a3e46a29dc6507a7217f5a5fc33b4ee471d9cc09ec1e680a1e8e2fd60a
    Image:         busybox
    Image ID:      docker.io/library/busybox@sha256:6915be4043561d64e0ab0f8f098dc2ac48e077fe23f488ac24b665166898115a
    Port:          <none>
    Host Port:     <none>
    Command:
      sleep
      1000000
    State:          Running
      Started:      Mon, 09 Mar 2020 14:38:12 -0700
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /data from my-csi-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-46lvh (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  my-csi-volume:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  csi-pvc
    ReadOnly:   false
  default-token-46lvh:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-46lvh
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason                  Age   From                      Message
  ----    ------                  ----  ----                      -------
  Normal  Scheduled               106s  default-scheduler         Successfully assigned default/my-csi-app to csi-prow-worker
  Normal  SuccessfulAttachVolume  106s  attachdetach-controller   AttachVolume.Attach succeeded for volume "pvc-ad827273-8d08-430b-9d5a-e60e05a2bc3e"
  Normal  Pulling                 102s  kubelet, csi-prow-worker  Pulling image "busybox"
  Normal  Pulled                  99s   kubelet, csi-prow-worker  Successfully pulled image "busybox"
  Normal  Created                 99s   kubelet, csi-prow-worker  Created container my-frontend
  Normal  Started                 99s   kubelet, csi-prow-worker  Started container my-frontend
```

## Confirm Hostpath driver works
The Hostpath driver is configured to create new volumes under `/csi-data-dir` inside the hostpath container specified in the plugin StatefulSet found [here](../deploy/kubernetes-1.17/hostpath/csi-hostpath-plugin.yaml). This path persists as long as the StatefulSet pod is up and running.

A file written into a properly mounted Hostpath volume inside an application should show up inside the Hostpath container. The following steps confirm that Hostpath is working properly. First, create a file from the application pod as shown:

```shell
$ kubectl exec -it my-csi-app /bin/sh
/ # touch /data/hello-world
/ # exit
```

Next, exec into the Hostpath container and verify that the file shows up there:
```shell
$ kubectl exec -it $(kubectl get pods --selector app=csi-hostpathplugin -o jsonpath='{.items[*].metadata.name}') -c hostpath /bin/sh
```
Then use the following command to locate the file. If everything works OK, you should get a result similar to the following:

```shell
/ # find / -name hello-world
/var/lib/kubelet/pods/34bbb561-d240-4483-a56c-efcc6504518c/volumes/kubernetes.io~csi/pvc-ad827273-8d08-430b-9d5a-e60e05a2bc3e/mount/hello-world
/csi-data-dir/42bdc1e0-624e-11ea-beee-42d40678b2d1/hello-world
/ # exit
```

## Confirm the creation of the VolumeAttachment object
An additional way to ensure the driver is working properly is to inspect the VolumeAttachment API object that represents the attached volume:

```shell
$ kubectl describe volumeattachment
Name:         csi-5f182b564c52cd52e04e148a1feef00d470155e051924893d3aee8c3b26b8471
Namespace:
Labels:       <none>
Annotations:  <none>
API Version:  storage.k8s.io/v1
Kind:         VolumeAttachment
Metadata:
  Creation Timestamp:  2020-03-09T21:38:05Z
  Resource Version:    10119
  Self Link:           /apis/storage.k8s.io/v1/volumeattachments/csi-5f182b564c52cd52e04e148a1feef00d470155e051924893d3aee8c3b26b8471
  UID:                 2d28d7e4-cda1-4ba9-a8fc-56fe081d71e9
Spec:
  Attacher:   hostpath.csi.k8s.io
  Node Name:  csi-prow-worker
  Source:
    Persistent Volume Name:  pvc-ad827273-8d08-430b-9d5a-e60e05a2bc3e
Status:
  Attached:  true
Events:      <none>
```