Commit 3324f67 — Merge pull request #509 from mattcary/migrate-instructions
2 parents ac1f8c0 + 7f402a6

docs/kubernetes/user-guides/snapshots.md (+159 −1)
# Kubernetes Snapshots User Guide (Beta)

>**Attention:** VolumeSnapshot is a Beta feature enabled by default in
Kubernetes 1.17+.

>**Attention:** VolumeSnapshot is only available in the driver version "master".

### Install Driver with beta snapshot feature as described [here](driver-install.md)
69

lost+found
sample-file.txt
```
### Import a Pre-Existing Snapshot

An existing PD snapshot can be used to provision a `VolumeSnapshotContents`
manually. A possible use case is to populate a PD in GKE from a snapshot created
elsewhere in GCP.

1. Go to
   [console.cloud.google.com/compute/snapshots](https://console.cloud.google.com/compute/snapshots),
   locate your snapshot, and set an env variable from the snapshot name:
   `export SNAPSHOT_NAME=snapshot-XXXXXXXX-XXXX-XXXX-XXXX-XXXX-XXXXXXXX` (copy
   in your exact name).
1. Export your project id: `export PROJECT_ID=<your project id>`.
1. Create a `VolumeSnapshot` resource which will be bound to a pre-provisioned
   `VolumeSnapshotContents`. Note this is called `restored-snapshot`; this
   name can be changed, but do so consistently across the other resources.
   ```console
   kubectl apply -f - <<EOF
   apiVersion: snapshot.storage.k8s.io/v1beta1
   kind: VolumeSnapshot
   metadata:
     name: restored-snapshot
   spec:
     volumeSnapshotClassName: csi-gce-pd-snapshot-class
     source:
       volumeSnapshotContentName: restored-snapshot-content
   EOF
   ```
1. Create a `VolumeSnapshotContents` pointing to your existing PD
   snapshot from the first step; its name must match the
   `volumeSnapshotContentName` given in the previous step.
   ```console
   kubectl apply -f - <<EOF
   apiVersion: snapshot.storage.k8s.io/v1beta1
   kind: VolumeSnapshotContent
   metadata:
     name: restored-snapshot-content
   spec:
     deletionPolicy: Retain
     driver: pd.csi.storage.gke.io
     source:
       snapshotHandle: projects/$PROJECT_ID/global/snapshots/$SNAPSHOT_NAME
     volumeSnapshotRef:
       kind: VolumeSnapshot
       name: restored-snapshot
       namespace: default
   EOF
   ```
1. Create a `PersistentVolumeClaim` which will pull from the
   `VolumeSnapshot`. The `StorageClass` must match what you use to provision
   PDs; what is below matches the first example in this document.
   ```console
   kubectl apply -f - <<EOF
   kind: PersistentVolumeClaim
   apiVersion: v1
   metadata:
     name: restored-pvc
   spec:
     accessModes:
       - ReadWriteOnce
     storageClassName: csi-gce-pd
     resources:
       requests:
         storage: 6Gi
     dataSource:
       kind: VolumeSnapshot
       name: restored-snapshot
       apiGroup: snapshot.storage.k8s.io
   EOF
   ```
1. Finally, create a pod referring to the `PersistentVolumeClaim`. The PD CSI
   driver will provision a `PersistentVolume` and populate it from the
   snapshot. The pod from `examples/kubernetes/snapshot/restored-pod.yaml` is
   set to use the PVC created above. After `kubectl apply`ing the pod, run
   `kubectl exec restored-pod -- ls -l /demo/data/` to confirm that the
   snapshot has been restored correctly.
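If you do not have the examples directory checked out, a minimal pod along the lines of `restored-pod.yaml` can be sketched as below. The busybox image and sleep command are illustrative assumptions; the `/demo/data` mount path matches the `kubectl exec` check above.

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: restored-pod
spec:
  containers:
    - name: busybox
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        # Mount the restored volume where the exec check expects it.
        - name: restored-volume
          mountPath: /demo/data
  volumes:
    - name: restored-volume
      persistentVolumeClaim:
        claimName: restored-pvc
EOF
```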

#### Troubleshooting

If the `VolumeSnapshot`, `VolumeSnapshotContents` and `PersistentVolumeClaim`
are not all mutually synchronized, the pod will not start. Try `kubectl describe
volumesnapshotcontent restored-snapshot-content` to see any error messages
relating to the binding together of the `VolumeSnapshot` and the
`VolumeSnapshotContents`. Further errors may be found in the snapshot controller
logs:

```console
kubectl logs snapshot-controller-0 | tail
```
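Checking the `readyToUse` status field on both objects is a quick way to see whether the binding succeeded; this sketch assumes the resource names used in the steps above.

```shell
# Should print "true" on each line once the snapshot and its content are bound.
kubectl get volumesnapshot restored-snapshot \
  -o jsonpath='{.status.readyToUse}{"\n"}'
kubectl get volumesnapshotcontent restored-snapshot-content \
  -o jsonpath='{.status.readyToUse}{"\n"}'
```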

Any errors in `kubectl describe pvc restored-pvc` may also shed light on any
trouble. Sometimes there is quite a wait to dynamically provision the
`PersistentVolume` for the PVC; the following command will give more
information.

```console
kubectl logs -n gce-pd-csi-driver csi-gce-pd-controller-0 -c csi-provisioner | tail
```

### Tips on Migrating from Alpha Snapshots

The api version has changed between the alpha and beta releases of the CSI
snapshotter. In many cases, snapshots created with the alpha driver cannot be
used with a beta driver. In some cases, they can be migrated.

>**Attention:** These instructions may not work in your case and could delete
all data in your snapshot!

These instructions happened to work with a particular GKE configuration. Because
of the variety of alpha deployments, it is not possible to verify that these
steps will work all of the time. The following has been tested with a GKE
cluster of master version 1.17.5-gke.0 using the following image versions for
the snapshot-controller and PD CSI driver:

* quay.io/k8scsi/snapshot-controller:v2.0.1
* gke.gcr.io/csi-provisioner:v1.5.0-gke.0
* gke.gcr.io/csi-attacher:v2.1.1-gke.0
* gke.gcr.io/csi-resizer:v0.4.0-gke.0
* gke.gcr.io/csi-snapshotter:v2.1.1-gke.0
* gke.gcr.io/gcp-compute-persistent-disk-csi-driver:v0.7.0-gke.0

#### Migrating by Restoring from a Manually Provisioned Snapshot

The idea is to provision a beta `VolumeSnapshotContents` in your new or upgraded
beta PD CSI driver cluster with the PD snapshot handle from the alpha snapshot,
then bind that to a new beta `VolumeSnapshot`.

Note that if using the same upgraded cluster where the alpha driver was
installed, the alpha driver must be completely removed, including any CRs and
CRDs. In particular, the alpha `VolumeSnapshot` and `VolumeSnapshotContents`
**must** be deleted before installing any of the beta CRDs. This means that any
information about which snapshot goes with which workload must be manually saved
somewhere. We recommend spinning up a new cluster and installing the beta driver
there rather than trying to upgrade the alpha driver in-place.

If you are able to set (or have already set) the `deletionPolicy` of any
existing alpha `VolumeSnapshotContents` and alpha `VolumeSnapshotClass` to
`Retain`, then deleting the snapshot CRs will not delete the underlying PD
snapshot.
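If the alpha objects still exist, a merge patch along these lines can flip the policy before the CRs are deleted. The object name below is a placeholder, and the exact resource name and spec layout of your alpha CRDs may differ; verify with `kubectl api-resources` and `kubectl get -o yaml` first.

```shell
# Placeholder name: substitute the name of your alpha VolumeSnapshotContent.
kubectl patch volumesnapshotcontent <your-alpha-content-name> \
  --type merge -p '{"spec":{"deletionPolicy":"Retain"}}'
```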

If you are creating a new cluster rather than upgrading an existing one, the
method below could also be used before deleting the alpha cluster and/or alpha
PD CSI driver deployment. This may be a safer option if you are not sure about
the deletion policy of your alpha snapshot.
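One way to confirm that the underlying PD snapshot survived, assuming `gcloud` is authenticated against your project:

```shell
# A READY status means the snapshot can be used as a restore source.
gcloud compute snapshots describe "$SNAPSHOT_NAME" \
  --project "$PROJECT_ID" --format="value(name,status)"
```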

After confirming that the PD snapshot is available, restore it using the beta PD
CSI driver as described above in [Import a Pre-Existing
Snapshot](#import-a-pre-existing-snapshot). The restoration should be done in a
cluster with the beta PD CSI driver installed. At no point is the alpha cluster
or driver referenced. After this is done, using the alpha snapshot may conflict
with the resources created above in the beta cluster.

The maintainers will welcome any additional information and will be happy to
review PRs clarifying or extending these tips.
