Migrate instructions #509

Merged 2 commits on May 19, 2020.

docs/kubernetes/user-guides/snapshots.md: 159 additions & 1 deletion

@@ -1,6 +1,9 @@
# Kubernetes Snapshots User Guide (Beta)

>**Attention:** VolumeSnapshot is a Beta feature enabled by default in
Kubernetes 1.17+.

>**Attention:** VolumeSnapshot is only available in the driver version "master".

### Install Driver with beta snapshot feature as described [here](driver-install.md)

@@ -122,3 +125,158 @@
lost+found
sample-file.txt
```

### Import a Pre-Existing Snapshot

An existing PD snapshot can be used to manually provision a
`VolumeSnapshotContent`. A possible use case is to populate a PD in GKE from a
snapshot created elsewhere in GCP.
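
If you prefer the command line, the same snapshots can be listed with the
Cloud SDK (assuming `gcloud` is authenticated against the project that owns
them):

```console
gcloud compute snapshots list --project <your project id>
```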

1. Go to
   [console.cloud.google.com/compute/snapshots](https://console.cloud.google.com/compute/snapshots),
   locate your snapshot, and set an environment variable from the snapshot name:
   `export SNAPSHOT_NAME=snapshot-XXXXXXXX-XXXX-XXXX-XXXX-XXXX-XXXXXXXX`
   (substituting your exact name).
1. Export your project id: `export PROJECT_ID=<your project id>`.
1. Create a `VolumeSnapshot` resource which will be bound to a pre-provisioned
   `VolumeSnapshotContent`. Note this is called `restored-snapshot`; the name
   can be changed, but change it consistently across the other resources.
```console
kubectl apply -f - <<EOF
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: restored-snapshot
spec:
  volumeSnapshotClassName: csi-gce-pd-snapshot-class
  source:
    volumeSnapshotContentName: restored-snapshot-content
EOF
```
1. Create a `VolumeSnapshotContent` pointing to your existing PD
   snapshot from the first step.
```console
kubectl apply -f - <<EOF
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotContent
metadata:
  name: restored-snapshot-content
spec:
  deletionPolicy: Retain
  driver: pd.csi.storage.gke.io
  source:
    snapshotHandle: projects/$PROJECT_ID/global/snapshots/$SNAPSHOT_NAME
  volumeSnapshotRef:
    kind: VolumeSnapshot
    name: restored-snapshot
    namespace: default
EOF
```
1. Create a `PersistentVolumeClaim` which will pull from the
   `VolumeSnapshot`. The `StorageClass` must match what you use to provision
   PDs; the one below matches the first example in this document.
```console
kubectl apply -f - <<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: restored-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: csi-gce-pd
  resources:
    requests:
      storage: 6Gi
  dataSource:
    kind: VolumeSnapshot
    name: restored-snapshot
    apiGroup: snapshot.storage.k8s.io
EOF
```
1. Finally, create a pod referring to the `PersistentVolumeClaim`. The PD CSI
   driver will provision a `PersistentVolume` and populate it from the
   snapshot. The pod from `examples/kubernetes/snapshot/restored-pod.yaml` is
   set to use the PVC created above. After `kubectl apply`'ing the pod, run
   `kubectl exec restored-pod -- ls -l /demo/data/` to confirm that the
   snapshot has been restored correctly; both commands are collected below.
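
For convenience, the two commands from this last step, assuming they are run
from a checkout of this repository:

```console
kubectl apply -f examples/kubernetes/snapshot/restored-pod.yaml
kubectl exec restored-pod -- ls -l /demo/data/
```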

#### Troubleshooting

If the `VolumeSnapshot`, `VolumeSnapshotContent`, and `PersistentVolumeClaim`
are not all mutually consistent, the pod will not start. Try `kubectl describe
volumesnapshotcontent restored-snapshot-content` to see any error messages
relating to the binding of the `VolumeSnapshot` to the
`VolumeSnapshotContent`. Further errors may be found in the snapshot
controller logs:

```console
kubectl logs snapshot-controller-0 | tail
```
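
It can also help to check whether the snapshot objects report as ready and
bound to each other; the exact columns shown vary with the snapshotter
version:

```console
kubectl get volumesnapshot restored-snapshot
kubectl get volumesnapshotcontent restored-snapshot-content
```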

Any errors in `kubectl describe pvc restored-pvc` may also shed light on the
trouble. Sometimes it takes a while to dynamically provision the
`PersistentVolume` for the PVC; the following command will give more
information.

```console
kubectl logs -n gce-pd-csi-driver csi-gce-pd-controller-0 -c csi-provisioner | tail
```

### Tips on Migrating from Alpha Snapshots

The API version changed between the alpha and beta releases of the CSI
snapshotter. In many cases, snapshots created with the alpha driver cannot be
used with a beta driver. In some cases, they can be migrated.

>**Attention:** These instructions may not work in your case and could delete
all data in your snapshot!

These instructions happened to work with a particular GKE configuration.
Because of the variety of alpha deployments, it is not possible to verify that
these steps will work every time. The following has been tested with a GKE
cluster of master version 1.17.5-gke.0 using the following image versions for
the snapshot-controller and PD CSI driver (a command for checking the images
running in your own cluster follows the list):

* quay.io/k8scsi/snapshot-controller:v2.0.1
* gke.gcr.io/csi-provisioner:v1.5.0-gke.0
* gke.gcr.io/csi-attacher:v2.1.1-gke.0
* gke.gcr.io/csi-resizer:v0.4.0-gke.0
* gke.gcr.io/csi-snapshotter:v2.1.1-gke.0
* gke.gcr.io/gcp-compute-persistent-disk-csi-driver:v0.7.0-gke.0
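
One way to check the CSI and snapshot images actually running in a cluster is
to list all pod container images and filter; this is a sketch, as namespaces
and deployment names vary:

```console
kubectl get pods --all-namespaces -o jsonpath='{.items[*].spec.containers[*].image}' | tr ' ' '\n' | sort -u | grep -E 'csi|snapshot'
```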

#### Migrating by Restoring from a Manually Provisioned Snapshot

The idea is to provision a beta `VolumeSnapshotContent` in your new or
upgraded beta PD CSI driver cluster using the PD snapshot handle from the
alpha snapshot, then bind it to a new beta `VolumeSnapshot`.
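
Before deleting any alpha resources, record the PD snapshot handle carried by
each alpha `VolumeSnapshotContent`. One way to do so while the alpha CRDs are
still installed (grep is used here because the field's path differs between
the alpha and beta APIs):

```console
kubectl get volumesnapshotcontents -o yaml | grep snapshotHandle
```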

Note that if using the same upgraded cluster where the alpha driver was
installed, the alpha driver must be completely removed, including any CRs and
CRDs. In particular, the alpha `VolumeSnapshot` and `VolumeSnapshotContent`
objects **must** be deleted before installing any of the beta CRDs. This means
that any information about which snapshot goes with which workload must be
manually saved somewhere. We recommend spinning up a new cluster and
installing the beta driver there rather than trying to upgrade the alpha
driver in-place.

If you are able to set (or have already set) the `deletionPolicy` of any
existing alpha `VolumeSnapshotContent` and alpha `VolumeSnapshotClass` to
`Retain`, then deleting the snapshot CRs will not delete the underlying PD
snapshot; a sketch of such a patch follows.
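
For example, a merge patch along these lines sets the policy on a single
content object. Here `<alpha-content-name>` is a placeholder, and the field's
location should be verified against your installed alpha CRD first:

```console
kubectl patch volumesnapshotcontent <alpha-content-name> --type merge -p '{"spec":{"deletionPolicy":"Retain"}}'
```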

If you are creating a new cluster rather than upgrading an existing one, the
method below can also be used before deleting the alpha cluster and/or the
alpha PD CSI driver deployment. This may be the safer option if you are not
sure about the deletion policy of your alpha snapshot.

After confirming that the PD snapshot is available, restore it using the beta
PD CSI driver as described above in [Import a Pre-Existing
Snapshot](#import-a-pre-existing-snapshot). The restoration should be in a cluster
with the beta PD CSI driver installed. At no point is the alpha cluster or
driver referenced. After this is done, using the alpha snapshot may conflict
with the resources created above in the beta cluster.

The maintainers welcome any additional information and are happy to review
PRs clarifying or extending these tips.