diff --git a/docs/kubernetes/user-guides/snapshots.md b/docs/kubernetes/user-guides/snapshots.md
index f27b8f32b..ffd47be76 100644
--- a/docs/kubernetes/user-guides/snapshots.md
+++ b/docs/kubernetes/user-guides/snapshots.md
@@ -1,6 +1,9 @@
 # Kubernetes Snapshots User Guide (Beta)
 
->**Attention:** Attention: VolumeSnapshot is a Beta feature enabled by default in Kubernetes 1.17+. Attention: VolumeSnapshot is only available in the driver version "master".
+>**Attention:** VolumeSnapshot is a Beta feature enabled by default in
+Kubernetes 1.17+.
+
+>**Attention:** VolumeSnapshot is only available in the driver version "master".
 
 ### Install Driver with beta snapshot feature as described [here](driver-install.md)
 
@@ -122,3 +125,158 @@
 lost+found
 sample-file.txt
 ```
+
+### Import a Pre-Existing Snapshot
+
+An existing PD snapshot can be used to provision a `VolumeSnapshotContents`
+manually. A possible use case is to populate a PD in GKE from a snapshot created
+elsewhere in GCP.
+
+ 1. Go to
+    [console.cloud.google.com/compute/snapshots](https://console.cloud.google.com/compute/snapshots),
+    locate your snapshot, and set an env variable from the snapshot name:
+    `export SNAPSHOT_NAME=snapshot-XXXXXXXX-XXXX-XXXX-XXXX-XXXX-XXXXXXXX` (copy
+    in your exact name).
+ 1. Export your project id: `export PROJECT_ID=`.
+ 1. Create a `VolumeSnapshot` resource which will be bound to a pre-provisioned
+    `VolumeSnapshotContents`. Note this is called `restored-snapshot`; this
+    name can be changed, but do it consistently across the other resources.
+    ```console
+    kubectl apply -f - <<EOF
+    ...
+    EOF
+    ```
+
+### Tips on Migrating from Alpha Snapshots
+
+>**Attention:** You may lose all data in your snapshot!
+
+>**Attention:** These instructions may not work in your case and could delete
+all data in your snapshot!
+
+These instructions happened to work with a particular GKE configuration. Because
+of the variety of alpha deployments, it is not possible to verify these steps
+will work all of the time.
+
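+
+For reference, a manual provisioning of the snapshot objects from the import
+steps above might look like the following sketch. The
+`restored-snapshot-content` name, the `default` namespace, and the driver name
+`pd.csi.storage.gke.io` are illustrative assumptions; substitute your exported
+`PROJECT_ID` and `SNAPSHOT_NAME` values and verify the driver name against your
+deployment before applying.
+
+```yaml
+# Pre-provisioned content pointing at an existing PD snapshot.
+apiVersion: snapshot.storage.k8s.io/v1beta1
+kind: VolumeSnapshotContent
+metadata:
+  name: restored-snapshot-content
+spec:
+  deletionPolicy: Retain
+  driver: pd.csi.storage.gke.io
+  source:
+    snapshotHandle: projects/PROJECT_ID/global/snapshots/SNAPSHOT_NAME
+  volumeSnapshotRef:
+    kind: VolumeSnapshot
+    name: restored-snapshot
+    namespace: default
+---
+# The VolumeSnapshot that binds to the content above.
+apiVersion: snapshot.storage.k8s.io/v1beta1
+kind: VolumeSnapshot
+metadata:
+  name: restored-snapshot
+spec:
+  source:
+    volumeSnapshotContentName: restored-snapshot-content
+```
+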
+The following has been tested with a GKE cluster of
+master version 1.17.5-gke.0 using the following image versions for the
+snapshot-controller and PD CSI driver:
+
+ * quay.io/k8scsi/snapshot-controller:v2.0.1
+ * gke.gcr.io/csi-provisioner:v1.5.0-gke.0
+ * gke.gcr.io/csi-attacher:v2.1.1-gke.0
+ * gke.gcr.io/csi-resizer:v0.4.0-gke.0
+ * gke.gcr.io/csi-snapshotter:v2.1.1-gke.0
+ * gke.gcr.io/gcp-compute-persistent-disk-csi-driver:v0.7.0-gke.0
+
+#### Migrating by Restoring from a Manually Provisioned Snapshot
+
+The idea is to provision a beta `VolumeSnapshotContents` in your new or upgraded
+beta PD CSI driver cluster with the PD snapshot handle from the alpha snapshot,
+then bind that to a new beta `VolumeSnapshot`.
+
+Note that if using the same upgraded cluster where the alpha driver was
+installed, the alpha driver must be completely removed, including any CRs and
+CRDs. In particular, the alpha `VolumeSnapshot` and `VolumeSnapshotContents`
+**must** be deleted before installing any of the beta CRDs. This means that any
+information about which snapshot goes with which workload must be manually saved
+somewhere. We recommend spinning up a new cluster and installing the beta driver
+there rather than trying to upgrade the alpha driver in-place.
+
+If you are able to set (or have already set) the `deletionPolicy` of any
+existing alpha `VolumeSnapshotContents` and alpha `VolumeSnapshotClass` to
+`Retain`, then deleting the snapshot CRs will not delete the underlying PD
+snapshot.
+
+If you are creating a new cluster rather than upgrading an existing one, the
+method below could also be used before deleting the alpha cluster and/or alpha
+PD CSI driver deployment. This may be a safer option if you are not sure about
+the deletion policy of your alpha snapshot.
+
+After confirming that the PD snapshot is available, restore it using the beta PD
+CSI driver as described above in [Import a Pre-Existing
+Snapshot](#import-a-pre-existing-snapshot).
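+
+Assuming the `SNAPSHOT_NAME` and `PROJECT_ID` variables from the import steps,
+one way to confirm that the PD snapshot is available is to describe it:
+
+```console
+gcloud compute snapshots describe "${SNAPSHOT_NAME}" --project "${PROJECT_ID}"
+```
+
+The output should report `status: READY` before you attempt the restore.
+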
+The restoration should be done in a cluster
+with the beta PD CSI driver installed. At no point is the alpha cluster or
+driver referenced. After this is done, using the alpha snapshot may conflict
+with the resources created in the beta cluster.
+
+The maintainers will welcome any additional information and will be happy to
+review PRs clarifying or extending these tips.
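+
+As a concrete sketch of the `deletionPolicy` tip above: if your alpha
+`VolumeSnapshotContents` objects expose `spec.deletionPolicy`, a patch along
+these lines may set it to `Retain` (the resource and object names here are
+illustrative; check them against your alpha CRDs first):
+
+```console
+kubectl patch volumesnapshotcontent <your-alpha-content-name> \
+  --type merge -p '{"spec":{"deletionPolicy":"Retain"}}'
+```
+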