Support for disk labels parameter #340
Comments
Sorry, I didn't read the doc.
Hi @katsew, regardless of the feature request for in-tree vs CSI driver, I would like to understand your use cases. Do you want the labels on the K8s PersistentVolume object, or the GCP Persistent Disk object? Do you want to label per-app, per-namespace, or per-cluster?
Thanks for the reaction! For the GCP PD, I want per-app, per-cluster labels to determine which app created a disk. For the K8s PV, I want to cascade the labels that the PVC has, which can be per-app or per-namespace labels.
Sorry for the late response. Yes, I think the request makes sense. For labeling the GCP PD, we can add a new StorageClass parameter for it. For the K8s labeling case, you can get the PV that a PVC uses from PVC.Spec.VolumeName. Is that sufficient, instead of searching for PVs by labels? /kind feature
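For reference, the PV bound to a PVC is recorded in the PVC's spec.volumeName field, so the lookup could look like this (my-claim is a placeholder name):

```sh
# Resolve which PV a PVC is bound to via spec.volumeName
kubectl get pvc my-claim -o jsonpath='{.spec.volumeName}'
```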
Thanks for the response.
This could help with finding a PV from its PVC, but not with filtering PVs.
I think propagating labels from PVCs to PVs would be a better way to filter PVs by labels.
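To illustrate the difference: with labels propagated to PVs, filtering would become a single label-selector query (app=myapp is a hypothetical label):

```sh
# List only the PVs that carry a given label
kubectl get pv -l app=myapp
```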
I also think that propagating Kubernetes labels from the PVC to the PV, and from the PV to the actual GCE PD, would increase discoverability. At the moment I'm forced to create GCE PDs myself and then a Kubernetes PV, which is far from ideal and does not work well with Pod auto-scaling.
Putting this here as an FYI: GKE has a feature to label all GCP resources on a per-cluster basis, which may partially help with your use case. https://cloud.google.com/kubernetes-engine/docs/how-to/creating-managing-labels We are also considering adding some K8s metadata, like the PVC name and namespace, to the disk. #387
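For anyone exploring that route, a sketch of applying cluster-level labels with gcloud, per the linked docs (my-cluster and the label are placeholders):

```sh
# Attach labels to a GKE cluster; GKE applies them to the GCP resources it manages
gcloud container clusters update my-cluster --update-labels env=prod
```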
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle frozen
Any comments on the status? We need to put custom labels on disks for special identification purposes (matters not related to Kubernetes), and we have been looking into how other drivers handle this.
Should we start working on a request.Parameters["labels"] = "key1=val1,key2=val2" parameter that translates into GCE Disk labels? We are also considering forking the project for this improvement to get it available ASAP.
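A minimal Go sketch of how such a parameter could be parsed on the CreateVolume path; parseLabels and the exact parameter format are assumptions taken from this proposal, not existing driver code:

```go
package main

import (
	"fmt"
	"strings"
)

// parseLabels turns a StorageClass parameter of the form
// "key1=val1,key2=val2" into a map usable as GCE Disk labels.
// Hypothetical helper sketched for this proposal.
func parseLabels(param string) (map[string]string, error) {
	labels := map[string]string{}
	if param == "" {
		return labels, nil
	}
	for _, pair := range strings.Split(param, ",") {
		kv := strings.SplitN(pair, "=", 2)
		if len(kv) != 2 || kv[0] == "" {
			return nil, fmt.Errorf("invalid label %q, expected key=value", pair)
		}
		labels[kv[0]] = kv[1]
	}
	return labels, nil
}

func main() {
	// In CreateVolume this string would come from the request's parameters map.
	labels, err := parseLabels("key1=val1,key2=val2")
	if err != nil {
		panic(err)
	}
	fmt.Println(labels) // map[key1:val1 key2:val2]
}
```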
Adding a StorageClass parameter seems reasonable to me. Contributions are greatly appreciated!
I have opened a PR to address this ticket; please take a look.
I've taken over #469 and will submit a new PR shortly.
/remove-help
Problem
I use PVC and PV resources on GKE, and I also use Persistent Disks (PDs) for Compute Engine instances.
I usually filter PDs by labels when deleting them.
However, a PD created through a PV does not have any labels to filter on.
This makes it difficult to filter PDs for deletion.
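For context, this is the kind of label-based cleanup that works for manually created disks but not for PV-created ones (app=myapp is a hypothetical label):

```sh
# List PDs by label to decide which ones to delete
gcloud compute disks list --filter="labels.app=myapp"
```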
Solution
Add a labels parameter to the StorageClass and use it in this driver, as sketched below.
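A sketch of what the proposed parameter might look like in a StorageClass; the labels key and its key1=val1,key2=val2 format are assumptions from this issue, not a released driver feature:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-gce-pd-labeled
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-standard
  # Hypothetical parameter proposed in this issue
  labels: key1=val1,key2=val2
```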
Additional Info
I read the related issue on external-provisioner, but for my use case I think it's better to implement this in this driver, since the feature depends on GCP PD.
Sample implementation:
https://github.com/katsew/gcp-compute-persistent-disk-csi-driver/tree/disk-labels-support/pkg