pdcsi 0.14.9 - failed to find and re-link disk with udevadm #1289
Comments
I'm seeing the same error with this version. We've tried upgrading the GKE version (twice) with the same result. Replacing the VMs, deleting the pods, etc. does not help either. The disks seem to be healthy and can be mounted on the VM.
I think in our case the issue is more related to #1398.
Hi, we had the same issue this Saturday while we were upgrading the GKE version.
btw, @devoxel, restarting the pod does not help. We even tried cordoning the node where the pod was running and deleting the pod so it was scheduled to another node, but that didn't help either; same error. As this is a production cluster we had no other choice than deleting the PVs and getting new ones; luckily for us, the data was replicated in other PVs.
Hi, same issue. Multiple nodes in the cluster started having this error when pods were rescheduled. We rescheduled the pods and provisioned new nodes, and these PVs still will not mount.
There are errors about the disk serial numbers in the logs.
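For context on what that error is about: as far as I understand it, the node plugin looks for the disk's serial-based symlink under /dev/disk/by-id and, when the link is missing or stale, asks udevadm to regenerate it before giving up. Below is a rough, hypothetical Go sketch of that kind of check; the disk name, the symlink patterns, and the exact udevadm invocation are assumptions for illustration, not the driver's actual code.

```go
// Hypothetical sketch, not the driver's actual code: look up a PD's
// serial-based symlink under /dev/disk/by-id and, if it is missing, ask
// udev to re-scan block devices so the link gets recreated.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

const diskByIDDir = "/dev/disk/by-id"

// findDeviceByName scans /dev/disk/by-id for a symlink whose name contains
// the PD name (GCE exposes links such as google-<disk-name> and
// scsi-0Google_PersistentDisk_<disk-name>) and resolves it to the real device.
func findDeviceByName(diskName string) (string, error) {
	entries, err := os.ReadDir(diskByIDDir)
	if err != nil {
		return "", err
	}
	for _, e := range entries {
		if strings.Contains(e.Name(), diskName) {
			return filepath.EvalSymlinks(filepath.Join(diskByIDDir, e.Name()))
		}
	}
	return "", fmt.Errorf("no by-id symlink found for disk %q", diskName)
}

func main() {
	diskName := "my-pd" // hypothetical disk name

	dev, err := findDeviceByName(diskName)
	if err != nil {
		// Symlink missing or stale: trigger a udev re-scan of block devices
		// and retry once. If the second lookup also fails, we are in the
		// "failed to find and re-link disk" situation reported above.
		fmt.Fprintf(os.Stderr, "lookup failed (%v), triggering udev re-scan\n", err)
		if out, terr := exec.Command("udevadm", "trigger", "--subsystem-match=block").CombinedOutput(); terr != nil {
			fmt.Fprintf(os.Stderr, "udevadm trigger failed: %v: %s\n", terr, out)
			os.Exit(1)
		}
		if dev, err = findDeviceByName(diskName); err != nil {
			fmt.Fprintf(os.Stderr, "still not found after re-link attempt: %v\n", err)
			os.Exit(1)
		}
	}
	fmt.Println("resolved device:", dev)
}
```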
Just adding a +1 here: I've seen this, but curiously only in GKE ARM clusters.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
Still seeing this in GKE ARM nodes. ETA: or is our issue more like #608?
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community. /close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned". In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Had the same issue happen today.
We are hitting a similar issue to #608 in a GKE cluster running version 1.25.8-gke.1000 with a Spot VM node pool. The last comment on that bug mentioned it's best to start a new bug when discussing this.
Some Background
The Bug
Here's the json of the csidriver object on the cluster:
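For reference, the same object can be pulled with `kubectl get csidriver pd.csi.storage.gke.io -o json`, or from code with a minimal client-go sketch like the one below (the kubeconfig path and the driver name `pd.csi.storage.gke.io` are assumptions on my part).

```go
// Minimal sketch: fetch the CSIDriver object and print it as JSON,
// equivalent to `kubectl get csidriver pd.csi.storage.gke.io -o json`.
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load credentials from the default kubeconfig location (assumed path).
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// "pd.csi.storage.gke.io" is the GCE PD CSI driver's registered name.
	drv, err := cs.StorageV1().CSIDrivers().Get(context.TODO(), "pd.csi.storage.gke.io", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	out, err := json.MarshalIndent(drv, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```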
I haven't deleted the pod yet. This is a playground cluster and this sts can tolerate a single failure. I expect deleting the pod will fix the issue - but if requested I can verify this. I thought it might be more useful to keep the pod / node for direct debugging (but it's a spot VM so it could just be yanked away).
I'm also opening a support issue for this, but these things probably will move faster if I report it directly here.