Description
/kind bug
What happened?
When deletion of a dynamically provisioned volume fails, the driver does not handle the failure and still cleans up the related Kubernetes artifacts.
What you expected to happen?
The driver should verify that the deletion actually completed. A successful response from the delete API call alone is not sufficient.
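A minimal sketch of what "verify the deletion" could look like, assuming the AWS SDK for Go v2 FSx client; the function name `waitForVolumeDeletion`, the polling interval, and the error handling are illustrative assumptions, not the driver's actual code:

```go
// Hypothetical sketch: poll FSx until the volume is actually gone before
// reporting success back to the CSI layer.
package main

import (
	"context"
	"errors"
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go-v2/service/fsx"
	"github.com/aws/aws-sdk-go-v2/service/fsx/types"
)

// waitForVolumeDeletion polls DescribeVolumes until the volume disappears
// (VolumeNotFound) or enters the FAILED lifecycle state.
func waitForVolumeDeletion(ctx context.Context, client *fsx.Client, volumeID string) error {
	ticker := time.NewTicker(15 * time.Second)
	defer ticker.Stop()

	for {
		out, err := client.DescribeVolumes(ctx, &fsx.DescribeVolumesInput{
			VolumeIds: []string{volumeID},
		})
		if err != nil {
			var notFound *types.VolumeNotFound
			if errors.As(err, &notFound) {
				// The volume is gone; deletion really succeeded.
				return nil
			}
			return err
		}

		for _, v := range out.Volumes {
			if v.Lifecycle == types.VolumeLifecycleFailed {
				// Deletion failed on the FSx side; surface an error so the
				// CSI DeleteVolume call fails and the PV is not cleaned up.
				return fmt.Errorf("deletion of volume %s failed", volumeID)
			}
		}

		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}
```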
How to reproduce it (as minimally and precisely as possible)?
Simulate a situation that causes volume deletion to fail (for example, mount the volume outside the driver), attempt the deletion, and observe that the PV is cleaned up while the backing volume still exists.
Anything else we need to know?:
If you have encountered this issue and would like to inventory these leftover volumes, see https://github.com/kubernetes-sigs/aws-fsx-openzfs-csi-driver/blob/main/docs/debugging.md#how-do-i-know-what-resources-in-my-account-are-maintained-by-the-driver
Environment
- Kubernetes version (use `kubectl version`):
- Driver version: v1.1.0