
Add Volume Deletion Validation #67

Closed as not planned

Description

@gomesjason

/kind bug

What happened?
When deletion of a dynamically provisioned volume fails, the driver does not handle the failure and still cleans up related artifacts.

What you expected to happen?
The driver should validate that the delete actually succeeded; a successful response from the API alone is not sufficient.
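The expected delete-then-validate flow could be sketched as follows. This is a minimal illustration, not the driver's real code: the `api` interface, `volumeState` type, and lifecycle strings are hypothetical stand-ins for the FSx API surface. The idea is that after the delete call returns success, the driver polls until the volume is confirmed gone, and treats a persisting or `FAILED` volume as a failed delete so that related artifacts are not cleaned up.

```go
package main

import (
	"errors"
	"fmt"
)

// volumeState is a hypothetical snapshot of what the storage API reports.
type volumeState struct {
	exists    bool
	lifecycle string // e.g. "AVAILABLE", "DELETING", "FAILED"
}

// api models the minimal calls involved; names are illustrative,
// not the real FSx SDK surface.
type api interface {
	DeleteVolume(id string) error // may "succeed" even if deletion later fails
	DescribeVolume(id string) (volumeState, error)
}

// deleteAndValidate issues the delete, then confirms the volume is
// actually gone before reporting success. A FAILED lifecycle or a
// still-existing volume is treated as a failed delete, so the caller
// must not clean up related artifacts (e.g. the PV) in that case.
func deleteAndValidate(c api, id string, maxPolls int) error {
	if err := c.DeleteVolume(id); err != nil {
		return err
	}
	for i := 0; i < maxPolls; i++ {
		st, err := c.DescribeVolume(id)
		if err != nil {
			return err
		}
		if !st.exists {
			return nil // volume really gone: safe to clean artifacts
		}
		if st.lifecycle == "FAILED" {
			return errors.New("volume deletion failed; keeping artifacts")
		}
	}
	return errors.New("timed out waiting for volume deletion")
}

// fakeAPI simulates the reported bug: the delete call returns success,
// but the volume persists with a FAILED lifecycle.
type fakeAPI struct{ st volumeState }

func (f *fakeAPI) DeleteVolume(id string) error { f.st.lifecycle = "FAILED"; return nil }
func (f *fakeAPI) DescribeVolume(id string) (volumeState, error) { return f.st, nil }

func main() {
	c := &fakeAPI{st: volumeState{exists: true, lifecycle: "AVAILABLE"}}
	err := deleteAndValidate(c, "fsvol-0123", 3)
	fmt.Println(err != nil) // true: the failure is surfaced instead of swallowed
}
```

With this shape, the driver would only remove the PV and other artifacts when `deleteAndValidate` returns nil, instead of trusting the initial API response.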

How to reproduce it (as minimally and precisely as possible)?
Simulate a situation that causes volume deletion to fail (for example, mount the volume outside the driver), attempt to delete it, and observe that the PV is cleaned up while the volume still exists.

Anything else we need to know?:
If you encountered this issue, and would like to inventory these volumes, see https://github.com/kubernetes-sigs/aws-fsx-openzfs-csi-driver/blob/main/docs/debugging.md#how-do-i-know-what-resources-in-my-account-are-maintained-by-the-driver

Environment

  • Kubernetes version (use kubectl version): -
  • Driver version: v1.1.0

Metadata


Labels

  • kind/bug — Categorizes issue or PR as related to a bug.
  • lifecycle/rotten — Denotes an issue or PR that has aged beyond stale and will be auto-closed.
