Force timeout nodeunstagevolume #1918

Merged

Conversation

davis-haba
Contributor

@davis-haba davis-haba commented Feb 3, 2025

What type of PR is this?

Uncomment only one /kind <> line, hit enter to put that in a new line, and remove leading whitespaces from that line:

/kind api-change

/kind bug

/kind cleanup
/kind design
/kind documentation
/kind failing-test
/kind feature
/kind flake

What this PR does / why we need it:

Adds a 30-second timeout to NodeUnstageVolume. If a device has appeared in use for longer than 30 seconds, the next NodeUnstageVolume attempt ignores the device-in-use error and succeeds.
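
For illustration, a minimal sketch of the behavior described above; the helper name and flow are hypothetical, not the driver's actual code, and only the 30-second timeout comes from this PR:

package main

import (
	"errors"
	"fmt"
	"time"
)

// deviceInUseTimeout mirrors the 30-second timeout described in this PR.
const deviceInUseTimeout = 30 * time.Second

// tryUnstage is a hypothetical helper showing the decision: if the device has
// looked "in use" for longer than the timeout, ignore the signal and succeed.
func tryUnstage(volumeID string, deviceStillInUse bool, firstErrSeen time.Time) error {
	if !deviceStillInUse {
		return nil
	}
	if time.Since(firstErrSeen) >= deviceInUseTimeout {
		fmt.Printf("device for %s stuck in use > %v; unstaging anyway\n", volumeID, deviceInUseTimeout)
		return nil
	}
	return errors.New("device still in use; retry NodeUnstageVolume later")
}

func main() {
	firstSeen := time.Now().Add(-45 * time.Second)           // simulate a device stuck for 45s
	fmt.Println(tryUnstage("example-vol", true, firstSeen)) // <nil>: past the 30s timeout
}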

Which issue(s) this PR fixes:

Fixes #

Special notes for your reviewer:

Two CMEK E2E tests are failing locally, but I believe this is due to test setup issues, as this change is entirely unrelated to CMEK.

Does this PR introduce a user-facing change?:

NONE

@k8s-ci-robot k8s-ci-robot added release-note-none Denotes a PR that doesn't merit a release note. kind/design Categorizes issue or PR as related to design. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. labels Feb 3, 2025
@k8s-ci-robot
Contributor

Welcome @davis-haba!

It looks like this is your first PR to kubernetes-sigs/gcp-compute-persistent-disk-csi-driver 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes-sigs/gcp-compute-persistent-disk-csi-driver has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot
Contributor

Hi @davis-haba. Thanks for your PR.

I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Feb 3, 2025
@k8s-ci-robot k8s-ci-robot added the size/L Denotes a PR that changes 100-499 lines, ignoring generated files. label Feb 3, 2025
@davis-haba
Contributor Author

/cc @pwschuurman

@davis-haba davis-haba force-pushed the force-timeout-nodeunstagevolume branch 4 times, most recently from d6e7c09 to 10719ad on February 3, 2025 23:25
@davis-haba
Contributor Author

/test all

@k8s-ci-robot
Contributor

@davis-haba: Cannot trigger testing until a trusted user reviews the PR and leaves an /ok-to-test message.

In response to this:

/test all

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@davis-haba
Contributor Author

/ok-to-test

@k8s-ci-robot
Contributor

@davis-haba: Cannot trigger testing until a trusted user reviews the PR and leaves an /ok-to-test message.

In response to this:

/ok-to-test

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@mattcary
Contributor

mattcary commented Feb 4, 2025

/ok-to-test

I'll let @pwschuurman comment on the PR.

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Feb 4, 2025
@pwschuurman
Contributor

/ok-to-test

@davis-haba
Contributor Author

/retest

1 similar comment
@davis-haba
Contributor Author

/retest

// Wait 35s (30s timeout + 5s buffer) and try again
time.Sleep(35 * time.Second)
err = client.NodeUnstageVolume(volID, stageDir)
Expect(err).To(BeNil(), "Failed to unpublish after 30s in-use timeout for volume: %s, stageDir: %s", volID, stageDir)
Contributor

I think this should succeed, right?

Contributor Author

@davis-haba davis-haba Feb 4, 2025

Yea, unless I'm missing something, it does succeed. Expects err to be nil.

@davis-haba davis-haba force-pushed the force-timeout-nodeunstagevolume branch from c7d97eb to 6d78a36 on February 4, 2025 22:40
@davis-haba
Contributor Author

/retest

@davis-haba davis-haba force-pushed the force-timeout-nodeunstagevolume branch 2 times, most recently from 3c10de7 to c7e8c05 on February 4, 2025 22:51
@@ -64,6 +64,9 @@ var (
waitForOpBackoffSteps = flag.Int("wait-op-backoff-steps", 100, "Steps for wait for operation backoff")
waitForOpBackoffCap = flag.Duration("wait-op-backoff-cap", 0, "Cap for wait for operation backoff")

enableDeviceInUseTimeout = flag.Bool("enable-device-in-use-timeout", true, "If set to true, ignores device in use errors when attempting to unstage a device if it has been stuck for longer than 'device-in-use-timeout'")
Contributor

I think the boolean flag should bypass the "device in use" check entirely, and be renamed enable-device-in-use-check-on-node-unstage. I don't see much value in having a boolean for using the timeout or not (e.g. if a user really wants to disable the timeout, they can set device-in-use-timeout to infinity). I think the value of the switch is more for users where the device-in-use check is blocking NodeUnstage because of the way /sys/fs/ is reacting, and who want to skip it entirely, for all disks.

In retrospect, this is something that I should have added in #1658, to allow the feature to be turned off.

Contributor Author

Ah, I see what you mean. Fixed: confirmDeviceUnused in node.go will now always return nil if the flag is true.
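
A hedged sketch of what that bypass could look like; the flag name follows the review suggestion above, while the polarity, receiver type, and wiring are assumptions rather than the PR's exact code:

package main

import "flag"

// Flag name taken from the review suggestion; default value and wiring are assumed.
var enableDeviceInUseCheck = flag.Bool("enable-device-in-use-check-on-node-unstage", true,
	"If false, skip the device-in-use check entirely during NodeUnstageVolume")

// nodeServer stands in for the driver's node server type in this sketch.
type nodeServer struct{}

// confirmDeviceUnused short-circuits when the check is disabled, so a stuck
// in-use signal from /sys/fs can never block NodeUnstageVolume.
func (ns *nodeServer) confirmDeviceUnused(volumeID string) error {
	if !*enableDeviceInUseCheck {
		return nil
	}
	// A real implementation would inspect /sys/fs to see whether the device
	// backing volumeID still has open holders before allowing unstage.
	return nil
}

func main() {
	flag.Parse()
	ns := &nodeServer{}
	_ = ns.confirmDeviceUnused("example-volume-id")
}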

klog.Warningf("Unabled to check if device for %s is unused. Device has been unmounted successfully. Ignoring and continuing with unstaging. (%v)", volumeID, err)
} else if ns.deviceInUseErrors.checkDeviceErrorTimeout(volumeID) {
Contributor

There are some edge cases if a NodeUnstage gets skipped due to force detach or manual intervention. I don't know how often these could occur, and I think generally they're just theoretical problems:

  1. A pre-existing old deviceInUse entry, resulting in the device-in-use signal being ignored. I'm less concerned about this one, as worst case it wouldn't block a user's workload, and this is a best-effort deviceInUse check. We could solve this with a TTL cache, but I don't see an existing reference implementation we could easily use.
  2. Device entries growing unbounded, if there are new unique volume IDs being added every now and then and NodeUnstage gets skipped. In GCE the maximum number of concurrent disks is 128, so we could set a maximum cache size to something larger, say 256 or 512, to bound the growth. This would allow for a fresh set of ~128 concurrent disks with room for some overlap. We could solve this with an LRU cache (github.com/hashicorp/golang-lru).

Contributor Author

@davis-haba davis-haba Feb 7, 2025

I implemented the LRU cache you suggested to address point 2.

Regarding point 1, it looks like the library you mentioned also has a cache that will auto-expire keys (e.g. TTL): https://github.com/hashicorp/golang-lru?tab=readme-ov-file#expirable-lru-cache-example

Do we want to use this? What would a reasonable TTL be? Maybe deviceInUseTimeout * 2?

Contributor

Do we want to use this? What would a reasonable TTL be? maybe deviceInUseTimeout * 2?

Yeah, that seems reasonable.

Contributor Author

Thanks. I added the expirable cache.
c := expirable.NewLRU[string, time.Time](maxDeviceCacheSize, nil, timeout*2)
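
For reference, a self-contained sketch of that construction using the expirable LRU from hashicorp/golang-lru/v2; the cache size constant, timeout value, and volume ID below are illustrative, and only the NewLRU call mirrors the line above:

package main

import (
	"fmt"
	"time"

	"github.com/hashicorp/golang-lru/v2/expirable"
)

func main() {
	const maxDeviceCacheSize = 256 // illustrative bound, per the sizing discussion above
	timeout := 30 * time.Second    // device-in-use timeout from this PR

	// Size-bounded cache whose entries also expire after 2x the timeout,
	// covering both concerns above: unbounded growth and stale entries.
	cache := expirable.NewLRU[string, time.Time](maxDeviceCacheSize, nil, timeout*2)

	volumeID := "example-volume-id" // illustrative
	// Record only the first time a device-in-use error is seen for this volume.
	if _, ok := cache.Get(volumeID); !ok {
		cache.Add(volumeID, time.Now())
	}

	if firstSeen, ok := cache.Get(volumeID); ok {
		fmt.Println("first device-in-use error seen at:", firstSeen)
	}
}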

@k8s-ci-robot k8s-ci-robot added size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. and removed size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Feb 7, 2025
@davis-haba davis-haba force-pushed the force-timeout-nodeunstagevolume branch 3 times, most recently from 391038a to 834b272 on February 7, 2025 22:20
@davis-haba
Contributor Author

/retest

devErrMap.mux.Lock()
defer devErrMap.mux.Unlock()

lastErrTime, exists := devErrMap.cache.Get(deviceName)
Contributor

Wouldn't the cache hold the first error time, rather than the last? Maybe rename this to indicate something like "first time error encountered".

Contributor Author

Yes, good call. Renamed to firstEncounteredErrTime.

defer devErrMap.mux.Unlock()

lastErrTime, exists := devErrMap.cache.Get(deviceName)
return exists && currentTime().Sub(lastErrTime).Seconds() >= devErrMap.timeout.Seconds()
Contributor

Might be more readable to compare Time objects with After()

expirationTime := lastErrTime.Add(devErrMap.timeout)
return exists && currentTime().After(expirationTime)

Contributor Author

Done
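
Putting the pieces from this thread together, a hedged sketch of the resulting check; the struct shape and names are assembled from the snippets above and may not match the merged code exactly:

package main

import (
	"fmt"
	"sync"
	"time"

	"github.com/hashicorp/golang-lru/v2/expirable"
)

// deviceErrMap approximates the structure discussed in this review thread.
type deviceErrMap struct {
	mux     sync.Mutex
	timeout time.Duration
	cache   *expirable.LRU[string, time.Time]
}

// checkDeviceErrorTimeout reports whether the first device-in-use error for
// deviceName was recorded more than `timeout` ago, comparing Time values
// with After() as suggested above.
func (m *deviceErrMap) checkDeviceErrorTimeout(deviceName string) bool {
	m.mux.Lock()
	defer m.mux.Unlock()

	firstEncounteredErrTime, exists := m.cache.Get(deviceName)
	expirationTime := firstEncounteredErrTime.Add(m.timeout)
	return exists && time.Now().After(expirationTime)
}

func main() {
	timeout := 30 * time.Second
	m := &deviceErrMap{
		timeout: timeout,
		cache:   expirable.NewLRU[string, time.Time](256, nil, timeout*2),
	}
	m.cache.Add("example-disk", time.Now().Add(-time.Minute)) // simulate an error first seen 1m ago
	fmt.Println(m.checkDeviceErrorTimeout("example-disk"))    // true: past the 30s timeout
}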

@davis-haba davis-haba force-pushed the force-timeout-nodeunstagevolume branch from cea18ad to e3fc317 on February 8, 2025 01:05
@davis-haba
Contributor Author

/retest

1 similar comment
@davis-haba
Contributor Author

/retest

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: davis-haba, pwschuurman

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Feb 10, 2025
@davis-haba davis-haba force-pushed the force-timeout-nodeunstagevolume branch from 1a4056b to 33c2d9e on February 10, 2025 22:59
@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Feb 10, 2025
@davis-haba davis-haba force-pushed the force-timeout-nodeunstagevolume branch from 33c2d9e to f15a491 on February 10, 2025 23:51
@k8s-ci-robot k8s-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Feb 10, 2025
@k8s-ci-robot
Contributor

k8s-ci-robot commented Feb 11, 2025

@davis-haba: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name: pull-gcp-compute-persistent-disk-csi-driver-kubernetes-integration
Commit: f15a491
Required: true
Rerun command: /test pull-gcp-compute-persistent-disk-csi-driver-kubernetes-integration

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@davis-haba
Contributor Author

/retest

@pwschuurman
Contributor

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Feb 11, 2025
@pwschuurman pwschuurman merged commit 54297b1 into kubernetes-sigs:master Feb 11, 2025
5 of 7 checks passed