Attach/Detach back off #847
Conversation
Welcome @lizhuqi!
Hi @lizhuqi. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/assign mattcary
Some scalability tests would be good to understand the performance impact of this change.
/ok-to-test
Offline discussion: the problems we've been seeing in practice occur when a particular node starts to error (i.e., the per-VM operation queue in GCE fills up, causing attach requests to immediately return with an error until the queue drains). So Julie is looking into making the rate limiting more fine-grained, at least down to the node level and possibly only on error as well. We don't want, e.g., scale-ups for large clusters to be unnecessarily rate-limited when the requests per node are not high.
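A minimal sketch of the per-node, error-gated idea described above, assuming names such as nodeBackoff and Allowed that are purely illustrative (this is not the PR's code): operations are throttled only for nodes that recently failed, and a single success clears the throttle, so healthy nodes in a large scale-up are never rate limited.

```go
// Illustrative sketch only; not the driver's actual implementation.
package main

import (
	"fmt"
	"sync"
	"time"
)

// nodeBackoff tracks, per node, when the next attach/detach attempt may run.
type nodeBackoff struct {
	mu      sync.Mutex
	nextTry map[string]time.Time
	delay   time.Duration
}

func newNodeBackoff(delay time.Duration) *nodeBackoff {
	return &nodeBackoff{nextTry: map[string]time.Time{}, delay: delay}
}

// Allowed reports whether an attach/detach against node may proceed now.
// Nodes with no recorded error are always allowed.
func (b *nodeBackoff) Allowed(node string) bool {
	b.mu.Lock()
	defer b.mu.Unlock()
	return !time.Now().Before(b.nextTry[node])
}

// RecordError starts (or extends) the backoff window for a failing node.
func (b *nodeBackoff) RecordError(node string) {
	b.mu.Lock()
	defer b.mu.Unlock()
	b.nextTry[node] = time.Now().Add(b.delay)
}

// RecordSuccess clears the backoff so later calls to the node are unthrottled.
func (b *nodeBackoff) RecordSuccess(node string) {
	b.mu.Lock()
	defer b.mu.Unlock()
	delete(b.nextTry, node)
}

func main() {
	b := newNodeBackoff(30 * time.Second)
	b.RecordError("node-1")          // node-1 starts erroring
	fmt.Println(b.Allowed("node-1")) // false: node-1 is backed off
	fmt.Println(b.Allowed("node-2")) // true: healthy nodes are unaffected
	b.RecordSuccess("node-1")        // an attach succeeds again
	fmt.Println(b.Allowed("node-1")) // true
}
```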
/retest
/test pull-gcp-compute-persistent-disk-csi-driver-kubernetes-integration
/cc @saikat-royc
pkg/gce-pd-csi-driver/controller.go
Outdated
// nodes is a list of nodes with attach/detach operation failures; those
// nodes shall be rate limited for all attach/detach operations until
// an attach/detach operation on the node succeeds
nodes map[string]bool
"nodes" could have a more descriptive name, eg publishErrorsSeenOnNode.
/retest
t.Fatalf("Only %v requests queued up for node has seen error", gceDriver.cs.queue.Len())
}
}
If we wait a little longer, shouldn't the queue empty out (because the fake provider will let the operation succeed)?
It's also possible that this test will be flaky. It would be more rigorous to use a fake rate limiter that could be controlled explicitly. I don't know if that's straightforward to do, so maybe we can see how this goes (although you may be creating more work for yourself in the long run...)
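As a rough sketch of that idea, a fake implementation of client-go's workqueue.RateLimiter whose delay the test controls directly could look like the following; the names (fakeRateLimiter, newFakeRateLimiter) are assumptions for illustration, not code from this PR.

```go
// Hedged sketch of a test-controlled rate limiter; not part of this PR.
package fakelimiter

import (
	"sync"
	"time"

	"k8s.io/client-go/util/workqueue"
)

// fakeRateLimiter returns a fixed, test-chosen delay and counts requeues.
type fakeRateLimiter struct {
	mu       sync.Mutex
	delay    time.Duration
	requeues map[interface{}]int
}

var _ workqueue.RateLimiter = &fakeRateLimiter{}

func newFakeRateLimiter(delay time.Duration) *fakeRateLimiter {
	return &fakeRateLimiter{delay: delay, requeues: map[interface{}]int{}}
}

// When records a requeue and returns the fixed delay the test configured.
func (f *fakeRateLimiter) When(item interface{}) time.Duration {
	f.mu.Lock()
	defer f.mu.Unlock()
	f.requeues[item]++
	return f.delay
}

// Forget clears the per-item requeue count, mimicking a successful call.
func (f *fakeRateLimiter) Forget(item interface{}) {
	f.mu.Lock()
	defer f.mu.Unlock()
	delete(f.requeues, item)
}

// NumRequeues reports how many times an item has been rate limited.
func (f *fakeRateLimiter) NumRequeues(item interface{}) int {
	f.mu.Lock()
	defer f.mu.Unlock()
	return f.requeues[item]
}
```

A test could then assert on queue contents without depending on wall-clock backoff timing.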
The queue will not be empty, because the request doesn't succeed and is re-queued. The request doesn't succeed because I am using a real cloud provider, so it fails at the step of acquiring the volume lock. The test case is there to make sure that when requests don't go through, they all fail and new requests are still queued.
Maybe I can add more test cases using a fake cloud provider so that the request succeeds, but in that case the behavior is quite dynamic and asynchronous, so nothing can be guaranteed.
Added more tests
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: lizhuqi, mattcary
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
What type of PR is this?
/kind feature
What this PR does / why we need it:
In GCE, attach/detach operations go through a 32-deep per-VM operation queue. This PR adds a workqueue with a rate limiter for attach/detach calls.
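For context, a hedged sketch of the general pattern: queuing attach/detach work through a rate-limited workqueue from k8s.io/client-go/util/workqueue. The attachDisk helper and the "volume|node" key format are assumptions for illustration, not the driver's actual code.

```go
// Minimal sketch of a rate-limited workqueue for attach calls; illustrative only.
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/workqueue"
)

// attachDisk is a placeholder for the real GCE attach call.
func attachDisk(key string) error {
	fmt.Println("attaching", key)
	return nil
}

func main() {
	// Exponential backoff per item: 5ms initial delay, capped at 1 minute.
	limiter := workqueue.NewItemExponentialFailureRateLimiter(5*time.Millisecond, time.Minute)
	queue := workqueue.NewRateLimitingQueue(limiter)
	defer queue.ShutDown()

	// Enqueue a hypothetical attach request keyed by "volume|node".
	queue.Add("projects/p/zones/z/disks/d|node-1")

	go func() {
		for {
			key, shutdown := queue.Get()
			if shutdown {
				return
			}
			if err := attachDisk(key.(string)); err != nil {
				// Requeue with backoff so a failing node does not hot-loop.
				queue.AddRateLimited(key)
			} else {
				// Success clears the item's backoff history.
				queue.Forget(key)
			}
			queue.Done(key)
		}
	}()

	time.Sleep(100 * time.Millisecond)
}
```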
Special notes for your reviewer:
Does this PR introduce a user-facing change?: