Attach/Detach back off #847

Merged
merged 1 commit into from
Dec 1, 2021
Conversation

Contributor

@lizhuqi lizhuqi commented Sep 28, 2021

What type of PR is this?
/kind feature

What this PR does / why we need it:
In GCE, Attach/Detach operations share a 32-deep per-VM operation queue. This PR adds a workqueue with a rate limiter for Attach/Detach calls.
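The rate-limited workqueue for attach/detach calls might be sketched roughly as below. This is an illustration only, not the driver's actual code: `rateLimitedQueue`, `attachRequest`, and the fixed spacing interval are hypothetical names and a simplified policy (a real implementation would use exponential backoff and cap retries rather than re-queueing failures indefinitely).

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// attachRequest is a hypothetical queued Attach/Detach call.
type attachRequest struct {
	node   string
	volume string
}

// rateLimitedQueue serializes attach/detach requests and spaces them out,
// so a per-VM operation queue on the cloud side is not overrun.
type rateLimitedQueue struct {
	mu       sync.Mutex
	items    []attachRequest
	interval time.Duration
}

func (q *rateLimitedQueue) Add(r attachRequest) {
	q.mu.Lock()
	defer q.mu.Unlock()
	q.items = append(q.items, r)
}

func (q *rateLimitedQueue) Len() int {
	q.mu.Lock()
	defer q.mu.Unlock()
	return len(q.items)
}

// Run drains the queue, invoking process at most once per interval.
// Failed requests are re-queued; a real driver would bound retries.
func (q *rateLimitedQueue) Run(process func(attachRequest) error) {
	for {
		q.mu.Lock()
		if len(q.items) == 0 {
			q.mu.Unlock()
			return
		}
		r := q.items[0]
		q.items = q.items[1:]
		q.mu.Unlock()
		if err := process(r); err != nil {
			q.Add(r)
		}
		time.Sleep(q.interval)
	}
}

func main() {
	q := &rateLimitedQueue{interval: time.Millisecond}
	q.Add(attachRequest{node: "node-a", volume: "vol-1"})
	q.Add(attachRequest{node: "node-a", volume: "vol-2"})
	q.Run(func(r attachRequest) error {
		fmt.Printf("attach %s to %s\n", r.volume, r.node)
		return nil
	})
	fmt.Println("queue length:", q.Len())
}
```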

Special notes for your reviewer:

Does this PR introduce a user-facing change?:

NONE

@k8s-ci-robot k8s-ci-robot added release-note-none Denotes a PR that doesn't merit a release note. do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. kind/feature Categorizes issue or PR as related to a new feature. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. labels Sep 28, 2021
@k8s-ci-robot
Contributor

Welcome @lizhuqi!

It looks like this is your first PR to kubernetes-sigs/gcp-compute-persistent-disk-csi-driver 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes-sigs/gcp-compute-persistent-disk-csi-driver has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot
Contributor

Hi @lizhuqi. Thanks for your PR.

I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Sep 28, 2021
@k8s-ci-robot k8s-ci-robot added the size/L Denotes a PR that changes 100-499 lines, ignoring generated files. label Sep 28, 2021
@lizhuqi
Contributor Author

lizhuqi commented Sep 28, 2021

/assign mattcary

@jingxu97
Contributor

Some scalability tests would be good to understand the performance impact of this change.

@mattcary
Contributor

/ok-to-test

Offline discussion: the problems we've been seeing in practice occur when a particular node starts to error (i.e., the per-VM operation queue in GCE fills up, causing attach requests to immediately return with an error until the queue drains). So Julie's looking into making the rate limiting more fine-grained, at least to the node level and maybe only on error as well.

We don't want, e.g., scale-ups for large clusters to be unnecessarily rate limited when the requests per node are not high.

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Sep 29, 2021
@lizhuqi
Contributor Author

lizhuqi commented Oct 7, 2021

/retest

@lizhuqi
Contributor Author

lizhuqi commented Oct 7, 2021

/test pull-gcp-compute-persistent-disk-csi-driver-kubernetes-integration

@lizhuqi
Contributor Author

lizhuqi commented Oct 8, 2021

/cc @saikat-royc

// nodes tracks nodes that have seen attach/detach operation failures;
// those nodes are rate limited for all attach/detach operations until
// an attach/detach operation on them succeeds.
nodes map[string]bool
Contributor

"nodes" could have a more descriptive name, e.g. publishErrorsSeenOnNode.

@lizhuqi
Contributor Author

lizhuqi commented Nov 5, 2021

/retest

t.Fatalf("Only %v requests queued up for node that has seen an error", gceDriver.cs.queue.Len())
}
}

Contributor

If we wait a little longer, shouldn't the queue empty (because the fake provider will let the operation succeed)?

It's also possible that this test will be flaky. It would be more rigorous to use a fake rate limiter that could be controlled explicitly. I don't know if that's straightforward to do, so maybe we can see how this goes (although you may be creating more work for yourself in the long run...).

Contributor Author

The queue will not be empty, because the request doesn't succeed and is re-queued. The request doesn't succeed because I am using a real cloud provider, so it fails at the get-volume-lock step. The test verifies that when requests don't go through, all of them fail and new requests are still queued.

Maybe I can add more test cases using a fake cloud provider so the request succeeds, but that path is quite dynamic and asynchronous, so nothing can be guaranteed.

Contributor Author

Added more tests.

@lizhuqi lizhuqi changed the title [WIP] Attach/Detach back off Attach/Detach back off Nov 16, 2021
@k8s-ci-robot k8s-ci-robot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Nov 16, 2021
@mattcary
Contributor

/lgtm
/approve

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Nov 30, 2021
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull request has been approved by: lizhuqi, mattcary

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Nov 30, 2021
4 participants