Add volume level serialization for controller operations #316
Conversation
Hi @hantaowang. Thanks for your PR. I'm waiting for a kubernetes-sigs or kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/ok-to-test
/assign @davidz627
Force-pushed 896ea45 to 8f91f4e.
I created a VolumeLock struct that abstracts away the logic of the synchronized map. The earlier node-level parallelization now uses it too. Let me know if you think an actual interface is required as well.
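For context, a minimal sketch of what a volume-level lock along these lines could look like; the names (VolumeLocks, TryAcquire, Release) and the map-based implementation are assumptions for illustration, not necessarily this PR's exact code.

package common

import "sync"

// VolumeLocks tracks volumes that have an operation in flight so that a
// concurrent operation on the same volume can be rejected with Aborted.
type VolumeLocks struct {
	mux   sync.Mutex
	locks map[string]struct{}
}

func NewVolumeLocks() *VolumeLocks {
	return &VolumeLocks{locks: make(map[string]struct{})}
}

// TryAcquire returns false if an operation on volumeID is already in progress.
func (vl *VolumeLocks) TryAcquire(volumeID string) bool {
	vl.mux.Lock()
	defer vl.mux.Unlock()
	if _, ok := vl.locks[volumeID]; ok {
		return false
	}
	vl.locks[volumeID] = struct{}{}
	return true
}

// Release marks the operation on volumeID as finished.
func (vl *VolumeLocks) Release(volumeID string) {
	vl.mux.Lock()
	defer vl.mux.Unlock()
	delete(vl.locks, volumeID)
}

A controller or node operation would call TryAcquire at the start, return codes.Aborted if it fails, and defer Release otherwise.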
core logic mostly LGTM. some comments on naming and testing.
Thanks!
type FakeBlockingCloudProvider struct {
	*FakeCloudProvider
	snapshotToExec chan SnapshotSourceAndTarget
	readyToExecute chan struct{}
nit: naming. readyToExecute doesn't really tell me much about who is ready to execute what. Maybe createSnapshotCalled is more accurate?
@davidz627: GitHub didn't allow me to request PR reviews from the following users: misterikkit. Note that only kubernetes-sigs members and repo collaborators can review this PR, and authors cannot review their own PRs.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
// Check that vol1CreateSnapshotReqB is rejected, due to same volume ID
// Also allow vol1CreateSnapshotReqB to complete, in case it is allowed to Mount
snapshotToExecute <- sourceAndTargetForRequest(vol1CreateSnapshotReqB)
The pattern you are using here can be described as, "Signal on a channel once the request has begun, and wait for a return signal before allowing the request to complete."
I like this pattern.
A lot of the complexity in this test comes from the fact that you are using two independent channels to do this. An alternative is to use a channel of channels. That is, each request starts by creating a reply channel and sending it to the unit test through the "main" channel. Then each request waits for a reply on its own channel, and the test has clearer control over which request is allowed to proceed.
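For illustration, a minimal sketch of that channel-of-channels idea; the type and method names here are made up for the example and are not the driver's actual fake.

// Each blocked call hands the test its own reply channel through the "main"
// channel, then waits on that reply channel before completing.
type fakeBlockingSnapshotter struct {
	readyToExecute chan chan struct{} // "main" channel: one entry per call that reached the fake
}

func (f *fakeBlockingSnapshotter) CreateSnapshot(snapshotName string) error {
	execute := make(chan struct{})
	f.readyToExecute <- execute // tell the test this call has started
	<-execute                   // block until the test releases this specific call
	return nil                  // a real fake would delegate to the embedded fake cloud provider here
}

The test can then do gate := <-fake.readyToExecute once the first request reaches the cloud provider, issue the conflicting request, assert it is rejected, and finally close(gate) to let the first request finish.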
I just tried implementing this, but it actually doesn't work because of how the call is set up. The test interface blocks in the cloud provider function call, while the logic that aborts the operation occurs in the layer above that. So if the operation is aborted, it will never make it to the cloud provider call. This means that the channel of channels won't get a chance to create and send the reply channel.
The problem arises from the fact that merging the two independent channels means the caller needs to block on the "main" channel in order to get the channel that allows the call to continue to completion. In the current setup, the operation that should be aborted has the "go ahead" SnapshotSourceAndTarget prequeued so that we can directly go and check the error channel. In the merged setup, in order to give the go-ahead we need the reply channel generated by the function call, but because that call is aborted, the reply channel is never created.
I think you can still cause the first call to block while the second call gets an abort error. You don't need to block or unblock any threads for that. I'm a little sleep-deprived, though, so maybe I'm not understanding the concurrency correctly.
Force-pushed adcaa53 to a8fb9fb.
Force-pushed d61ff34 to 130a31e.
this is awesome! Just a couple small final comments
@@ -60,6 +60,12 @@ type GCECompute interface {
	GetSnapshot(ctx context.Context, snapshotName string) (*compute.Snapshot, error)
	CreateSnapshot(ctx context.Context, volKey *meta.Key, snapshotName string) (*compute.Snapshot, error)
	DeleteSnapshot(ctx context.Context, snapshotName string) error
	// Metadata
	GetProject() string
This breaks the abstraction; the project comes from the metadata service, not the Compute service.
pkg/gce-pd-csi-driver/controller.go (Outdated)
	return nil, status.Errorf(codes.InvalidArgument, "CreateVolume replication type '%s' is not supported", replicationType)
}

volumeID, err := common.KeyToVolumeID(volKey, gceCS.CloudProvider.GetProject())
gceCS.MetadataService.GetProject()
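In other words, the project lookup stays behind the metadata abstraction rather than the compute client. A hedged sketch of that shape; the interface and call site below are illustrative, not the PR's exact code.

// Metadata abstraction; the Compute wrapper stays unaware of project and zone.
type MetadataService interface {
	GetProject() string
	GetZone() string
}

// Call site in the controller (illustrative):
// volumeID, err := common.KeyToVolumeID(volKey, gceCS.MetadataService.GetProject())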
pkg/gce-pd-csi-driver/controller.go (Outdated)
}

attached, err := diskIsAttachedAndCompatible(deviceName, instance, volumeCapability, readWrite)
if err != nil {
-	return nil, status.Error(codes.AlreadyExists, fmt.Sprintf("Disk %v already published to node %v but incompatbile: %v", volKey.Name, nodeID, err))
+	return nil, status.Errorf(codes.AlreadyExists, "Disk %v already published to node %v but incompatbile: %v", volKey.Name, nodeID, err)
I'm not gonna make you do this now, but in the future this type of unrelated change should go in a separate PR.
+1
The added diffs make it harder to review this PR.
	SourceVolumeId: testVolumeId + "2",
}

runRequest := func(req *csi.CreateSnapshotRequest) chan error {
nit: to make this a little more readable, make the return type <-chan error
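For illustration, a sketch of runRequest with the receive-only return type; the body and the gceDriver.cs receiver are assumptions, not the PR's exact code.

runRequest := func(req *csi.CreateSnapshotRequest) <-chan error {
	response := make(chan error)
	go func() {
		// gceDriver.cs is assumed to be the controller server under test.
		_, err := gceDriver.cs.CreateSnapshot(context.Background(), req)
		response <- err
	}()
	return response
}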
Force-pushed d45e50d to c8759d2.
Force-pushed d248c8f to 0d372dd.
mostly lgtm
Force-pushed 0d372dd to 53a286c.
@@ -39,7 +38,7 @@ type GCENodeServer struct {

// A map storing all volumes with ongoing operations so that additional operations
update the comments after changing to volumeLocks?
I'm not sure that a comment is actually even needed here, as what VolumeLocks does is already explained by the source of common.VolumeLocks.
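For illustration, the field could then carry only a short note; the exact shape below is assumed, not the PR's code.

// volumeLocks serializes operations per volume ID; see common.VolumeLocks.
volumeLocks *common.VolumeLocks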
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: davidz627, hantaowang. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
/kind feature
Per issue #307, using logic similar to #303.
Fixes: #307