Set MaxVolumesPerNode on NodeGetInfo call based on Node Type #19
Hi, I want to take this as my first issue, and I want to make sure I did not misunderstand the problem.
Hi @Mockery-Li! Thank you for your interest :)

What this issue entails is modifying NodeGetInfo to return a real value for MaxVolumesPerNode. The max number of volumes can be determined from the machine type: see the "Persistent disk limits" section linked in the issue. You should be able to get the machine type through the metadata service, so this should just be a matter of piping the metadata service through to the GCENodeServer.

Hope that is enough information to get you started. Feel free to reach out to me on the Kubernetes Slack.
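For reference, a minimal sketch of how the machine type could be read from the GCE metadata server. This queries the documented instance metadata endpoint directly; the driver's own metadata service would presumably wrap something equivalent, and the function name here is made up for illustration.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

// getMachineType queries the GCE metadata server for the instance's machine
// type. The returned value looks like
// "projects/<project-number>/zones/<zone>/machineTypes/n1-standard-4",
// so we keep only the final path component.
func getMachineType() (string, error) {
	req, err := http.NewRequest("GET",
		"http://metadata.google.internal/computeMetadata/v1/instance/machine-type", nil)
	if err != nil {
		return "", err
	}
	// The metadata server requires this header on every request.
	req.Header.Set("Metadata-Flavor", "Google")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	full := strings.TrimSpace(string(body))
	parts := strings.Split(full, "/")
	return parts[len(parts)-1], nil
}

func main() {
	mt, err := getMachineType()
	if err != nil {
		fmt.Println("not running on GCE or metadata server unreachable:", err)
		return
	}
	fmt.Println("machine type:", mt)
}
```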
You can probably also copy a lot of the functionality from here.
That's true, but I think there are a few differences between the current state (and the driver) and that code.
Thank you. I'm trying both methods.
Currently NodeGetInfoResponse returns the default of 0 for MaxVolumesPerNode, so the CO will decide how many volumes can be published on a node. For GCE we need to return a different number based on node type, as the max attachable volumes depends on the number of vCPUs the instance has.

For the actual limits, see the "Persistent disk limits" section of https://cloud.google.com/compute/docs/disks/.
You should be able to GET the instance from the cloud and pull the number of vCPUs from that.
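As a rough sketch of the change, assuming a hypothetical metadata abstraction with GetName/GetMachineType helpers and using illustrative limits (the authoritative numbers, keyed by machine type or vCPU count, are in the "Persistent disk limits" docs above), NodeGetInfo could look something like this:

```go
package gceGCEDriver // hypothetical package name

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

// metadataService is a stand-in for the driver's metadata abstraction;
// GetName and GetMachineType are assumed helpers, not the driver's real API.
type metadataService interface {
	GetName() string
	GetMachineType() string
}

// GCENodeServer is reduced here to the one field this sketch needs.
type GCENodeServer struct {
	Metadata metadataService
}

// volumeLimit maps a machine type to an attachable-disk limit. The values
// below are illustrative; the authoritative numbers live in the
// "Persistent disk limits" section of the GCE disks documentation.
func volumeLimit(machineType string) int64 {
	switch machineType {
	case "f1-micro", "g1-small":
		return 16 // shared-core machine types have a lower limit
	default:
		return 128
	}
}

// NodeGetInfo reports MaxVolumesPerNode instead of leaving it at the
// default of 0, so the CO can respect the node's attach limit.
func (ns *GCENodeServer) NodeGetInfo(ctx context.Context, req *csi.NodeGetInfoRequest) (*csi.NodeGetInfoResponse, error) {
	return &csi.NodeGetInfoResponse{
		NodeId:            ns.Metadata.GetName(),
		MaxVolumesPerNode: volumeLimit(ns.Metadata.GetMachineType()),
	}, nil
}
```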
Bonus: We seem to need information from the node object a lot; caching the relevant information somewhere would be nice. Maybe in the GCENodeServer object.
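For the bonus point, one possible shape for the caching (all names here are made up for illustration): fetch the instance details once and keep them on the node server, so later calls such as NodeGetInfo do not re-query the metadata server.

```go
package main

import (
	"fmt"
	"sync"
)

// nodeInfo holds the instance details the node server keeps asking for.
type nodeInfo struct {
	Name        string
	Zone        string
	MachineType string
}

// GCENodeServer caches nodeInfo after the first lookup.
type GCENodeServer struct {
	once   sync.Once
	cached nodeInfo
	err    error

	// fetch is the uncached lookup, e.g. a call to the metadata service.
	fetch func() (nodeInfo, error)
}

// getNodeInfo returns the cached instance details, fetching them exactly once.
func (ns *GCENodeServer) getNodeInfo() (nodeInfo, error) {
	ns.once.Do(func() {
		ns.cached, ns.err = ns.fetch()
	})
	return ns.cached, ns.err
}

func main() {
	ns := &GCENodeServer{
		fetch: func() (nodeInfo, error) {
			// Stand-in for the real metadata lookup.
			return nodeInfo{Name: "node-1", Zone: "us-central1-a", MachineType: "n1-standard-4"}, nil
		},
	}
	info, _ := ns.getNodeInfo()
	fmt.Println(info.MachineType)
}
```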