Driver doesn't throw a useful error with unformatted readonly disks #296
Comments
/cc @msau42 @saad-ali @hantaowang
It seems like an oversight that `NodeStageVolume` doesn't pass in a readonly flag? Agreed that since the end behavior is that they both result in an error, it's not high priority to fix. Is there any way we can return a better error message than "exit code 1"? Can we not get the actual error code/msg from the format command?
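On surfacing the actual error: one common approach is to capture the format command's combined output and wrap it into the returned error, rather than letting only "exit status 1" bubble up. A minimal sketch; the `formatDisk` helper, device path, and mkfs invocation are all illustrative, not the driver's actual code:

```go
package main

import (
	"fmt"
	"os/exec"
)

// formatDisk runs mkfs and, on failure, returns an error that includes
// the command's stdout/stderr instead of only "exit status 1".
func formatDisk(device, fstype string) error {
	cmd := exec.Command("mkfs."+fstype, device)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("formatting %s as %s failed: %v, output: %q",
			device, fstype, err, out)
	}
	return nil
}

func main() {
	if err := formatDisk("/dev/sdb", "ext4"); err != nil {
		// The wrapped output would include mkfs's own complaint,
		// e.g. that the device is read-only.
		fmt.Println(err)
	}
}
```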
This is the behavior of the in-tree plugin :) we could open a separate issue there.
I was referring to the CSI driver:
/help
@davidz627: Please ensure the request meets the requirements listed here. If this request no longer meets these requirements, the label can be removed. In response to this:

> /help
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
adds the detailed error output that contains the "cant format readonly disk" message.
/assign
### Pre-formatted Disks

Disks that have been pre-formatted behave "properly" with `readOnly` on the PV Spec.

Caveat: `NodeStage` technically doesn't mount the disk with the `ro` option (from `Pod.spec.volumes.persistentVolumeClaim.readOnly`), but the `NodePublish` bind mount should bind mount as `ro`.
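For reference, a minimal sketch of what a read-only bind mount looks like at the mount level. This is illustrative only, not the driver's actual `NodePublish` code; the paths are made up, and note that on Linux `ro` is ignored on the initial bind, so a remount is needed to enforce it:

```go
package main

import (
	"fmt"
	"os/exec"
)

// bindMountRO bind-mounts stagingPath onto targetPath read-only.
// On Linux the "ro" flag is silently ignored on the initial bind
// mount, so a second remount step is needed to actually enforce it.
func bindMountRO(stagingPath, targetPath string) error {
	if out, err := exec.Command("mount", "--bind", stagingPath, targetPath).CombinedOutput(); err != nil {
		return fmt.Errorf("bind mount failed: %v, output: %q", err, out)
	}
	if out, err := exec.Command("mount", "-o", "remount,ro,bind", targetPath).CombinedOutput(); err != nil {
		return fmt.Errorf("read-only remount failed: %v, output: %q", err, out)
	}
	return nil
}

func main() {
	// Hypothetical kubelet-style paths, for illustration only.
	fmt.Println(bindMountRO("/var/lib/kubelet/staging/vol1", "/var/lib/kubelet/pods/p1/volumes/vol1"))
}
```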
### Unformatted Disks

#### This CSI Driver

When an unformatted disk is attached as `readOnly` with an AccessMode of Mount, the driver will try to format a filesystem onto the disk (regardless of whether `Pod.spec.volumes.persistentVolumeClaim.readOnly` is set) and fail with an unhelpful error from the format command.
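One possible shape of a fix is to detect this case before formatting: if the disk has no existing filesystem and the mount was requested read-only, return a descriptive gRPC error instead of running mkfs. A rough sketch under those assumptions; `existingFormat`, `stageVolume`, and the `readOnly` plumbing are hypothetical, not the driver's current API:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// existingFormat returns the filesystem type that blkid reports for a
// device, or "" if the disk appears to be unformatted. Simplified: a
// non-zero blkid exit is treated as "no filesystem found".
func existingFormat(device string) (string, error) {
	out, err := exec.Command("blkid", "-o", "value", "-s", "TYPE", device).Output()
	if err != nil {
		return "", nil
	}
	return strings.TrimSpace(string(out)), nil
}

// stageVolume shows the proposed early check: a read-only request on a
// disk with no filesystem fails with a descriptive gRPC error instead
// of attempting (and failing) a format.
func stageVolume(device string, readOnly bool) error {
	fs, err := existingFormat(device)
	if err != nil {
		return status.Errorf(codes.Internal, "checking format of %s: %v", device, err)
	}
	if fs == "" && readOnly {
		return status.Errorf(codes.InvalidArgument,
			"disk %s is unformatted but the volume was requested read-only; formatting would be required", device)
	}
	// ...otherwise format (if needed) and mount as usual.
	return nil
}

func main() {
	fmt.Println(stageVolume("/dev/sdb", true))
}
```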
#### In-tree Plugin

When an unformatted disk is attached as `readOnly` with an AccessMode of Mount, the behavior of the in-tree plugin (if `Pod.spec.volumes.persistentVolumeClaim.readOnly` is set) is to throw a nice error; if `Pod.spec.volumes.persistentVolumeClaim.readOnly` is not set, we instead get the raw failure from the format command.
### Conclusions

AFAIK, in the end the only difference is the error messages, but I thought I'd write this up since I did the exploration.

To give better error messages, `NodeStage` needs to know that the mount was requested as ReadOnly so that it can detect this case and throw a nicer error. (If `ro` is put into the mount `options` for `FormatAndMount`, we get the same error behavior as in-tree.)

There are 2 options (see the sketch after this list):

1. Add the `ro` option to the `MountFlags` passed in to the driver from the Kubernetes side that comes from the `pvSpec`. However, `ro` in `MountFlags` only works for the PD model (?).
2. Add a `readOnly` bool to the `NodeStageVolume` operation that is propagated from the `pvSpec`.
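A minimal sketch of what option 1 could look like from the driver side: inspect the `MountFlags` on the `VolumeCapability` passed to `NodeStageVolume` for `ro`. The `requestedReadOnly` helper is hypothetical; only the CSI `VolumeCapability` types are real:

```go
package main

import (
	"fmt"

	csi "github.com/container-storage-interface/spec/lib/go/csi"
)

// requestedReadOnly reports whether "ro" appears in the mount flags of
// the capability passed to NodeStageVolume (option 1 above).
func requestedReadOnly(vc *csi.VolumeCapability) bool {
	mnt := vc.GetMount()
	if mnt == nil {
		return false // block volumes carry no mount flags
	}
	for _, f := range mnt.GetMountFlags() {
		if f == "ro" {
			return true
		}
	}
	return false
}

func main() {
	vc := &csi.VolumeCapability{
		AccessType: &csi.VolumeCapability_Mount{
			Mount: &csi.VolumeCapability_MountVolume{MountFlags: []string{"ro"}},
		},
	}
	fmt.Println(requestedReadOnly(vc)) // true
}
```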