HostToContainer Mount Propagation not working #611

Closed
taylorsilva opened this issue Sep 22, 2020 · 2 comments

taylorsilva commented Sep 22, 2020

Summary

Pods with HostToContainer mount propagation do not receive existing overlay mounts on a GCP PD.
Unclear if #95 is related.

Steps to Reproduce

We have one pod, mounter, that has a GCP PD on which it creates overlay mounts. It is configured with Bidirectional mount propagation (PVC and mounter deployment manifest; a rough sketch follows the commands below).

kubectl create ns vt
kubectl apply -n vt -f pvc-mounter.yml
# wait for pod to come up then check logs to verify mounts were created
kubectl logs -n vt deployment/mounter
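
For reference, a minimal sketch of what a pvc-mounter.yml along these lines could look like. The actual manifest is linked above; the image, commands, and resource names here are illustrative assumptions. Note that Bidirectional propagation requires a privileged container.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: some-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: standard
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mounter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mounter
  template:
    metadata:
      labels:
        app: mounter
    spec:
      containers:
        - name: mounter
          image: ubuntu:20.04            # illustrative image
          securityContext:
            privileged: true             # required for Bidirectional propagation
          command: ["/bin/bash", "-c"]
          args:
            - |
              # create an overlay mount under the PD-backed volume, then idle
              mkdir -p /some-pvc/{lower,upper,work,merged}
              mount -t overlay overlay \
                -o lowerdir=/some-pvc/lower,upperdir=/some-pvc/upper,workdir=/some-pvc/work \
                /some-pvc/merged
              mount | grep overlay
              sleep infinity
          volumeMounts:
            - name: pd
              mountPath: /some-pvc
              mountPropagation: Bidirectional
      volumes:
        - name: pd
          persistentVolumeClaim:
            claimName: some-pvc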

After creating the overlay mounts we create a second pod, consumer, which is configured with HostToContainer mount propagation (consumer manifest; a rough sketch follows the commands below).

kubectl apply -n vt -f consumer.yml
# check logs to see that the mount under `some-pvc` did not propagate.
# The mount under `some-hostpath` does propagate
kubectl logs -n vt deployment/consumer
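
Similarly, a rough sketch of what a consumer.yml could look like. Again illustrative only — the actual manifest is linked above, and details such as scheduling both pods onto the same node (needed for a ReadWriteOnce PD) are omitted here.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: consumer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: consumer
  template:
    metadata:
      labels:
        app: consumer
    spec:
      containers:
        - name: consumer
          image: ubuntu:20.04            # illustrative image
          command: ["/bin/bash", "-c"]
          args:
            - |
              # log which mounts are visible inside this container
              mount | grep -E 'some-pvc|some-hostpath'
              sleep infinity
          volumeMounts:
            - name: pd
              mountPath: /some-pvc
              mountPropagation: HostToContainer
            - name: hp
              mountPath: /some-hostpath
              mountPropagation: HostToContainer
      volumes:
        - name: pd
          persistentVolumeClaim:
            claimName: some-pvc
        - name: hp
          hostPath:
            path: /tmp/some-hostpath     # illustrative host path
            type: DirectoryOrCreate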

Expected

Mounts under /some-pvc propagate to all future containers.

Actual

Mounts under /some-pvc do not propagate to containers created later. They only propagate to containers that existed before the mount was created.
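
One way to see this directly (a hedged example; the pod names are placeholders to fill in from `kubectl get pods -n vt`):

# the overlay mount is visible in the pod that created it
kubectl exec -n vt <mounter-pod> -- mount | grep /some-pvc
# but a consumer pod started after the mount was created does not see it
kubectl exec -n vt <consumer-pod> -- mount | grep /some-pvc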

Other Comments

Our initial experiments were with GCP PDs. We then tried hostPath and found that mount propagation worked as expected. We haven't tried other CSI drivers. Not sure what information is relevant to y'all, so here's some of what we thought might be useful.

$ k describe sc/standard
Name:                  standard
IsDefaultClass:        Yes
Annotations:           storageclass.beta.kubernetes.io/is-default-class=true
Provisioner:           kubernetes.io/gce-pd
Parameters:            type=pd-standard
AllowVolumeExpansion:  <unset>
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>

Node Version: v1.15.9-gke.8
Master Version: 1.15.12-gke.2

Side note: after following the reproduction steps we try to clean up everything in the namespace, but everything hangs in a terminating state. This might be a more general k8s issue, but we're unsure what's causing all the various objects (deployments, pods, PVCs, PVs, etc.) to get stuck terminating and never finish. We can eventually clear everything out with --force --grace-period=0 on all the objects.
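
For example, something along these lines (same flags as above; exact object names depend on the manifests):

kubectl delete -n vt deployment/mounter deployment/consumer --force --grace-period=0
kubectl delete -n vt pvc --all --force --grace-period=0
kubectl delete ns vt --force --grace-period=0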

mattcary (Contributor) commented

Thanks for the issue!

As you've described it, this seems to be an issue with the in-tree driver and not this CSI driver? The StorageClass above uses the kubernetes.io/gce-pd provisioner, which is the in-tree driver.
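
For comparison, a StorageClass targeting this CSI driver would use the pd.csi.storage.gke.io provisioner instead — roughly along these lines (the name and binding mode below are just illustrative):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-gce-pd               # illustrative name
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-standard
volumeBindingMode: WaitForFirstConsumer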

xtreme-sameer-vohra commented

Thanks @mattcary. Created a new issue here.
