Summary
Pods with `HostToContainer` mount propagation do not receive existing overlay mounts on a GCP PD.
Unclear if #95 is related.
Steps to Reproduce
We have one pod, `mounter`, that mounts a GCP PD on which it creates overlay mounts. It is configured with `Bidirectional` mount propagation. The PVC and mounter Deployment are in pvc-mounter.yml; a sketch is shown below.
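A minimal sketch of what pvc-mounter.yml might look like (the PVC name, image, StorageClass, and overlay directories here are illustrative assumptions, not the exact manifest):

```yaml
# Hypothetical reconstruction of pvc-mounter.yml; names and image are assumptions.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: some-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: standard        # provisioned by kubernetes.io/gce-pd
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mounter
spec:
  replicas: 1
  selector:
    matchLabels: {app: mounter}
  template:
    metadata:
      labels: {app: mounter}
    spec:
      containers:
      - name: mounter
        image: alpine:3.12
        securityContext:
          privileged: true          # Bidirectional propagation requires a privileged container
        command: ["/bin/sh", "-c"]
        args:
        - |
          # Create an overlay mount under each volume, then keep the pod alive.
          for d in /some-pvc /some-hostpath; do
            mkdir -p $d/lower $d/upper $d/work $d/mnt
            mount -t overlay overlay \
              -o lowerdir=$d/lower,upperdir=$d/upper,workdir=$d/work $d/mnt
          done
          mount | grep overlay      # shows up in kubectl logs to verify the mounts
          tail -f /dev/null
        volumeMounts:
        - name: pd
          mountPath: /some-pvc
          mountPropagation: Bidirectional
        - name: host
          mountPath: /some-hostpath
          mountPropagation: Bidirectional
      volumes:
      - name: pd
        persistentVolumeClaim:
          claimName: some-pvc
      - name: host
        hostPath:
          path: /some-hostpath
          type: DirectoryOrCreate
```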
kubectl create ns vt
kubectl apply -n vt -f pvc-mounter.yml
# wait for pod to come up then check logs to verify mounts were created
kubectl logs -n vt deployment/mounter
After creating the overlay mounts, we create a second pod, `consumer`, which is configured with `HostToContainer` mount propagation. Its manifest is consumer.yml; a sketch follows.
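A matching sketch of consumer.yml, using the same hypothetical names as above. Note that with a `ReadWriteOnce` PD both Deployments have to land on the same node; scheduling constraints are omitted here:

```yaml
# Hypothetical reconstruction of consumer.yml; names and image are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: consumer
spec:
  replicas: 1
  selector:
    matchLabels: {app: consumer}
  template:
    metadata:
      labels: {app: consumer}
    spec:
      containers:
      - name: consumer
        image: alpine:3.12
        command: ["/bin/sh", "-c"]
        args:
        - |
          # Log which overlay mounts are visible inside this container.
          mount | grep overlay || echo "no overlay mounts visible"
          tail -f /dev/null
        volumeMounts:
        - name: pd
          mountPath: /some-pvc
          mountPropagation: HostToContainer
        - name: host
          mountPath: /some-hostpath
          mountPropagation: HostToContainer
      volumes:
      - name: pd
        persistentVolumeClaim:
          claimName: some-pvc
      - name: host
        hostPath:
          path: /some-hostpath
          type: DirectoryOrCreate
```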
kubectl apply -n vt -f consumer.yml
# check logs to see that the mount under `some-pvc` did not propagate.
# The mount under `some-hostpath` does propagate
kubectl logs -n vt deployment/consumer
Expected
Mounts under `/some-pvc` propagate to all future containers.
Actual
Mounts under `/some-pvc` do not propagate to future containers; they only propagate to containers that existed before the mount was created.
Other Comments
Our initial experiments were with GCP PDs. We then tried `hostPath` and found that mount propagation worked as expected. We haven't tried other CSI drivers. Not sure what information is relevant to y'all, so here's some of what we thought may be useful:
Node Version: v1.15.9-gke.8
Master Version: 1.15.12-gke.2
Side note: after following the reproduction steps, we try to clean up everything in the namespace, but everything hangs in a `Terminating` state. This might be a more Kubernetes-specific issue, but we're unsure what's causing the various objects (Deployments, Pods, PVCs, PVs, etc.) to get stuck in `Terminating` and never finish. We can eventually clear everything out with `--force --grace-period=0` on all the objects.
As you've described it, this seems to be an issue with the in-tree driver and not this CSI driver? The StorageClass above uses `kubernetes.io/gce-pd`, which is the in-tree driver.
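For context, the provisioner string is what identifies which driver a StorageClass uses. A minimal sketch (the class name and parameters here are assumptions, not the reporter's actual class):

```yaml
# In-tree GCE PD StorageClass; the provisioner field identifies the driver.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/gce-pd   # in-tree driver
parameters:
  type: pd-standard
# The CSI driver registers a different provisioner: pd.csi.storage.gke.io
```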