Describe the solution you'd like
Once RequiresRepublish is implemented (#585), the Kubelet will supply a ServiceAccount token every ~75 seconds for each volume, even though the underlying token won't change until roughly 80% (with jitter) of the SA token's lifetime has elapsed. (The default minimum/default token lifetimes are 10m/24h, so token rotation happens at ~8m/~19h12m.)
This could drive unnecessary call volume for providers that today synchronously exchange the token for cloud credentials on the assumption that they'll only ever be called during initial setup, and that don't need to refresh the exchanged credentials as frequently as the Kubelet repeats NodePublishVolume calls.
This will naturally lead providers to cache the provided token and cloud credentials, which raises the question of what to use as a cache key.
The current Mount info supplied to providers includes the Pod name/namespace/UID, which is not necessarily enough to uniquely identify a volume for caching a token and/or credential: a pod name can be reused over time (although the UID won't be), and a pod can have multiple volumes, potentially using multiple SecretProviderClasses.
While it's true that providers could use the token itself as a cache key, that's probably a bad idea: the token is much more likely to be logged, and the cloud credentials exchanged for a given token could differ between volume mounts (e.g. pod1/mount1 uses token1 for credentials in cloud region us-east-1, while pod1/mount2 uses the same token1 for credentials in us-west-2; the AWS provider allows such a configuration).
To give providers a way to uniquely cache tokens and exchanged cloud credentials, we should pass the VolumeID from the NodePublishVolume request through to the Mount request sent to providers.
Anything else you would like to add:
N/A
Environment:
Secrets Store CSI Driver version: (use the image tag): Any
Kubernetes version: (use kubectl version): Any modern version