With version 1.26.1 of the driver, we experience I/O errors on the mount point after some time.
We suspect that the expiration of the token obtained through workload identity is not taken into account. Looking at the code, we see no renewal of the token once it has been obtained, so when it expires, access to the storage account is no longer possible, causing an application outage.
The driver should renew its token before it expires to prevent this issue.
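To illustrate the kind of proactive renewal we have in mind, here is a minimal sketch assuming the driver caches a token acquired via azidentity's WorkloadIdentityCredential; the `tokenCache` and `refreshLoop` names are hypothetical and not part of the driver's code:

```go
package main

import (
	"context"
	"log"
	"sync"
	"time"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
)

// tokenCache holds the most recently acquired storage token (hypothetical).
type tokenCache struct {
	mu    sync.RWMutex
	token azcore.AccessToken
}

func (c *tokenCache) set(t azcore.AccessToken) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.token = t
}

// refreshLoop re-acquires the workload identity token well before ExpiresOn,
// instead of fetching it only once at mount time.
func refreshLoop(ctx context.Context, cred *azidentity.WorkloadIdentityCredential, cache *tokenCache) {
	scopes := []string{"https://storage.azure.com/.default"}
	for {
		tok, err := cred.GetToken(ctx, policy.TokenRequestOptions{Scopes: scopes})
		if err != nil {
			log.Printf("token refresh failed, retrying: %v", err)
			time.Sleep(time.Minute)
			continue
		}
		cache.set(tok)
		// Renew a few minutes before expiry so the mount never uses a stale token.
		wait := time.Until(tok.ExpiresOn) - 5*time.Minute
		if wait < time.Minute {
			wait = time.Minute
		}
		select {
		case <-ctx.Done():
			return
		case <-time.After(wait):
		}
	}
}

func main() {
	// Uses AZURE_CLIENT_ID, AZURE_TENANT_ID and AZURE_FEDERATED_TOKEN_FILE
	// injected by the workload identity webhook.
	cred, err := azidentity.NewWorkloadIdentityCredential(nil)
	if err != nil {
		log.Fatal(err)
	}
	cache := &tokenCache{}
	go refreshLoop(context.Background(), cred, cache)
	select {} // keep running; the cached token stays fresh
}
```

The key point is simply that the token is re-acquired before it expires rather than obtained a single time when the volume is mounted.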
How to reproduce it:
Using workload identity with a managed identity, mount a blob container in the pod. Wait 24h or more; after that, any access to the mounted volume from the pod results in an I/O error.
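For what it's worth, here is a small probe we could run inside the pod to spot when the mount breaks; the `/mnt/blob` path is only an example and should be replaced with the actual volumeMount path:

```go
package main

import (
	"log"
	"os"
	"time"
)

func main() {
	const probe = "/mnt/blob/healthcheck" // example path on the CSI-mounted volume
	for {
		if _, err := os.ReadFile(probe); err != nil {
			log.Printf("read failed (I/O error expected after token expiry): %v", err)
		} else {
			log.Printf("read ok")
		}
		time.Sleep(10 * time.Minute)
	}
}
```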
Anything else we need to know?:
We had to roll back to 1.25.3, which predates the workload identity implementation in the driver. With that version, the issue does not occur.
Environment:
- CSI Driver version: v1.26.1
- Kubernetes version (use kubectl version): 1.30.7
- OS (e.g. from /etc/os-release): AKSUbuntu-2204gen2containerd-202501.22.0