Description
Steps To Reproduce
- Declare a ReadWriteOnce storageClass for the MSSQL-related volumes
- Configure the volumes in the chart as follows:
# The MSSQL volumes for the PVCs
volume:
  backups:
    # Use an existing PVC by specifying the name.
    # existingClaim: claimName
    # Override the accessMode specified in general.volumeAccessMode
    accessMode: ReadWriteOnce
    # Override the storageClass specified in sharedStorageClassName
    storageClass: "rook-cephrbd-bitwarden-mssql"
    size: 1Gi
    labels: {}
  data:
    # Use an existing PVC by specifying the name.
    # existingClaim: claimName
    # Override the accessMode specified in general.volumeAccessMode
    accessMode: ReadWriteOnce
    # Override the storageClass specified in sharedStorageClassName
    storageClass: "rook-cephrbd-bitwarden-mssql"
    size: 10Gi
    labels: {}
  log:
    # Use an existing PVC by specifying the name.
    # existingClaim: claimName
    # Override the accessMode specified in general.volumeAccessMode
    accessMode: ReadWriteOnce
    # Override the storageClass specified in sharedStorageClassName
    storageClass: "rook-cephrbd-bitwarden-mssql"
    size: 10Gi
    labels: {}
- Try to upgrade and watch it time out (example invocation below)
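For reference, the upgrade was invoked roughly as follows; the release name, namespace, and values file here are assumptions, not taken from the actual deployment:

# Hypothetical invocation -- adjust release name, namespace, and values file
helm upgrade bitwarden bitwarden/self-host \
  --namespace bitwarden \
  --values values.yaml \
  --wait --timeout 10m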
Expected Result
The helm chart should upgrade successfully.
Actual Result
The helm upgrade is stuck in the post-install-db-migrator-job: its initContainer tries to mount the MSSQL data volume and cannot, because the volume is ReadWriteOnce and is already attached to the node running the MSSQL pod.
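While the upgrade hangs, the migrator pod sits in Init/ContainerCreating and its events show the failed attach/mount. Something like the following can confirm this; the namespace here is an assumption:

# Hypothetical commands -- namespace is an assumption
kubectl -n bitwarden get pods                      # migrator pod stuck in Init / ContainerCreating
kubectl -n bitwarden describe pod <migrator-pod>   # events report that the RWO volume cannot be attached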
[...]
volumeMounts:
  {{- if .Values.database.enabled }}
  - name: mssql-data
    mountPath: /db
  {{- end }}
[...]
volumes:
  {{- if .Values.database.enabled }}
  - name: mssql-data
    persistentVolumeClaim:
      claimName: {{ default ( include "bitwarden.mssqlData" . ) .Values.database.volume.data.existingClaim }}
  {{- end }}
[...]
Screenshots or Videos
No response
Additional Context
I am fully aware that Bitwarden requires a ReadWriteMany-capable storageClass for its shared volumes. However, the MSSQL volumes are not shared volumes (and they should not be anyway, as we do not want multiple pods writing to the same database files). Additionally, most storage CSI drivers warn against using ReadWriteMany volumes for database workloads.
This is the case for Ceph, which we are using: CephFS (RWX) is not recommended for this, and it has very poor performance compared to CephRBD (RWO).
The post-install-db-migrator-job only uses the volume to check for the presence of the .mdf file, which does not make much sense IMHO, as the script has already checked that the mssql pod is up:
args: ['
  while [[ $(kubectl get pods -n {{ .Release.Namespace }} -l app={{ template "bitwarden.mssql" . }} -o jsonpath="{.items[*].status.containerStatuses[*].ready}") != "true" ]]; do sleep 1; done
  echo "SQL Ready!"
  while [[ $(kubectl get pods -n {{ .Release.Namespace }} -l app={{ template "bitwarden.admin" . }} -o jsonpath="{.items[*].status.containerStatuses[*].ready}") != "true" ]]; do sleep 1; done
  echo "Admin Ready!"
  while [ ! -f /db/vault.mdf ]; do sleep 1; done
  echo "DB Ready!"
']
Removing this check (along with the mssql-data volume mount that exists only to support it) would be enough to make this chart work with ReadWriteOnce MSSQL volumes; a sketch follows.
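A minimal sketch of the job's wait script with the file check dropped, assuming the pod readiness checks alone are sufficient to gate the migration (this is the proposed change, not the chart's current code; the mssql-data volumeMount and volume would be removed from the job template as well):

args: ['
  while [[ $(kubectl get pods -n {{ .Release.Namespace }} -l app={{ template "bitwarden.mssql" . }} -o jsonpath="{.items[*].status.containerStatuses[*].ready}") != "true" ]]; do sleep 1; done
  echo "SQL Ready!"
  while [[ $(kubectl get pods -n {{ .Release.Namespace }} -l app={{ template "bitwarden.admin" . }} -o jsonpath="{.items[*].status.containerStatuses[*].ready}") != "true" ]]; do sleep 1; done
  echo "Admin Ready!"
']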
Chart Version
self-host-2024.8.0
Environment Details
- K8s
Issue Tracking Info
- I understand that work is tracked outside of Github. A PR will be linked to this issue should one be opened to address it, but Bitwarden doesn't use fields like "assigned", "milestone", or "project" to track progress.