If you see the following ProvisioningFailed event, follow the steps below to resolve it.
Warning ProvisioningFailed 3m1s (x9 over 7m16s) disk.csi.azure.com_csi-azuredisk-controller-768c8fcdc9-brl5t_b08ght54-a23re-4j1f-b-78d04a02729f failed to provision volume with StorageClass "default": error getting handle for DataSource Type VolumeSnapshot by Name swift-snapshot-wordpress-data-0-1736250965: requested volume size 5368709120 is less than the size 8589934592 for the source snapshot swift-snapshot-wordpress-data-0-1736250965
Normal ExternalProvisioning 108s (x23 over 7m16s) persistentvolume-controller Waiting for a volume to be created either by the external provisioner 'disk.csi.azure.com' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
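The two byte values in the ProvisioningFailed event translate directly to the GiB sizes discussed below. A quick sketch (the numbers are copied from the event above):

```shell
# Sizes in the event are in bytes; convert to GiB (1 GiB = 1073741824 bytes).
requested=5368709120   # size requested by the new PVC
snapshot=8589934592    # size of the source snapshot
echo "requested: $((requested / 1073741824)) GiB"  # prints "requested: 5 GiB"
echo "snapshot:  $((snapshot / 1073741824)) GiB"   # prints "snapshot:  8 GiB"
```

So the restore is asking for a 5 GiB volume from an 8 GiB snapshot, which the provisioner rejects.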
Problem: The Azure disk backing a Kubernetes Persistent Volume Claim (PVC) was manually resized from 5 GiB to 8 GiB in the Azure portal. However, the PVC and Persistent Volume (PV) objects in Kubernetes still reflect the old size (5 GiB) because Kubernetes is unaware of the manual resize. As a result, a restore from a snapshot of this disk requests the old, smaller size and fails with the event above; the mismatch can also lead to storage allocation inconsistencies and application disruptions.
Solution:
- Check the size of the Azure disk associated with the source PVC in the Azure portal.
- The PVC's spec.volumeName field gives the name of the bound PV; the PV's spec.csi.volumeHandle holds the Azure disk resource ID, whose last path segment is the disk name.
- For example, suppose the disk size in the Azure portal is 8 GiB while the PVC/PV objects still show 5 GiB, because the volume was manually resized from 5 to 8 GiB and the PVC/PV objects are not aware of this change.
- If this is the case, work with the application team to resize the PVC/PV in Kubernetes.
- Steps for resizing the PVC/PV in K8S:
- Edit the PVC and set spec.resources.requests.storage to the size shown for the disk in the Azure portal. (The PVC's StorageClass must have allowVolumeExpansion: true for the resize request to be accepted.)
- If the pods using the volume are managed by a parent object such as a Deployment or StatefulSet, delete the pods and let the controller recreate them. If they are standalone pods without a parent object, back up the pod YAML, delete the pod, and apply the backed-up YAML.
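To confirm the mismatch, the disk behind the PVC can be looked up and its real size read from Azure. A sketch assuming the source PVC is named wordpress-data-0 in the default namespace (both names are illustrative; substitute your own):

```shell
# The PVC's spec.volumeName is the name of the bound PV.
PV=$(kubectl get pvc wordpress-data-0 -n default -o jsonpath='{.spec.volumeName}')

# For disk.csi.azure.com volumes, spec.csi.volumeHandle is the Azure
# disk resource ID (the disk name is its last path segment).
DISK_ID=$(kubectl get pv "$PV" -o jsonpath='{.spec.csi.volumeHandle}')

# Read the actual size from Azure (requires the Azure CLI and read access
# to the disk's resource group).
az disk show --ids "$DISK_ID" --query diskSizeGb -o tsv
```

If the reported size (e.g. 8) is larger than the PVC's spec.resources.requests.storage, proceed with the resize steps above.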
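The PVC edit can also be done non-interactively with a merge patch. A sketch, again assuming the PVC is wordpress-data-0 in the default namespace and the portal shows 8 GiB:

```shell
# Align the PVC request with the size seen in the Azure portal.
kubectl patch pvc wordpress-data-0 -n default --type merge \
  -p '{"spec":{"resources":{"requests":{"storage":"8Gi"}}}}'

# Check the PVC's conditions to follow the resize progress.
kubectl describe pvc wordpress-data-0 -n default
```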
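The pod-recreation step can be sketched as follows (pod and namespace names are illustrative):

```shell
# Pod owned by a Deployment/StatefulSet: deleting it is enough, the
# controller recreates it automatically.
kubectl delete pod wordpress-0 -n default

# Standalone pod: back up its manifest first, then delete and re-apply.
kubectl get pod mypod -n default -o yaml > mypod-backup.yaml
kubectl delete pod mypod -n default
kubectl apply -f mypod-backup.yaml
```

Restarting the pod lets the kubelet complete any pending filesystem expansion on the volume.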