
I have VMs in zones 1 and 2 and a disk in zone 1, but when I run my script it fails with the following message:

AttachVolume.Attach failed for volume "disk-name" : rpc error: code = Unknown desc = 
Attach volume /subscriptions/subscription-id/resourceGroups/xxxxx_westeurope/providers/Microsoft.Compute/disks/disk-name to instance virtual-machine-name failed with Retriable: false, RetryAfter: 0s, HTTPStatusCode: 400, 
RawError: {
   "error": {
      "code": "BadRequest",
      "message": "Disk /subscriptions/subscription-id/resourceGroups/xxxxx_westeurope/providers/Microsoft.Compute/disks/disk-name cannot be attached to the VM because it is not in the same zone as the VM. VM zone: '2'. Disk zone: '1'."
   }
}

I've tried just about everything and have no idea how to solve this. Are there known issues with VMs running across multiple zones and disks that exist in a single zone?

[Edit]

It worked until now, and now it fails. I worked around the same issue the other day by scaling the k8s deployment to 0 and restarting, but that isn't working this time.

Domenico

1 Answer


LRS disks are not zone-redundant and must exist in the same zone as the VM they are attached to. ZRS disks should work, as per the AKS docs:

Azure disk availability zone support

Volumes that use Azure managed LRS disks are not zone-redundant resources; those volumes cannot be attached across zones and must be co-located in the same zone as a given node hosting the target pod. Volumes that use Azure managed ZRS disks (supported by Azure Disk CSI driver v1.5.0+) are zone-redundant resources; those volumes can be scheduled on all zone and non-zone agent nodes.
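If your cluster runs Azure Disk CSI driver v1.5.0 or later, you can define a StorageClass backed by a ZRS disk SKU so that the volume can attach to nodes in any zone. A minimal sketch, assuming the class name `managed-csi-zrs` (an illustrative name, not a built-in AKS class):

```yaml
# Sketch: zone-redundant StorageClass (requires Azure Disk CSI driver v1.5.0+).
# The name "managed-csi-zrs" is illustrative.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: managed-csi-zrs
provisioner: disk.csi.azure.com
parameters:
  skuName: StandardSSD_ZRS   # or Premium_ZRS; both are zone-redundant SKUs
reclaimPolicy: Delete
allowVolumeExpansion: true
```

PVCs that reference this class provision disks that are not pinned to a single zone, which avoids the "not in the same zone as the VM" attach error.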

Kubernetes has been aware of Azure availability zones since version 1.12. You can deploy a PersistentVolumeClaim object referencing an Azure managed disk in a multi-zone AKS cluster, and Kubernetes will take care of scheduling any pod that claims this PVC in the correct availability zone.
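For LRS disks, that zone-aware scheduling generally relies on the StorageClass using `volumeBindingMode: WaitForFirstConsumer`, which delays disk creation until the pod is scheduled, so the disk lands in the pod's zone rather than the other way around. A hedged sketch (the class name is illustrative):

```yaml
# Sketch: delay provisioning until a consuming pod is scheduled,
# so the LRS disk is created in that pod's zone.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: managed-csi-wffc   # illustrative name
provisioner: disk.csi.azure.com
parameters:
  skuName: StandardSSD_LRS
volumeBindingMode: WaitForFirstConsumer
```

Note this helps at provisioning time; an already-existing zone-1 LRS disk still cannot attach to a zone-2 node, which matches the error you're seeing.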

Sam Cogan