
At a customer's site we have two ESXi servers (plus one backup/test server), working independently (without vSphere). The customer's request is to replace a 500 GB SSD RAID with a 2 TB one.

ESXi is installed on that SSD RAID. We used the third server (the test one) to try out our workflow, as described here: https://kb.vmware.com/s/article/2002461

We dd'ed the original RAID to the new RAID, so we have an exact copy of the original drive.
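
For reference, the block copy itself was a plain dd; a minimal sketch with hypothetical device names (the actual RAID volumes will differ):

dd if=/dev/sda of=/dev/sdb bs=1M conv=noerror,sync status=progress   # sda = old 500 GB RAID, sdb = new 2 TB RAID (hypothetical names)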

We booted ESXi successfully. It lost the mount of the datastore (after the copy the device ID changed, so ESXi treats the volume as a snapshot), but esxcfg-volume -M succeeded, so everything was working again.
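
For reference, the force-mount step looks like this (datastore1 is the label from our setup):

esxcfg-volume -l              # list VMFS volumes detected as snapshots / unresolved
esxcfg-volume -M datastore1   # persistently force-mount by label or UUID, keeping the existing signature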

Now we tried to resize the partition and filesystem containing the datastore.

vmkfstools -P /vmfs/volumes/datastore1

gave us the device name and partition, in this case

naa.600605b00e7ef41025b05be20a1ac269:3

partedUtil get /vmfs/devices/disks/naa.600605b00e7ef41025b05be20a1ac269

returned

243133 255 63 3905945600
1 64 8191 0 128
5 8224 520191 0 0
6 520224 1032191 0 0
7 1032224 1257471 0 0
8 1257504 1843199 0 0
9 1843200 7086079 0 0
2 7086080 15472639 0 0
3 15472640 975699934 0 0
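
As an extra cross-check (not part of the KB workflow), partedUtil getptbl on the same device prints the label type plus the partition type GUIDs:

partedUtil getptbl /vmfs/devices/disks/naa.600605b00e7ef41025b05be20a1ac269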

partedUtil getUsableSectors /vmfs/devices/disks/naa.600605b00e7ef41025b05be20a1ac269

returned

34 3905945566

so, using the last usable sector (3905945566) as the new end of partition 3, we did

partedUtil resize /vmfs/devices/disks/naa.600605b00e7ef41025b05be20a1ac269 3 15472640 3905945566

and, as directed by the KB, we ran

partedUtil fixGpt /vmfs/devices/disks/naa.600605b00e7ef41025b05be20a1ac269

which repairs the backup copy of the partition table at the end of the (now larger) disk.
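
A quick way to verify the result is to re-read the partition table; partition 3 should now end at the last usable sector:

partedUtil get /vmfs/devices/disks/naa.600605b00e7ef41025b05be20a1ac269
# the line for partition 3 should now read: 3 15472640 3905945566 0 0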

Everything we checked again looked perfectly fine and as expected. We have a working drive with a grown partition, and ESXi still reports the ~500 GB datastore as expected, because the final step would be resizing the VMFS.

vmkfstools --growfs /vmfs/devices/disks/naa.600605b00e7ef41025b05be20a1ac269:3 /vmfs/devices/disks/naa.600605b00e7ef41025b05be20a1ac269:3

returns this:

Not found Error: No such file or directory

And this is where we don't know what the problem is. We triple-checked the paths, used /dev/disks instead, cd'ed into the directories and used the file without an absolute path, etc.; no different output. We also tried quoting with " and ', but I don't expect a problem with the colon.

We checked the logs on the scratch partition, but found no reason there.

I searched online for about an hour, but the only similar reports I found either had no responses or referenced the KBs with the hint that the poster had made a mistake somewhere.

So we double-checked all our actions again, and I can't find any mistake I could have made. Essentially this is the same workflow as on any other Linux system: dd, resize the partition, resize the filesystem (unmounted).
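
For comparison, the equivalent steps on a generic Linux box would look roughly like this (illustrative only: hypothetical device names, ext4 assumed, growpart from cloud-guest-utils):

dd if=/dev/sda of=/dev/sdb bs=1M conv=noerror,sync   # block-level clone to the larger disk
growpart /dev/sdb 3                                  # grow partition 3 to the end of the disk
e2fsck -f /dev/sdb3                                  # check the unmounted filesystem
resize2fs /dev/sdb3                                  # grow the filesystem to fill the partition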

(Yes, we also tried it both mounted and unmounted.)

If you can see any mistake that I can't, please tell me. If you need any further information, just ask.

If this test case is successful, the two live servers need to follow in about two weeks, but I need to be sure the process works as expected.

Thank you for any help and have a nice day.

Silberling

1 Answer


Full post on Reddit; sharing the important bit here:

When vmkfstools --growfs "/vmfs/devices/disks/devicename:partition#" "/vmfs/devices/disks/devicename:partition#" says "Not Found", it means that the VMFS volume UUID on that partition does not match, i.e. ESXi treats it as an unresolved snapshot. How that happens, who knows, but the fix is to resignature the volume.
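
One way to confirm this state from the CLI (an extra check, not in the original answer) is to list unresolved VMFS snapshots:

esxcli storage vmfs snapshot list   # the affected volume should show up here with its UUID and label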

In order to do this, you must move/unregister any VMs on the datastore and unmount the datastore. I don't know how to do that from the CLI, so I just used the GUI (a possible CLI route is sketched below).
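
A possible CLI route (a sketch, not from the original answer; the VM IDs are placeholders and the VMs must be powered off first):

vim-cmd vmsvc/getallvms               # list registered VMs with their Vmids and datastores
vim-cmd vmsvc/power.getstate <Vmid>   # confirm each VM on the datastore is powered off
vim-cmd vmsvc/unregister <Vmid>       # remove the VM from inventory; its files stay on the datastore

After resignaturing, the VMs can be re-registered with vim-cmd solo/registervm /vmfs/volumes/<datastore>/<vm>/<vm>.vmx.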

[Edit] The command is: esxcli storage filesystem unmount [-u <UUID> | -l <label> | -p <path>]

Once the datastore is unmounted, run esxcfg-volume --list to verify your UUID/label, then esxcfg-volume --resignature <VMFS UUID|label> to resignature it. Then run:

vmkfstools -V

vmkfstools --growfs "/vmfs/devices/disks/devicename:partition#" "/vmfs/devices/disks/devicename:partition#"
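
Put together for the setup in the question (label datastore1 and the naa. device from above; a rough sketch, adjust to your environment), the sequence would be:

esxcli storage filesystem unmount -l datastore1
esxcfg-volume -l                       # note the UUID/label of the unresolved volume
esxcfg-volume -r datastore1            # -r is short for --resignature
vmkfstools -V                          # rescan/refresh VMFS volumes
vmkfstools --growfs /vmfs/devices/disks/naa.600605b00e7ef41025b05be20a1ac269:3 /vmfs/devices/disks/naa.600605b00e7ef41025b05be20a1ac269:3

Note that the resignatured datastore comes back with a new UUID and a snap-...-datastore1 label, which can be renamed afterwards.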

chongo2002