
I tried installing an update on a 6.0 host and it appears to have failed. After rebooting, 2 of the NFS datastores are inactive (unmounted); right-clicking them only offers to unmount. When I SSH into the host, though, I can see the shares in /vmfs/volumes. Some of them use UUIDs for names for some reason (a random string like 198c1ce2-9f3e3448-3aef-6f902142e212). I can browse these in the shell and see the vmx files that the non-responding machines use.
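For reference, this is roughly what I was looking at from the shell (standard ESXi commands, nothing specific to my setup beyond the paths above):

    # list the NFS datastores as the host sees them; the Accessible and
    # Mounted columns show which ones are actually attached
    esxcli storage nfs list

    # the inactive datastores still appear here, some under their UUIDs
    # instead of their labels
    ls -l /vmfs/volumes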

I'm afraid to unmount and re-mount the datastores because I don't want to lose the machines. Interestingly, the machine I'm most concerned about has 3 storage devices listed in the vSphere client summary: the host's datastore and 2 NFS shares. One of the shares is connected; the other is one of the 2 "unmounted" drives. If I go into "Edit Settings" and look at the hard drives, they both point to a vmdk: one is thick provisioned lazy zeroed at 0 MB, SCSI (0:1), Hard disk 2, modes unchecked. The other is thick provisioned lazy zeroed, 200 GB, SCSI (0:0), Hard disk 1, mode unchecked as well. I believe this is the OS drive?
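To double-check which vmdk each virtual disk actually points at before touching anything, I assume something like this works from the shell (the VM ID is whatever getallvms reports; I'm not quoting my actual output):

    # find the VM's ID, then list its devices to see the backing
    # file for each virtual disk
    vim-cmd vmsvc/getallvms
    vim-cmd vmsvc/device.getdevices <vmid>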

I'm wondering if I added this second disk as additional storage for some reason and don't really need it to boot. Would I be risking anything by deleting this drive and trying to boot? If it does make sense to try, would it make sense to take a snapshot first? Any other ideas here?
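If snapshotting first is the way to go, I assume it can also be done from the shell along these lines (the snapshot name and description below are just placeholders):

    # take a snapshot of the VM before removing the disk
    # (arguments: vmid, name, description, includeMemory, quiesced)
    vim-cmd vmsvc/snapshot.create <vmid> pre-disk-removal "before removing hard disk 2" 0 0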

stormdrain

1 Answer


What I ended up doing was re-mounting the datastore under another name, then deleting the disk and re-adding it as an existing disk pointing to the file on the newly re-mounted datastore. It seems like some kind of networking issue: the unmounted datastore can't see the NFS share via nfs.domain.local:/mnt/array/drive, but I was able to re-add it using nfs:/mnt/array/drive. Odd, because nslookup works fine on the command line for both names.
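For the record, the rough shell equivalent of what I did through the client (the datastore names below are placeholders; the share path and hostnames are the ones from above):

    # check name resolution and reachability of the NFS server from the host
    nslookup nfs.domain.local
    vmkping nfs

    # drop the stale mount and re-add the share under a new datastore name
    esxcli storage nfs remove -v old-datastore-name
    esxcli storage nfs add -H nfs -s /mnt/array/drive -v remounted-datastore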

stormdrain