
This is a follow-on to "How can I grow a 3Ware 9650SE RAID1 under ESXi 5.0?"

I've successively replaced the 1TB drives in my RAID1 with 2TB drives, hoping to grow the datastore I've got in ESXi 5.0. After replacing the drives and letting the rebuild finish, I can boot into ESXi (the RAID is also the boot partition), but partition tools (both the ESXi maintenance partedUtil and a gParted boot disk) show the RAID at its original, just-under-1TB size.

What do I need to do to allow OSs, particularly ESXi, to see the unused portions of the drives?

EDIT As MDMarra suggested below, I had tried the CLI KB article but got confusing results. I think my question still stands. Worded differently: why are partition tools unable to read the full size of the drives in a RAID, and how can I enable them to?

/dev/disks # partedUtil getptbl /vmfs/devices/disks/naa.600050e0f7f321007eb30000401b0000
gpt
121575 255 63 1953103872
1 64 8191 C12A7328F81F11D2BA4B00A0C93EC93B systemPartition 128
5 8224 520191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
6 520224 1032191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
7 1032224 1257471 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0
8 1257504 1843199 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
2 1843200 10229759 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
3 10229760 1953103838 AA31E02A400F11DB9590000C2911D1B8 vmfs 0

Looking at the line 121575 255 63 1953103872, the last number is supposed to be the size of the disk in LBA sectors (512-byte units), in this case just under 1TB. Forging ahead anyway ...

~ # vmkfstools --growfs "/vmfs/devices/disks/naa.600050e0f7f321007eb30000401b0000:3" "/vmfs/devices/disks/naa.600050e0f7f321007eb30000401b0000:3"
Underlying device has no free space
Error: No space left on device
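That failure is consistent with the geometry line above. Converting the reported sector count to bytes (a quick shell check; the figure is taken straight from the partedUtil output) shows ESXi still sees a ~931 GiB device:

```shell
# partedUtil geometry line: cylinders heads sectors/track total-sectors
# The last field is the device size in 512-byte sectors.
SECTORS=1953103872
BYTES=$((SECTORS * 512))
GIB=$((BYTES / 1024 / 1024 / 1024))
echo "${BYTES} bytes = ~${GIB} GiB"   # just under 1 TB
```

So as far as ESXi is concerned, there is genuinely no free space after partition 3.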

So I'm left thinking I need to do something to allow the OS to see the true size of the RAID array.

EDIT 2 Output of tw_cli

~ # /tmp/tw_cli /c0
Error: (CLI:003) Specified controller does not exist.
~ # /tmp/tw_cli show

Ctl   Model        (V)Ports  Drives   Units   NotOpt  RRate   VRate  BBU
------------------------------------------------------------------------
c6    9650SE-4LPML 4         2        1       0       1       1      -

~ # /tmp/tw_cli /c6 show

Unit  UnitType  Status         %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy
------------------------------------------------------------------------------
u0    RAID-1    OK             -       -       -       931.312   RiW    ON

Port   Status           Unit   Size        Blocks        Serial
---------------------------------------------------------------
p0     OK               u0     1.82 TB     3907029168    WD-WCAY00283502
p1     OK               u0     1.82 TB     3907029168    WD-WCAY00286752
p2     NOT-PRESENT      -      -           -             -
p3     NOT-PRESENT      -      -           -             -

~ #
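The tw_cli listing makes the mismatch explicit: each port reports a 2TB drive, while unit u0 still reports the old size. Plugging the numbers from the listing into a quick shell calculation (figures taken from the output above):

```shell
# Per-port drive size vs. the RAID-1 unit size, from the tw_cli output
DRIVE_BLOCKS=3907029168                                   # 512-byte blocks per 2TB drive
DRIVE_GIB=$((DRIVE_BLOCKS * 512 / 1024 / 1024 / 1024))    # ~1863 GiB (1.82 TiB)
UNIT_GIB=931                                              # u0 still reports 931.312 GB
echo "unused per drive: $((DRIVE_GIB - UNIT_GIB)) GiB"
```

Roughly half of each drive is invisible to the unit, so the controller, not ESXi, is where the capacity is stuck.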
Jamie

3 Answers


Your expansion attempt has not been successful so far.

It may have failed - this would have produced an appropriate entry in the controller's logs. Take a look at the "Controller log" section of the tw_cli show diag output.

Or you may have used the wrong command set - your case seems somewhat tricky. Intuitively, using

 tw_cli /c6/u0 migrate type=raid1

should launch the expansion, but a migration from raid1 to raid1 is unsupported according to the matrix from the latest/greatest CLI guide for 10.2 (which seems to date from 2010):

[Image: valid migration paths for tw_cli]

As I am not too sure that this is still current and correct information, I would simply try the migration command above first. Should it fail, the route to go would probably be

 tw_cli /c6/u0 migrate type=single

which would break the mirror, and running

 tw_cli /c6 show

to see which disk has ended up in u0 and which has been separated out into another unit. Then delete the newly created unit by issuing

 tw_cli /c6/u<newUnitNumber> del

and re-mirror by running

 tw_cli /c6/u0 migrate type=raid1 disk=<whatever disk number is not in u0 any more>

which should finally expand the array's capacity. But honestly, this is where I would open a call with LSI tech support just to make sure I don't screw up the array with a careless move.
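Pulled together, the whole sequence would look something like the sketch below (untested; <newUnitNumber> and <freePortNumber> are placeholders that must be read off the tw_cli /c6 show output before running anything):

```
# 1. Break the mirror: u0 keeps one disk, the other lands in a new unit
tw_cli /c6/u0 migrate type=single

# 2. Find out which unit the second disk ended up in
tw_cli /c6 show

# 3. Delete that newly created unit
tw_cli /c6/u<newUnitNumber> del

# 4. Re-mirror onto the now-free disk; the rebuilt unit should use the full 2TB
tw_cli /c6/u0 migrate type=raid1 disk=<freePortNumber>
```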

And one more important point: make sure you have recent backups you can restore from.

the-wabbit
  • Yeah, I just saw that table ... you're quick! This is the path I was about to try. +1, but for this question I have to go with ewwhite's answer - he set me down the path first. Very informative and extremely helpful. Thanks a lot. – Jamie Feb 08 '13 at 14:09
  • 1
    @Jamie I've just seen that [**somebody seems to have tried**](http://serverfault.com/a/361592/76595) the procedure described above and it apparently yielded the desired result. Unfortunately, I cannot test it myself as I only occasionally see and administer 3Ware controllers (customers' sites) and do not have any of them in "my" infrastructure. – the-wabbit Feb 08 '13 at 14:15

You simply need to increase the size of your logical disk/unit (u0).

Some form of the tw_cli /c0/u0 migrate command would seem to work for you, but see this knowledge base article that gives conflicting information.

Step 5: A 3ware support engineer will create a script for you that will rewrite the disk drive RAID table information. The new RAID table information (or DCBs) will allow the controller to see and use the new, higher capacity drives.

ewwhite
  • But it turns out I'd need two more drives. Migrating from R1 to R1 doesn't appear to be supported directly, but migrating from R1 to single and single to R1 is ... hmmm. – Jamie Feb 08 '13 at 13:42
  • The link does not seem to work anymore. Can we have the link up again? – nass Feb 28 '14 at 13:31

You need to use the CLI to extend the partition and grow the VMFS volume. You can't do this from the GUI with local storage, so you'll have to get dirty with the vCLI.
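Once the controller actually exposes the larger unit, the ESXi-side steps would look roughly like this (a sketch, not verified on this system; the naa ID is the one from the question, and <newEndSector> is a placeholder that must be computed from the new partedUtil getptbl output, leaving the usual reserved sectors at the end of the disk):

```
# 1. Confirm the device now reports the larger total sector count
partedUtil getptbl /vmfs/devices/disks/naa.600050e0f7f321007eb30000401b0000

# 2. Grow partition 3 to the new end of the disk (the start sector stays the same)
partedUtil resize /vmfs/devices/disks/naa.600050e0f7f321007eb30000401b0000 3 10229760 <newEndSector>

# 3. Grow the VMFS volume into the enlarged partition
vmkfstools --growfs /vmfs/devices/disks/naa.600050e0f7f321007eb30000401b0000:3 /vmfs/devices/disks/naa.600050e0f7f321007eb30000401b0000:3
```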

MDMarra
  • Thanks for responding. I should have mentioned the genesis of this question. I had tried going through the link you suggested (maybe I made a mistake) but it didn't really pan out. See the edit to the question. I believe the RAID card is complicating the issue. – Jamie Feb 08 '13 at 04:31