
I have a 3ware 9650SE with 2x 2TB disks in a RAID-1 array.

I recently replaced the disks, one by one, with two larger (3TB) ones. The whole migration went smoothly. The problem I have now is that I don't know what else I have to do to make the system aware of the unit's increased size.

Some info:

root@samothraki:~# tw_cli /c0 show all

/c0 Model = 9650SE-4LPML
/c0 Firmware Version = FE9X 4.10.00.024
/c0 Driver Version = 2.26.02.014
/c0 Bios Version = BE9X 4.08.00.004
/c0 Boot Loader Version = BL9X 3.08.00.001

....

Unit  UnitType  Status         %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy
------------------------------------------------------------------------------
u0    RAID-1    OK             -       -       -       139.688   Ri     ON     
u1    RAID-1    OK             -       -       -       1862.63   Ri     ON     

VPort Status         Unit Size      Type  Phy Encl-Slot    Model
------------------------------------------------------------------------------
p0    OK             u0   139.73 GB SATA  0   -            WDC WD1500HLFS-01G6 
p1    OK             u0   139.73 GB SATA  1   -            WDC WD1500HLFS-01G6 
p2    OK             u1   2.73 TB   SATA  2   -            WDC WD30EFRX-68EUZN0
p3    OK             u1   2.73 TB   SATA  3   -            WDC WD30EFRX-68EUZN0

Note that the disks p2 & p3 are correctly identified as 3TB (2.73 TB) each, but the RAID-1 unit u1 still reports the old 2TB size.

I followed the guide in the LSI 3ware 9650SE 10.2 codeset user guide (note: the 9.5.3 codeset user guide describes exactly the same procedure): I triple-synced my data and unmounted the RAID unit u1, then removed the unit from the command line with:

tw_cli /c0/u1 remove

and finally I rescanned the controller to find the unit again:

tw_cli /c0 rescan

Unfortunately, the re-detected u1 unit is still identified as 2TB.

What could be wrong?

Some extra info: the u1 unit corresponds to /dev/sdb, which in turn is a physical volume of a larger LVM volume group. Now that I have replaced both drives, the partition table appears to be empty, yet the LVM disk works fine. Is that normal?!

root@samothraki:~# fdisk -l /dev/sdb 

Disk /dev/sdb: 2000.0 GB, 1999988850688 bytes
255 heads, 63 sectors/track, 243151 cylinders, total 3906228224 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

root@samothraki:~# 
nass

5 Answers


You need to update the u1 unit size before growing the filesystem from within the OS. The OS will not "see" the new size until the 3ware controller notifies it.

Unit capacity expansion is called migration in 3ware terminology. I am certain it works for RAID-5 and RAID-6; I haven't tried it with RAID-1. Here is an example migration command:

# tw_cli /c0/u1 migrate type=raid1 disk=p2-p3

When this completes, fdisk -l /dev/sdb should report 3TB and vgdisplay <VG name> will list some free space. From there you would grow the volume group, then the respective LV, and finally the filesystem within the LV.
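The growth chain described above might look like the sketch below once the unit reports the new size. The names `vg0` and `lv_data` are placeholders (substitute your own), and `pvresize` is used so the volume group picks up the new space automatically; an ext2/3/4 filesystem is assumed:

```shell
#!/bin/sh
# Sketch of growing LVM after /dev/sdb starts reporting 3TB.
# "vg0" and "lv_data" are placeholder names -- substitute your own.
PV=/dev/sdb
VG=vg0
LV=lv_data

if command -v pvresize >/dev/null 2>&1; then
    pvresize "$PV"                          # grow the PV; the VG gains free extents
    lvextend -l +100%FREE "/dev/$VG/$LV"    # hand all free extents to the LV
    resize2fs "/dev/$VG/$LV"                # grow an ext2/3/4 filesystem (online)
else
    echo "LVM tools not installed; this is a dry sketch"
fi
```
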

Edit: I think you are out of luck; see page 129 of the User Guide.
You could, however, migrate your RAID-1 to a different array type.

Here is an alternative (it carries some risk, so make sure your backups are good):

  1. tw_cli /c0/u1 migrate type=single - this will break apart your u1 unit into two single-drive units;
  2. tw_cli /c0/u1 migrate type=raid1 disk=2-3 - this should migrate the single unit back to RAID-1 at the correct size

Of course, there are alternative approaches; the one listed above is for the case where you want your data online the whole time.

grs
  • hi there, the command from above, causes an error: `Error: (CLI:144) Invalid drive(s) specified.` What can you make of it? – nass Feb 19 '14 at 23:13
  • My syntax is wrong, precisely the `disk=p2-p3` part. Don't remember exactly, maybe it should be `disk=2-3` instead. You could see the help page `tw_cli /c0/u1 help`. – grs Feb 20 '14 at 15:10
  • I have already seen it, but it is not exactly intuitive what I should type there. `disk=` ... not sure exactly how to interpret that... – nass Feb 21 '14 at 10:58
  • Maybe http://www.cyberciti.biz/files/tw_cli.8.html will help – grs Feb 22 '14 at 16:49
  • Nope, `tw_cli /c0/u1 migrate type=raid1 disk=2:3` (or `2-3`) yields the response: `Error: (CLI:008) Invalid ID specified. Found a specified ID already in use.` – nass Feb 23 '14 at 18:23
  • Thank you for your continued support. I have indeed split the array. Then, after many failed attempts (based on your edit), I ended up deleting the unit `sdc` (from BIOS). Then I recreated a new single unit, again from BIOS. This new unit finally has a capacity of 3TB, but the old unit (`sdb`) is still 2TB; I'll have to delete that too from BIOS (I can't find the corresponding tw_cli command). What eludes me then is how to convert `u2 - (sdc)` to a RAID-1 and attach `u1 - (sdb)` to it. – nass Feb 28 '14 at 13:43

This answer builds on grs's answer, so much of the credit goes there.

Notes:

  • If this answer suits you, get a backup NOW.
  • If you own a UPS, connect it to the PC in question NOW.
  • The following procedure was carried out on Linux, on DATA disk arrays. It may need some modifications to work on OS/boot arrays.
  • The procedure involves several restarts which I don't call out explicitly, since I completed it over a span of a couple of weeks, trying and failing on numerous occasions. The good news: while the PC was on, I had no further downtime and did not lose data (i.e. I never needed to fall back on my backups).

Sum-up of the situation:

  • you can't migrate from RAID-1 to RAID-1 on a 3ware 9650SE system.
  • you can't just split the disks and expect /c0/uX to automagically update its unit size.
  • you must delete a unit and recreate it for the controller to detect the larger disks.

So the key is to delete one drive at a time and recreate a new unit each time. Overall:

  1. split the RAID-1 array. This will produce two single-drive units at the old disk size (2TB in my case).

    tw_cli /c0/u1 migrate type=single
    

    the previous /dev/sdX, which was pointing to the RAID-1 unit /u1, should still exist (and work!), and you'll also get a new unit /u2 based on the 2nd drive of the mirror.

  2. delete the unit of the mirror that is no longer used (in my case the 2nd drive now belongs to the new unit /u2, and it will have acquired a new /dev/sdX device node after a restart).

    tw_cli /c0/u2 del
    
  3. create a new single unit with the unused disk. NOTE: I did this step from the BIOS ("create unit", not "migrate"), so I am not sure the command below is exactly how it should be done. Someone please verify this.

    tw_cli /c0/u2 migrate type=single disk=3
    

    the new /u2 unit should 'see' the full 3TB.

  4. go ahead and transfer the data from the 2TB disk to the 3TB disk.

  5. once the data are on the new unit, update all references to point to the new /dev/sdX.

  6. the remaining 2TB disk is (should be!) now unused, so go ahead and delete it.

    tw_cli /c0/u1 del
    
  7. create a new single unit with the unused disk.

    tw_cli /c0/u1 migrate type=single disk=2
    

    the new /u1 unit should now have 3TB of space, too.

  8. finally, take a deep breath and merge the two single units into the new, expanded RAID-1:

    tw_cli /c0/u2 migrate type=raid1 disk=2
    

    /u1 should now disappear and unit /u2 should start rebuilding.

  9. Enjoy life. Like, seriously.
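After step 8 it is worth watching the rebuild and double-checking the unit size before trusting the array again. A small verification sketch (controller and unit paths are the ones from the steps above; the commands are skipped if tw_cli is not installed on the machine running it):

```shell
#!/bin/sh
# Verify the merged unit after step 8 (sketch; /c0/u2 as in the steps above).
UNIT=/c0/u2
if command -v tw_cli >/dev/null 2>&1; then
    tw_cli "$UNIT" show   # the %RCmpl column shows rebuild progress
    tw_cli /c0 show       # Size(GB) for the unit should now read ~2794
else
    echo "tw_cli not installed; run this on the RAID host"
fi
```
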

nass
  • I'm just about to attempt this so want to check some of your statements please. When you "transfer the data from the 2TB disk to the 3TB disk" are you just doing a `dd`? I guess you are also rebooting between steps 5 and 6? – Dogsbody Apr 15 '14 at 19:03
  • @Dogsbody I just did a simple copy. not `dd`. – nass Apr 16 '14 at 09:44

Maybe your kernel did not receive the update from the controller.

Try updating the disk info by typing:

partprobe /dev/sdb

This forces the kernel to re-read the partition tables and disk properties.

Also try :

hdparm -z /dev/sdb

and/or:

sfdisk -R /dev/sdb

because partprobe does not always work...
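If none of those help, on reasonably recent kernels you can also ask the SCSI layer to re-read the device capacity directly through sysfs. A sketch (sdb is the device from the question; the rescan node only exists for SCSI-type devices, and writing to it needs root):

```shell
#!/bin/sh
# Ask the kernel's SCSI layer to re-read the capacity of /dev/sdb (sketch).
DEV=sdb
if [ -w "/sys/block/$DEV/device/rescan" ]; then
    echo 1 > "/sys/block/$DEV/device/rescan"   # triggers a capacity re-read
    blockdev --getsize64 "/dev/$DEV"           # size in bytes as the kernel now sees it
else
    echo "no writable rescan node for $DEV (not a SCSI device, or not root)"
fi
```

A "capacity change" message in `dmesg` afterwards confirms the kernel noticed the new size.
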

DrGkill
  • Unfortunately none of these worked :( ... it must be that the problem is still on the controller's side and not the kernel's... – nass Feb 19 '14 at 23:12

These are just some notes adding to nass's answer. I'm going from memory here, so this might not all be completely correct, and there was some rebooting done throughout these steps.

  • Steps 1-2:

  • Step 3: Adding a new single unit from the CLI: tw_cli /c0 add type=single disk=3

  • Step 4: I used dd if=/dev/sdX of=/dev/sdY bs=64K to clone the disk. To determine the correct devices, before Step 3 I tried mounting some devices (e.g. sudo mount -t ntfs /dev/sda1 /mnt/a) and exploring their contents to see which was my source device from unit /c0/u1. (There's probably a better way of determining this.)

    Also before Step 3, I ran ls /dev/sd*, noted which devices existed, and then after Step 3 checked again to see which sdY had been created. I also ran sudo hdparm -I /dev/sdY on each device before/after Step 3 to confirm things looked right. NOTE: Rebooting might change which device is which, so avoid rebooting between checking and dd'ing.

  • Steps 5-6:

  • Steps 7-8: Creating a new single unit from the unused disk and then migrating didn't work for me (an "Invalid disk" error or something along those lines). Instead, skipping Step 7 and going straight to Step 8 should work.

  • Step 9: Will do. Thanks for the help!

Some other notes from my experience with this:

  • I used a Knoppix Live CD to do most of this. To install tw-cli on it:
    sudo nano /etc/apt/sources.list
    Add deb http://hwraid.le-vert.net/ubuntu precise main at the top.
    sudo apt-get update
    sudo apt-get install tw-cli 3dm2

  • I was doing this on the boot drive of a Windows installation, going from 2TB drives to 4TB drives. One thing I forgot to check before starting was whether the disk was MBR or GPT. Turns out it was MBR, meaning that I can't access most of the extra space on the drive without converting to GPT.
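Checking the label type up front avoids that surprise. A quick sketch (parted is assumed to be installed; /dev/sdb is just the device from the question, substitute your own):

```shell
#!/bin/sh
# Check whether a disk uses an MBR ("msdos") or GPT label before growing it
# past 2TB. MBR cannot address beyond 2TB; GPT can.
DEV=/dev/sdb
if command -v parted >/dev/null 2>&1 && [ -b "$DEV" ]; then
    parted -s "$DEV" print | grep "Partition Table"
else
    echo "parted not available or $DEV not present; dry sketch only"
fi
```
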

aplum

The long and short of this post: contact LSI support to get a migration script.

I'm pretty sure I have the same controller, in both the 2- and 4-port configurations. When I wanted to grow a 1 Gig RAID-1 to 2 Gig, I replaced one of the disks with a 2 Gig one, and then replaced the other disk after the rebuild.

At this point I still had a 1 Gig RAID-1, but sitting on 2 Gig disks. I then sent some drive-dimension specifics to LSI in a support request, and they sent back a (very technical) script that, when executed, did the migration for me.

I never was satisfied why this migration couldn't be done without LSI support, but in the end it worked out fine.

Jamie