I just bought a new disk. How do I extend an existing RAID array without losing data?
If you make a mistake, you can lose all your data. Back up first, then continue.
Use `storcli /c0 show` to see what drives and volumes you have. The `TOPOLOGY` table is a good start:
TOPOLOGY :
========
----------------------------------------------------------------------------
DG Arr Row EID:Slot DID Type State BT Size PDC PI SED DS3 FSpace TR
----------------------------------------------------------------------------
0 - - - - RAID5 Optl N 10.914 TB dflt N N none N N
0 0 - - - RAID5 Optl N 10.914 TB dflt N N none N N
0 0 0 252:0 10 DRIVE Onln N 2.728 TB dflt N N none - N
0 0 1 252:1 9 DRIVE Onln N 2.728 TB dflt N N none - N
0 0 2 252:2 11 DRIVE Onln N 2.728 TB dflt N N none - N
0 0 3 252:3 8 DRIVE Onln N 2.728 TB dflt N N none - N
0 0 4 252:4 12 DRIVE Onln N 2.728 TB dflt N N none - N
----------------------------------------------------------------------------
This shows you which disks are already in the RAID array. I only have a single RAID array (`Arr`) with the ID `0`.
`PD LIST` shows you the disks:
PD LIST :
=======
--------------------------------------------------------------------------------
EID:Slt DID State DG Size Intf Med SED PI SeSz Model Sp Type
--------------------------------------------------------------------------------
252:0 10 Onln 0 2.728 TB SATA HDD N N 512B WDC WD30EFRX-68AX9N0 U -
252:1 9 Onln 0 2.728 TB SATA HDD N N 512B WDC WD30EFRX-68AX9N0 U -
252:2 11 Onln 0 2.728 TB SATA HDD N N 512B WDC WD30EFRX-68EUZN0 U -
252:3 8 Onln 0 2.728 TB SATA HDD N N 512B WDC WD30EFRX-68EUZN0 U -
252:4 12 Onln 0 2.728 TB SATA HDD N N 512B WDC WD30EFRX-68EUZN0 U -
252:6 14 GHS - 2.728 TB SATA HDD N N 512B WDC WD30EFRX-68EUZN0 D -
252:7 13 UGood - 2.728 TB SATA HDD N N 512B WDC WD30EFRX-68EUZN0 D -
--------------------------------------------------------------------------------
The newly added disk should show up as `UGood` (unconfigured good). In the example, that's the disk with DID `13` in slot `7` of enclosure `252`.
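Before touching the array, it can be worth double-checking that this really is the new drive. As a sketch (assuming the same controller, enclosure and slot as in this example), StorCLI can show the details of a single physical drive:
# show full details (model, serial number, state) of the drive in enclosure 252, slot 7
storcli /c0/e252/s7 show all
Comparing the reported serial number with the label on the new disk avoids expanding onto the wrong drive.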
To add the disk to the RAID array:
storcli /c0/v0 start migrate type=raid5 option=add drives=252:7
`/c0` is the controller, `/v0` is the RAID volume to change (see `TOPOLOGY` above or `VD LIST`), `start migrate` is the command to issue, `type=raid5` means "keep it RAID5", `option=add` means we want to add a disk, and `drives` is the list of disks to add in the form `EID:Slt` (see `PD LIST`).
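If the controller hosts more than one volume and you are not sure which `/vX` to change, you can list them all first (standard StorCLI syntax, shown here for controller 0):
# list all virtual drives on controller 0 with their DG/VD IDs, RAID levels and sizes
storcli /c0/vall show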
The process can take several days. You can continue to use the file system(s) on the RAID volume while the controller does the work in the background. You can even reboot the server; the controller will just continue from where it was.
To check the progress, use `storcli /c0/v0 show migrate`, which will print something like:
VD Operation Status :
===================
-----------------------------------------------------------
VD Operation Progress% Status Estimated Time Left
-----------------------------------------------------------
0 Migrate 38 In Progress 49 Minutes
-----------------------------------------------------------
Note: The estimate can be way off; those 49 minutes were 3 hours in my case. My feeling is that the first estimate of 2 days 8 hours was much more accurate.
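If the migration is going too slowly for you, the controller's reconstruction rate can usually be tuned; a higher value speeds up the migration at the cost of foreground I/O performance. As far as I know, the StorCLI commands look roughly like this (check your controller's documentation before changing anything):
# show the current reconstruction rate (percentage of controller resources)
storcli /c0 show reconrate
# raise it, e.g. to 60
storcli /c0 set reconrate=60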
When the migration is finished, the controller will run another background job ("background initialization"), which, as far as I understand, initializes and verifies the parity on the expanded array in the background.
When the migration is done, the same command will print:
VD Operation Status :
===================
-----------------------------------------------------------
VD Operation Progress% Status Estimated Time Left
-----------------------------------------------------------
0 Migrate - Not in progress -
-----------------------------------------------------------
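If you want to watch the background initialization as well, its progress can be queried the same way:
# show background initialization progress for virtual drive 0
storcli /c0/v0 show bgi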
Use `storcli /c0 show` to see the new size of your RAID volume:
VD LIST :
=======
--------------------------------------------------------------
DG/VD TYPE State Access Consist Cache Cac sCC Size Name
--------------------------------------------------------------
0/0 RAID5 Optl RW Yes RWBD - OFF 10.914 TB data
--------------------------------------------------------------
^^^^^^
I'm using LVM to manage the disk. `pvscan` shows that the disk size hasn't changed:
PV /dev/sdb VG data lvm2 [8,19 TiB / 526,00 GiB free]
^^^^^^^^
Time to reboot (at least I couldn't find a way to make Linux rescan the disk).
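One thing that may save you the reboot, although I can't promise it works behind every RAID controller, is asking the SCSI layer to re-read the device's capacity (the device name sdb is the one from this example):
# tell the kernel to rescan /dev/sdb and pick up the new capacity (run as root)
echo 1 > /sys/block/sdb/device/rescan
If `lsblk` shows the new size afterwards, you can skip the reboot; if not, rebooting remains the safe fallback.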
After the reboot, `lsblk` shows the correct disk size:
sdb 8:16 0 10,9T 0 disk
^^^^^
LVM still can't see the new capacity (`pvscan`):
PV /dev/sdb VG data lvm2 [8,19 TiB / 526,00 GiB free]
`pvdisplay` gives more details:
--- Physical volume ---
PV Name /dev/sdb
VG Name data
PV Size 8,19 TiB / not usable 3,00 MiB
Allocatable yes
PE Size 4,00 MiB
Total PE 2145791
Free PE 134655
Allocated PE 2011136
PV UUID vM1WQP-CZXu-FrWJ-kRti-hMa2-I1rh-Mga6Xg
We can test the next operation before executing it: `pvresize --test -v /dev/sdb`
TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.
Using physical volume(s) on command line.
Test mode: Skipping archiving of volume group.
Resizing volume "/dev/sdb" to 23437770752 sectors.
Resizing physical volume /dev/sdb from 0 to 2861055 extents.
Updating physical volume "/dev/sdb"
Test mode: Skipping backup of volume group.
Physical volume "/dev/sdb" changed
1 physical volume(s) resized / 0 physical volume(s) not resized
Test mode: Wiping internal cache
Wiping internal VG cache
2861055 extents of 4 MiB each translates to 10.91 TiB (2861055 * 4096 KiB / 1024 / 1024 / 1024).
Resize the physical volume: `pvresize -v /dev/sdb`
Finally, LVM sees the new capacity:
# pvdisplay
--- Physical volume ---
PV Name /dev/sdb
VG Name data
PV Size 10,91 TiB / not usable 3,00 MiB
Allocatable yes
PE Size 4,00 MiB
Total PE 2861055
Free PE 849919
Allocated PE 2011136
PV UUID vM1WQP-CZXu-FrWJ-kRti-hMa2-I1rh-Mga6Xg
You can now grow the logical volumes and the file systems in the volume group.
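As a rough sketch of that last step (the volume group `data` is taken from the output above, but the logical volume name `archive` is made up; adapt it to your layout), you could grow an ext4 logical volume into all the new free extents and resize the filesystem in one go:
# extend the LV "archive" in VG "data" over all free extents and resize its ext4 filesystem
lvextend -r -l +100%FREE /dev/data/archive
The `-r` option makes `lvextend` call the matching filesystem resize tool (resize2fs for ext4) after growing the logical volume.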
A little addition to Aaron's great answer: you can only add disks to RAID level 0/1/5/6 arrays. You cannot add a disk to a RAID 10 or 50 array (see page 37 [in the documentation](https://support.huawei.com/enterprise/en/doc/EDOC1000004186/472fd163/common-storcli-commands#EN-US_TOPIC_0121811691)). – sibsonx Jan 28 '20 at 12:04