
We have a Synology RS3614rpxs NAS head containing nine 3TB hard drives in a RAID 6 array plus one hot spare. The storage became exhausted, so we added an expansion chassis populated with 5TB hard drives, with the intention of creating a second array (also RAID 6).

Synology appears to use standard Linux md to form RAID arrays, with LVM on top to form volume groups (comprised of the underlying md devices) and then logical volumes.
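This layering can be confirmed from an SSH shell with the same md and LVM tools that produced the output below; a quick sketch:

cat /proc/mdstat    # lists the md arrays (md0, md2, md3 in our case) and their member disks
pvs                 # shows which md devices are used as LVM physical volumes
vgs                 # volume group summary (vg1 here)
lvs                 # logical volumes carved out of the volume group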

During the introduction of the expansion chassis, 3 of the new disks were accidentally added to the first array (md2). This is a problem, since we lose usable space on those 5TB disks. The remaining disks appear to have been added to a second array (md3), and md3 appears to have been added to the existing volume group vg1.
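To double-check which physical disks ended up in which array before changing anything, mdadm can report the membership of each device (a sketch using the md names above):

mdadm --detail /dev/md2    # member disks, RAID level, and spares of the original array
mdadm --detail /dev/md3    # member disks of the unintended second array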

Our objectives:

  1. Remove md3 from vg1 and resize logical volume if necessary.
  2. Destroy md3 and make its disks available to be repurposed.

QUESTION: How might we best accomplish these objectives?

For Context:

Output of "df -h"

Filesystem           Size    Used    Available  Use%  Mounted on
/dev/md0             2.3G    637.9M  1.6G       28%   /
/tmp                 1.9G    404.0K  1.9G       0%    /tmp
/run                 1.9G    3.8M    1.9G       0%    /run
/dev/shm             1.9G    0       1.9G       0%    /dev/shm
/dev/vg1/volume_3    2.4T    1.2T    1.2T       49%   /volume3
/dev/vg1/volume_1    5.8T    2.9T    2.9T       49%   /volume1
/dev/vg1/volume_2    10.7T   10.2T   443.5G     96%   /volume2

Output of "lvdisplay"

  --- Logical volume ---
  LV Name                /dev/vg1/syno_vg_reserved_area
  VG Name                vg1
  LV UUID                agGo1D-0811-miWz-ro0e-Nsvo-YdO9-XRJQY4
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                12.00 MB
  Current LE             3
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     384
  Block device           253:0

  --- Logical volume ---
  LV Name                /dev/vg1/volume_1
  VG Name                vg1
  LV UUID                3oehZK-Bv5V-T1RL-MWfY-VQnh-tsrr-tXn3v9
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                5.86 TB
  Current LE             1536000
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     4096
  Block device           253:1

  --- Logical volume ---
  LV Name                /dev/vg1/volume_2
  VG Name                vg1
  LV UUID                3VMQE8-BG0Y-K0jC-Y2Rz-ID09-0dAs-XqTavU
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                10.74 TB
  Current LE             2816000
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     4096
  Block device           253:2

  --- Logical volume ---
  LV Name                /dev/vg1/volume_3
  VG Name                vg1
  LV UUID                mGs4IT-7QM8-PFF2-TD3O-SGzo-QaKp-33DrrW
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                2.47 TB
  Current LE             647706
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     4096
  Block device           253:3

Output of "vgdisplay"

  --- Volume group ---
  VG Name               vg1
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  9
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                4
  Open LV               3
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               40.88 TB
  PE Size               4.00 MB
  Total PE              10715889
  Alloc PE / Size       4999709 / 19.07 TB
  Free PE / Size        5716180 / 21.81 TB
  VG UUID               9i82gX-6djB-1KC3-jbZK-nEJ2-9jJh-KvNgJp

Output of "pvdisplay" pvdisplay --- Physical volume --- PV Name /dev/md2 VG Name vg1 PV Size 27.25 TB / not usable 3.56 MB Allocatable yes PE Size (KByte) 4096 Total PE 7142441 Free PE 2142732 Allocated PE 4999709 PV UUID SmZrd0-jC5T-2QwU-Ecnh-PuY0-O9u6-sqDW1E --- Physical volume --- PV Name /dev/md3 VG Name vg1 PV Size 13.63 TB / not usable 1.62 MB Allocatable yes PE Size (KByte) 4096 Total PE 3573448 Free PE 3573448 Allocated PE 0 PV UUID aQmMu2-gg8j-Be1T-IofO-bOuk-aL0s-ysiR6j

sardean

1 Answer


Disclaimer: you should read the LVM manual carefully and understand what each step does. However, there should be very little risk unless you encounter errors.

This is what I usually do in this case.

If there is a chance that someone else might do something that interferes, you will want to block logins while doing the maintenance (touch /etc/nologin, etc., per your maintenance procedure and company policy).

pvmove /dev/md3 # make sure all used extents are moved away

pvs -o+pv_used # make sure that no extents are used in /dev/md3

vgreduce vg1 /dev/md3 # now remove the physical volume
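
That covers the first objective; since pvdisplay shows no allocated extents on /dev/md3, the pvmove should have nothing to move and no logical volume needs resizing. For the second objective (destroying md3 and freeing its disks), a minimal sketch of the usual cleanup with standard mdadm/LVM tooling follows; /dev/sdX is a placeholder for each member disk reported by mdadm --detail /dev/md3:

pvremove /dev/md3                   # wipe the LVM physical volume label from md3
mdadm --detail /dev/md3             # note the member disks before tearing the array down
mdadm --stop /dev/md3               # stop the now-unused array
mdadm --zero-superblock /dev/sdX    # run once per former md3 member to clear its RAID metadata

After that the disks carry no RAID or LVM metadata; on a Synology box it may be cleaner to do the final repurposing from DSM's Storage Manager, since DSM keeps its own record of pools and volumes.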

johnshen64
  • Thank you. We recreated a similar scenario in a VM and tested your recommendation with success. We will report back with results from our production environment. – sardean Apr 25 '15 at 01:25