
I have a server running Proxmox (6.1) with VMs living on a ZFS RAID1 (mirror) pool called DATARAID. I have a new hard disk to add to it, and I want to migrate to RAIDZ-1, which seems to be faster than a mirror (https://calomel.org/zfs_raid_speed_capacity.html , https://icesquare.com/wordpress/zfs-performance-mirror-vs-raidz-vs-raidz2-vs-raidz3-vs-striped/ ...). I know ZFS doesn't support this conversion as-is, but how can I do the move?

lsblk says (sdb and sdc are the ZFS RAID1):

root@HV02:~# lsblk
NAME         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda            8:0    0 465.8G  0 disk
├─sda1         8:1    0     1M  0 part
├─sda2         8:2    0   256M  0 part /boot/efi
└─sda3         8:3    0 465.5G  0 part
  ├─pve-root 253:0    0    96G  0 lvm  /
  └─pve-data 253:1    0 369.5G  0 lvm  /var/lib/vz
sdb            8:16   0 931.5G  0 disk
├─sdb1         8:17   0 931.5G  0 part
└─sdb9         8:25   0     8M  0 part
sdc            8:32   0 931.5G  0 disk
├─sdc1         8:33   0 931.5G  0 part
└─sdc9         8:41   0     8M  0 part
zd0          230:0    0    32G  0 disk
├─zd0p1      230:1    0   153M  0 part
├─zd0p2      230:2    0     2G  0 part
└─zd0p3      230:3    0  29.9G  0 part
zd16         230:16   0   128K  0 disk
zd32         230:32   0   128K  0 disk
zd48         230:48   0     1M  0 disk
zd64         230:64   0    60G  0 disk
├─zd64p1     230:65   0   300M  0 part
├─zd64p2     230:66   0    99M  0 part
├─zd64p3     230:67   0   128M  0 part
└─zd64p4     230:68   0  59.5G  0 part
zd80         230:80   0    20G  0 disk
├─zd80p1     230:81   0   512M  0 part
└─zd80p2     230:82   0  19.5G  0 part
zd96         230:96   0   128K  0 disk
zd112        230:112  0   128K  0 disk
zd128        230:128  0    40G  0 disk
├─zd128p1    230:129  0   300M  0 part
├─zd128p2    230:130  0    99M  0 part
├─zd128p3    230:131  0   128M  0 part
└─zd128p4    230:132  0  39.5G  0 part
zd144        230:144  0   100G  0 disk
├─zd144p1    230:145  0   128M  0 part
└─zd144p2    230:146  0  99.9G  0 part
zd160        230:160  0    32G  0 disk
├─zd160p1    230:161  0   200M  0 part
├─zd160p2    230:162  0   512K  0 part
└─zd160p3    230:163  0    31G  0 part
zd176        230:176  0   100G  0 disk
zd192        230:192  0    24G  0 disk
zd208        230:208  0     1M  0 disk
zram0        252:0    0     3G  0 disk [SWAP]

zfs list says:

root@HV02:~# zfs list
NAME                     USED  AVAIL     REFER  MOUNTPOINT
DATARAID                 421G   478G       96K  /DATARAID
DATARAID/vm-191-disk-0  20.6G   487G     11.2G  -
DATARAID/vm-191-disk-1  2.12M   478G      188K  -
DATARAID/vm-196-disk-0  33.0G   499G     11.9G  -
DATARAID/vm-196-disk-1  2.12M   478G      188K  -
DATARAID/vm-196-disk-2   103G   555G     25.9G  -
DATARAID/vm-197-disk-0  33.0G   505G     5.86G  -
DATARAID/vm-197-disk-1     3M   478G      192K  -
DATARAID/vm-198-disk-0  24.8G   502G     1.06G  -
DATARAID/vm-198-disk-1     3M   478G      192K  -
DATARAID/vm-291-disk-0  61.9G   488G     52.1G  -
DATARAID/vm-291-disk-1   103G   536G     44.8G  -
DATARAID/vm-291-disk-2  2.12M   478G      188K  -
DATARAID/vm-292-disk-0  41.3G   494G     25.6G  -
DATARAID/vm-292-disk-1  2.12M   478G      188K  -

I haven't added the new disk yet; it's a blank 1 TB disk.

And I have to migrate without losing any data or configuration...

Edit, adding the zpool status output:

root@HV02:~# zpool status
  pool: DATARAID
 state: ONLINE
  scan: scrub repaired 0B in 0 days 00:51:17 with 0 errors on Sun Jan 12 01:15:19 2020
config:

        NAME                        STATE     READ WRITE CKSUM
        DATARAID                    ONLINE       0     0     0
          mirror-0                  ONLINE       0     0     0
            wwn-0x5000c500b2de566e  ONLINE       0     0     0
            sdc                     ONLINE       0     0     0

errors: No known data errors
kprkpr
  • Probably stop everything, `zfs send` to a temp location, build a new pool, then `zfs receive` (sketched below). – Zoredache Jan 30 '20 at 18:03
  • Late there, but the method used for benchmarking at https://icesquare.com/wordpress/zfs-performance-mirror-vs-raidz-vs-raidz2-vs-raidz3-vs-striped/ is almost laughably bad. `dd` ***only*** does streamed IO and that rarely has much if any applicability to real-world usage of a file system. Try doing random IO operations like a filesystem under normal usage patterns does. – Andrew Henle Mar 03 '21 at 13:46
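
For concreteness, here is a minimal sketch of the offline route Zoredache describes, assuming the new disk shows up as sdd and that a scratch pool named TEMP with enough free space exists (TEMP, the dataset and the snapshot name are hypothetical):

# Stop every VM first: this route is fully offline.
zfs snapshot -r DATARAID@move
zfs send -R DATARAID@move | zfs receive -F TEMP/datacopy

# Rebuild the pool as a three-disk raidz1 (this destroys the mirror!).
zpool destroy DATARAID
zpool create DATARAID raidz sdb sdc sdd

# Restore everything; reusing the name DATARAID means the Proxmox
# storage definition in /etc/pve/storage.cfg needs no changes.
zfs send -R TEMP/datacopy@move | zfs receive -F DATARAID

The obvious drawback is the downtime for the full copy in both directions, which is what the answer below tries to avoid.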

1 Answer


First of all, your post misses the crucial `zpool status` output. Second, the general approach is to `zpool split` or `zpool detach` enough vdevs to create the new raidz, then migrate to it online, then resize it.

Since the zpool status is missing from the question, this is the most precise answer you can get.

P.S. Yes, you can use a `zfs send`/`zfs receive` sequence, but that is purely offline, so basically: no, you should not.
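
With only three disks in total, one way to carry out that detach-then-migrate plan is a degraded raidz1 built around a sparse placeholder file. This is a sketch of that well-known trick, not necessarily the answerer's literal procedure; it assumes the new disk shows up as sdd, and NEWPOOL and the file path are placeholder names:

# Detach one leg of the mirror; DATARAID keeps running on sdb alone
# (note: the data has no redundancy until the migration completes).
zpool detach DATARAID sdc

# Create a sparse file the size of the real disks to stand in for
# the third raidz member.
truncate -s 931G /tmp/fake-disk.img

# Build the raidz1 from the freed disk, the new disk and the file,
# then offline the file before any data lands on it.
zpool create -f NEWPOOL raidz sdc sdd /tmp/fake-disk.img
zpool offline NEWPOOL /tmp/fake-disk.img

NEWPOOL now runs degraded but fully usable; once the data has been moved and the old pool destroyed, zpool replace can swap sdb in for the placeholder (see the sketch after the comments below).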

drookie
  • Hi, I edited in the zpool status. Are split and detach fully online operations? – kprkpr Feb 03 '20 at 08:08
  • They definitely are. – drookie Feb 03 '20 at 09:43
  • I tried with a VM, doing: `zpool detach DATARAID sdc`. DATARAID now has one disk, no mirror, nothing. And then: `zpool add DATARAID raidz sdc sdd -f` (-f because sdc still has its old partition). But in `zpool status` I see DATARAID with a raidz1 of two disks plus the other disk outside it: – kprkpr Feb 03 '20 at 12:00
  •     NAME          STATE     READ WRITE CKSUM
        DATARAID      ONLINE       0     0     0
          sdb         ONLINE       0     0     0
          raidz1-1    ONLINE       0     0     0
            sdc       ONLINE       0     0     0
            sdd       ONLINE       0     0     0
    – kprkpr Feb 03 '20 at 16:10
  • You need at least three disks for raidz. – drookie Feb 04 '20 at 00:18
  • Well, if I add another disk and run `zpool add DATARAID raidz sdc sdd sde`, I end up the same way, with one disk separate and outside the raidz1: DATARAID -> sdb, plus raidz1-1 -> sdc/sdd/sde. – kprkpr Feb 04 '20 at 08:41
  • "then migrate to it online". Now make the *send/receive* sequence online. Then you will have to send incremental delta, then boot from new pool/use it. – drookie Feb 05 '20 at 17:09