Sorry, but no, I don't think you can.
But if you're keen on experimenting, first:
BACK UP YOUR POOL!
This answer is an academic exercise meant to give you ideas. It carries no assurance that anything here will work in the real world, and you are explicitly cautioned against the data loss that actions based on these suggestions could cause.
I do know for certain that zpool will warn you against mixing mirrored and raidz vdevs. See the -f flag of zpool add, for example. Your proposed vdev combined is essentially a "raidz-0", or concatenated, vdev, while mirror1 is obviously a mirror.
With all that said, after you've backed up your pool, study the man page closely and note the -n flags on some commands. That will allow you to see what the effect of a command will be without actually doing anything to your pool.
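For example, a dry run of the kind of mixed-vdev addition that triggers the warning mentioned above might look like this (a hypothetical sketch using the sandbox pool and md devices built later in this answer; nothing is modified):

# zpool add -n mypool raidz md3 md4

With -n, zpool prints the resulting configuration instead of committing it, and without -f it should still refuse the mismatched replication level.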
Further evidence in the case against your solution as proposed is found in the zpool man page:
Virtual devices cannot be nested, so a mirror or raidz virtual device
can only contain files or disks. Mirrors of mirrors (or other
combinations) are not allowed.
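To make that concrete, the layout you'd need nests one vdev inside another (a conceptual sketch of the desired-but-disallowed layout, not real zpool syntax; I'm assuming your existing mirror members are the two 2TB disks):

    mirror1
        disk1 (2TB)
        disk2 (2TB)
        combined = disk3 + disk4 (1TB + 1TB)   <-- a vdev inside a vdev: not allowed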
However, if you are on FreeBSD you can use gstripe to concatenate disk3 and disk4 to create the combined device. You can then add that device to the mirror, since ZFS will see it as just another disk.
Here's a suggestion on how to experiment with this, assuming that you're running ZFS on FreeBSD. We'll use simulated drives sized in gigabytes instead of terabytes, but other than that....
# mkdir zfs-test; cd zfs-test
# truncate -s 2G drive1; truncate -s 2G drive2
# truncate -s 1G drive3; truncate -s 1G drive4
Create pseudo-devices md1 through md4 corresponding to the drive[1-4] files:
# for N in $(jot 4); do mdconfig -u $N -t vnode -f drive$N; done
# mdconfig -lv
md1 vnode 2048M /home/jim/zfs-test/drive1
md2 vnode 2048M /home/jim/zfs-test/drive2
md3 vnode 1024M /home/jim/zfs-test/drive3
md4 vnode 1024M /home/jim/zfs-test/drive4
Your existing mirror is simple to create:
# zpool create mypool mirror md1 md2
# zpool status mypool
  pool: mypool
 state: ONLINE
  scan: none requested
config:

        NAME                 STATE     READ WRITE CKSUM
        mypool               ONLINE       0     0     0
          mirror-0           ONLINE       0     0     0
            md1              ONLINE       0     0     0
            md2              ONLINE       0     0     0

errors: No known data errors
Here is where you are currently stuck. With this sandbox, you could now experiment with various commands using zpool's -n flag, but I don't think anything will work, except this:
# gstripe label -h combined md3 md4
# gstripe status
           Name  Status  Components
stripe/combined      UP  md3
                         md4
# zpool attach mypool md2 stripe/combined
cannot attach stripe/combined to md2: device is too small
That is likely to happen to you, also, if your 2TB drives are exactly double the size of the 1TB drives. The slight space loss in concatenating the two 1TB drives results in a combined drive slightly smaller than either of the two native 2TB drives. diskinfo(8) confirms that md1 and md2 each have 4194304 sectors, but stripe/combined is 256 sectors smaller, at only 4194048:
# diskinfo -v md1 md2 stripe/combined
md1
        512             # sectorsize
        2147483648      # mediasize in bytes (2.0G)
        4194304         # mediasize in sectors
        0               # stripesize
        0               # stripeoffset
        MD-DEV5473951480393710199-INO24 # Disk ident.
        Yes             # TRIM/UNMAP support
        Unknown         # Rotation rate in RPM
md2
        512             # sectorsize
        2147483648      # mediasize in bytes (2.0G)
        4194304         # mediasize in sectors
        0               # stripesize
        0               # stripeoffset
        MD-DEV5473951480393710199-INO24 # Disk ident.
        Yes             # TRIM/UNMAP support
        Unknown         # Rotation rate in RPM
stripe/combined
        512             # sectorsize
        2147352576      # mediasize in bytes (2.0G)
        4194048         # mediasize in sectors
        65536           # stripesize
        0               # stripeoffset
        No              # TRIM/UNMAP support
        Unknown         # Rotation rate in RPM
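Those 256 missing sectors add up, assuming gstripe label keeps its metadata in the last sector of each member and rounds each member down to the 64KiB (128-sector) stripe size; a back-of-the-envelope check:

        per member: 1GiB = 2097152 sectors, minus 1 metadata sector = 2097151
        rounded down to a 128-sector boundary: 16383 * 128 = 2097024 sectors
        two members: 2 * 2097024 = 4194048 sectors, 256 short of md1's 4194304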
But in my play sandbox, I can fix that problem. First, I'll blow away the combined stripe and its component pseudo-devices /dev/md3 and /dev/md4:
# gstripe destroy combined
# mdconfig -d -u3; mdconfig -d -u4
# mdconfig -lv
md1 vnode 2048M /home/jim/zfs-test/drive1
md2 vnode 2048M /home/jim/zfs-test/drive2
Now I can re-create drive3 and drive4 to make them each slightly larger than 1GB, re-create the /dev/md3 and md4 devices, stripe them together to create the /dev/stripe/combined device, and attach that device to the mirror:
# truncate -s 1025M drive3
# truncate -s 1025M drive4
# mdconfig -u3 -t vnode -f drive3
# mdconfig -u4 -t vnode -f drive4
# gstripe label -h combined md3 md4
# zpool attach mypool md2 stripe/combined
# zpool status mypool
  pool: mypool
 state: ONLINE
  scan: resilvered 81.5K in 0 days 00:00:04 with 0 errors on Thu May 23 15:27:26 2019
config:

        NAME                 STATE     READ WRITE CKSUM
        mypool               ONLINE       0     0     0
          mirror-0           ONLINE       0     0     0
            md1              ONLINE       0     0     0
            md2              ONLINE       0     0     0
            stripe/combined  ONLINE       0     0     0

errors: No known data errors
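Once you're done experimenting, the sandbox tears down cleanly (a sketch; these commands destroy data, so point them only at the throwaway md devices, never at a real pool):

# zpool destroy mypool
# gstripe destroy combined
# for N in $(jot 4); do mdconfig -d -u $N; done
# cd ..; rm -r zfs-test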
Wow, I must say that is an extensive answer! Your insight is very much appreciated. With this info in mind, however, I suppose I will buy another 2TB drive instead, especially since I'm on macOS in this case... Thank you. – John Smith – 2019-05-24T08:20:57.283