You can add devices to a pool after it has been created, but not really in the way you seem to envision.
With ZFS, the only redundant configuration that you can add devices to is the mirror. It is currently not possible to grow a raidzN vdev with additional devices after it has been created. Adding devices to a mirror increases the redundancy but not the available storage capacity.
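For illustration, adding another side to an existing two-way mirror is a single `zpool attach` (device names here are hypothetical):

```
# sdb and sdc already form a mirror vdev in "tank".
# Attach sdd as a third mirror side; redundancy goes up,
# usable capacity stays the same.
zpool attach tank sdb sdd
```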
It is possible to work around this to some degree by creating a raidzN vdev of the desired configuration using sparse files for the not-yet-available devices, then deleting the sparse files before populating the vdev with data. Once you have the drives available, you would `zpool replace` the (now non-existent) sparse files with them. The problem with using this approach as more than a migration path toward a more ideal solution is that the pool will constantly show as DEGRADED, which means you have to look much more closely to recognize any actual degradation of the storage; hence, I don't really recommend it as a permanent solution.
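As a rough sketch of that workaround (all device names and paths are made up; the placeholder should be at least as large as the real drives, and depending on your platform you may need `-f` to mix files and disks in one vdev):

```
# Create a sparse file to stand in for the drive you don't have yet.
truncate -s 2T /var/tmp/placeholder

# Build the raidz1 vdev from three real disks plus the placeholder.
zpool create tank raidz1 sdb sdc sdd /var/tmp/placeholder

# Take the placeholder out before writing any data; the pool now
# runs DEGRADED but functional.
zpool offline tank /var/tmp/placeholder
rm /var/tmp/placeholder

# Later, when the real drive arrives:
zpool replace tank /var/tmp/placeholder sde
```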
Naively adding devices to a ZFS pool actually comes with a serious risk of decreasing the pool's resilience to failure, because all top-level vdevs must be functional in order for the pool to be functional. These top-level vdevs can have redundant configurations, but do not need to; it is perfectly possible to run ZFS in a JBOD-style configuration, in which case a single device failure is highly likely to bring down your pool. (A bad idea if you can avoid it, but it still gives you many ZFS capabilities even in a single-drive setup.) Basically, a redundant ZFS pool is made up of a JBOD combination of one or more redundant vdevs; a non-redundant ZFS pool is made up of a JBOD combination of one or more JBOD vdevs.
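To make that concrete, both of the following are valid pools, but only the first survives a disk failure (hypothetical device names again):

```
# A stripe (JBOD combination) of two mirror vdevs: redundant.
zpool create tank mirror sdb sdc mirror sdd sde

# A stripe of two single-disk vdevs: no redundancy at all;
# losing either disk takes the whole pool with it.
zpool create tank sdb sdc
```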
Adding top-level vdevs also doesn't cause ZFS to balance existing data onto the new devices; that eventually happens for data that gets rewritten (because of the file system's copy-on-write nature and its favoring of vdevs with more free space), but it doesn't happen for data that just sits there and is read but never rewritten. You can make it happen by rewriting the data yourself (for example through `zfs send | zfs recv`, assuming deduplication is not turned on for the pool), but it does require you to take specific action.
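A sketch of that rewrite trick, with made-up dataset names; verify the copy before destroying anything:

```
# Snapshot the dataset and copy it within the same pool; the new
# writes spread across all vdevs, including newly added ones.
zfs snapshot tank/media@rebalance
zfs send tank/media@rebalance | zfs recv tank/media-new

# After verifying the copy, swap the datasets.
zfs destroy -r tank/media
zfs rename tank/media-new tank/media
```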
Based on the numbers in your post, you have:

- four 2 TB HDDs
- two 4 TB HDDs
Since you say that you want a redundant configuration, given these constraints (particularly the set of drives available) I'd probably suggest grouping the drives as mirror pairs. That would give you a pool layout like this:
- tank
  - mirror-0 (two 2 TB HDDs)
  - mirror-1 (two 2 TB HDDs)
  - mirror-2 (two 4 TB HDDs)
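Creating that layout could look something like this (device names are placeholders for your actual drives):

```
# Four 2 TB drives as two mirror pairs, plus the two 4 TB drives
# as a third pair.
zpool create tank \
    mirror sdb sdc \
    mirror sdd sde \
    mirror sdf sdg
```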
This setup will have a user-accessible storage capacity of approximately 8 TB, give or take metadata overhead: two mirrors providing 2 TB each, plus one mirror providing 4 TB. You can add more mirror pairs later to increase the pool capacity, or replace a pair of 2 TB drives with 4 TB drives (though be aware that resilvering after a drive failure in a mirror pair puts severe stress on the remaining drive(s); in the case of two-way mirrors this greatly increases the risk of complete failure of the mirror). The downside of this configuration is that the pool will be practically full right from the beginning, and the general suggestion is to keep ZFS pools below about 75% full. If your data is mostly only ever read, you can get away with less margin, but performance will suffer greatly, particularly on writes. If your dataset is write-heavy, you definitely want some margin for the block allocator to work with. So this configuration will "work", for some definition of the word, but will be suboptimal.
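Adding another mirror pair later is a one-liner (again with hypothetical device names):

```
# Extends the pool with a new top-level mirror vdev; existing
# data stays where it is (see the rebalancing caveat above).
zpool add tank mirror sdh sdi
```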
Since you can freely attach additional devices to a mirror vdev (and detach them again once they're no longer needed), with some planning it should be possible to do this in such a way that you don't lose any of your data.
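For example, upgrading a 2 TB mirror pair to 4 TB drives in place might look like this sketch (hypothetical names; wait for each resilver to finish before detaching):

```
# Let the pool grow automatically once all members are larger.
zpool set autoexpand=on tank

# Swap out one side of mirror-0 at a time.
zpool attach tank sdb sdj   # sdj: new 4 TB drive; resilver starts
zpool status tank           # wait until the resilver completes
zpool detach tank sdb       # then repeat for the other 2 TB drive
```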
You could in principle replace mirror-0 and mirror-1 above with a single raidz1 vdev eventually made up of the four 2 TB HDDs (giving you 6 TB of usable storage capacity rather than 4 TB, and the ability to survive any one 2 TB HDD failure before your data is at risk), but that means committing three of those drives to ZFS up front. Given your usage figures, it sounds like this might be possible with some shuffling of data. I wouldn't recommend mixing vdevs of different redundancy levels, though, and in that case the tools even force you to say, effectively, "yes, I really know what I'm doing".
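By way of illustration, `zpool add` will refuse a mismatched vdev unless forced (error text paraphrased):

```
# Pool currently consists of mirrors; adding a raidz vdev fails:
zpool add tank raidz1 sdb sdc sdd sde
#   invalid vdev specification
#   use '-f' to override the following errors:
#   mismatched replication level: pool uses mirror and new vdev is raidz

# The override, only if you really know what you're doing:
zpool add -f tank raidz1 sdb sdc sdd sde
```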
Mixing different-sized drives in a pool (and especially in a single vdev, except as a migration path to larger-capacity drives) is not really recommended; in both mirror and raidzN vdev configurations, the smallest constituent drive determines the vdev's capacity. Mixing vdevs of different capacities is doable but will lead to an unbalanced storage setup; however, if most of your data is rarely read, and is read sequentially when it is, the latter should not present a major problem.
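A quick illustration of the smallest-drive rule (hypothetical names):

```
# sdb is 2 TB and sdc is 4 TB: the mirror vdev offers only ~2 TB;
# the larger drive's extra capacity sits unused until the smaller
# drive is replaced (and autoexpand is on).
zpool create tank mirror sdb sdc
```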
The best configuration would probably be to get an additional three 4 TB drives, create a pool made up of a single raidz2 vdev spanning those five 4 TB drives, and effectively retire the 2 TB drives. Five 4 TB drives in raidz2 will give you 12 TB of storage capacity (leaving a good bit of room to grow), and raidz2 gives you the ability to survive the failure of any two of those drives, leaving the mirror setup in the dust in terms of resilience to disk problems. With some planning and data shuffling, it should be easy to migrate to such a setup with no data loss. A five-drive raidz2 is also near optimal in terms of storage overhead, according to tests performed by one user and published on the ZFS On Linux discussion list back in late April, showing usable storage capacity at 96.4% of optimal when using 1 TB devices, beaten only by a six-drives-per-vdev configuration which gave 97.3% in the same test.
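The end state would then be a single-vdev pool along these lines (placeholder names for the five 4 TB drives):

```
# Five 4 TB drives, two of parity: ~12 TB usable, and any two
# drives may fail without data loss.
zpool create tank raidz2 sdb sdc sdd sde sdf
```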
I do realize that five 4 TB drives might not be practical in a home setting, but keep in mind that ZFS is an enterprise file system, and many of its limitations (particularly in this case, the limitations on growing redundant vdevs after creation) reflect that.
And always remember, no type of RAID is backup. You need both to be reasonably secure against data loss.
Got backups? ZFS is a nice filesystem, and once implemented gets you some limited fault tolerance, but it's not a replacement for backups, especially when performing operations that might have higher than average odds of eating your data; playing musical drives certainly qualifies as one such operation. – Ecnerwal – 2014-06-14T03:46:03.520
Good point. The largest stuff is mostly mythtv recordings - I've backed up all the media I'd miss if it were lost and can't replace. – Fred Hamilton – 2014-06-14T04:21:55.030