From the ZFS Best Practices Guide:
For production systems, use whole disks rather than slices for storage pools for the following reasons:
- Allows ZFS to enable the disk's write cache for those disks that have write caches. If you are using a RAID array with a non-volatile write cache, then this is less of an issue and slices as vdevs should still gain the benefit of the array's write cache.
- For JBOD-attached storage, an enabled disk cache allows some synchronous writes to be issued as multiple disk writes followed by a single cache flush, allowing the disk controller to optimize I/O scheduling. Separately, for systems that lack proper support for SATA NCQ or SCSI TCQ, an enabled write cache allows the host to issue single I/O operations asynchronously from physical I/O.
- The recovery process of replacing a failed disk is more complex when disks contain both ZFS and UFS file systems on slices.
- ZFS pools (and underlying disks) that also contain UFS file systems on slices cannot be easily migrated to other systems by using zpool import and export features.
- In general, maintaining slices increases administration time and cost. Lower your administration costs by simplifying your storage pool configuration model.
To sum it up: slices are much slower and more difficult to correctly handle, replace, and grow.
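For comparison, the whole-disk workflow stays short. Here is a minimal sketch (Python wrapping the standard zpool CLI; the pool name tank and the cXtYd0 device names are placeholders, not from the guide):

```python
import subprocess

def run(cmd):
    """Print and run a zpool command; raises on failure."""
    print("#", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Create a raidz pool on whole disks (no sN slice suffix), so ZFS
# owns the disks entirely and can enable their write caches.
# Pool name and device names are placeholders for your hardware.
run(["zpool", "create", "tank", "raidz", "c0t1d0", "c0t2d0", "c0t3d0"])

# Because nothing else lives on those disks, the pool can be detached
# from this host and picked up on another one:
run(["zpool", "export", "tank"])
# ... move the disks / JBOD to the new host, then:
run(["zpool", "import", "tank"])

# Replacing a failed member is a single step per disk:
run(["zpool", "replace", "tank", "c0t2d0", "c0t4d0"])
```

With slices, each of these steps would additionally involve partitioning the new disk and coordinating with whatever other file system shares it, which is exactly the extra administration the guide warns about.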
Additionally, you still have to think about your pool layout. A RAIDZ1 setup would still suffer from the RAID5 write hole problem while a slice is being replaced, and it would still suffer if you choose a non-optimal number of slices for your RAIDZ level (also from the recommendations in the guide):
- (N+P) with P = 1 (raidz), 2 (raidz2), or 3 (raidz3), and N = 2, 4, or 6
- The recommended number of disks per group is between 3 and 9. If you have more disks, use multiple groups.
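Those two rules are easy to encode. Here is a small Python helper (my own illustration, not part of the guide) that checks whether a proposed raidz group width follows them:

```python
def raidz_width_ok(total_disks: int, parity: int) -> bool:
    """Check a raidz group width against the guide's recommendations.

    parity: 1 (raidz), 2 (raidz2), or 3 (raidz3).
    total_disks: disks in the group (N data + P parity).
    """
    data_disks = total_disks - parity
    # Rule 1: N should be 2, 4, or 6 for the chosen parity level P.
    # Rule 2: keep each group between 3 and 9 disks; split larger sets.
    return data_disks in (2, 4, 6) and 3 <= total_disks <= 9

# A 6-disk raidz2 (4 data + 2 parity) fits the recommendation;
# an 11-disk raidz2 does not and should be split into two groups.
print(raidz_width_ok(6, 2))   # True
print(raidz_width_ok(11, 2))  # False
```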