I generally use one of the following two filesystems:
For your use case I would use ZFS, especially considering that Ubuntu 18.04 already ships it. As you can easily attach another mirror leg to an already existing device, ZFS fits the bill very well. For example, let's name your disk `nvme0p1`:

    zpool create tank /dev/nvme0p1

creates your single-vdev pool called "tank";

    zpool attach tank /dev/nvme0p1 <newdev>

enables mirroring (note the argument order: pool, existing device, new device).
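As a minimal sketch of the whole sequence, assuming the second disk later shows up as `/dev/sdb1` (a hypothetical name), here is a dry run that only prints the commands, since the real ones need root and actual devices:

```shell
# Dry-run sketch: prints the ZFS commands instead of executing them.
# DISK matches the answer's example; NEWDEV is a hypothetical second disk.
DISK=/dev/nvme0p1
NEWDEV=/dev/sdb1

# 1. Single-vdev pool named "tank" on the existing disk.
CREATE_CMD="zpool create tank $DISK"
# 2. Later, attach the new disk to the existing one:
#    zpool attach <pool> <existing-device> <new-device>.
#    ZFS starts resilvering (copying data to the new leg) automatically.
ATTACH_CMD="zpool attach tank $DISK $NEWDEV"
# 3. Watch resilver progress.
STATUS_CMD="zpool status tank"

printf '%s\n' "$CREATE_CMD" "$ATTACH_CMD" "$STATUS_CMD"
```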
If, for some reason, you can't or don't want to use ZFS, then MDRAID and XFS are your friends:

    mdadm --create /dev/md200 -l raid1 -n 2 /dev/nvme0p1 missing

will create a RAID1 array with a missing leg (see #1);

    mdadm --manage /dev/md200 --add <newdev>

attaches a new mirror leg, forming a complete RAID1 (see #2).

After creating the array, you can format it with XFS via `mkfs.xfs /dev/md200`.
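The MDRAID path can be sketched the same way, as a dry run that only prints the commands (the real ones need root and real devices; `/dev/sdb1` is again a hypothetical second disk):

```shell
# Dry-run sketch of the MDRAID + XFS sequence; prints the commands only.
DISK=/dev/nvme0p1
NEWDEV=/dev/sdb1

# 1. Degraded RAID1: two slots, one filled by DISK, one deliberately "missing".
MD_CREATE="mdadm --create /dev/md200 -l raid1 -n 2 $DISK missing"
# 2. Put XFS on the array device (not on the member disk itself).
MKFS_CMD="mkfs.xfs /dev/md200"
# 3. Later, add the second disk; the kernel rebuilds the mirror in the background.
MD_ADD="mdadm --manage /dev/md200 --add $NEWDEV"
# 4. Track rebuild progress.
WATCH_CMD="cat /proc/mdstat"

printf '%s\n' "$MD_CREATE" "$MKFS_CMD" "$MD_ADD" "$WATCH_CMD"
```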
I do not suggest using BTRFS, as both performance and resilience are subpar. For example, from the Debian wiki:
> There is currently (2019-07-07, linux ≤ 5.1.16) a bug that causes a two-disk raid1 profile to forever become read-only the second time it is mounted in a degraded state—for example due to a missing/broken/SATA link reset disk.
Please also note that commercial NAS vendors using BTRFS (read: Synology) do not use its integrated RAID feature; rather, they rely on the proven Linux MDRAID layer.
EDIT: while some maintain that XFS is prone to data loss, this is simply not correct. Granted, compared to ext3, XFS (and other filesystems supporting delayed allocation) can lose more un-synced data in case of an uncontrolled poweroff. But synced data (i.e.: important writes) are 100% safe. Moreover, a specific bug exacerbating XFS data loss was fixed over 10 years ago. That bug aside, any delayed-allocation filesystem (ext4 and BTRFS included) will lose a significant amount of un-synced data in case of an uncontrolled poweroff.
Compared to ext4, XFS has unlimited inode allocation, advanced allocation hinting (if you need it) and, in recent versions, reflink support (but it needs to be explicitly enabled in Ubuntu 18.04; see the `mkfs.xfs` man page for additional information).
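For example, reflink support is selected at format time with the `-m reflink=1` switch of `mkfs.xfs`, after which `cp --reflink` clones files without duplicating their data blocks. Another dry-run sketch (commands printed, not executed; the device name is just the array from above):

```shell
# Dry-run sketch: enabling XFS reflinks at format time and using them.
DEV=/dev/md200   # e.g. the MDRAID array, or any block device

# Reflink support must be chosen when the filesystem is created:
MKFS_CMD="mkfs.xfs -m reflink=1 $DEV"
# A reflinked copy shares data blocks with the original (copy-on-write):
CLONE_CMD="cp --reflink=always bigfile bigfile.clone"

printf '%s\n' "$MKFS_CMD" "$CLONE_CMD"
```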
1: Example `/proc/mdstat` file with a missing device:

    Personalities : [raid1]
    md200 : active raid1 loop0[0]
          65408 blocks super 1.2 [2/1] [U_]

    unused devices: <none>
2: `/proc/mdstat` file after adding a second device:

    Personalities : [raid1]
    md200 : active raid1 loop1[2] loop0[0]
          65408 blocks super 1.2 [2/2] [UU]

    unused devices: <none>