I would like to clean up my currently tangled LVM setup to better suit my workflow. As you can see, I'm only using about half of the devices/storage I have available, yet I have now converted my old Windows /dev/sdd1 1 TB disk for Linux LVM use as well.
I'm a bit of a madman about using my disks, as you can see... :-D
Current lsblk shows this:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 111,8G 0 disk
├─sda1 8:1 0 16M 0 part
└─sda2 8:2 0 111,8G 0 part
sdb 8:16 0 111,8G 0 disk
└─sdb1 8:17 0 111,8G 0 part
├─vghyper-DataTwoCache_cdata 254:0 0 32G 0 lvm
│ └─vghyper-ArchDataTwo 254:3 0 363,4G 0 lvm /media/DataTwo
└─vghyper-DataTwoCache_cmeta 254:1 0 32M 0 lvm
└─vghyper-ArchDataTwo 254:3 0 363,4G 0 lvm /media/DataTwo
sdc 8:32 0 931,5G 0 disk
└─sdc2 8:34 0 931,5G 0 part
├─vghyper-ArchDataTwo_corig 254:2 0 363,4G 0 lvm
│ └─vghyper-ArchDataTwo 254:3 0 363,4G 0 lvm /media/DataTwo
├─vghyper-DataOne 254:4 0 125G 0 lvm /media/Data
├─vghyper-ProjectData_corig 254:8 0 100G 0 lvm
│ └─vghyper-ProjectData 254:9 0 100G 0 lvm /media/Projektit
├─vghyper-win7 254:11 0 100G 0 lvm
└─vghyper-win10 254:12 0 111,8G 0 lvm
sdd 8:48 0 931,5G 0 disk
└─sdd1 8:49 0 931,5G 0 part
sde 8:64 0 1,8T 0 disk
├─sde1 8:65 0 931,5G 0 part /media/DataExt
├─sde2 8:66 0 1G 0 part
└─sde3 8:67 0 100G 0 part
nvme0n1 259:0 0 238,5G 0 disk
├─nvme0n1p1 259:1 0 2G 0 part /boot
└─nvme0n1p2 259:2 0 236,5G 0 part
├─vghyper-HyperiorRoot 254:5 0 128G 0 lvm /
├─vghyper-ProjectDataCache_cdata 254:6 0 32G 0 lvm
│ └─vghyper-ProjectData 254:9 0 100G 0 lvm /media/Projektit
├─vghyper-ProjectDataCache_cmeta 254:7 0 32M 0 lvm
│ └─vghyper-ProjectData 254:9 0 100G 0 lvm /media/Projektit
└─vghyper-SwapNVMe 254:10 0 16G 0 lvm [SWAP]
Here are some details about how the devices are used:
- /dev/sda is a SATA SSD reserved for my Win10 installation (for gaming).
- /dev/sdb is a SATA SSD fully initialized as an LVM PV, used partly for lvm-cache space. It is otherwise mostly unused.
- /dev/sdc is a SATA HDD and my main Linux storage disk. I'm a bit terrified of losing it, especially the vghyper-ProjectData LV.
- /dev/sdd is a SATA HDD, now sitting idle doing nothing.
- /dev/sde is an external 2 TB USB3 drive used for backups, with a small Linux installation on it for recovery/travel purposes. It has 830 GB of unreserved space left.
- nvme0n1p1 is the NVMe UEFI FAT32 boot partition.
- nvme0n1p2 is the main Linux PV with a btrfs LV on it. The problem here is the root filesystem running tight on space (only ~30 GB free).
My goal here is to get better performance out of my disks:
- Create a mirror/RAID1 device from /dev/sdc and /dev/sdd (for safety and read speed).
- Make vghyper-ProjectData as fast as possible, since I push a few gigs of compile I/O to it daily. The current 32 G dm-cache shows 99% usage.
- Lastly, use the fast NVMe/SSD space to cache the spinning rust in three tiers:
- Tier 0: Linux rootfs cached by NVMe instead of living directly on it.
- Tier 1: Misc data cached with NVMe and SSD?
- Tier 2: Regular spinning rust under dm-mirror/RAID1 for VMs.
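To illustrate what I mean by re-adding a cache tier later, this is roughly how I imagine attaching a bigger cache to ProjectData afterwards. This is an untested sketch; the cache LV name and the 64G size are placeholders I made up, and I'd pick the cache mode after reading up on the write-back risks:

```shell
# Placeholder sketch (untested): carve a bigger cache LV out of the NVMe PV
# and attach it to ProjectData. Names/sizes are assumptions, not my real setup.
lvcreate -L 64G -n ProjectDataCache vghyper /dev/nvme0n1p2

# Attach it as a cache volume; writethrough is the safer mode if the
# cache device dies, writeback would be faster for my compile workload.
lvconvert --type cache --cachevol ProjectDataCache \
    --cachemode writethrough vghyper/ProjectData
```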
As a first step, I figured I should probably dismantle all the dm-caches I currently have (easy, done already).
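For reference, detaching the caches was just an uncache per cached LV, which writes back any dirty blocks and deletes the cache pool:

```shell
# Detach the dm-caches from the two cached LVs shown in the lsblk output.
# --uncache flushes dirty blocks back to the origin and removes the cache pool.
lvconvert --uncache vghyper/ProjectData
lvconvert --uncache vghyper/ArchDataTwo
```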
The following steps I'm not so sure about: how do I build a dm-mirror/RAID1 from the /dev/sdc and /dev/sdd devices without losing any data? (Frankly, I'm terrified to touch the /dev/sdc2 PV.)
How do I safely convert /dev/sdc2 into a RAID1/dm-mirrored device?