
On a CentOS 7 server with 4x Samsung 800 SSDs in hardware RAID 10, I am using KVM for virtualization. I created a thin LVM pool for the VMs with "lvcreate -l 100%FREE --type thin-pool --thinpool thin_pool vgssd", which gave me the following:

Thin pool volume with chunk size 1.00 MiB can address at most 253.00 TiB of data
WARNING: Pool zeroing and 1.00 MiB large chunk size slows down thin provisioning
WARNING: Consider disabling zeroing (-Zn) or using smaller chunk size (<512.00 KiB).
Logical volume "thin_pool" created.

What do you think about the chunk size? Is zeroing necessary? VM performance is important to me; the guests are mostly Windows.

Amin

1 Answer


I can only recommend using a smaller chunk size, and do not disable zeroing!
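For example, a minimal sketch against the asker's vgssd volume group (the 256K chunk size is my assumption, not a benchmark result; thin-pool chunk sizes must be multiples of 64 KiB, so pick what fits your workload):

lvcreate -l 100%FREE --type thin-pool --chunksize 256K -Zy --thinpool thin_pool vgssd

-Zy is the default anyway; spelling it out just makes sure zeroing stays on.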

My fuckup story:
Proxmox server with an LVM thin storage set up as
lvcreate -l 100%FREE --thinpool myVolumeGroup/myLogicalVolume -Zn
All my VMs had an OS disk and a separate data disk. Regular backups were running via Proxmox Backup. The restored backups looked fine at first glance, but the data disk was corrupted:
Error: Both the primary and backup GPT tables are corrupt. Try making a fresh table, and using Parted's rescue feature to recover partitions.
The backups were worthless, and the clock was ticking.
In a Proxmox forum thread, someone had the same issue: "The troublemaker has been found: due to a hint when creating the thin pool, I added the flag -Zn so that the first 4 KiB of volumes are not zeroed on creation." Without zeroing, newly provisioned chunks are handed out containing whatever stale data was stored there before, so a freshly written disk can end up with garbage where its partition table should be.

Solution if you already disabled zeroing on a production server:
lvchange -Z y myVolumeGroup/myLogicalVolume
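
To verify the change took effect (a sketch; "zero" is an lvs reporting field, and the volume group name is the placeholder from above):

lvs -o lv_name,zero myVolumeGroup

Note that this only affects chunks provisioned after the change; volumes that were already restored with stale data stay corrupted and need to be restored again.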