
I need to set up an XFS filesystem on top of LVM on top of a hardware RAID-6 (10x 6TB + 2 parity) and I found the guideline on http://xfs.org/index.php/XFS_FAQ ("How to calculate the correct sunit,swidth values for optimal performance") which recommends:

When creating an XFS filesystem on top of LVM on top of hardware RAID, please use the same sunit/swidth values as when creating the XFS filesystem directly on top of the hardware RAID.

I understand that sunit is the per-disk stripe (chunk) size defined during RAID volume creation, and swidth corresponds to the number of data disks (e.g. 10 in the example above).

If I then create, say, two logical volumes of 12 TB and 30 TB, would I still use swidth=10 for both XFS filesystems, or would I use swidth=2 and swidth=5, matching the 2 and 5 data disks that (mathematically) make up those logical volumes?
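For concreteness, the arithmetic can be sketched like this. The 10 data disks come from the question; the 256 KiB per-disk chunk size is an assumed value for illustration (use whatever was configured on the RAID controller):

```python
# Hypothetical RAID-6 geometry: 12 disks, 2 parity, so 10 data disks.
# The 256 KiB chunk size is an assumption, not taken from the question.
SECTOR = 512                 # mkfs.xfs expresses sunit/swidth in 512-byte sectors
chunk_kib = 256              # per-disk stripe (chunk) size set at RAID creation
data_disks = 10              # 12 disks total minus 2 parity in RAID-6

sunit = chunk_kib * 1024 // SECTOR   # chunk size in sectors
swidth = sunit * data_disks          # full stripe width in sectors

print(f"sunit={sunit} swidth={swidth}")   # sunit=512 swidth=5120
```

The point is that swidth is derived from the physical RAID geometry, not from how large a slice of the array a given logical volume happens to occupy.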

Michael

1 Answer

  1. You must use swidth=10 for all logical volumes, because it describes the physical RAID geometry underneath.
  2. Modern Linux systems can detect and apply sunit/swidth values automatically; in most cases there is no need to calculate them manually.
  • Considering modern Linux systems can't even reliably detect 512e vs. native 512-byte sectors, I'd question how well Linux can determine the correct parameters. It's possible the above answer _was_ correct, but I'm not so sure now. I bought 24 512e disks for use in a RAID10. When I first bought them, values in /sys confirmed that 512 was emulated and the underlying hardware used 4K sectors. But looking at those values in /sys now, the original physical sector size is no longer visible; only via Megacli[64] can I see the actual info. – Astara May 06 '20 at 20:24
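As a sketch of what explicit values look like at mkfs time: with mkfs.xfs, su takes the per-disk chunk size and sw the number of data disks (equivalent to the sunit/swidth sector values). The 256 KiB chunk size and the device path are assumptions for illustration:

```shell
# Sketch only -- substitute your RAID's actual chunk size and your LV path.
# su = per-disk chunk size, sw = number of data disks (10 here for RAID-6 on 12 disks).
mkfs.xfs -d su=256k,sw=10 /dev/vg0/lv_data
```

Both logical volumes would get the same su/sw, since they sit on the same physical stripe geometry.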