
We need to create an XFS file system on a Kafka disk.

What is special about this disk is its size: in our case it is 20 TB.

I am not sure about the following mkfs invocation, and I would like advice on whether this command line is good enough to create an XFS file system on such a huge disk (a Kafka machine):

 DISK=sdb
 mkfs.xfs -f -L kafka "/dev/$DISK"
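
For completeness, after mkfs I plan to mount it roughly like this; the /data/kafka mount point and the fstab line are just our convention, not a requirement:

 # Mount by label so the entry survives device renaming
 # (the /data/kafka mount point is an assumption):
 mkdir -p /data/kafka
 mount LABEL=kafka /data/kafka

 # Confirm the geometry mkfs.xfs chose (block size, AG count, log size):
 xfs_info /data/kafka

 # Example /etc/fstab line; noatime is commonly recommended for Kafka:
 # LABEL=kafka  /data/kafka  xfs  noatime  0 0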

Kafka best practice:

Filesystem Selection

Kafka uses regular files on disk, and as such it has no hard dependency on a specific file system. We recommend EXT4 or XFS. Recent improvements to the XFS file system have shown it to have better performance characteristics for Kafka's workload without any compromise in stability.

Note: Do not use mounted shared drives or any network file systems. In our experience Kafka is known to have index failures on such file systems. Kafka uses memory-mapped files to store the offset index, which has known issues on network file systems.
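
Since the docs warn against network file systems, I also check that the target really is a local disk before running mkfs:

 # Verify the target is a local block device, not network-backed
 # (TRAN shows the transport: sata, nvme, iscsi, ...):
 lsblk -o NAME,SIZE,TYPE,TRAN,MOUNTPOINT /dev/sdb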

shalom

1 Answer


Yes, mkfs.xfs defaults should be sufficient.

XFS has been tested on volumes hundreds of TiB in size for years.

Kafka's XFS notes state that

The XFS filesystem has a significant amount of auto-tuning in place, so it does not require any change in the default settings, either at filesystem creation time or at mount.

(The docs then proceed to discuss largeio and nobarrier but dismiss them as unnecessary, so ignore those.)

As always, test with your workload. It is probably not easy to push synthetic testing up to production load levels, so at the very least monitor your I/O performance.
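
For example, a minimal sketch; the fio parameters here are only illustrative assumptions, not a tuned benchmark, and Kafka's real workload differs:

 # Watch extended per-device I/O statistics (sysstat package):
 iostat -x sdb 1

 # A rough sequential-write smoke test with fio on the mounted
 # file system (the /data/kafka path is an assumption):
 fio --name=seqwrite --directory=/data/kafka --rw=write \
     --bs=1M --size=4G --numjobs=4 --direct=1 --group_reporting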

John Mahowald
  • I'd just add that the `nobarrier` mount option should be used only with capacitor- or battery-backed storage, so you can be sure that in case of a sudden power failure all disk content is synced later. – Jaroslav Kucera Dec 16 '19 at 08:08
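
To see whether a mount currently uses barriers, you can check the active mount options (the mount point is an assumption); note that recent kernels have removed `nobarrier` support for XFS entirely:

 # Show the active mount options for the Kafka data mount:
 findmnt -no OPTIONS /data/kafka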