
I've run XFS filesystems as data/growth partitions for nearly 10 years across various Linux servers.

I've noticed a strange phenomenon with recent CentOS/RHEL servers running version 6.2+.

Stable filesystem usage became highly variable following the move from EL6.0/EL6.1 to the newer OS revision. Systems installed fresh with EL6.2+ exhibit the same behavior, showing wild swings in disk utilization on the XFS partitions (see the blue line in the graph below).

Before and after. The upgrade from 6.1 to 6.2 occurred on Saturday. [XFS disk usage graph]

The past quarter's disk usage graph of the same system, showing the fluctuations over the last week. [quarterly disk usage graph]

I started to check the filesystems for large files and runaway processes (log files, maybe?). I discovered that my largest files were reporting different values from du and ls. Running du with and without the --apparent-size switch illustrates the difference.

# du -skh SOD0005.TXT
29G     SOD0005.TXT

# du -skh --apparent-size SOD0005.TXT
21G     SOD0005.TXT

A quick check using the ncdu utility across the entire filesystem yielded:

Total disk usage: 436.8GiB  Apparent size: 365.2GiB  Items: 863258

The discrepancy was filesystem-wide: disk usage far exceeded the apparent file sizes, with over 70GB of space seemingly lost compared to the previous version of the OS/kernel!

I pored through the Red Hat Bugzilla and change logs to see if there were any reports of the same behavior or new announcements regarding XFS.

Nada.

I went from kernel version 2.6.32-131.17.1.el6 to 2.6.32-220.23.1.el6 during the upgrade; no change in minor version number.

I checked file fragmentation with the filefrag tool. Some of the biggest files on the XFS partition had thousands of extents. Running an online defrag with xfs_fsr -v during a slow period of activity helped reduce disk usage temporarily (see Wednesday in the first graph above). However, usage ballooned again as soon as heavy system activity resumed.
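
For reference, this is roughly the sequence I mean; the file name is the one from above, while the device and mount point are placeholders for your own layout:

# filefrag -v SOD0005.TXT        # list the file's extents (thousands is a red flag)
# xfs_db -r -c frag /dev/sdb1    # read-only fragmentation summary for the whole device
# xfs_fsr -v /data               # online defrag; best run during a quiet period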

What is happening here?

ewwhite

1 Answer


I traced this issue back to a discussion about a commit to the XFS source tree from December 2010. The patch was introduced in kernel 2.6.38 (and obviously later backported into some popular Linux distribution kernels).

The observed fluctuations in disk usage are the result of a new feature: XFS Dynamic Speculative EOF Preallocation.

This is a move to reduce file fragmentation during streaming writes by speculatively allocating space as file sizes increase. The amount of space preallocated per file is dynamic and is primarily a function of the free space available on the filesystem (to preclude running out of space entirely).

It follows this schedule:

freespace         max prealloc size
  >5%             full extent (8GB)
  4-5%            2GB (8GB >> 2)
  3-4%            1GB (8GB >> 3)
  2-3%            512MB (8GB >> 4)
  1-2%            256MB (8GB >> 5)
  <1%             128MB (8GB >> 6)
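
To see which row of that table a filesystem currently falls into, a rough sketch like the following works; the mount point is a placeholder, and the bucketing only mirrors the table above (boundaries approximate), not the actual allocator code:

#!/bin/sh
# Report a mount point's free-space percentage and the matching
# max speculative preallocation size from the table above.
mnt=${1:-/data}                                             # placeholder mount point
free_pct=$(df -P "$mnt" | awk 'NR==2 { print 100 - $5 }')   # $5 is the "Use%" column

if   [ "$free_pct" -gt 5 ]; then cap="full extent (8GB)"
elif [ "$free_pct" -gt 4 ]; then cap="2GB"
elif [ "$free_pct" -gt 3 ]; then cap="1GB"
elif [ "$free_pct" -gt 2 ]; then cap="512MB"
elif [ "$free_pct" -gt 1 ]; then cap="256MB"
else                             cap="128MB"
fi

echo "$mnt: ${free_pct}% free -> max prealloc per file: $cap"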

This is an interesting addition to the filesystem as it may help with some of the massively fragmented files I deal with.

The additional space can be reclaimed temporarily by freeing the pagecache, dentries and inodes with:

sync; echo 3 > /proc/sys/vm/drop_caches

The feature can be disabled entirely by specifying a fixed allocsize value at mount time; the traditional XFS default is allocsize=64k.
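
For example, to get the old fixed behavior back on one of these volumes, something like this would do it (the device and mount point are placeholders matching the examples above):

# umount /data
# mount -t xfs -o allocsize=64k /dev/sdb1 /data

Or persistently, via a line in /etc/fstab:

/dev/sdb1   /data   xfs   defaults,allocsize=64k   0 0

Keep in mind that pinning allocsize gives up the anti-fragmentation benefit this feature provides.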

The impact of this change will probably be felt by monitoring/thresholding systems (which is how I caught it). It has also affected database systems, and it could cause unpredictable or undesired results for thin-provisioned virtual machines and storage arrays (they'll use more space than you expect).
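
If your monitoring can run a command, comparing allocated and apparent usage for the whole mount point shows how much of the apparent growth is just speculative preallocation (the mount point is again a placeholder):

# du -sh /data                    # allocated space; this is the number that swings
# du -sh --apparent-size /data    # apparent size; this is what the data actually amounts to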

All in all, it caught me off-guard because there was no clear announcement of the filesystem change at the distribution level, or even on the XFS mailing list, which I monitor.


Edit:
Performance on XFS volumes with this feature is drastically improved. I'm seeing consistent < 1% fragmentation on volumes that previously displayed up to 50% fragmentation. Write performance is up globally!

Stats from the same dataset, comparing legacy XFS to the version in EL6.3.

Old:

# xfs_db -r -c frag /dev/cciss/c0d0p9
actual 1874760, ideal 1256876, fragmentation factor 32.96%

New:

# xfs_db -r -c frag /dev/sdb1
actual 1201423, ideal 1190967, fragmentation factor 0.87%
ewwhite

Comments:

  • A million upvotes and my kingdom to you – Joel E Salas Jul 10 '12 at 00:21
  • Thank you! We just upgraded from Debian Squeeze to Ubuntu and had been wondering why du and ls were showing such wildly different values for largish files (e.g. 50MB vs 64MB) – Giles Thomas Feb 20 '13 at 12:38
  • @ewwhite Did you turn this feature off to reclaim the space? Or is this article just saying, hey, this feature is what was causing the discrepancy in reported sizes? It sounds like "on database systems, or thin-provisioned VMs, consider turning this off", but I'm not sure what you decided to do, ultimately. – JDS May 12 '14 at 15:16
  • @jds I leave it on. It eliminates fragmentation and has had a performance boost to my applications. – ewwhite May 12 '14 at 15:35
  • @ewwhite Thanks for the closure. We plan on turning it off on Netapp-backed, thin-provisioned vmdks that host mysql databases, as they lose up to a third or more of their space (comparing the --apparent-size to plain du output) with this on. We expect that netapp will provide the required performance with or without this feature turned on. – JDS May 12 '14 at 15:57
  • Good point. I just tend to make volumes larger to counter the space requirements. But I'm still surprised at the lack of visibility on this issue. – ewwhite May 12 '14 at 15:59
  • Oh, wonderful find. This was using 750GB on 35GB of files. After xfs_fsr it's back down to about 35GB. I'll have to keep an eye on that. – Sep 15 '15 at 22:18