
I am having a problem where df results are unreliable. I'm using the XFS filesystem on SLES 11 SP3.
Basically, there is a big difference (a few GB) between the free space reported before and after I clear the disk cache. Does anyone know why the disk cache uses extra storage?
For example:

VideoEdge:/ # df
Filesystem     1K-blocks      Used Available Use% Mounted on
...
/dev/sdb2      870942208 824794856  46147352  95% /mediadb
/dev/sdc1      975746564 924536548  51210016  95% /mediadb1
/dev/sdd1      975746564 153177500 822569064  16% /mediadb2

VideoEdge:/ # echo 3 > /proc/sys/vm/drop_caches 

VideoEdge:/ # df
Filesystem     1K-blocks      Used Available Use% Mounted on
/dev/sdb2      870942208 822225756  48716452  95% /mediadb
/dev/sdc1      975746564 923374888  52371676  95% /mediadb1
/dev/sdd1      975746564 148323524 827423040  16% /mediadb2

As can be seen above, there is more available space after clearing the disk cache.

We use df to estimate how much space can be used, and we try to remove old data when df says storage is 95% full. Because the disk cache takes an unpredictable amount of storage space, this is causing problems.
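
For reference, the cleanup trigger is essentially something like the following simplified sketch (the path and threshold are just examples from our setup; the real script differs):

#!/bin/sh
# Simplified sketch: remove old data once df reports the media volume at or above 95%.
MOUNT=/mediadb
THRESHOLD=95
# df -P prints one POSIX-format line per filesystem; column 5 is the Use% value.
USAGE=$(df -P "$MOUNT" | awk 'NR==2 {sub(/%/, "", $5); print $5}')
if [ "$USAGE" -ge "$THRESHOLD" ]; then
    # delete the oldest recordings here until usage drops back below the threshold
    echo "cleanup needed on $MOUNT (currently ${USAGE}% used)"
fi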

Does anyone know why the disk cache would temporarily consume storage? Is there a way to calculate how much space is taken by the disk cache, or the maximum that it may take?

We don't want to clear the disk cache from time to time, as that may hurt performance.

VideoEdge:/ # df
Filesystem     1K-blocks      Used Available Use% Mounted on
rootfs           8259484   5592116   2247724  72% /
udev             2021220       228   2020992   1% /dev
tmpfs            2021220       144   2021076   1% /dev/shm
/dev/sda1        8259484   5592116   2247724  72% /
/dev/sda3      463282160  75389072 387893088  17% /var
/dev/sdb1      104804356     32928 104771428   1% /var/opt/americandynamics/venvr/clipexport
/dev/sdb2      870942208 821370196  49572012  95% /mediadb
/dev/sdc1      975746564 923423496  52323068  95% /mediadb1
/dev/sdd1      975746564 148299180 827447384  16% /mediadb2

/dev/sdb2 on /mediadb type xfs (rw,noatime,nodiratime,attr2,nobarrier,inode64,allocsize=4096k,noquota)
/dev/sdc1 on /mediadb1 type xfs (rw,noatime,nodiratime,attr2,nobarrier,inode64,allocsize=4096k,noquota)
/dev/sdd1 on /mediadb2 type xfs (rw,noatime,nodiratime,attr2,nobarrier,inode64,allocsize=4096k,noquota)

1 Answer


Please see:

Why are my XFS filesystems suddenly consuming more space and full of sparse files?

This is a result of the dynamic preallocation features of XFS. These are basically file buffers that coalesce writes to prevent file fragmentation. There are a couple of workarounds.

  • du --apparent-size can be helpful.
  • Mount options for the XFS filesystem, as detailed in the linked question (see the examples below).
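
For illustration (these commands are generic du/fstab usage, not taken from the original post; /mediadb is one of the mount points from the question), comparing on-disk usage with apparent size shows roughly how much space preallocation is holding:

VideoEdge:/ # du -sk /mediadb
VideoEdge:/ # du -sk --apparent-size /mediadb

The first figure counts allocated blocks, including speculatively preallocated space; the second counts only the file contents, so the difference gives a rough idea of the space held by preallocation (sparse files can skew the comparison in the other direction). To cap preallocation at a fixed size per file, an explicit allocsize= value can be set in the mount options in /etc/fstab, for example:

/dev/sdb2  /mediadb  xfs  rw,noatime,nodiratime,attr2,nobarrier,inode64,allocsize=64k,noquota  0 0

The 64k value here is only an illustration; note that the mounts shown in the question already use allocsize=4096k, which bounds preallocation at 4 MB per file.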

In both cases, your filesystems are at a dangerously full level (95%+). The small amount of buffer space is irrelevant, considering that you should be well below 80% utilization. You may as well use the df results, because that's what's really in use at any given time.

  • Thanks a lot for the explanation and for pointing me to the link. – Song Aug 24 '15 at 13:28
  • Two comments: disabling the dynamic feature may not fix my issue, as preallocation would still happen, just with a fixed size, so df would still report more used space than the files actually use. And due to performance, we can't use "du --apparent-size", as it takes too long on drives with multiple terabytes and a huge number of dirs/files. – Song Aug 24 '15 at 13:42