
I have been hit by XFS's "No space left on device" error. According to the FAQ:

http://xfs.org/index.php/XFS_FAQ#Q:_Why_do_I_receive_No_space_left_on_device_after_xfs_growfs.3F

The only way to fix this is to move data around to free up space below 1TB. Find your oldest data (i.e. that was around before even the first grow) and move it off the filesystem (move, not copy). Then if you copy it back on, the data blocks will end up above 1TB and that should leave you with plenty of space for inodes below 1TB.
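
In other words, the suggested fix would look something like this (a sketch with made-up paths; /mnt/scratch must be on a different filesystem with enough room):

mv /data/old-dir /mnt/scratch/    # move (not copy) the old data off the XFS filesystem, freeing blocks below 1 TB
mv /mnt/scratch/old-dir /data/    # moving it back allocates fresh blocks, now above 1 TB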

But how do I identify the data to move? I cannot go by age, because the first 10 TB was filled the same day using rsync.

I have tried:

xfs_db -r -c "blockget -i 1 -n -v" /dev/md3

But I only seem to get the basename of the file and not the full path to the file. And since many of my files have the same name (but live in different dirs), that is not very useful. It also seems to give me more information than just inode 1.
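
A sketch of how an inode number from that output could be mapped back to a full path (assuming the filesystem is mounted at /mnt/disk and the inode number 133 is just a placeholder; scanning the whole filesystem this way is slow):

find /mnt/disk -xdev -inum 133 -print   # prints every path on this filesystem with inode number 133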

I have a feeling that I can use xfs_db and get that to tell me what files are using blocks in the first 1 TB, but I have been unable to see how.

(By using the mount option inode64 the file system will not give No space left on device, but if the filesystem is later mounted without inode64, you get No space left on device again. I would like to avoid inode64, because the filesystem may be mounted by other people on other systems; they will forget the option and get a surprising No space left on device.)

Ole Tange

2 Answers


Try to (re)mount your filesystem with the -o inode64 option and see if that already fixes your problem, but note this caveat from man mount:

inode64
    Indicates that XFS is allowed to create inodes at any location in the filesystem, including those which will result in inode numbers occupying more than 32 bits of significance. This is provided for backwards compatibility, but causes problems for backup applications that cannot handle large inode numbers.
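
If you do go this route, an /etc/fstab entry like the following (hypothetical device and mount point) makes the option permanent on this machine, so nobody has to remember to pass it by hand:

/dev/md3  /data  xfs  defaults,inode64  0  0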
Sven
  • It works as a work-around, but when the file system is later remounted without `inode64` (which we can assume other people will do), you get disk full again. – Ole Tange Jan 08 '13 at 10:22
  • You might want to check if "xfs: record 64 bit inode filesytsems in the superblock" patch gets into your kernel version (or if it's in Linus repo). – kupson Jan 12 '13 at 09:10

Quick & Dirty example (remove inline comments, adjust numbers):

# select filesystem
find / -xdev -type f -print0 | \
  xargs -0r -I{} \
    # execute xfs_bmap on every file (and prefix output with path for later processing)
    sh -c "xfs_bmap -v {} | awk '{gsub(/\.\./,\" \"); print \"{}: \" \$0}'" | \
    # remove some cruft
    awk -F: '$4 != ""{print $1 " " $4}' | \
    # print line if last block < 1TB/512B/block and size (in 512B blocks) > 100.
    awk '$3 < 1024*1024*1024*1024/512 && $7 > 100{print}'
kupson
  • It is somewhat faster if run with GNU Parallel: find /mnt/disk -xdev -type f -print | parallel --tag xfs_bmap -v | sed 's/\.\./ /g' | awk '$6 < 1024*1024*1024*1024/512 && $10 > 100{print $1}' – Ole Tange Jan 15 '13 at 11:17
  • note that you actually need GNU Parallel for that --tag option, moreutils parallel doesn't have it – Josip Rodin Feb 23 '16 at 13:23