
I have a 4 TB disk with one xfs partition (sda1). I wanted to copy most of the data on it (2.8 TB of the 3.6 TB used) to a new disk (sdc1). First I prepared sdc the same way as sda (a sketch of the commands is below, after the parted -l output):

parted -l

  Model: ATA WDC WD40EZRX-00S (scsi)
  Disk /dev/sda: 4001GB
  Sector size (logical/physical): 512B/4096B
  Partition Table: gpt
  Disk Flags: 

  Number  Start   End     Size    File system  Name     Flags
   1      1049kB  4001GB  4001GB  xfs          primary

  ...

  Model: ATA ST4000DM000-1F21 (scsi)
  Disk /dev/sdc: 4001GB
  Sector size (logical/physical): 512B/4096B
  Partition Table: gpt
  Disk Flags: 

  Number  Start   End     Size    File system  Name     Flags
   1      1049kB  4001GB  4001GB  xfs          primary
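
For reference, the preparation was along these lines; the exact parted/mkfs.xfs options below are a sketch rather than the verbatim commands:

  # assumed commands: create a GPT label, one full-size partition, then an XFS filesystem
  parted /dev/sdc mklabel gpt
  parted -a optimal /dev/sdc mkpart primary xfs 0% 100%
  mkfs.xfs /dev/sdc1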

Then I used rsync to copy the 2.8 TB from sda1 to sdc1 (roughly the command sketched after the df output below), but I ran out of space on sdc1:

df -h
  Filesystem      Size  Used Avail Use% Mounted on
  /dev/sdc1       3.7T  3.7T   20K 100% /home/alexis/STORE
  /dev/sda1       3.7T  3.6T   52G  99% /home/alexis/OTHER                   
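
The copy itself was a plain rsync along these lines (the exact flags are a sketch; in particular, neither -S nor -H was used at this stage):

  # assumed invocation: archive mode; the source list is a placeholder for the ~2.8 TB subset
  rsync -a --progress /home/alexis/OTHER/<selected-dirs> /home/alexis/STORE/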

What is happening? Below I post some output that I collected. Keep in mind that I'm including this data because I'm guessing at what might be relevant; I don't really know what it means (I would like to!). For instance, I noted a difference in sectsz, yet nothing differs in parted -l ... What does this mean? I also noted the difference in the number of inodes... Why?

Thanks a lot!

df -i
  Filesystem        Inodes   IUsed     IFree IUse% Mounted on
  /dev/sdc1         270480  270328       152  100% /home/alexis/STORE
  /dev/sda1      215387968  400253 214987715    1% /home/alexis/OTHER


xfs_info STORE
  meta-data=/dev/sdc1              isize=256    agcount=4, agsize=244188544 blks
           =                       sectsz=4096  attr=2, projid32bit=1
           =                       crc=0        finobt=0
  data     =                       bsize=4096   blocks=976754176, imaxpct=5
           =                       sunit=0      swidth=0 blks
  naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
  log      =internal               bsize=4096   blocks=476930, version=2
           =                       sectsz=4096  sunit=1 blks, lazy-count=1
  realtime =none                   extsz=4096   blocks=0, rtextents=0

xfs_info OTHER/
  meta-data=/dev/sda1              isize=256    agcount=4, agsize=244188544 blks
           =                       sectsz=512   attr=2, projid32bit=0
           =                       crc=0        finobt=0
  data     =                       bsize=4096   blocks=976754176, imaxpct=5
           =                       sunit=0      swidth=0 blks
  naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
  log      =internal               bsize=4096   blocks=476930, version=2
           =                       sectsz=512   sunit=0 blks, lazy-count=1
  realtime =none                   extsz=4096   blocks=0, rtextents=0




hdparm -I /dev/sdc | grep Physical
        Physical Sector size:                  4096 bytes
hdparm -I /dev/sda | grep Physical
        Physical Sector size:                  4096 bytes
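
For completeness, the logical and physical sector sizes can also be checked with blockdev (shown only as a sketch; I haven't included that output here):

  # --getss = logical sector size, --getpbsz = physical block size
  blockdev --getss --getpbsz /dev/sda
  blockdev --getss --getpbsz /dev/sdc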

EDIT

This is not a duplicate of Unable to create files on large XFS filesystem. I have two similar disks, I have neither free space nor free inodes left, and I never increased the size of any partition.

To my other questions I add this one: why do my two partitions have different numbers of inodes if I used the same procedure (parted, mkfs.xfs) to create them?
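
In case it's relevant, here is a sketch of how the raw inode counters could be read straight from each superblock with xfs_db (read-only; I'm assuming the standard icount/ifree field names):

  # read allocated and free inode counters from the superblock
  xfs_db -r -c "sb 0" -c "p icount" -c "p ifree" /dev/sda1
  xfs_db -r -c "sb 0" -c "p icount" -c "p ifree" /dev/sdc1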

EDIT2

Here is the allocation-group usage:

xfs_db -r -c "freesp -s -a 0" /dev/sdc1
   from      to extents  blocks    pct
      1       1      20      20   2.28
      2       3      26      61   6.96
      4       7      31     167  19.06
      8      15      35     397  45.32
     16      31      12     231  26.37
total free extents 124
total free blocks 876
average free extent size 7.06452

xfs_db -r -c "freesp -s -a 0" /dev/sda1
   from      to extents  blocks    pct
      1       1      85      85   0.00
      2       3      68     176   0.01
      4       7     438    2487   0.10
      8      15     148    1418   0.06
     16      31      33     786   0.03
     32      63      91    4606   0.18
     64     127      94    9011   0.35
    128     255      16    3010   0.12
    256     511       9    3345   0.13
    512    1023      18   12344   0.49
   1024    2047      10   15526   0.61
   2048    4095      72  172969   6.81
   4096    8191      31  184089   7.25
   8192   16383      27  322182  12.68
  16384   32767      15  287112  11.30
 262144  524287       2  889586  35.02
 524288 1048575       1  631150  24.85
total free extents 1158
total free blocks 2539882
average free extent size 2193.34
  • You should read the information you have provided and attempt to understand it. The clue to your problem is in the output of `df -i`. – user9517 Mar 02 '16 at 16:04
  • @Iain I did, but clearly I didn't understand it; why else would I be asking you guys? The number of free inodes is low, but so is the free space on the device, so I don't know how to interpret this clue... can you give me another hint? – alexis Mar 02 '16 at 16:11
  • Have you tried the solutions in that duplicate question? Your problem is writing files on an XFS filesystem. Your problem isn't with the source according to anything you've said or shown. I'm baffled as to the logic that tells you your problem is completely different. If you don't understand the output of `df -i`, you aren't qualified to make that assessment without at least trying the proposed solutions. –  Mar 02 '16 at 16:37
  • Sorry guys! I had a typo mixing `sda` with `sdc`, which did not correspond with the output of `df -i`. I'm copying from `sda` to `sdc`. I've fixed it now. – alexis Mar 02 '16 at 16:45
  • @yoonix I still don't see how the questions are related... `df -i` says that 100% of inodes are used on the destination, right? `df -h` says that 100% of the space is used too. In the other question they said that 1.5 TB of the disk is free... Anyway, I tried remounting with inode64, but that doesn't change the fact that the disk is full. – alexis Mar 02 '16 at 16:54
  • Check your allocation-group usage. The `xfs_db` command for that was given in the other question. It should look something like `xfs_db -r -c "freesp -s -a 0" /dev/sdc1` – sysadmin1138 Mar 02 '16 at 17:03
  • I'd start looking into the following then: Does du match df for space usage? Are there any files that were deleted but are still open by a process? Were any of the source files sparse files that were expanded to their full size when copied? Were there other attempts at copying that were aborted and may have left files around? The sector size is a good catch. If you think that may be the cause, just reformat the new device to match the old one and try again. It'll either eliminate the issue or tell you it's a red herring and to look elsewhere. –  Mar 02 '16 at 17:06
  • @yoonix Thanks a lot for your suggestion. I will try that (a quick check is sketched below). Actually, I noticed that the size of some files increased... – alexis Mar 02 '16 at 17:15
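
Following up on the sparse-file suggestion above, a quick way to compare on-disk size with apparent size is sketched here (the path is a placeholder):

  # a tree whose apparent size is much larger than its on-disk size contains sparse files,
  # which a copy without rsync -S will expand on the destination
  du -sh /home/alexis/OTHER
  du -sh --apparent-size /home/alexis/OTHER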

1 Answer


You're out of inodes.

df -i
  Filesystem        Inodes   IUsed     IFree IUse% Mounted on
  /dev/sdc1         270480  270328       152  100% /home/alexis/STORE
  /dev/sda1      215387968  400253 214987715    1% /home/alexis/OTHER

The sdc1/STORE filesystem has 270,480 inodes, and you've used them all. That's why you're getting out-of-space warnings.

Why does STORE have far fewer inodes than OTHER?

The only structural difference between the two is the sector size, which shouldn't matter since both volumes use a 4096-byte block size. The issue comes in with how XFS does its inode allocation: it's dynamic.

The answer is hidden in the question: Unable to create files on large XFS filesystem

The issue turns out to be in how XFS allocates inodes. Unlike most file systems, allocation happens dynamically as new files are created. However, unless you specify otherwise, inodes are limited to 32-bit values, which means that they must fit within the first terabyte of storage on the file system. So if you completely filled that first terabyte and then enlarged the disk, you would still be unable to create new files, since their inodes can't be created in the new space.
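
One commonly suggested mitigation for that 32-bit inode limit, already raised in the comments, is the inode64 mount option; a sketch (on older kernels this may require a full unmount/mount rather than a remount):

  # allow inodes to be allocated anywhere on the filesystem, not just within the first 1 TiB
  mount -o remount,inode64 /home/alexis/STORE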

You may be better served using xfs_copy or xfsdump/xfsrestore to copy the data over, and then pruning out the data you didn't want copied.
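
A sketch of both approaches, using the device names and mount points from the question (the exact flags are illustrative, not a drop-in recipe):

  # block-for-block duplicate of the source filesystem onto the new partition
  # (overwrites sdc1; run with the source unmounted or mounted read-only, prune afterwards)
  xfs_copy /dev/sda1 /dev/sdc1

  # or a file-level dump/restore between the two mounted filesystems
  # (-J skips the dump inventory; -l 0 is a full, level-0 dump)
  xfsdump -J -l 0 - /home/alexis/OTHER | xfsrestore -J - /home/alexis/STORE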

sysadmin1138
  • Thank you for the explanation, I will try those tools. When I saw the 100% in `df -i` I searched the web, and in every question I found, the person asking claims to have a lot of free space on the device. `df -h` doesn't show that for me; why is that? Even if I get more inodes, why doesn't the same data fit on a device of the same size? – alexis Mar 02 '16 at 17:03
  • I wish people would put as much effort into reading the information provided and attempting to understand it as to complaining that the dupes are different when they are clearly not. – user9517 Mar 02 '16 at 17:08
  • @alexis The actual space left is a diagnostic. One of the others suggested you may have some sparse files on the source that were fully expanded during the copy. If that's the case, the xfs tools I pointed to should handle that case better than rsync. Though if you specify `-S` for rsync, it should deal with sparse files better. – sysadmin1138 Mar 02 '16 at 17:08
  • Solved! I had noticed that some files were larger than before, so I added `-S` to `rsync`. I also found some hard links that had been expanded, so I included `-H` as well (see the sketch at the end of this thread). I guess `xfs_copy` could avoid these issues too? – alexis Mar 02 '16 at 19:52
  • @Iain The question is not a dupe. His disk usage does not look like mine, and in the end the answer wasn't in the inode numbers. Anyway, even if the answer were the same, the questions clearly are not, which makes your claim [absurd](http://meta.stackoverflow.com/a/292372/1342186). You complain about people, but remember that you are a person too. Be happy that interest in your field of expertise is growing, and hope your questions are heard in other fields too. – alexis Mar 02 '16 at 19:54
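
For completeness, the rsync invocation that finally worked was along these lines, per the comment above (-a is assumed; -S handles sparse files, -H preserves hard links):

  # sparse-aware, hard-link-preserving copy (the source subset path is a placeholder)
  rsync -aSH --progress /home/alexis/OTHER/<selected-dirs> /home/alexis/STORE/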