
I'm going to completely rephrase this question, since it's still an outstanding production issue several months later.

I have a FreeNAS 0.7.2 box, based on FreeBSD 7.3-RELEASE-p1, running ZFS with 4x1TB SATA drives in RAIDz1.

I appear to have lost 1TB of usable space after creating and deleting a 1TB sparse file. This happened months ago.

This table lays out the situation as it stands.

command                      actual              expected            ok/not ok
------------------------------------------------------------------------------
du -c                        1.47TB used         1.47TB used         ok
zfs list                     used 2.48TB         used 1.47TB         not ok
                             avail 206GB         avail 1.2TB         not ok
zpool list                   size 3.64TB         size 3.64TB         ok
                             used 3.31TB         used 1.95TB         not ok
                             avail 334GB         avail 1.69TB        not ok
Windows: right-click disk,   Disk size 2.67TB    Disk size 2.67TB    ok
  Properties                 Used 2.47TB         Used 1.47TB         not ok
                             Free 206GB          Free 1.2TB          not ok
Windows: select all files,   total file size     total file size     ok
  right-click, Properties      1.48TB              1.48TB
  • No snapshots anywhere in the pool
  • Compression is off
  • De-dupe is off
• ZFS pool version is 13
  • ZFS FS version is 3
  • Using the "embedded" version of FreeNAS
  • File was created with dd using /dev/zero as input, deleted using rm, all as root
  • File has definitely been deleted
  • Windows can see the folder via SMB
  • Windows version is 7
• Not sure how to determine whether the bug suggested by an answerer below has been fixed in the ZFS pool and filesystem versions in the FreeBSD release I am using (the version numbers above came from the commands shown after this list)
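
For reference, a minimal sketch of how to read off those versions (the pool name tank is an assumption; substitute your own pool and dataset names):

    # Current on-disk version of the pool, plus the versions this
    # kernel supports and the changes each version introduced
    zpool get version tank
    zpool upgrade -v

    # Same for the filesystem (dataset) version
    zfs get version tank
    zfs upgrade -v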

Ask any questions you like; I can get a shell on the box from anywhere.

Really appreciate any advice or thoughts. Tom

tomfanning

5 Answers


The solution eventually came via the zfs-discuss mailing list, in this post.

It appears the default output of zfs list changed at some point (snapshots are no longer shown), and a hidden snapshot was consuming the extra space:

There was a change where snapshots are no longer shown by default.
This can be configured back to the old behaviour setting the zpool 
"listsnapshots" property to "on"

Otherwise, you need to use the "-t snapshot" list.

But, a much better method of tracking this down is to use: 
    zfs list -o space

That will show the accounting for all dataset objects.
 -- richard
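
In practice (a minimal sketch; the pool name tank is an assumption), that means either of the following:

    # Restore the old behaviour, so plain "zfs list" shows snapshots again
    zpool set listsnapshots=on tank

    # Or ask for the full space accounting directly; the USEDSNAP column
    # shows how much space snapshots are holding in each dataset
    zfs list -o space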

Thought it was worth posting this here and marking it as the answer, even after all this time.

tomfanning

listsnapshots is a property which only controls whether the default output of zfs list shows snapshots or not. It does not "enable or disable snapshots".

To list everything including snapshots, use this command:

    zfs list -t all

To list only snapshots, use this command:

    zfs list -t snapshot

Edit: you may have run into this ZFS bug. To confirm that the bug is the cause, try again with a non-sparse file; the bug should only occur for large sparse files, like those created by mkfile or by copying from /dev/zero.
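
For example (a hedged sketch: the /mnt/tank path and file sizes are assumptions, and truncate(1) is one convenient way to make a sparse file on FreeBSD), you could compare a truly sparse file against one whose blocks are actually written:

    # Sparse 1TB file: the size is set without allocating data blocks
    truncate -s 1t /mnt/tank/sparse.img

    # Non-sparse file: actually write the zeros (1GB here, to keep it quick)
    dd if=/dev/zero of=/mnt/tank/full.img bs=1m count=1024

    # Delete both, then watch whether the space is returned
    rm /mnt/tank/sparse.img /mnt/tank/full.img
    zfs list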

This bug has been fixed in Solaris, but maybe it still exists in the FreeBSD version you are using.

Wim Coenen

There can be delta data held between boot environments (BEs). Use beadm list to check whether you have any BEs. When you remove a previous BE, it should merge/commit the delta data to disk and release that hidden space. You may only see the space come back after you destroy the whole set of sub-BEs marked with the same date.
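
A minimal sketch of that workflow (the BE name here is hypothetical; use whatever beadm list actually reports):

    # List boot environments and the space each one holds
    beadm list

    # Destroy an old BE you no longer need; the snapshots/clones backing
    # it are released and the space should return to the pool
    beadm destroy oldBE-2010-09-01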

user452706

In my case, moving files off ZFS didn't help, but it turned out that there were a lot of old snapshots. I used this command to delete some of them:

    /sbin/zfs list -H -o name -t snapshot | grep tank | grep 2021 | xargs -n1 /sbin/zfs destroy
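
Since zfs destroy is irreversible, it may be worth previewing what would be destroyed first, e.g. by letting echo stand in for the real command:

    /sbin/zfs list -H -o name -t snapshot | grep tank | grep 2021 | xargs -n1 echo /sbin/zfs destroy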

On my machine it usually takes about 15 seconds before the disk usage stats are updated. Maybe you just weren't patient enough.

Ringding