
I have read a lot of information about planning RAM requirements for ZFS deduplication. I've just upgraded my file server's RAM to support some very limited dedupe on ZFS zvols on which I can't use snapshots and clones (they're zvols formatted with a different filesystem), yet which will contain a lot of duplicated data.

I want to make sure the new RAM I added will support the limited deduplication I intend to do. The numbers in my planning look good, but I want to be sure.

How can I tell the current size of the ZFS dedupe tables (DDTs) on my live system? I've read this mailing list thread, but I'm unclear on how they arrive at those numbers. (I can post the output of zdb tank if necessary, but I'm looking for a generic answer that can help others.)

ewwhite
Josh

2 Answers


You can use the zpool status -D poolname command.

The output would look similar to:

root@san1:/volumes# zpool status -D vol1
  pool: vol1
 state: ONLINE
 scan: scrub repaired 0 in 4h38m with 0 errors on Sun Mar 24 13:16:12 2013

DDT entries 2459286, size 481 on disk, 392 in core

bucket              allocated                       referenced          
______   ______________________________   ______________________________
refcnt   blocks   LSIZE   PSIZE   DSIZE   blocks   LSIZE   PSIZE   DSIZE
------   ------   -----   -----   -----   ------   -----   -----   -----
     1    2.23M   35.6G   19.0G   19.0G    2.23M   35.6G   19.0G   19.0G
     2     112K   1.75G   1005M   1005M     240K   3.75G   2.09G   2.09G
     4    8.03K    129M   73.8M   73.8M    35.4K    566M    324M    324M
     8      434   6.78M   3.16M   3.16M    4.61K   73.8M   35.4M   35.4M
    16      119   1.86M    811K    811K    2.33K   37.3M   15.3M   15.3M
    32       24    384K   34.5K   34.5K    1.13K   18.1M   1.51M   1.51M
    64       19    304K     19K     19K    1.63K   26.1M   1.63M   1.63M
   128        7    112K      7K      7K    1.26K   20.1M   1.26M   1.26M
   256        3     48K      3K      3K     1012   15.8M   1012K   1012K
   512        3     48K      3K      3K    2.01K   32.1M   2.01M   2.01M
    1K        2     32K      2K      2K    2.61K   41.7M   2.61M   2.61M
    2K        1     16K      1K      1K    2.31K   36.9M   2.31M   2.31M
 Total    2.35M   37.5G   20.1G   20.1G    2.51M   40.2G   21.5G   21.5G

The important fields are the Total allocated blocks and the Total referenced blocks. In the example above, I have a low deduplication ratio: 40.2G of data is stored in 37.5G of disk space, or 2.51 million blocks in 2.35 million blocks' worth of space.

To get the actual in-RAM size of the table, use this line of the output:

DDT entries 2459286, size 481 on disk, 392 in core

2459286 * 392 = 964040112 bytes. Divide by 1024 twice to get 919.3 MB in RAM.
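
If you'd rather script the arithmetic than do it by hand, here is a minimal Python sketch of the same calculation. The entry count and per-entry in-core size are simply the numbers from the output above; substitute your own pool's values.

# Estimate the DDT's RAM footprint from the "DDT entries" line of
# zpool status -D. Values below come from the example output above.
entries = 2459286            # "DDT entries 2459286"
core_bytes_per_entry = 392   # "392 in core"

ddt_ram_bytes = entries * core_bytes_per_entry
print(f"DDT in RAM: {ddt_ram_bytes} bytes "
      f"= {ddt_ram_bytes / 1024**2:.1f} MB")   # roughly 919 MB, as worked out above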

ewwhite
    I especially like @ewwhite's example DDT, because it also makes for a stellar example of a ratio that precludes using dedupe at all. I'd zfs send/recv the datasets on this pool, from deduped datasets to non-deduped datasets, and count myself lucky they were still small enough to make that manageable. :) Be careful assuming your zvols will dedupe. As a block-level dedupe, a single offset difference could skew the whole thing. If I have any advice, it is move mountains to test the production dataset in a TEST lab /before/ putting ZFS dedupe into any production environment. – Nex7 Sep 12 '13 at 19:03
  • http://constantin.glez.de/blog/2011/07/zfs-dedupe-or-not-dedupe has some good information on calculating your expected wins from dedup and your expected costs. – jlp Sep 15 '13 at 20:51
  • This answer needed an update, it wasn't quite complete. See below for more detailed answer – Stilez Mar 12 '17 at 11:39
  • Can you say what the different columns mean? ie. LSIZE, PSIZE, DSIZE? – Douglas Gaskell Jan 09 '18 at 18:43

After reading the original mailing list thread and @ewwhite's answer, which clarified it, I think this question needs an updated answer, as the answer above only covers half of it.

As an example, let's use the output for my pool, obtained with the command zdb -U /data/zfs/zpool.cache -bDDD My_pool. On my system I needed the extra -U argument to locate the pool's ZFS cache file, which FreeNAS stores in a non-standard location; you may or may not need it. Generally, try zdb without -U first, and if you get a cache file error, use find / -name "zpool.cache" or similar to locate the file it needs.

This was my actual output and I've interpreted it below:

DDT-sha256-zap-duplicate: 771295 entries, size 512 on disk, 165 in core

bucket              allocated                       referenced
______   ______________________________   ______________________________
refcnt   blocks   LSIZE   PSIZE   DSIZE   blocks   LSIZE   PSIZE   DSIZE
------   ------   -----   -----   -----   ------   -----   -----   -----
     2     648K   75.8G   68.6G   68.8G    1.39M    165G    149G    149G
     4    71.2K   8.07G   6.57G   6.62G     368K   41.7G   34.1G   34.3G
     8    28.1K   3.12G   2.34G   2.36G     281K   31.0G   23.1G   23.4G
    16    5.07K    424M    232M    241M     110K   9.10G   5.06G   5.24G
    32    1.09K   90.6M   51.8M   53.6M    45.8K   3.81G   2.21G   2.28G
    64      215   17.0M   8.51M   8.91M    17.6K   1.39G    705M    739M
   128       38   2.12M    776K    872K    6.02K    337M    118M    133M
   256       13    420K   21.5K     52K    4.63K    125M   7.98M   18.5M
   512        3      6K      3K     12K    1.79K   3.44M   1.74M   7.16M
    1K        1    128K      1K      4K    1.85K    237M   1.85M   7.42M
    2K        1     512     512      4K    3.38K   1.69M   1.69M   13.5M

DDT-sha256-zap-unique: 4637966 entries, size 478 on disk, 154 in core

bucket              allocated                       referenced
______   ______________________________   ______________________________
refcnt   blocks   LSIZE   PSIZE   DSIZE   blocks   LSIZE   PSIZE   DSIZE
------   ------   -----   -----   -----   ------   -----   -----   -----
     1    4.42M    550G    498G    500G    4.42M    550G    498G    500G


DDT histogram (aggregated over all DDTs):

bucket              allocated                       referenced
______   ______________________________   ______________________________
refcnt   blocks   LSIZE   PSIZE   DSIZE   blocks   LSIZE   PSIZE   DSIZE
------   ------   -----   -----   -----   ------   -----   -----   -----
     1    4.42M    550G    498G    500G    4.42M    550G    498G    500G
     2     648K   75.8G   68.6G   68.8G    1.39M    165G    149G    149G
     4    71.2K   8.07G   6.57G   6.62G     368K   41.7G   34.1G   34.3G
     8    28.1K   3.12G   2.34G   2.36G     281K   31.0G   23.1G   23.4G
    16    5.07K    424M    232M    241M     110K   9.10G   5.06G   5.24G
    32    1.09K   90.6M   51.8M   53.6M    45.8K   3.81G   2.21G   2.28G
    64      215   17.0M   8.51M   8.91M    17.6K   1.39G    705M    739M
   128       38   2.12M    776K    872K    6.02K    337M    118M    133M
   256       13    420K   21.5K     52K    4.63K    125M   7.98M   18.5M
   512        3      6K      3K     12K    1.79K   3.44M   1.74M   7.16M
    1K        1    128K      1K      4K    1.85K    237M   1.85M   7.42M
    2K        1     512     512      4K    3.38K   1.69M   1.69M   13.5M
 Total    5.16M    638G    576G    578G    6.64M    803G    712G    715G

dedup = 1.24, compress = 1.13, copies = 1.00, dedup * compress / copies = 1.39

What it all means, and working out the actual dedup table size:

The output shows two sub-tables, one for blocks which have a duplicate (DDT-sha256-zap-duplicate) and one for blocks which have no duplicate (DDT-sha256-zap-unique). The third table below them gives an overall total across both of these, and there's a summary row below that. Looking only at the "Total" rows and the summary gives us what we need:

DDT size for all blocks which appear more than once ("DDT-sha256-zap-duplicate"):
771295 entries, size 512 bytes on disk, 165 bytes in RAM ("core")

DDT size for blocks which are unique ("DDT-sha256-zap-unique"):
4637966 entries, size 478 bytes on disk, 154 bytes in RAM ("core")

Total DDT statistics for all DDT entries, duplicate + unique ("DDT histogram aggregated over all DDTs"):

                    allocated                       referenced
          (= disk space actually used)      (= amount of data deduped 
                                                 into that space)
______   ______________________________   ______________________________
         blocks   LSIZE   PSIZE   DSIZE   blocks   LSIZE   PSIZE   DSIZE

 Total    5.16M    638G    576G    578G    6.64M    803G    712G    715G

Summary:
dedup = 1.24, compress = 1.13, copies = 1.00, dedup * compress / copies = 1.39

Let's do some number crunching (a short script reproducing these figures follows the list).

  • The block count works like this: Number of entries related to duplicate blocks = 771295, number of entries related to unique blocks = 4637966, total entries in DDT table should be 771295+4637966 = 5409261. So the number of blocks in millions (binary millions that is!) would be 5409261 / (1024^2) = 5.158 million. In the summary we find there are 5.16M blocks total.

  • RAM needed works like this: The 771295 entries for duplicate blocks each occupy 165 bytes in RAM, and the 4637966 entries for unique blocks each occupy 154 bytes in RAM, so the total RAM needed for the dedup table right now = (771295 × 165) + (4637966 × 154) = 841510439 bytes = 841510439 / (1024^2) MBytes = 803 MB = 0.78 GB of RAM.

    (The on-disk size used can be worked out the same way, using the "size on disk" figures. Clearly ZFS is trying to use disk I/O efficiently and taking advantage of the fact that disk space taken up by the DDT isn't normally an issue. So it looks like ZFS is simply allocating a complete 512 byte sector for each entry, or something along those lines, instead of just 154 or 165 bytes, to keep it efficient. This might not include any allowance for multiple copies held on disk, which ZFS usually does.)

  • The total amount of data stored, and the benefit from deduping it: From the total DDT statistics, 715 Gbytes ("715G") of data is stored using just 578 GBytes ("578G") of allocated storage on the disks. So our dedup space saving ratio is (715 GB of data) / (578 GB space used after deduping it) = 1.237 x, which is what the summary is telling us ("dedup = 1.24").
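
To tie these bullets together, here is a minimal Python sketch that reproduces the figures above from the two "entries ... in core" lines and the Total row. The hard-coded numbers are the ones from this pool's output, so swap in your own.

# Reproduce the number crunching above using the figures from this pool's
# zdb output; substitute the "entries" and "in core" values for your pool.
dup_entries, dup_core_bytes = 771295, 165      # DDT-sha256-zap-duplicate
uniq_entries, uniq_core_bytes = 4637966, 154   # DDT-sha256-zap-unique

total_entries = dup_entries + uniq_entries
print(f"Total DDT entries: {total_entries} "
      f"(~{total_entries / 1024**2:.2f}M blocks)")          # ~5.16M

ram_bytes = dup_entries * dup_core_bytes + uniq_entries * uniq_core_bytes
print(f"DDT RAM footprint: {ram_bytes} bytes "
      f"= {ram_bytes / 1024**2:.0f} MB "                    # ~803 MB
      f"= {ram_bytes / 1024**3:.2f} GB")                    # ~0.78 GB

# Dedup ratio from the Total row: referenced data / allocated space
referenced_gb, allocated_gb = 715, 578
print(f"dedup ratio: {referenced_gb / allocated_gb:.2f}")   # ~1.24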

Stilez
  • there are constant warnings about zfs and dedup being a memory hog, on a system with 48G of memory, and a 24TB pool with a dupe ratio of 2.59x, is it realistic that the number I get back for this calculation is a mere 3GB? `dedup: DDT entries 8953198, size 2829 on disk, 342 in core` even to put the entire ddt table into core would be a mere 27G? maybe that is what's happening as 37G is in use and no process owns more than 70mb ea. I've been struggling with abysmal resilver write performance. 45 w/s 850wkb/s 75%util scan ~ 15M/s -- 500 hours per disk replacement :( can flash a drive in <16 hr – ThorSummoner Oct 14 '21 at 04:20
  • @ThorSummoner - RAM is only half your issue. The others are 1) DDT ejection (it shares RAM, in the form of ARC, with all other cached data and metadata), and 2) speed of access to the DDT (you may have enough RAM, but the raw data must be read/updated as well, and this can totally overwhelm most devices, even some SSDs). Brief advice: use the tunable `zfs_arc_meta_min` to reserve metadata storage within ARC, and move all pool metadata to special vdev mirrors hosted on good SSDs. If that doesn't work, or isn't enough, open a new question here and ping me with a reply to this to draw my attention to it. – Stilez Oct 14 '21 at 07:23
  • Also consider the pool structure (RAIDZ or mirrors) and CPU core count (resilver and scrub can be very demanding on CPU if dedup is in use; the TrueNAS forums have explored this quite a lot). Also, is that 24TB the space in use, or the total pool size? It may be best to open a new question. – Stilez Oct 14 '21 at 07:26