
I'm compressing a dd image of a 3TB drive onto a zvol in ZFS on Linux. I enabled compression (lz4) and let the transfer run. The pool consists of a single 3TB drive (for now). Based on the compression savings, I expected to have about 86G more available in zfs list than I appear to. Here are some figures:

$ zfs --version
zfs-0.8.3-1ubuntu12
zfs-kmod-0.8.3-1ubuntu12
$ zfs list
NAME                          USED  AVAIL     REFER  MOUNTPOINT
tank                         2.46T   176G       96K  /tank
tank/justin                   100K   176G      100K  /tank/justin
tank/seagate_3tb_01_20_2020  2.46T   176G     2.46T  -
$ zpool list
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
tank  2.72T  2.46T   263G        -         -     0%    90%  1.00x    ONLINE  -
$ zfs get all tank/seagate_3tb_01_20_2020 
NAME                         PROPERTY              VALUE                  SOURCE
tank/seagate_3tb_01_20_2020  type                  volume                 -
tank/seagate_3tb_01_20_2020  creation              Mon Apr 27  0:26 2020  -
tank/seagate_3tb_01_20_2020  used                  2.46T                  -
tank/seagate_3tb_01_20_2020  available             176G                   -
tank/seagate_3tb_01_20_2020  referenced            2.46T                  -
tank/seagate_3tb_01_20_2020  compressratio         1.08x                  -
tank/seagate_3tb_01_20_2020  reservation           none                   default
tank/seagate_3tb_01_20_2020  volsize               3T                     local
tank/seagate_3tb_01_20_2020  volblocksize          8K                     default
tank/seagate_3tb_01_20_2020  checksum              on                     default
tank/seagate_3tb_01_20_2020  compression           lz4                    local
tank/seagate_3tb_01_20_2020  readonly              off                    default
tank/seagate_3tb_01_20_2020  createtxg             10771                  -
tank/seagate_3tb_01_20_2020  copies                1                      default
tank/seagate_3tb_01_20_2020  refreservation        none                   local
tank/seagate_3tb_01_20_2020  guid                  17633099490469485439   -
tank/seagate_3tb_01_20_2020  primarycache          all                    default
tank/seagate_3tb_01_20_2020  secondarycache        all                    default
tank/seagate_3tb_01_20_2020  usedbysnapshots       0B                     -
tank/seagate_3tb_01_20_2020  usedbydataset         2.46T                  -
tank/seagate_3tb_01_20_2020  usedbychildren        0B                     -
tank/seagate_3tb_01_20_2020  usedbyrefreservation  0B                     -
tank/seagate_3tb_01_20_2020  logbias               latency                default
tank/seagate_3tb_01_20_2020  objsetid              906                    -
tank/seagate_3tb_01_20_2020  dedup                 off                    default
tank/seagate_3tb_01_20_2020  mlslabel              none                   default
tank/seagate_3tb_01_20_2020  sync                  standard               default
tank/seagate_3tb_01_20_2020  refcompressratio      1.08x                  -
tank/seagate_3tb_01_20_2020  written               2.46T                  -
tank/seagate_3tb_01_20_2020  logicalused           2.65T                  -
tank/seagate_3tb_01_20_2020  logicalreferenced     2.65T                  -
tank/seagate_3tb_01_20_2020  volmode               default                default
tank/seagate_3tb_01_20_2020  snapshot_limit        none                   default
tank/seagate_3tb_01_20_2020  snapshot_count        none                   default
tank/seagate_3tb_01_20_2020  snapdev               hidden                 default
tank/seagate_3tb_01_20_2020  context               none                   default
tank/seagate_3tb_01_20_2020  fscontext             none                   default
tank/seagate_3tb_01_20_2020  defcontext            none                   default
tank/seagate_3tb_01_20_2020  rootcontext           none                   default
tank/seagate_3tb_01_20_2020  redundant_metadata    all                    default
tank/seagate_3tb_01_20_2020  encryption            off                    default
tank/seagate_3tb_01_20_2020  keylocation           none                   default
tank/seagate_3tb_01_20_2020  keyformat             none                   default
tank/seagate_3tb_01_20_2020  pbkdf2iters           0                      default

I have no snapshots and have disabled reservation and refreservation, but there is still some sort of space discrepancy between zpool list and zfs list. (Yes, I've read the articles; I know they won't be identical, but this is a single-disk pool, and surely ~86G is too much.) It seems I'm not regaining my compression savings to reuse elsewhere (I can only use 176G instead of 263G).
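To put numbers on the discrepancy (using the figures from zpool list and zfs list above):

```shell
# zpool free (263G) minus zfs available (176G), and that gap as a share of the pool
awk 'BEGIN { printf "%d GiB missing\n", 263 - 176 }'
awk 'BEGIN { printf "%.1f%% of the 2.72T pool\n", (263 - 176) / (2.72 * 1024) * 100 }'
```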

I'm hoping it's a config change I'm missing.

EDIT (zpool get all tank; the pool was created with just zpool create tank /dev/sda)

$ zpool get all tank
NAME  PROPERTY                       VALUE                          SOURCE
tank  size                           2.72T                          -
tank  capacity                       90%                            -
tank  altroot                        -                              default
tank  health                         ONLINE                         -
tank  guid                           901113366047988914             -
tank  version                        -                              default
tank  bootfs                         -                              default
tank  delegation                     on                             default
tank  autoreplace                    off                            default
tank  cachefile                      -                              default
tank  failmode                       wait                           default
tank  listsnapshots                  on                             local
tank  autoexpand                     off                            default
tank  dedupditto                     0                              default
tank  dedupratio                     1.00x                          -
tank  free                           263G                           -
tank  allocated                      2.46T                          -
tank  readonly                       off                            -
tank  ashift                         0                              default
tank  comment                        -                              default
tank  expandsize                     -                              -
tank  freeing                        0                              -
tank  fragmentation                  0%                             -
tank  leaked                         0                              -
tank  multihost                      off                            default
tank  checkpoint                     -                              -
tank  load_guid                      9196014585464561985            -
tank  autotrim                       off                            default
tank  feature@async_destroy          enabled                        local
tank  feature@empty_bpobj            active                         local
tank  feature@lz4_compress           active                         local
tank  feature@multi_vdev_crash_dump  enabled                        local
tank  feature@spacemap_histogram     active                         local
tank  feature@enabled_txg            active                         local
tank  feature@hole_birth             active                         local
tank  feature@extensible_dataset     active                         local
tank  feature@embedded_data          active                         local
tank  feature@bookmarks              enabled                        local
tank  feature@filesystem_limits      enabled                        local
tank  feature@large_blocks           enabled                        local
tank  feature@large_dnode            enabled                        local
tank  feature@sha512                 enabled                        local
tank  feature@skein                  enabled                        local
tank  feature@edonr                  enabled                        local
tank  feature@userobj_accounting     active                         local
tank  feature@encryption             enabled                        local
tank  feature@project_quota          active                         local
tank  feature@device_removal         enabled                        local
tank  feature@obsolete_counts        enabled                        local
tank  feature@zpool_checkpoint       enabled                        local
tank  feature@spacemap_v2            active                         local
tank  feature@allocation_classes     enabled                        local
tank  feature@resilver_defer         enabled                        local
tank  feature@bookmark_v2            enabled                        local
jrcichra
  • Can you post the output of zpool get all tank? I suspect the reason is twofold: 1) metadata uses space, and 2) if your ashift=12 (4KB) and volblocksize=8KB, any block that doesn't compress down to 4KB will not compress at all. If you want to reduce both your metadata usage and improve your compression ratios, you will need to use a significantly bigger volblocksize. Whether that will cost you a lot of performance in RMW overhead rather depends on how you intend to use this disk image and how much you intend to write to it. – Gordan Bobić Apr 27 '20 at 22:39
  • @GordanBobic - I've updated the question with zpool get all tank – jrcichra Apr 27 '20 at 22:47
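To put hypothetical numbers on the ashift point from the comment above: allocations happen in 2^ashift-byte sectors, so with ashift=12 an 8K volblock must compress to 4 KiB or less before any space is saved (the figures below are illustrative, not taken from this pool):

```shell
# Illustrative only: an 8K volblock that lz4 shrinks to, say, 5000 bytes
# still allocates in 4 KiB (ashift=12) sectors, so it occupies the full 8K.
awk 'BEGIN {
  sector = 4096          # 2^ashift bytes
  compressed = 5000      # hypothetical lz4 output for one 8K block
  allocated = int((compressed + sector - 1) / sector) * sector
  print allocated " bytes allocated"    # rounds back up to 8192: nothing saved
}'
```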

1 Answer


I asked this exact question on the OpenZFS GitHub and got my answer:

https://github.com/openzfs/zfs/issues/10260#issuecomment-620332829

TLDR: Look at your spa_slop_shift setting.
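For context (my summary of the OpenZFS tunable, not quoted from the linked issue): ZFS holds back 1/2^spa_slop_shift of the pool as slop space so the pool can never be filled completely. The default shift of 5 reserves 1/32 ≈ 3.1% of the pool, which on a 2.72T pool is almost exactly the missing ~87G:

```shell
# Default spa_slop_shift is 5: ZFS reserves 1/2^5 = 1/32 of the pool as slop space
awk 'BEGIN { printf "%.0f GiB\n", 2.72 * 1024 / 2^5 }'
# To inspect or (cautiously) raise it on ZFS on Linux (requires the zfs module loaded):
#   cat /sys/module/zfs/parameters/spa_slop_shift
#   echo 6 | sudo tee /sys/module/zfs/parameters/spa_slop_shift   # reserve 1/64 instead
```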

jrcichra