
I have a ZFS zpool on Linux under kernel 2.6.32-431.11.2.el6.x86_64 which has a single vdev. The vdev is a SAN device. I expanded the size of the SAN, and despite the zpool having autoexpand set to on, I was unable to get the pool to expand even after rebooting the machine, exporting/importing the pool, and using zpool online -e. I am sure the vdev is larger because fdisk shows it has increased from 215 GiB to 250 GiB. Here's a sample of what I did:

[root@timestandstill ~]# zpool list
NAME          SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
dfbackup      214G   207G  7.49G    96%  1.00x  ONLINE  -
[root@timestandstill ~]# zpool import -d /dev/disk/by-id/
   pool: dfbackup
     id: 12129781223864362535
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

    dfbackup             ONLINE
      virtio-sbs-XLPH83  ONLINE
[root@timestandstill ~]# zpool import -d /dev/disk/by-id/ dfbackup
[root@timestandstill ~]# zpool list
NAME          SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
dfbackup      214G   207G  7.49G    96%  1.00x  ONLINE  -
venuebackup   248G   244G  3.87G    98%  1.00x  ONLINE  -
[root@timestandstill ~]# zpool get autoexpand dfbackup
NAME      PROPERTY    VALUE   SOURCE
dfbackup  autoexpand  on      local
[root@timestandstill ~]# zpool set autoexpand=off dfbackup
[root@timestandstill ~]# zpool set autoexpand=on dfbackup
[root@timestandstill ~]# zpool list
NAME          SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
dfbackup      214G   207G  7.49G    96%  1.00x  ONLINE  -
venuebackup   248G   244G  3.87G    98%  1.00x  ONLINE  -
[root@timestandstill ~]# zpool status -v dfbackup
  pool: dfbackup
 state: ONLINE
  scan: none requested
config:

    NAME                 STATE     READ WRITE CKSUM
    dfbackup             ONLINE       0     0     0
      virtio-sbs-XLPH83  ONLINE       0     0     0

errors: No known data errors
[root@timestandstill ~]# fdisk /dev/disk/by-id/virtio-sbs-XLPH83

WARNING: GPT (GUID Partition Table) detected on '/dev/disk/by-id/virtio-sbs-XLPH83'! The util fdisk doesn't support GPT. Use GNU Parted.


WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): p

Disk /dev/disk/by-id/virtio-sbs-XLPH83: 268.4 GB, 268435456000 bytes
256 heads, 63 sectors/track, 32507 cylinders
Units = cylinders of 16128 * 512 = 8257536 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

                             Device Boot      Start         End      Blocks   Id  System
/dev/disk/by-id/virtio-sbs-XLPH83-part1               1       27957   225443839+  ee  GPT

Command (m for help): q
[root@timestandstill ~]# zpool online -e dfbackup /dev/disk/by-id/virtio-sbs-XLPH83
[root@timestandstill ~]# zpool list
NAME          SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
dfbackup      214G   207G  7.49G    96%  1.00x  ONLINE  -
venuebackup   248G   244G  3.87G    98%  1.00x  ONLINE  -
[root@timestandstill ~]# zpool status -v dfbackup
  pool: dfbackup
 state: ONLINE
  scan: none requested
config:

    NAME                 STATE     READ WRITE CKSUM
    dfbackup             ONLINE       0     0     0
      virtio-sbs-XLPH83  ONLINE       0     0     0

errors: No known data errors

How can I expand this zpool?

Josh

2 Answers


I'm running ZFS on Ubuntu 16.04, and after much trial and error, this is what worked for expanding the disk and pool size without rebooting. My system is hosted in the cloud at Profitbricks and uses libvirt (not SCSI) drives.

Get pool and device details:

# zpool status -v
   ...
    NAME        STATE     READ WRITE CKSUM
    pool        ONLINE       0     0     0
      vdb       ONLINE       0     0     0

# zpool list
NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
pool  39.8G  27.1G  12.7G         -    49%    68%  1.00x  ONLINE  -

Activate autoexpand:

# zpool set autoexpand=on pool

Now login to Profitbricks control panel and increase disk size from 40GB to 50GB.

Notify system of disk size change and expand pool:

# partprobe
Warning: Not all of the space available to /dev/vdb appears to be used,
you can fix the GPT to use all of the space (an extra 10485760 blocks) or 
continue with the current setting?

# zpool online -e pool vdb

# partprobe

# zpool list
NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
pool  49.8G  27.1G  21.7G         -    40%    55%  1.00x  ONLINE  -

I'm not sure why, but it is sometimes necessary to run partprobe and/or zpool online -e pool vdb twice in order to make the changes effective.
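Putting the steps of this answer together, here is a sketch of the full sequence. The pool name `pool` and device `vdb` are from this answer's environment; the `echo` prefix makes it a dry run that only prints the commands — remove it to execute for real, as root, after enlarging the backing disk:

```shell
# Dry-run sketch of the expansion sequence from this answer.
# Remove the "echo" prefixes to execute for real (requires root,
# ZFS installed, and the backing disk already enlarged).
POOL=pool
DEV=vdb

echo zpool set autoexpand=on "$POOL"   # let ZFS pick up vdev growth
echo partprobe                         # re-read partition tables
echo zpool online -e "$POOL" "$DEV"    # expand the vdev to the full device size
echo partprobe                         # sometimes needed a second time
echo zpool list "$POOL"                # SIZE should now reflect the new capacity
```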

lfjeff
    Seems like your solution was the same as mine? namely, `zpool online -e pool vdb` is the command that does the trick. I am now using ZFS-on-Linux on a number of libvirt servers and that works for me (without partprobe) – Josh Mar 16 '17 at 19:38
  • I was also having to reboot to make the changes effective, then I discovered that `partprobe` (run before and after `zpool online`) eliminated the need for a reboot. – lfjeff Mar 16 '17 at 19:41

I read a post on the FreeBSD forums which suggested using zpool online -e <pool> <vdev> (without needing to offline the vdev first).

This ultimately was the solution, but it required that ZFS autoexpand be disabled first:

[root@timestandstill ~]# zpool list
NAME          SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
dfbackup      214G   207G  7.49G    96%  1.00x  ONLINE  -
[root@timestandstill ~]# zpool get autoexpand
NAME         PROPERTY    VALUE   SOURCE
dfbackup     autoexpand  on      local
[root@timestandstill ~]# zpool set autoexpand=off dfbackup
[root@timestandstill ~]# zpool online -e dfbackup /dev/disk/by-id/virtio-sbs-XLPH83
[root@timestandstill ~]# zpool list
NAME          SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
dfbackup      249G   207G  42.5G    82%  1.00x  ONLINE  -

Using zpool set autoexpand=off followed by zpool online -e was required to get the zpool to expand for me, using ZFS on Linux (in-kernel, not FUSE).
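For reference, the minimal sequence that worked here can be sketched as follows. The pool name and device path are the ones from the question; the `echo` prefix makes it a dry run that only prints the commands — drop it to run for real as root:

```shell
# Dry-run sketch of the sequence that worked in this answer.
# Drop the "echo" prefixes to execute (requires root and ZFS installed).
POOL=dfbackup
DEV=/dev/disk/by-id/virtio-sbs-XLPH83

echo zpool set autoexpand=off "$POOL"  # counter-intuitively, off first
echo zpool online -e "$POOL" "$DEV"    # then expand the vdev in place
echo zpool list "$POOL"                # SIZE should now show the new capacity
```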

jakar
Josh
  • That does not make sense. The ZFS mailing list points to needing to reload the kernel module before being able to run a pool expansion. – ewwhite Jul 04 '15 at 02:18
  • Well, some combination of three reboots, multiple exports and imports, `zpool online -e` and `zpool set autoexpand=off` did it for me @ewwhite... I have the full history available in my terminal. Not sure what the problem was then. – Josh Jul 04 '15 at 02:19
  • See: http://serverfault.com/a/540975/13325 and https://github.com/zfsonlinux/zfs/issues/808 – ewwhite Jul 04 '15 at 02:49
  • Thanks @ewwhite. I am not using a newer version, this version is at least 15 months old. I am unsure how specifically to find the version. – Josh Jul 04 '15 at 02:51