
Forgive me if this seems like a fundamental question, but I couldn't really find anything concrete on Google, and I'm not a system administrator by trade.

We are setting up a SAN at our office using NexentaStor with an 8-disk RAID-Z3 configuration (8 x 1.36 TB drives) and are in the process of configuring everything.

Right now, in terms of total disk space, we have about 10.8 TB of "real" storage on the SAN, all allocated in a single zpool/zvol. I was considering thin-provisioning the zvol with (say for the sake of argument) 100 TB of space to account for future growth.

It seems simple enough in theory: when we are close to running out of actual disk space, we just add some new drives and it will "just work": no file-system resizing or downtime to worry about.

However, how do we know when we need to add more capacity, short of logging into the SAN every few hours and making sure we still have free space left?

For example, is this normally handled by setting up a cron job, or does NexentaStor (or ZFS itself) provide warnings when you are near capacity, or is it expected that you should just "know" how much space you have left at any given time and have to keep track of it yourself?

If it helps, the 10.8 TB zvol will be used as backing storage (over iSCSI) for our virtual servers and test virtual machines (which are also thin-provisioned). Part of the problem I see is that it could be easy to run out of disk space if we are constantly creating/snapshotting/restoring VMs, which we do a lot of when testing different machine configurations and software environments.

Mike Spross

3 Answers


On the Nexenta side, there's a volume-check script that's set up to run hourly by default. It will check volume health and capacity, clear correctable device errors, and validate mountpoints. It also sends a weekly summary report via email.

However, there are some things you should consider when planning a Nexenta storage solution for the purposes you've listed.

  • You may want to consider having multiple pools for flexibility. A single pool works, but sometimes it's necessary to move data around or just have the option of a second pool on local storage.
  • ZFS zvols can be expanded/contracted on the fly. For instance, if you allocate 20TB to a thin-provisioned zvol, you can change it to 30TB or 100TB very easily (see the first sketch after this list). You don't need to over-provision 100TB for the future if you don't have it at present.
  • With thin-provisioned zvols, once the space is used, you can't reclaim it. If you thin provision a 2TB zvol in a 10TB pool, fill the zvol up, then delete the VMs on that zvol, your pool will still only show 8TB free. That 2TB is going to remain.
  • Will you be using ZFS compression, deduplication, or both? One situation where it DOES make sense to over-provision is if you're using inline compression on highly-compressible data; the same goes for data that dedupes well. In my case, the data sets I work with compress 60-80%, so I present larger zvols than the amount of storage I actually have (see the second sketch after this list).
  • Using mirrors instead of raidz1/2/3 makes it easier to expand the underlying storage. You can add mirrored disk pairs to a zpool, but you can't expand a raidz1/2/3 vdev unless you add another vdev (a whole new group of raidz(x) disks). You'd also want to rebalance the data afterwards to redistribute it across the new disks (see the third sketch after this list).
  • Which virtualization technology will you be using? If VMware, you can thin-provision on that side too. You will see datastore warnings near 80% utilization, I believe, and VMware also complains if you're in a dangerous situation with snapshot size growth.
  • If you are doing a lot of VM testing, or have VMs that fluctuate in size, I'd suggest using iSCSI and zvols for the relatively static VMs and NFS for the test VMs (if that's an option for your preferred virtualization solution). With NFS, you can make more efficient use of your storage space since you see the zpool's full available size and don't have any size ceiling to worry about.
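For the zvol-resizing point above, here's roughly what it looks like from the ZFS command line. The pool and zvol names ("tank", "tank/vmstore") are made up for illustration:

```
# Create a thin-provisioned (sparse) 20 TB zvol; -s skips the space reservation:
zfs create -s -V 20T tank/vmstore

# Grow it to 30 TB later, on the fly. ZFS needs no downtime for this, though
# the filesystem/initiator sitting on top of the iSCSI LUN still has to be
# told about the new size:
zfs set volsize=30T tank/vmstore

# Confirm the new size and the space actually consumed so far:
zfs get volsize,used tank/vmstore
```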
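For the compression point, a quick sketch with the same hypothetical names. Keep in mind compression only applies to data written after it's enabled:

```
# Turn on inline compression for the zvol:
zfs set compression=on tank/vmstore

# Once real data has landed, check how well it compresses. A compressratio
# of 2.00x suggests you can reasonably present about twice your physical
# capacity:
zfs get compressratio tank/vmstore
zfs get compressratio tank            # ratio across the whole pool
```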
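And for the mirrors-versus-raidz point, the difference in practice (device names are illustrative):

```
# With a pool of mirrors, expansion is a single command -- just add another
# mirrored pair as a new top-level vdev:
zpool add tank mirror c0t4d0 c0t5d0

# Note there is no built-in "rebalance" command: existing blocks stay where
# they are, and only new writes spread across the added vdev. To truly
# redistribute data you'd copy it, e.g. zfs send/receive into a new dataset.
```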

In short... I wouldn't over-provision to account for future growth. It's not necessary: there are hourly checks in Nexenta to alert you to space utilization. Also think about whether you will use compression or not (deduplication requires a bit more planning). Test things out and see what the VM footprint will look like before going into production; it will be more difficult to change afterwards.

ewwhite
  • As a side note: once space is used, a simple way of reclaiming most of it is enabling compression on the dataset and zeroing out the used space within the virtual disk – the-wabbit Oct 07 '11 at 21:14
  • How do you zero-out the space? – ewwhite Oct 07 '11 at 22:32
  • starting up the virtual machine and running a `dd if=/dev/zero of=/dev/sda` is rather simplistic, but works quite well. Of course this is difficult once the VM is gone, but even then you still could issue a `dd if=/dev/zero of=/vmfs/volumes/yourclutteredvolume/zerofile bs=8M; rm /vmfs/volumes/yourclutteredvolume/zerofile` from the ESX(i) console. The zeroed blocks will be correctly recognized as "empty" by zfs. – the-wabbit Oct 08 '11 at 12:32
  • Just dd'ing at the disk is obviously no good if there's DATA on it. For zvols containing filesystems, you need to get a bit more creative. For Windows users, you can run "sdelete -c" (Google "sdelete"). For Linux, ext2/3 (and I assume 4?) there's a utility out there called "zerofree", I know Ubuntu has it in default repos. Obviously you need to run these on a client machine with the disk mounted up, not on Nexenta itself. – Nex7 Feb 28 '12 at 00:22

If you have a monitoring system like Nagios in place, you could easily write a check that evaluates the output of zpool list against thresholds within your comfort zone; a minimal sketch follows.
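Something like this, written as a Nagios-style plugin. The thresholds (70% warning, 85% critical) are placeholders to adjust to your own comfort zone; `zpool list -H -o capacity tank` prints the pool's fill percentage, e.g. `45%`:

```
#!/bin/sh
# Hypothetical zpool capacity check: exit 0 = OK, 1 = WARNING, 2 = CRITICAL.
WARN=70
CRIT=85
rc=0
msg="OK: all pools below ${WARN}% capacity"

for pool in $(zpool list -H -o name); do
    cap=$(zpool list -H -o capacity "$pool")
    cap=${cap%\%}                        # "45%" -> "45"
    if [ "$cap" -ge "$CRIT" ]; then
        msg="CRITICAL: pool $pool at ${cap}% capacity"
        rc=2
    elif [ "$cap" -ge "$WARN" ] && [ "$rc" -eq 0 ]; then
        msg="WARNING: pool $pool at ${cap}% capacity"
        rc=1
    fi
done

echo "$msg"
exit $rc
```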

If you don't have a monitoring system, you should use this opportunity to install one - a SAN is a critical piece of infrastructure which needs constant monitoring if you don't want to end up with downtime or data loss due to defective disks, out-of-space conditions, hardware failures, or connectivity problems.

the-wabbit
  • +1 for Nagios, and many SANs provide SNMP commands to check things like that, which makes it super easy to set up. Just make sure to pay attention to Nagios alerts – Smudge Oct 07 '11 at 07:30

Just to mention it: if you go with RAID-Z, you can't easily "add some more drives" to an existing raidz1/2/3 vdev. A rough sketch of what does and doesn't work is below.
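Device names here are illustrative, and the exact refusal message varies by ZFS version:

```
# Adding a lone disk to a raidz pool is refused as a mismatched replication
# level. Forcing it with -f would leave an unprotected single-disk vdev in
# the pool -- don't do that:
zpool add tank c0t8d0                  # refused without -f

# The supported options are adding a whole new raidz3 vdev...
zpool add tank raidz3 c0t8d0 c0t9d0 c0t10d0 c0t11d0 \
    c0t12d0 c0t13d0 c0t14d0 c0t15d0

# ...or swapping each disk for a larger one; the vdev grows once the last
# resilver finishes, if autoexpand is enabled on the pool:
zpool set autoexpand=on tank
zpool replace tank c0t0d0 c1t0d0
```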

Alexander