
Does the 80% capacity rule of thumb for ZFS pools apply regardless of the size of the pool? If I have a pool of 10TB, that means I have to keep 2TB free. Fair enough, the loss isn't too great. But in a 50TB pool, that means I need to keep 10TB free. That's a lot of free space left over...

dmuir

1 Answer


Yes. The 80% rule applies... (with some exceptions)

Think about it the same way you'd approach monitoring a volume's capacity: 90% full is 90% full regardless of the actual size, and it would still trigger an alert.

This is no different. You don't want to plan for, or expect to run at, that high a utilization.
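
For concreteness, here is a minimal sketch of the kind of check this implies (not part of the original answer): a small Python wrapper around the standard `zpool list` CLI that flags any pool at or above an illustrative 80% threshold. The threshold value and the choice of Python are assumptions; `zpool list -Hp -o name,capacity` is the scripted, parsable output form of the command.

```python
#!/usr/bin/env python3
"""Minimal sketch: warn when a ZFS pool crosses a capacity threshold.

Assumes the standard `zpool` CLI is on PATH; the 80% threshold is
illustrative, matching the rule of thumb discussed above.
"""
import subprocess
import sys

THRESHOLD = 80  # percent used

def pool_capacities():
    # -H: script-friendly output (no headers, tab-separated)
    # -p: exact/parsable numbers, so capacity prints as a bare integer
    out = subprocess.run(
        ["zpool", "list", "-Hp", "-o", "name,capacity"],
        check=True, capture_output=True, text=True,
    ).stdout
    for line in out.splitlines():
        name, cap = line.split("\t")
        yield name, int(cap.rstrip("%"))

def main():
    over = [(n, c) for n, c in pool_capacities() if c >= THRESHOLD]
    for name, cap in over:
        print(f"WARNING: pool {name} is {cap}% full (threshold {THRESHOLD}%)")
    sys.exit(1 if over else 0)

if __name__ == "__main__":
    main()
```

The exit status is nonzero when any pool crosses the line, so a sketch like this drops straight into cron or an existing monitoring agent.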

ewwhite
  • I don't know about you, but I don't care about a volume being X% full, so much as I care about, "is this going to fill up and cause me grief any time soon?" If I've got a 100TB volume, with a usage growth rate of, say, 10GB/day, and a week's lead time on provisioning new capacity, I'm not going to give a flying fig when it's got 10TB (or 20TB) free. I'll start to care when it's down to 1-2TB free. – womble Nov 06 '15 at 02:18
  • Except copy-on-write filesystem performance tends to degrade heavily above 85% usage. If snapshots are involved, it's VERY easy for an inadvertent action to exhaust filesystem space. This is why people [strongly suggest remaining below 80% on ZFS filesystems](https://serverfault.com/questions/733817/does-the-max-80-use-target-suggested-for-zfs-for-performance-reasons-apply-to-s). – ewwhite Nov 06 '15 at 02:21
  • Sure, but that's unrelated to the analogy of monitoring volume capacity. – womble Nov 06 '15 at 03:07
  • No it isn't unrelated - it's directly saying performance degrades above x% regardless of how long it would take to fill the remaining space, so there's a specific reason to care. You can still say "I don't care about that" but that implies you don't give a flying fig about storage performance... – TessellatingHeckler Nov 06 '15 at 03:17
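
As ewwhite's comment notes, the practical worry is an inadvertent action (a runaway snapshot, say) exhausting the pool. One common way to enforce the headroom is to park a refreservation on an otherwise-empty dataset. Below is a minimal sketch of that trick, assuming a pool named `tank` and a dataset `tank/reserved` (both illustrative names); it withholds roughly 20% of the pool's capacity so ordinary writes can't push usage past about 80%.

```python
#!/usr/bin/env python3
"""Minimal sketch: reserve ~20% of a pool so normal writes stay below ~80%.

The pool name (`tank`) and dataset (`tank/reserved`) are illustrative;
requires root and the standard `zpool`/`zfs` CLIs.
"""
import subprocess

POOL = "tank"
RESERVED_DATASET = f"{POOL}/reserved"
RESERVE_FRACTION = 0.20  # keep ~20% of pool capacity off-limits

def pool_size_bytes(pool):
    # -Hp: tab-separated, exact byte counts
    out = subprocess.run(
        ["zpool", "list", "-Hp", "-o", "size", pool],
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    return int(out)

def main():
    reserve = int(pool_size_bytes(POOL) * RESERVE_FRACTION)
    # An empty dataset whose refreservation holds back the space;
    # -p makes this a no-op if the dataset already exists.
    subprocess.run(["zfs", "create", "-p", RESERVED_DATASET], check=True)
    subprocess.run(
        ["zfs", "set", f"refreservation={reserve}", RESERVED_DATASET],
        check=True,
    )
    print(f"Reserved {reserve} bytes on {RESERVED_DATASET}")

if __name__ == "__main__":
    main()
```

If you ever genuinely need that space back, shrinking the refreservation or destroying the reserved dataset releases it immediately.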