
We have Oracle 11.2 databases on SAN storage (fibre channel on EMC) with Solaris 11.3. For the development environments, the space used on the filesystems is over 80% most of the time.

How important is the '80%' rule for databases? Nearly all the filesystem activity consists of updates at random locations within existing files, typically around 30 GB each. The total database size is around 400-500 GB.

2 Answers


I guess it depends -- how much do you care about latency and throughput? If not very much, then you probably won't notice. If a lot, then you will notice. This answer provides much more depth and also suggests using echo metaslab_debug/W1 | mdb -kw as a workaround to keep your space maps (the structures that must be loaded from disk in a very full pool, which is what makes it slow) in memory. The graph shown is a bit out of date (and not for Oracle's version of ZFS), but it shows the dramatic falloff in performance you can likely expect.
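For reference, a minimal sketch of applying that workaround on Solaris. The tunable name can differ between ZFS releases, so verify it exists on your kernel before writing to it; the /etc/system line is only the usual convention, not something confirmed for 11.3:

    # Read the current value first (read-only, safe to run).
    echo metaslab_debug/D | mdb -k

    # Set it to 1 at runtime so space maps stay cached in memory
    # (the command from the linked answer; takes effect immediately, not persistent).
    echo metaslab_debug/W1 | mdb -kw

    # To persist across reboots, an /etc/system entry along these lines is the
    # usual approach -- confirm the exact variable name for your ZFS version:
    # set zfs:metaslab_debug = 1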

Dan

Presumably you are referring to the recommendations from the ZFS Best Practices Guide on the (now defunct) solarisinternals.com wiki. This has been discussed previously on Server Fault; see the answer Dan linked.

For performance reasons, maintain free space in pools; keep them at no more than roughly 85% used. Dataset free space is less performance-critical, but 85% used still makes me nervous from a capacity-planning perspective.
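To make the pool-versus-dataset distinction concrete, standard zpool and zfs listing commands show both figures (nothing environment-specific assumed here):

    # Pool-level usage -- the number the ~85% guidance is about.
    zpool list -o name,size,alloc,free,cap

    # Dataset-level usage -- individual filesystems running over 80% full
    # matters much less than the pool itself filling up.
    zfs list -o name,used,avail,refer,mountpoint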

Consider monitoring and enforcing less than 85% full pools across all environments, including development and test. A consistent capacity-management approach also avoids this full-pool performance hit.
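As a rough illustration of that monitoring, a small cron-able shell sketch; the threshold and mail recipient are placeholders, not anything taken from your environment:

    #!/bin/sh
    # Warn when any pool crosses a chosen capacity threshold.
    THRESHOLD=85

    zpool list -H -o name,capacity | while read pool cap; do
        pct=$(printf '%s' "$cap" | tr -d '%')   # "87%" -> "87"
        if [ "$pct" -ge "$THRESHOLD" ]; then
            echo "Pool $pool is ${pct}% full (threshold ${THRESHOLD}%)" \
                | mailx -s "ZFS capacity warning on $(hostname)" admin@example.com
        fi
    done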

John Mahowald