
For example, we're running about 50 servers.

Let's say I'd like to be able to see if any of them are getting close to filling a partition. I can make a task that runs df -h on each of them, but the output is very verbose.

I'd like to be able to capture the output, process it, and just return the servers that are over X%

Is there any way to do this with fabric?

growse
  •     You'd probably get more bang for the buck by using a proper monitoring system. Something like `monit` is lightweight and will do disk space usage monitoring out of the box. `nagios` is a lot more heavyweight and requires more upfront setup and planning, but it is very flexible and you will wind up using it to monitor other things on your servers. – cjc May 08 '12 at 17:40
  • We have monitoring set up (collectd), although I might look into monit or nagios. df was just an example; we might also need to check the version of a package, for instance. – Adrian Mester May 08 '12 at 17:46

1 Answer

$ cat fabfile.py
from fabric.api import run

def crit_disk(warn=80, crit=90):
    # df -hP gives POSIX-format output; awk keeps the filesystem name and
    # use% columns, and sed strips the trailing '%' sign
    output = run("df -hP | awk 'NR>1{print $1,$5}' | sed -e 's/%//g'")
    drives = dict(line.split() for line in output.splitlines())
    for drive, percent in drives.items():
        percent = int(percent)  # convert once so the %d formatting below works
        if percent >= crit:
            print("CRIT: %s at %d%%" % (drive, percent))
        elif percent >= warn:
            print("WARN: %s at %d%%" % (drive, percent))

That's a quick attempt at showing how you could use Fabric for this. You'd run it across your hosts with something like `fab -H server1,server2 crit_disk` (or set `env.hosts` inside the fabfile) and only the WARN/CRIT lines get printed, instead of the full `df -h` output from each server.
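The parsing step itself is plain Python and can be sanity-checked without Fabric or a remote host. A minimal sketch, using made-up sample output in the same "filesystem percent" form the `df -hP | awk | sed` pipeline produces:

```python
# Sample pipeline output (hypothetical filesystems, not from a real host)
sample = "/dev/sda1 45\n/dev/sdb1 92\ntmpfs 3"

# Same parsing as in the task: one "name percent" pair per line
drives = dict(line.split() for line in sample.splitlines())

# Keep only filesystems at or above a critical threshold of 90%
alerts = {d: int(p) for d, p in drives.items() if int(p) >= 90}
print(alerts)  # only /dev/sdb1 is over the threshold here
```

With `run()` swapped in for the sample string, the same dictionary comprehension would let the task return the offending filesystems instead of just printing them.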

Morgan