
I run Python scripts using the Scrapy framework for a web scraping project on my Ubuntu 12.04 (Precise) server. These scripts run all day.

The project is still in the development/testing stage, so I don't yet know what its system requirements will be.

I started with 512 MB RAM and a 30 GB hard disk. The system crashed because it ran out of disk space, so I had to set up the server again and restarted the project with 512 MB RAM and a 100 GB hard disk.

The system crashed again, this time because it ran out of RAM/swap.

Is there a way to apply checks on RAM and hard disk usage, so that before the server crashes it kills all the programs and sends me a mail saying the programs were killed because of low resources and that I should increase the RAM/hard disk size?


I am looking for a shell script, run by a cron job every minute, that checks whether the disk is full.

Binit Singh
  • Your above-mentioned question does not answer my question; it is very general. Basically, I am looking for a shell script to do this task. – Binit Singh Aug 21 '13 at 08:39
  • @binit: You need to write that script for yourself then and base the actions it takes on your knowledge of how your system performs. – user9517 Aug 21 '13 at 09:33
  • @binit Server Fault is not going to "give you teh c0dez" -- we can't, because we don't know your environment. You can certainly write an appropriate shell script (or have your monitoring system take appropriate action), and [someone here even gave you a good starting point](http://serverfault.com/a/532557/32986), but final implementation in your environment is up to you. – voretaq7 Aug 21 '13 at 13:49

3 Answers


Monitoring

AWS has a service called CloudWatch that can monitor the health of your system. http://aws.amazon.com/cloudwatch/

You could script actions like killing programs and sending emails yourself, driven by CloudWatch metrics.
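For example, since memory and filesystem usage are not among the standard EC2 metrics, you would push a custom metric yourself. Here is a rough sketch using the AWS CLI (the namespace and metric name are made up, and it assumes the CLI is installed and configured on the instance):

#!/bin/bash
# Report root-partition usage to CloudWatch as a custom metric (run from cron).
usedPct=$(df -P / | awk 'NR==2 {print $5}' | tr -d '%')

aws cloudwatch put-metric-data \
    --namespace "ScraperHost" \
    --metric-name "RootDiskUsedPercent" \
    --unit Percent \
    --value "$usedPct"

You could then create a CloudWatch alarm on that metric to notify you, or to trigger a cleanup script, when usage crosses a threshold.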

Of course, CloudWatch is not the only option; there are plenty of other tools that can monitor the health of your system.

But monitoring isn't the only thing you can do; there are also other ways to avoid hitting these limits in the first place.

You could store your scraped data in AWS S3, so you wouldn't run out of local disk space.
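A rough sketch of that idea (the bucket name and paths are placeholders, and it assumes the AWS CLI is configured): sync finished scrape output to S3, then prune the local copies to free disk space.

#!/bin/bash
# Copy scraped output to S3, then remove local files older than a day.
aws s3 sync /home/scraper/output/ s3://my-scraper-bucket/output/ \
    && find /home/scraper/output/ -type f -mtime +1 -delete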

You could use multiple EC2 instances, in an Auto Scaling group or in a number of other configurations that let them share the load.

AWS has a number of managed database services that can scale for you.

Drew Khoury

The Monit tool can do what you need: monitor RAM and disk usage, kill the offending processes, and alert you.
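A minimal monitrc fragment along those lines might look like the following; the thresholds, mail address and cleanup script are placeholders you would adapt to your setup:

set mailserver localhost
set alert you@example.com

check system scraper-host
    if memory usage > 85% then alert
    if swap usage > 75% then alert

check filesystem rootfs with path /
    if space usage > 90% then alert
    if space usage > 95% then exec "/usr/local/bin/stop-scrapers.sh"

Monit will then mail you when the limits are exceeded and run the given script when the disk gets critically full.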

sendmoreinfo

Actually, it seems like you're looking for a quick, brute-force solution, which could be something like this:

#!/bin/bash

partition="/dev/sda1"
minRAM=512        # minimum free RAM, in MB
minDisk=1024      # minimum free disk space, in MB
maxSwapUsed=1024  # maximum swap usage, in MB

freeMem=$(free -m | grep Mem | awk '{print $4}')
usedSwap=$(free -m | grep Swap | awk '{print $3}')
freeDisk=$(df -m | grep "$partition" | awk '{print $4}')

echo "RAM free  = $freeMem MB"
echo "Swap used = $usedSwap MB"
echo "Disk free = $freeDisk MB"

if [ "$freeMem" -gt "$minRAM" ] && [ "$freeDisk" -gt "$minDisk" ] && [ "$usedSwap" -lt "$maxSwapUsed" ]; then
    echo "Everything is ok"
else
    echo "Thresholds exceeded, taking action"
    # e.g. kill the scraping processes and mail yourself a notice:
    # pkill -f scrapy
    # echo "Scrapers killed: low RAM/disk" | mail -s "Server alert" you@example.com
fi

Add it to your crontab, set up the min/max thresholds and the partition, and decide which action to take (e.g. with pkill). That said, a Nagios or SNMP-based monitoring setup would be more proper, and obviously more complex and precise.
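For example, a crontab entry that runs the check every minute could look like this (the script and log paths are just placeholders):

* * * * * /usr/local/bin/check-resources.sh >> /var/log/check-resources.log 2>&1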

user1293137