Possible Duplicate:
How to diagnose causes of oom-killer killing processes
I have an Ubuntu webserver (Apache + MySQL + PHP) on a very small machine on Amazon Web Services (an EC2 micro instance). The website runs fine and is very fast, so our small amount of traffic doesn't seem to slow the server down at all.
However, MySQL randomly goes down quite often (at least once a week) and I can't figure out why. Apache, on the other hand, keeps running fine. I have to log in via SSH and restart MySQL, and then everything works again:
$ sudo service mysql status
mysql stop/waiting
$ sudo service mysql start
mysql start/running, process 25384
I've installed Cacti for performance monitoring, and I can see that every time MySQL goes down there is a single high spike in load average (up to 10, when it's normally below 1). This is strange because it doesn't seem to coincide with cron jobs or anything similar.
I also tried to inspect the MySQL logs: the slow query log (which I'm sure is enabled), /var/log/mysql.log and /var/log/mysql.err are all empty. I suspect the system might be shutting MySQL down automatically because of low available memory; is that possible?
Now I'm trying to set up a bigger EC2 instance, but I just found something in /var/log/syslog that looks critical (though I can't make sense of it). I've pasted the relevant part below (MySQL went down at 11:47).