
Possible Duplicate:
How to diagnose causes of oom-killer killing processes

I have an Ubuntu web server (Apache + MySQL + PHP) on a very small machine on Amazon Web Services (an EC2 micro instance). The website runs fine and is very fast, so our modest traffic doesn't seem to slow the server at all.

However, MySQL randomly goes down quite often (at least once a week) and I can't figure out why; Apache, on the other hand, keeps running fine. I have to log in via SSH and restart MySQL, after which everything runs fine again:

$ sudo service mysql status
mysql stop/waiting
$ sudo service mysql start
mysql start/running, process 25384

I've installed Cacti for performance monitoring, and I can see that every time MySQL goes down there is a single high peak in the load average (up to 10, when it is normally below 1). This is strange because it doesn't seem to coincide with cron jobs or anything similar.

I also tried to inspect the MySQL logs: the slow query log (which I'm sure is enabled), /var/log/mysql.log and /var/log/mysql.err are all empty. I thought that maybe the system shut it down automatically because of low available memory; is that possible?

Now I'm trying to set up a bigger EC2 instance, but I just found something in /var/log/syslog that looks critical (though I can't understand it). I've pasted the relevant part here (MySQL went down at 11:47).
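For reference, a rough way to search the logs for OOM-killer activity is something like the following; the exact message wording can vary between kernels, so treat these patterns as approximate:

$ grep -iE 'out of memory|oom-killer|killed process' /var/log/syslog
$ dmesg | grep -iE 'out of memory|killed process'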

lorenzo-s

2 Answers

Yeah, it seems that your box ran out of free RAM, and the kernel killed MySQL to protect system stability. Try an instance with more RAM!
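To see how tight memory actually is while things are running, a quick check with standard tools (nothing here is specific to EC2) shows free memory and the biggest consumers:

$ free -m
$ ps aux --sort=-%mem | head -n 10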

Cubox

Yes, the OOM killer has killed your mysqld. For this to happen, either your server is badly configured or something else is leaking memory. Looking at the numbers, I suspect you've simply allowed MySQL too much memory and/or are allowing too many Apache connections for the amount of RAM you've got.

You need to tune the memory usage of the running processes and limit the number of concurrent connections to Apache and MySQL, or get more memory. A rough sketch of that tuning follows below.
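As a sketch of what that could look like on a micro instance, the values below are illustrative guesses and need to be sized against your own workload, not drop-in settings:

# /etc/mysql/my.cnf - shrink MySQL's buffers and cap connections
[mysqld]
innodb_buffer_pool_size = 64M
key_buffer_size         = 16M
tmp_table_size          = 16M
max_heap_table_size     = 16M
max_connections         = 30

# Apache prefork MPM (apache2.conf on Ubuntu) - cap concurrent workers
# (on Apache 2.4 the MaxClients directive is called MaxRequestWorkers)
<IfModule mpm_prefork_module>
    StartServers           2
    MinSpareServers        2
    MaxSpareServers        5
    MaxClients            20
    MaxRequestsPerChild  500
</IfModule>

Restart both services after changing these, and keep an eye on free -m to confirm the headroom actually improved.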

symcbean