
I tried increasing innodb_buffer_pool_size via my.cnf from the default 128M to 256M, but the restart attempt failed with:

130125 11:49:55 InnoDB: Initializing buffer pool, size = 256.0M
130125 11:49:55 InnoDB: Completed initialization of buffer pool
InnoDB: Unable to lock ./ibdata1, error: 11
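
For reference, the my.cnf change itself was a single line, something like the following (assuming it sits under the [mysqld] section; the exact file path may differ):

[mysqld]
# increase the InnoDB buffer pool from the 128M default
innodb_buffer_pool_size = 256M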

MySQL is up and running, but any attempt to "mysql -u root -p" via the terminal blows up with:

ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (111)

I touch'd/chown'd mysql.sock and mysqld.pid in appropriate locations (as they were missing, not good), but still no luck getting into mysql.

I have a dump from last night, but would love to get a dump from today to see if ibdata1 is corrupted (read/write operations seem fine, so if it were corrupted MySQL would shut down, no?)

Needless to say, worried about trying to restart! We have a Java application that connects to MySQL via a connection pool; is the locking happening there?

At any rate, ideas for how to approach the situation are appreciated. I might clone the VM MySQL runs on and import the /var/lib/mysql directory to see what's going on, but if it's simply a matter of recreating the sock and pid files and restarting, there's no point in wasting an afternoon.

virtualeyes
  • $ perror 11 gives "OS error code 11: Resource temporarily unavailable". This is not a MySQL error but an OS error. A quick Google for the exact error you posted above shows several people with a similar problem. Something thinks that file is currently locked. Try moving the ibdata1 file to a new name, then copying it back to the old name, creating a new file. –  Jan 25 '13 at 18:55

2 Answers

4

Something else is holding a file lock on ibdata1. Use lsof on ibdata1 and figure out who is holding the lock.
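
For example, assuming the default datadir of /var/lib/mysql (adjust the path if yours differs):

# show which process currently has the InnoDB system tablespace open
lsof /var/lib/mysql/ibdata1

# if lsof isn't available, list mysqld processes instead
ps aux | grep '[m]ysqld'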

longneck
  • Perhaps mysqltuner.pl, or being logged in to a separate mysql> terminal session, had something to do with the locking; not sure. At any rate, lsof shows "mysqld 23922 mysql 3uW REG 253,5 144703488 104 ibdata1". root mysqld_safe and mysql mysqld are the currently running processes – virtualeyes Jan 25 '13 at 19:04
  • I can't tell from your comment if you know what to do from here. Do you need more help? Stop mysql using your normal method. Then do `lsof` to see if the file is still open. Also use `ps` to make sure you don't have any more mysqld processes running. Judiciously kill any mysqld processes that are running, or anything else holding open ibdata1. Then start your mysqld server (see the sketch after this comment thread). – longneck Jan 25 '13 at 19:55
  • Going to wait until this evening before I attempt a restart, as the service is up & running and reads/writes "seem" fine. When I made the my.cnf change I did notice several mysql processes running, at least 1 of which was a separate mysql> terminal session. The culprit may have been mysqltuner.pl, which I had run a few times today; it requires one to log in for the script to run, so there may have been a couple of extra mysql processes running as a result – virtualeyes Jan 25 '13 at 20:14
  • Careful! Not `mysql`! `mysql` is the client process. `mysqld` is the server, which is what you need to kill. `mysql` will never have ibdata1 or any other database file open. Only `mysqld` will. – longneck Jan 25 '13 at 20:17
  • Sorted; the additional mysqld process created by mysqltuner.pl must have been the culprit – virtualeyes Jan 26 '13 at 00:44
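
A sketch of the sequence longneck describes above, assuming the stock init script and the default datadir (the <pid> placeholder is whatever leftover mysqld PID ps shows; double-check before killing anything):

# stop MySQL the normal way
service mysql stop

# verify nothing still has the system tablespace open
lsof /var/lib/mysql/ibdata1

# look for leftover mysqld processes ...
ps aux | grep '[m]ysqld'

# ... and judiciously kill any stragglers (replace <pid> with the actual PID)
kill <pid>

# then start the server again
service mysql start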
0

Got this resolved; the mysql.sock and mysqld.pid files went AWOL, so I:

touch /var/lib/mysql/mysql.sock
chown mysql.mysql /var/lib/mysql/mysql.sock
touch /var/run/mysqld/mysqld.pid
chown mysql.mysql /var/run/mysqld/mysqld.pid
echo [pid of running mysqld] > /var/run/mysqld/mysqld.pid
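
To fill in the PID for that last echo, something like this works (assuming pgrep is available; pidof mysqld is an alternative):

# print the PID(s) of the running mysqld server
pgrep -x mysqld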

The mysqld service was still running, so the client site had been fine, but given the alarming errors in the log I was under the impression that the ibdata1 file had been corrupted: "Could not open or create data files...InnoDB only wrote those files full of zeros, but did not yet use them in any way. But be careful do not remove old data files which contain your precious data!"

I mean, it's hard not to think the sky is falling given the voluminous warnings and errors -- looks like the sh*t has hit the fan when, in this case, it hasn't at all.

Ran the above via a terminal session, followed by a service mysql restart, and voila: still in business, with the buffer pool size increased to 256MB as I had originally intended when I made the change to my.cnf this morning.

As for the cause of the problem, I had run mysqltuner.pl to check on performance bottlenecks; this must create a new mysqld process in addition to the running one, which remains connected after the script runs (grepping running processes at the time of the failed restart, there were four MySQL-related processes: two root mysqld_safe and two mysql mysqld).

Killing the mysqltuner.pl-created process did not solve the problem, as the mysql.sock and mysqld.pid files went with it, and then I couldn't get into the mysql client. Looking at the logs, I feared the worst and spent a couple of hours scouring the net.

Much ado about nothing ;-)

virtualeyes