
I have a small VPS running CentOS 7. It runs Django with MySQL as the database, and it also runs Grafana, which reads from MySQL.

When I open my Grafana dashboard, the MySQL server instantly crashes. Sometimes it comes back up; other times it gets stuck in a "server starting" state.

I tried checking the logs but found nothing useful. My experience with MySQL is limited, so maybe there's somewhere else I should be looking that I don't know about.

Logs:

Query Log:

2019-05-30T19:22:35.356783Z     7 Connect   
2019-05-30T19:22:35.364031Z     7 Query SELECT @@skip_networking, @@skip_name_resolve, @@have_ssl='YES', @@ssl_key, @@ssl_ca, @@ssl_capath, @@ssl_cert, @@ssl_cipher, @@ssl_crl, @@ssl_crlpath, @@tls_version
2019-05-30T19:22:35.377731Z     7 Quit  
2019-05-30T19:27:03.729553Z    25 Quit  
2019-05-30T19:27:03.942917Z    24 Query SELECT
  created AS "time",
  temperature
FROM weather_weather
ORDER BY created
2019-05-30T19:27:03.953663Z    23 Query SELECT
  created AS "time",
  temperature
FROM weather_weather
ORDER BY created
2019-05-30T19:27:04.005351Z    29 Connect   grafana@localhost on myproject using TCP/IP
2019-05-30T19:27:04.005499Z    28 Connect   grafana@localhost on myproject using TCP/IP
2019-05-30T19:27:04.005592Z    27 Connect   grafana@localhost on myproject using TCP/IP
2019-05-30T19:27:04.005775Z    26 Connect   grafana@localhost on myproject using TCP/IP
2019-05-30T19:27:04.083534Z    26 Query SELECT
  created AS "time",
  temperature
FROM weather_weather
ORDER BY created
2019-05-30T19:27:04.084228Z    27 Query SELECT
  created AS "time",
  humidity
FROM weather_weather
ORDER BY created
2019-05-30T19:27:04.086203Z    28 Query SELECT
  created AS "time",
  humidity
FROM weather_weather
ORDER BY created
2019-05-30T19:27:04.087198Z    29 Query SELECT
  created AS "time",
  humidity
FROM weather_weather
ORDER BY created
2019-05-30T19:27:06.411145Z    29 Quit  
2019-05-30T19:27:06.451327Z    24 Quit  
2019-05-30T19:27:06.493827Z    23 Quit  
2019-05-30T19:27:06.496899Z    28 Quit  
2019-05-30T19:27:06.527625Z    26 Quit  
2019-05-30T19:27:06.538793Z    27 Quit  
/usr/sbin/mysqld, Version: 8.0.14 (MySQL Community Server - GPL). started with:
Tcp port: 0  Unix socket: /var/lib/mysql/mysql.sock
Time                 Id Command    Argument
/usr/sbin/mysqld, Version: 8.0.14 (MySQL Community Server - GPL). started with:
Tcp port: 0  Unix socket: /var/lib/mysql/mysql.sock
Time                 Id Command    Argument
/usr/sbin/mysqld, Version: 8.0.14 (MySQL Community Server - GPL). started with:
Tcp port: 0  Unix socket: /var/lib/mysql/mysql.sock
Time                 Id Command    Argument

Error Log:

2019-05-30T19:22:25.589770Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.14)  MySQL Community Server - GPL.
2019-05-30T19:22:32.288142Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.14) starting as process 798
2019-05-30T19:22:35.059292Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
2019-05-30T19:22:35.110710Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.14'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server - GPL.
2019-05-30T19:22:35.377699Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Socket: '/var/run/mysqld/mysqlx.sock' bind-address: '::' port: 33060

Systemctl Status After Crash:

* mysqld.service - MySQL Server
   Loaded: loaded (/usr/lib/systemd/system/mysqld.service; enabled; vendor preset: disabled)
   Active: activating (start) since Thu 2019-05-30 15:31:00 EDT; 3s ago
     Docs: man:mysqld(8)
           http://dev.mysql.com/doc/refman/en/using-systemd.html
  Process: 3965 ExecStartPre=/usr/bin/mysqld_pre_systemd (code=exited, status=0/SUCCESS)
 Main PID: 3988 (mysqld)
   Status: "SERVER_BOOTING"
   CGroup: /system.slice/mysqld.service
           `-3988 /usr/sbin/mysqld

Systemd Log:

May 30 15:22:23 tamales systemd[1]: Stopping MySQL Server...
May 30 15:22:25 tamales systemd[1]: Stopped MySQL Server.
May 30 15:31:00 tamales systemd[1]: Stopped MySQL Server.
May 30 15:31:00 tamales systemd[1]: Starting MySQL Server...
May 30 15:31:05 tamales systemd[1]: mysqld.service: main process exited, code=killed, status=9/KILL
May 30 15:31:05 tamales systemd[1]: Failed to start MySQL Server.
May 30 15:31:05 tamales systemd[1]: Unit mysqld.service entered failed state.
May 30 15:31:05 tamales systemd[1]: mysqld.service failed.
May 30 15:31:05 tamales systemd[1]: mysqld.service holdoff time over, scheduling restart.
May 30 15:31:05 tamales systemd[1]: Stopped MySQL Server.
May 30 15:31:05 tamales systemd[1]: Starting MySQL Server...
May 30 15:31:10 tamales systemd[1]: mysqld.service: main process exited, code=killed, status=9/KILL
May 30 15:31:10 tamales systemd[1]: Failed to start MySQL Server.
May 30 15:31:10 tamales systemd[1]: Unit mysqld.service entered failed state.
May 30 15:31:10 tamales systemd[1]: mysqld.service failed.
May 30 15:31:10 tamales systemd[1]: mysqld.service holdoff time over, scheduling restart.
May 30 15:31:10 tamales systemd[1]: Stopped MySQL Server.
May 30 15:31:10 tamales systemd[1]: Starting MySQL Server...
  • The server also has an API that constantly writes new entries to the database. The crash seems to happen when both things are going on at once. I stopped the writes and it's all fine and dandy on my dashboard. I'll keep digging. – motionsickness May 31 '19 at 15:53
  • How much RAM? What is the value of `innodb_buffer_pool_size`? – Rick James Jun 01 '19 at 04:47
  • RAM is 256 MB, and `innodb_buffer_pool_size` is 134217728. Your question made me look at my RAM usage, and MySQL is eating over half of it. Could lack of RAM lead to it crashing? – motionsickness Jun 01 '19 at 10:22

1 Answer


256 MB of RAM is barely enough to run MySQL at all, and it gets crowded with anything else running on the same machine. (I don't know Grafana's memory footprint.)

Allocating 128 MB to the buffer pool only adds to the problem: MySQL needs RAM for plenty of things besides the buffer pool, so it ends up using most of the machine's memory.
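To see what the server is actually configured with right now (assuming you can connect as root locally; the client options here are just an example), something like this works. The 134217728 from the comments is the 128M default, in bytes:

# Current memory-related settings; innodb_buffer_pool_size is reported in bytes.
mysql -u root -p -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size'; SHOW VARIABLES LIKE 'max_connections';"

# Rough view of how much memory mysqld is holding right now (RSS is in KB).
ps -o rss,vsz,cmd -C mysqld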

You could try these:

innodb_buffer_pool_size = 40M
max_connections = 4

as a way to cut back on RAM usage. It may or may not be enough, and if the buffer pool is set too small, other issues can crop up and cause trouble.
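For reference, on a stock CentOS 7 install of the MySQL community RPMs those two lines would normally go under the [mysqld] section of /etc/my.cnf (that path is the package default; adjust if your config lives elsewhere). After editing, a rough sketch of restarting and then checking whether the kernel's OOM killer is what has been sending the SIGKILL shown in your systemd log:

# Restart mysqld so the new settings take effect.
systemctl restart mysqld

# Confirm the smaller buffer pool was picked up (value is in bytes).
mysql -u root -p -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"

# status=9/KILL on a 256 MB VPS usually points at the kernel OOM killer; if so,
# the kernel log will say which process it killed and why.
journalctl -k | grep -i -E 'out of memory|oom-killer|killed process'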

Rick James
  • Thank you so much. Yeah, I started checking and the lack of RAM must be the problem. After applying those options it's still chugging most of the RAM, but I'll see if I can move it somewhere else with more RAM. – motionsickness Jun 01 '19 at 18:44