
We have been running into a problem where querying a table with approximately 50 million rows and a 4 GB index (table size of about 6 GB) causes the database server to swap memory and slow down dramatically. I'm pretty sure this has to do with the temp table size being exceeded and the temp table being swapped to disk.
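For reference, one way to check that guess is to compare how many implicit temporary tables end up in memory versus on disk since the server started:

    -- Counters since server start; Created_tmp_disk_tables growing quickly
    -- relative to Created_tmp_tables suggests temp tables are spilling to disk.
    SHOW GLOBAL STATUS LIKE 'Created_tmp%';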

If I upgraded my database server from 32 GB of RAM to 64 GB of RAM, I'm wondering whether MySQL would be able to fully take advantage of the additional memory and avoid swapping. I've gone through a few of the variables (e.g. KEY_BUFFER_SIZE), and they seem to support setting values over 64 GB. However, the MySQL documentation says that tmp_table_size maxes out at 4 GB.
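The current values of these variables can be read at runtime. A minimal check (note that the effective in-memory temp table limit is the smaller of tmp_table_size and max_heap_table_size):

    -- Current limits; in-memory temp tables are capped by the
    -- smaller of tmp_table_size and max_heap_table_size.
    SHOW VARIABLES
    WHERE Variable_name IN ('tmp_table_size', 'max_heap_table_size', 'key_buffer_size');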

So would the memory upgrade be worth it? Would the "querying-large-table" problem benefit from it, or would it not help because of the 4 GB limit? I know there are potentially other solutions, like restructuring the table to partition it in different ways, but without changing anything about the table, would additional memory help?

Also, in general, are there any other memory-related variables that MySQL wouldn't be able to take advantage of when moving from 32 GB to 64 GB of RAM?

We are using 64-bit Linux (Ubuntu) on our database server.

Thanks, Galen


4 Answers


Yes - if you use InnoDB and have a read-intensive workload, you can absolutely take advantage of large amounts of RAM. Assuming your data set fits in memory, your server will be blazing fast.

I'm using MySQL with InnoDB storage on 8-16 GB servers with the working set fitting in memory.
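As a rough check of whether your data set would fit, you can total data and index sizes per schema from information_schema (the figures there are approximate, especially for InnoDB):

    -- Approximate on-disk size of data plus indexes, per schema, in GB.
    SELECT table_schema,
           ROUND(SUM(data_length + index_length) / 1024 / 1024 / 1024, 2) AS total_gb
    FROM information_schema.tables
    GROUP BY table_schema;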

pQd

Perhaps it would be worth putting some extra time and effort into researching what is causing the system to swap before spending money on memory?

32 GB of memory leaves plenty of headroom even after loading the entire table, the index, and a maximum-size temp table into memory. A quick search brought up a couple of pieces of documentation which might be relevant.
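One way to sanity-check the configuration is to add up the server's configured buffers. A rough sketch, assuming MySQL 5.x (query_cache_size no longer exists in 8.0), and keeping in mind that the per-connection buffers are worst-case figures that are rarely all allocated at once:

    -- Global buffers plus worst-case per-connection buffers, in GB.
    SELECT ( @@key_buffer_size
           + @@innodb_buffer_pool_size
           + @@innodb_log_buffer_size
           + @@query_cache_size
           + @@max_connections * ( @@sort_buffer_size
                                 + @@read_buffer_size
                                 + @@read_rnd_buffer_size
                                 + @@join_buffer_size
                                 + @@thread_stack ) )
           / 1024 / 1024 / 1024 AS worst_case_gb;

If that number comes out well under 32 GB, something other than MySQL's configuration is probably driving the swapping.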

Martin

If you're using InnoDB, the most important variable to set is innodb_buffer_pool_size. I would set it to approximately 80% of your system memory. Once your caches warm up after some usage, your most active data (working set) will be in memory (innodb_buffer_pool_size) and your operations on it should be very fast. With 64 GB of memory, you can definitely fit a lot in there. Memory is always a good buy for DB servers.
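To see whether the buffer pool is actually absorbing your reads, compare logical read requests against reads that had to go to disk:

    -- Innodb_buffer_pool_reads (disk) vs. Innodb_buffer_pool_read_requests (logical);
    -- hit rate = 1 - reads / read_requests, which should stay very close to 1.
    SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';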

vmfarms

If you think it's due to a very large temp table being created, you may want to consider ways to rewrite the query so that it avoids temporary tables.

You could post the schema, the query, the EXPLAIN plan, and some details of the problem on Stack Overflow.
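As a concrete illustration (the orders table and its columns here are made up, not from the original question), EXPLAIN will flag the expensive steps in its Extra column:

    -- Hypothetical query; "Using temporary; Using filesort" in the Extra column
    -- indicates an implicit temp table and a sort that may spill to disk.
    EXPLAIN
    SELECT customer_id, COUNT(*) AS order_count
    FROM orders
    GROUP BY customer_id
    ORDER BY order_count DESC;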

MarkR