I have a MySQL table with 150k rows and do regular updates. Updates can take 1-3 seconds. What options do I have to reduce this?

0

I have a table (MySQL) that has 150k rows. During the day there are many updates (thousands) that change one int value to another (UPDATE table SET field = ? WHERE table.id = ?). Nothing complicated in the UPDATE query.

Would moving to something like MongoDB be faster? The database gets hundreds (thousands?) of times more reads (SELECTs) than these writes. What are my options?

slycat

Posted 2013-02-23T04:35:26.710

Reputation: 321

Answers

1

  • Look at tuning MySQL for the machine it's on. Out of the box, MySQL uses very conservative memory figures so that it will run anywhere; it needs to be tuned before going into production every time.
  • Use analysis tools such as EXPLAIN to see how the query finds its rows and how many indexes each write has to touch. If you can reduce the number of indexes that need to be modified, that will improve your update performance (see the sketch after this list).
  • There are also ways to rebuild or optimize the indexes themselves.
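A minimal sketch of those checks, assuming a hypothetical table mytable with columns id and field (substitute your real names). Note that on MySQL versions before 5.6, EXPLAIN only accepts SELECT, so you probe the UPDATE's WHERE clause with an equivalent SELECT:

    -- Probe the UPDATE's WHERE clause (EXPLAIN UPDATE needs MySQL 5.6+,
    -- so on older servers use the equivalent SELECT):
    EXPLAIN SELECT field FROM mytable WHERE id = 42;

    -- Every secondary index that includes `field` must be maintained on
    -- each write, so list what the table carries:
    SHOW INDEX FROM mytable;

    -- Rebuild the table and its indexes (takes a lock while it runs):
    OPTIMIZE TABLE mytable;

On the tuning side, innodb_buffer_pool_size is the usual first knob for an InnoDB table of this size: if the whole table and its indexes fit in the buffer pool, updates avoid disk reads entirely.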

Moving to a different database can be considered, but first you want to understand the problem and take the easy wins. There are much larger MySQL databases than yours that run comfortably; it's not as if you have hit the limits of the technology itself. As with most performance problems, although it's not always the nicest solution, it can normally be solved by tuning or by adding a cache.

hellomynameisjoel

Posted 2013-02-23T04:35:26.710

Reputation: 158

0

You have not provided a lot of context, so it is difficult to give a definitive answer, but here are some thoughts which, taken together, should point you towards one:

  • There are a number of optimizations you can look at for the updates. Among others: are your tables indexed appropriately on table.id, do you have enough memory allocated to do what you are attempting efficiently, and are you using prepared statements (see the sketch after this list)? Database optimization is a massive subject.

  • MongoDB is not a relational database, so does it make sense to rewrite your code to use Mongo, or can you achieve this more easily by refactoring parts of your code (perhaps storing the 150k rows as an in-memory array, simply modifying that array, and periodically persisting it to disk)?
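A minimal sketch of the indexing and prepared-statement points, again assuming a hypothetical mytable(id, field). Server-side PREPARE is shown here; most client libraries expose the same mechanism through their own APIs:

    -- Make sure the WHERE column is indexed (skip this if `id` is
    -- already the primary key):
    ALTER TABLE mytable ADD INDEX idx_id (id);

    -- Prepare the statement once, then execute it repeatedly with new
    -- values; the server parses and plans it a single time.
    PREPARE upd FROM 'UPDATE mytable SET field = ? WHERE mytable.id = ?';
    SET @new_value = 7, @row_id = 42;
    EXECUTE upd USING @new_value, @row_id;
    DEALLOCATE PREPARE upd;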

davidgo

Posted 2013-02-23T04:35:26.710

Reputation: 49,152

0

Assuming you have already optimized your query and indexed properly, you can:

  1. throw additional hardware at it (more RAM, faster disks) to decrease the time
  2. attempt to shorten the connection distance between the update caller and the database (assuming they are not on the same box)
  3. pre-emptively cache the changing values (if possible) when they are known beforehand, as in the sketch after this list
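One hedged way point 3 can look in pure MySQL, assuming a hypothetical staging table: collect the known-ahead values in a fast MEMORY table, then fold them into the real table with a single multi-row UPDATE instead of thousands of single-row statements:

    -- Hypothetical staging table for values that are known in advance:
    CREATE TABLE pending_updates (
        id        INT NOT NULL PRIMARY KEY,
        new_value INT NOT NULL
    ) ENGINE = MEMORY;

    -- ... INSERT the (id, new_value) pairs into pending_updates as they
    -- become known ...

    -- Apply them all at once; one statement replaces thousands of
    -- round-trips and commits:
    UPDATE mytable
    JOIN pending_updates p ON mytable.id = p.id
    SET mytable.field = p.new_value;

    TRUNCATE TABLE pending_updates;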

pyker

Posted 2013-02-23T04:35:26.710

Reputation: 174