I know this is a horrible, generalized question with no good answer, and I apologize ahead of time, but I wonder if someone could take a stab at a very broad estimate.

Let's say you have a dedicated MySQL server running on about $1K worth of modern hardware.

Let's say the average user makes 20 read requests and 5 write requests per minute -- all straightforward queries, no joins; mostly along the lines of 'select this row by UUID' out of an indexed table of ~10,000,000 rows.
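Schematically, I mean something like this (table and column names invented for illustration):

    CREATE TABLE items (
        uuid    CHAR(36)     NOT NULL PRIMARY KEY,  -- lookups go straight through this index
        payload VARCHAR(255) NOT NULL
    ) ENGINE=InnoDB;

    -- The typical read: one row by primary key, no joins.
    SELECT payload FROM items
    WHERE uuid = '550e8400-e29b-41d4-a716-446655440000';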

Very, very, very, very roughly: how many concurrent users would you expect a server like that to handle before you're 'pushing it'?

Greg

3 Answers


As you noted, this is a very broad estimate.

The big question is what you use that $1,000 to buy. Provided that you spring for a bit more memory over processor power, with an average hard disk, I would say that a reasonably coded application (where 'reasonable' = mainly using whatever abstraction libraries your language provides) should be able to handle about 500 concurrent users under those parameters. I would have guessed higher except for the size of the row set, since the more rows that fit directly in RAM, the less often you have to go to disk, even for immediate writes.

The type of data in the rows and the amount of RAM you can afford are absolutely going to be the biggest factors in this scenario. If you could get by with fewer writes and shrink the indexed table, I should think you might get away with 1,000 simultaneous users.
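As a quick sanity check (a sketch; 'mydb' is a placeholder for your schema name), you can ask MySQL how big the table and its indexes actually are and compare that against the RAM you can afford:

    -- Approximate size of each table plus its indexes, in MB:
    SELECT table_name,
           ROUND((data_length + index_length) / 1024 / 1024) AS total_mb
    FROM information_schema.tables
    WHERE table_schema = 'mydb';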

The two problems you will see:

  1. The amount of data you can cache in RAM, and therefore how much RAM you have above the minimum needed for basic OS and database server operations, is going to determine what you can get away with. More RAM, a leaner OS, and a smaller working set that has to stay in RAM for queries and writes will mean the difference between acceptable performance and lots and lots of thrashing. (A quick way to measure this is shown after this list.)

  2. The design of your app is absolutely critical here. One write spread over 500-1,000 users has a huge impact. Similarly, if your calls are not simple and efficient, you are going to cause a train wreck quickly. I based my estimate on a number of MySQL apps that I have seen in play, with a little knowledge of how they work. If the app has fundamental issues, you may not even get to 40 users. If you code it efficiently and take the hardware limits into account, you may be able to scale beyond 2,000 easily.
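To put a number on problem 1: these are standard InnoDB counters (nothing here is specific to your app), and comparing them tells you whether the working set still fits in RAM:

    -- How much RAM InnoDB is allowed to use for caching:
    SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

    -- Logical read requests vs. reads that actually had to hit disk:
    SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read_requests';
    SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_reads';

    -- If reads/read_requests climbs much above ~1%, the working set has
    -- outgrown RAM; raise innodb_buffer_pool_size in my.cnf and restart.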

Johnnie Odom

I can answer that. I just stepped off that ride, will ride again.

$650-$800 will buy a moderate-speed quad-core AMD box with 8 GB of RAM and a 1 TB SATA2 drive; $170 will buy a second, relatively fast 1 TB drive. Keep in mind that this is off-the-shelf hardware from most electronics/office-supply/Best-Buy-type places. You can get more bang elsewhere, but it's not bad for the money, and you can get a faster quad-core for a little more.

Now, regarding the app: I'll assume you're running Linux/BSD/Unix for the OS and skip the MS-vs-Unix debate. Here's what I've found:

We have no problem supporting 200+ active users/sessions on any of our applications without blinking, no matter how weak the app is. In fact, we haven't been able to drop, crash, or bog down an application on any quad-core server we've run for some time... but we learned some lessons back in the single-core 200 MHz days.

For example, our sister company sells quite a few MySQL-based communications-monitoring systems with 1,300+ users per machine and, on average, several hundred concurrent sessions per hour. Logging and reporting are done in real time (well, some buffering occurs), and they've been running on 3 GHz dual-core machines with [gasp] slow PATA drives... indeed, ATA/133 drives. The longest user-interface lag ever was about 2 seconds. They dumped MSSQL for MySQL a decade ago and immediately got results.

Keep in mind, these machines are running webapps + databases... so you do the math. It just works. Also, I've replaced a number of Oracle/MS/xxxx applications with these and never come close to running out of steam. Let me also expand on what the other answer said, from a DBA perspective. Here are seven tips from the trenches:

  1. Writes will kill performance, especially if locking penalties are a foreign concept to your coders.
  2. Doing everything in one massive table will kill you.
  3. Excessive normalization is not your friend. If your coders go full third normal form, your app will run badly. Denormalized data takes more space, but it makes great things possible with simple queries.
  4. One big table will choke on frequent writes. See 1.
  5. If you write your app to use one (or more) tables for data display and a different table for writes, which can be synced to the read tables when system load permits, you can get away with all kinds of things. We use a small number of tables to buffer writes and are able to cope with an amazing number of transactions because no one is getting stepped on. (A sketch of this pattern follows the list.)
  6. Use indexes, at least on the columns your queries actually use as keys.
  7. Tune the database installation based on available memory; see the MySQL docs online. Truthfully, if you have fewer than 1,000 active sessions, you can often just bump the connection limit up and go for it. http://dev.mysql.com/doc/refman/5.1/en/too-many-connections.html
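To make tips 5-7 concrete, here is a minimal sketch of the buffered-write pattern; every table and column name in it is hypothetical:

    -- Read table, indexed on what queries actually use as keys (tip 6):
    CREATE TABLE readings (
        id        INT AUTO_INCREMENT PRIMARY KEY,
        sensor_id INT NOT NULL,
        reading   DECIMAL(6,2) NOT NULL,
        taken_at  DATETIME NOT NULL,
        KEY idx_sensor_time (sensor_id, taken_at)
    ) ENGINE=InnoDB;

    -- Tip 5: the app writes only to a small buffer table...
    CREATE TABLE readings_buffer LIKE readings;
    INSERT INTO readings_buffer (sensor_id, reading, taken_at)
    VALUES (42, 98.60, NOW());

    -- ...and a periodic job drains it into the read table when load permits.
    -- (A production job would swap tables or delete by id so that rows
    -- written during the copy are not lost.)
    INSERT INTO readings (sensor_id, reading, taken_at)
        SELECT sensor_id, reading, taken_at FROM readings_buffer;
    TRUNCATE TABLE readings_buffer;

    -- Tip 7: raising the connection cap is a one-liner
    -- (or set max_connections in my.cnf):
    SET GLOBAL max_connections = 1000;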

You can see stupid SQL in action by looking at most WordPress-type plugins. Most are written by folks who don't get SQL, and they will bring a server to its knees in no time with just a handful of users.
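For a taste of what that looks like (a made-up but typical case; wp_postmeta is WordPress's real metadata table, with an index on meta_key but not on meta_value):

    -- Typical plugin sin: a leading-wildcard LIKE on the unindexed longtext
    -- column forces a full scan of wp_postmeta on every page load.
    SELECT post_id FROM wp_postmeta WHERE meta_value LIKE '%widget%';

    -- The sane version keys on the indexed meta_key column instead.
    SELECT post_id, meta_value FROM wp_postmeta WHERE meta_key = 'widget_config';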

voltaire

$884 server: 8 GB RAM, dual quad-core Xeon, 300 GB 7200 RPM SATA drive, 40% idle, 5% iowait

Uptime: 780727  Threads: 276  Questions: 1884267879  Slow queries: 3964303  Opens: 60474
Flush tables: 1  Open tables: 440  Queries per second avg: 2413.478

while serving 220mb/sec. (For scale: 1,884,267,879 questions over 780,727 seconds of uptime works out to the ~2,413 queries/second average shown above.)