
I'm setting up a server that involves lots of database writing during regular updates, and I have wildly varying results between different machines. I'm trying to find out what I can expect from different machines (and hosting providers) without having to install the entire software stack to measure performance.

I've used hdparm -tT on the disks, but that measures sequential disk access.

Is there an equivalent test that's better for testing database-style random-access reads and writes? Or should I just rely on the manufacturer numbers?

gravitystorm

2 Answers


Although SQLIO and IOMeter are great, I strongly recommend iozone for detailed information in this area.
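
As a rough illustration (not part of the original answer; the mount point, file size, and record size below are placeholder assumptions), an iozone run focused on random reads and writes could look something like this. The test file should be larger than the machine's RAM so the cache doesn't dominate the result:

    # -i 0 creates the test file (write/rewrite); -i 2 runs the random read/write tests
    # -r 8k uses an 8 KB record size (a typical database page size)
    # -s 4g sets the test file size; -O reports operations per second instead of KB/s
    cd /mnt/data && iozone -i 0 -i 2 -r 8k -s 4g -O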

Chopper3
  • Bonnie is also a good test, but it's always better to use multiple tools. Also remember the effect of cache on those tests. This is the URL for Bonnie, since I can't seem to link things in comments: http://www.coker.com.au/bonnie++/ – coredump Jul 23 '10 at 16:45
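
Not part of the comment above, but for reference a typical bonnie++ invocation might look like the following; the directory and size are assumptions, and the file size should be at least twice the machine's RAM precisely to limit the cache effects mentioned:

    # -d: directory on the disk being measured
    # -s: test file size in MiB (use at least 2x RAM)
    # -u: user to run as when invoked as root
    bonnie++ -d /mnt/data -s 16384 -u nobody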

You can pretty much rely on the factory numbers, but the problem is that you don't know what you're talking about ;) Not in the negative sense.

what I can expect from different machines (and hosting providers)

OK, the simple answer: low performance, because they save money. Full stop. I personally use a SuperMicro setup for a performance database, currently with 8 discs, all 10k rpm. I doubt a hosting provider will offer that without serious pushing.

The more complex answer: this will vary widely (your idea to measure is right), but even then you are not alone on the system, so the performance can again vary a lot more depending on what other people are doing at the time.

You have rightly identified the bottleneck as being the discs.

SQLIO is my favourite. Remember to give the tool a large enough test file to actually move the heads a lot.
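
As a hedged sketch of what such a run could look like (the parameter file name, test file size, and duration below are assumptions, not values from this answer): SQLIO reads its list of test files from a parameter file, e.g. a param.txt containing a line like "E:\sqlio_test.dat 4 0x0 20480" for a 20 GB file, and random 8 KB reads and writes with several outstanding requests roughly mimic database access:

    sqlio -kR -frandom -b8 -o8 -s120 -LS -BN -Fparam.txt
    sqlio -kW -frandom -b8 -o8 -s120 -LS -BN -Fparam.txt

Here -kR/-kW select reads or writes, -frandom random access, -b8 an 8 KB block size, -o8 eight outstanding I/Os per thread, -s120 a two-minute run, -LS latency reporting, and -BN unbuffered I/O.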

TomTom
  • Sorry, maybe "hosting providers" is the wrong phrase - I'm only looking at dedicated servers, so the machines aren't shared with anyone. – gravitystorm Jul 25 '10 at 15:31