
In some ways I guess this is a "how long is a piece of string" question; even if there is a "this fits most situations" answer, I have no idea what it is, so...

I have a SAN on evaluation, an HP P4000. I'd like to use IOMeter to do some benchmarking to see what it's capable of.

However, I have no idea what combination of block size, read/write split, and random/sequential split is applicable to different usages.

For example, how would you simulate some Exchange activity, some SQL activity, some general VM activity, and so on?

I know how to add workers and set them loose with different settings, but what settings should I use?

Thanks.


3 Answers


Storage System Performance Analysis with Iometer: http://communities.vmware.com/docs/DOC-3961

bindbn

From a SQL Server perspective

On a SQL Server box you would ideally test the disks with the following parameters, depending on where the MDF, NDF, LDF and TEMPDB files will be stored:

All disks (MDF, NDF, LDF, TEMPDB)

  • 8 KiB Transfer Request Size
  • 80% Read Ratio
  • 95% Random Distribution
  • Ramp-Up Time of 10 s (to get past the storage cache)
  • Maximum Disk Size of 4,000,000 sectors (≈ 2 GB at 512 bytes/sector)

Serially written disks (LDF, TEMPDB)

  • 8 KiB Transfer Request Size
  • 20% Read Ratio (logs and TEMPDB are write-intensive)
  • 0% Random Distribution (fully sequential)
  • Ramp-Up Time of 10 s (to get past the storage cache)
  • Maximum Disk Size of 4,000,000 sectors (≈ 2 GB at 512 bytes/sector)

Serially read disks (MDF, NDF)

64 KiB Extent Read

  • 64 KiB Transfer Request Size
  • 80% Read Ratio (MDF & NDF files are mostly read from)
  • 0% Random Distribution (fully sequential)
  • Ramp-Up Time of 10 s (to get past the storage cache)
  • Maximum Disk Size of 4,000,000 sectors (≈ 2 GB at 512 bytes/sector)

128 KiB Read-Ahead

  • 128 KiB Transfer Request Size
  • 80% Read Ratio (MDF & NDF files are mostly read from)
  • 0% Random Distribution (fully sequential)
  • Ramp-Up Time of 10 s (to get past the storage cache)
  • Maximum Disk Size of 4,000,000 sectors (≈ 2 GB at 512 bytes/sector)

256 KiB Read-Ahead

  • 256 KiB Transfer Request Size
  • 80% Read Ratio (MDF & NDF files are mostly read from)
  • 0% Random Distribution (fully sequential)
  • Ramp-Up Time of 10 s (to get past the storage cache)
  • Maximum Disk Size of 4,000,000 sectors (≈ 2 GB at 512 bytes/sector)

512 KiB Read-Ahead

  • 512 KiB Transfer Request Size
  • 80% Read Ratio (MDF & NDF files are mostly read from)
  • 0% Random Distribution (fully sequential)
  • Ramp-Up Time of 10 s (to get past the storage cache)
  • Maximum Disk Size of 4,000,000 sectors (≈ 2 GB at 512 bytes/sector)

1024 KiB Read-Ahead (Enterprise Edition)

  • 1024 KiB Transfer Request Size
  • 80% Read Ratio (MDF & NDF files are mostly read from)
  • 0% Random Distribution (fully sequential)
  • Ramp-Up Time of 10 s (to get past the storage cache)
  • Maximum Disk Size of 4,000,000 sectors (≈ 2 GB at 512 bytes/sector)

You can vary the percentage of random I/O to see whether your results change, but I found the above values to be a pretty good starting point.
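Iometer itself is configured through its GUI or saved .icf files, so there is nothing to script there; still, a minimal Python sketch (assuming classic 512-byte sectors) can enumerate the matrix above and sanity-check the sector arithmetic before you build the access specifications by hand:

    # Enumerate the access specifications above so the test matrix can be
    # sanity-checked before it is built by hand in Iometer's GUI.
    SECTOR_BYTES = 512            # assumption: classic 512-byte sectors
    MAX_DISK_SECTORS = 4_000_000  # the "Maximum Disk Size" setting above

    # (name, transfer size in KiB, read %, random %)
    SPECS = [
        ("All disks (MDF/NDF/LDF/TEMPDB)",     8,    80, 95),
        ("Serially written (LDF/TEMPDB)",      8,    20,  0),
        ("64 KiB extent read (MDF/NDF)",       64,   80,  0),
        ("128 KiB read-ahead (MDF/NDF)",       128,  80,  0),
        ("256 KiB read-ahead (MDF/NDF)",       256,  80,  0),
        ("512 KiB read-ahead (MDF/NDF)",       512,  80,  0),
        ("1024 KiB read-ahead (Ent. Edition)", 1024, 80,  0),
    ]

    size_gb = MAX_DISK_SECTORS * SECTOR_BYTES / 1e9
    print(f"Test area: {MAX_DISK_SECTORS:,} sectors = {size_gb:.2f} GB")
    for name, kib, read_pct, random_pct in SPECS:
        print(f"{name}: {kib} KiB transfers, "
              f"{read_pct}% read / {100 - read_pct}% write, "
              f"{random_pct}% random / {100 - random_pct}% sequential")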

SQL Server Best Practices Article (MSDN)

Test a combination of I/O sizes for read/write and sequential/random. For SQL-focused deployments, be sure to include I/O sizes of 8 KB, 64 KB, 128 KB, 256 KB and 1024 KB for sequential I/O. (Read-ahead can be up to 1024 KB when running SQL Server Enterprise Edition.) For random I/O it is generally safe to focus only on 8 KB I/O.

Troubleshooting Slow Disk I/O in SQL Server (Blogs MSDN)

If you suspect you are experiencing poor disk performance you can use internal DMVs combined with a Performance Monitor collection to get a good picture of the health of the disk I/O subsystem and any latency SQL Server is experiencing from its poor performance.
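For example, a minimal sketch of that DMV approach, pulling per-file average latencies from sys.dm_io_virtual_file_stats via pyodbc (the connection string is a placeholder for your environment; the averages come out in whole milliseconds because the underlying columns are integer counters):

    import pyodbc

    QUERY = """
    SELECT DB_NAME(vfs.database_id)                             AS database_name,
           vfs.file_id,
           vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
           vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    ORDER BY avg_read_ms DESC;
    """

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=localhost;DATABASE=master;Trusted_Connection=yes;"  # placeholder
    )
    for row in conn.execute(QUERY):
        print(row.database_name, row.file_id, row.avg_read_ms, row.avg_write_ms)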

SQL Server Perfmon (Performance Monitor) Best Practices (Brent Ozar)

In the Performance object dropdown, choose Physical Disk, and choose the “% Disk Time” counter. Notice that again on the right side window, we get multiple instances; this time, we get one per physical disk. In performance terms, physical disk means one disk shown in Computer Management’s Disk Management tool. One physical disk may have multiple partitions, each with its own drive letter, but for performance tuning, we want to know how hard that one physical disk is working.

This one “physical disk” may have a bunch of actual physical drives, like in RAID systems. However, Windows isn’t quite smart enough to know exactly how many drives are in the RAID array, so the term “physical disk” is a little misleading here.
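If you want to capture those counters outside the Perfmon GUI, one option is Windows' built-in typeperf tool; a minimal, Windows-only Python wrapper might look like this (counter paths exactly as they appear in Perfmon):

    import subprocess

    # PhysicalDisk counters discussed above, sampled across all disk instances
    counters = [
        r"\PhysicalDisk(*)\% Disk Time",
        r"\PhysicalDisk(*)\Avg. Disk sec/Read",
        r"\PhysicalDisk(*)\Avg. Disk sec/Write",
    ]
    # -si 1: sample once a second; -sc 30: stop after 30 samples
    subprocess.run(["typeperf", *counters, "-si", "1", "-sc", "30"], check=True)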

How to examine IO subsystem latencies from within SQL Server (SQLSkills)

Most SQL Servers today are I/O bound – that's a generally agreed-on statement by DBAs and consultants in the field. This means that the major factor in server performance is its ability to perform I/O operations quickly. If the I/O subsystem cannot keep up with the demand being placed on it, then the SQL Server workload will suffer performance problems.

Now, saying that, one trap that many people fall into is equating increased I/O subsystem latencies with poor I/O subsystem performance. This is often not the case at all. It’s usually the case that the I/O subsystem performs fine when the designed-for I/O workload is happening, but becomes the performance bottleneck when the I/O workload increases past the I/O subsystem design point. The I/O workload increase is what’s causing the problem, not the I/O subsystem – if you design an I/O subsystem to support 1000 IOPS (I/O operations per second – and making sure you’re using the right I/O size and the workload characteristics make sense to be defined in terms of the number of random IOPS) and SQL Server is trying to push 2000 IOPS, performance is going to suffer.
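As a rough illustration of that design-point effect, here is a back-of-the-envelope sketch using a simple M/M/1 queueing estimate (average latency = service time / (1 − utilization)); the service time and IOPS figures are hypothetical:

    def expected_latency_ms(offered_iops: float, capacity_iops: float,
                            service_ms: float = 1.0) -> float:
        """Crude M/M/1-style estimate of average I/O latency under load."""
        utilization = offered_iops / capacity_iops
        if utilization >= 1.0:
            return float("inf")  # demand exceeds the design point; queues grow without bound
        return service_ms / (1.0 - utilization)

    # An I/O subsystem designed for 1000 IOPS at ~1 ms service time per I/O:
    for offered in (500, 900, 990, 2000):
        print(f"{offered:>5} IOPS -> ~{expected_latency_ms(offered, 1000):.1f} ms average latency")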

John K. N.

Exchange and SQL activity tends toward the frequent, smaller-I/O end of the scale, though Exchange has quite a few larger I/O bursts as attachments are written and pulled. Backup intervals and long-running queries can really play hob as well, and are probably your peak I/O instances. Exchange online defrag is our I/O peak for Exchange, and SQL backups are the I/O peak for our SQL server.

Exchange online defrag involves a lot of I/O ops but not much throughput, so the average transfer size is small – 512 B small – and there are a lot of them. The read/write ratio varies tremendously, but for a well-maintained Exchange DB it should be mostly reads. This will be significantly random, but with enough sequential access to keep it interesting (no, I don't have exact ratios).

SQL backups involve a variety of sizes, but unlike online defrag the throughput is actually high as well. Plan for a mix of 512 B to 4 KB transfer sizes. Read/write ratios depend on where the data is ending up! Writes can be very high speed but (depending on the backup script) almost entirely sequential. The reads are going to be 100% random.
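A little arithmetic shows why lots of tiny transfers still add up to very little bandwidth (throughput = IOPS × transfer size); the IOPS figures in this sketch are invented for illustration:

    # Throughput (MB/s) = IOPS x transfer size; IOPS figures are made up.
    for label, iops, xfer_bytes in [
        ("online-defrag-style (512 B)", 5000, 512),
        ("backup-style (4 KB)",         5000, 4096),
        ("read-ahead-style (256 KiB)",  400,  256 * 1024),
    ]:
        mb_per_s = iops * xfer_bytes / 1e6
        print(f"{label}: {iops} IOPS x {xfer_bytes} B = {mb_per_s:.1f} MB/s")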

General VM activity depends on what's in the VMs. If you've got Exchange or SQL in there, then test for that. If by general you mean general file-serving, such as web or CIFS sharing, well, that depends on what the users are doing; CAD engineers have very different access patterns than an office full of purchasing clerks. There is no generic I/O pattern for 'general VM activity'. You have to plan for what you actually have in your VMs.

sysadmin1138
  • Realistically, once you get beyond SQL and Exchange the primary box is a file server; however, our business is so diverse that there is no "one size fits all" usage, hence not knowing how to gauge what's needed in terms other than IOPS (which I have plenty of, thanks to perfmon). – flooble Oct 11 '10 at 18:33