From a SQL Server perspective
On a SQL Server box you would ideally test the disks with the following parameters, depending on where the MDF, NDF, LDF and TEMPDB files will be stored:
All disks (MDF, NDF, LDF, TEMPDB)
- 8 KiB Transfer Request Size
- 80% Read Ratio
- 95% Random Reads
- Ramp Up Time 10 seconds (to circumvent the storage cache)
- Maximum Disk Size (4'000'000 Sectors ≈ 2 GB)
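If you prefer a scriptable alternative to the Iometer-style settings above, a roughly equivalent DiskSpd run might look like the sketch below. The target path, thread count and queue depth are my assumptions, and DiskSpd cannot mix random and sequential I/O in a single run, so the 95% random pattern becomes 100% random here:

```powershell
# 8 KiB blocks (-b8K), 20% writes (-w20 = 80% reads), fully random (-r),
# 10 s warm-up (-W10), software/hardware caching disabled (-Sh),
# 2 GB test file (-c2G), latency statistics collected (-L).
# E:\iotest.dat is a hypothetical path; point it at the drive under test.
.\diskspd.exe -b8K -d60 -W10 -r -w20 -o8 -t4 -Sh -L -c2G E:\iotest.dat
```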
Sequentially written disks (LDF, TEMPDB)
- 8 KiB Transfer Request Size
- 20% Read Ratio (logs and TEMPDB are write-intensive)
- 0% Random Reads
- Ramp Up Time 10 seconds (to circumvent the storage cache)
- Maximum Disk Size (4'000'000 Sectors ≈ 2 GB)
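The same test expressed as a hedged DiskSpd sketch, again with an assumed path and duration; omitting -r gives sequential access, and a single thread keeps the sequential stream intact:

```powershell
# 8 KiB blocks, 80% writes (-w80 = 20% reads), sequential (no -r flag),
# single thread (-t1) so writes stay serialized. L:\iotest.dat is hypothetical.
.\diskspd.exe -b8K -d60 -W10 -w80 -o8 -t1 -Sh -L -c2G L:\iotest.dat
```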
Sequentially read disks (MDF, NDF)
64 KiB Extent Read
- 64 KiB Transfer Request Size
- 80% Read Ratio (MDF & NDF files are mostly read from)
- 0% Random Reads
- Ramp Up Time 10 seconds (to circumvent the storage cache)
- Maximum Disk Size (4'000'000 Sectors ≈ 2 GB)
128 KiB Read-Ahead
- 128 KiB Transfer Request Size
- 80% Read Ratio (MDF & NDF files are mostly read from)
- 0% Random Reads
- Ramp Up Time 10 seconds (to circumvent the storage cache)
- Maximum Disk Size (4'000'000 Sectors ≈ 2 GB)
256 KiB Read-Ahead
- 256 KiB Transfer Request Size
- 80% Read Ratio (MDF & NDF files are mostly read from)
- 0% Random Reads
- Ramp Up Time 10 seconds (to circumvent the storage cache)
- Maximum Disk Size (4'000'000 Sectors ≈ 2 GB)
512 KiB Read-Ahead
- 512 KiB Transfer Request Size
- 80% Read Ratio (MDF & NDF files are mostly read from)
- 0% Random Reads
- Ramp Up Time 10 seconds (to circumvent the storage cache)
- Maximum Disk Size (4'000'000 Sectors ≈ 2 GB)
1024 KiB Read-Ahead (Enterprise Edition)
- 1024 KiB Transfer Request Size
- 80% Read Ratio (MDF & NDF files are mostly read from)
- 0% Random Reads
- Ramp Up Time 10 seconds (to circumvent the storage cache)
- Maximum Disk Size (4'000'000 Sectors ≈ 2 GB)
You can vary the percentage of random reads to see whether your results change, but I found the above values to be a pretty good starting point.
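To sweep all five read-ahead sizes in one go, a small PowerShell loop around DiskSpd (path, duration and queue depth assumed) would look like this:

```powershell
# Run the sequential-read test (80% reads = -w20) once per read-ahead size.
# D:\iotest.dat, the 60 s duration and queue depth 8 are assumptions; adjust to taste.
foreach ($bs in '64K', '128K', '256K', '512K', '1024K') {
    .\diskspd.exe "-b$bs" -d60 -W10 -w20 -o8 -t1 -Sh -L -c2G D:\iotest.dat
}
```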
SQL Server Best Practices Article (MSDN)
Test a combination of I/O sizes for read/write and sequential/random. For SQL-focused deployments, be sure to include I/O sizes of 8 KB, 64 KB, 128 KB, 256 KB and 1024 KB for sequential I/O. (Read-ahead can be up to 1024 KB when running SQL Server Enterprise Edition.) For random I/O it is generally safe to focus only on 8 KB I/O.
Troubleshooting Slow Disk I/O in SQL Server (MSDN Blogs)
If you suspect you are experiencing poor disk performance, you can use internal DMVs combined with a Performance Monitor collection to get a good picture of the health of the disk I/O subsystem and any latency SQL Server is experiencing because of it.
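For the DMV side, a query along the following lines is a common starting point (my sketch against sys.dm_io_virtual_file_stats, not code from the article); it shows the average read and write latency per database file:

```sql
-- Average I/O latency per database file since the last SQL Server restart.
-- NULLIF guards against division by zero for files with no I/O yet.
SELECT DB_NAME(vfs.database_id)                              AS database_name,
       mf.physical_name,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)   AS avg_read_latency_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0)  AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id     = vfs.file_id
ORDER BY avg_read_latency_ms DESC;
```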
SQL Server Perfmon (Performance Monitor) Best Practices (Brent Ozar)
In the Performance object dropdown, choose Physical Disk, and choose the “% Disk Time” counter. Notice that again on the right side window, we get multiple instances; this time, we get one per physical disk. In performance terms, physical disk means one disk shown in Computer Management’s Disk Management tool. One physical disk may have multiple partitions, each with its own drive letter, but for performance tuning, we want to know how hard that one physical disk is working.
This one “physical disk” may have a bunch of actual physical drives, like in RAID systems. However, Windows isn’t quite smart enough to know exactly how many drives are in the RAID array, so the term “physical disk” is a little misleading here.
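If you want to grab those counters without clicking through the Perfmon GUI, Get-Counter can sample them from PowerShell; the two latency counters and the sampling interval below are my additions, not part of Brent's quote:

```powershell
# Sample each physical disk every 5 seconds, 12 times (one minute in total).
Get-Counter -Counter '\PhysicalDisk(*)\% Disk Time',
                     '\PhysicalDisk(*)\Avg. Disk sec/Read',
                     '\PhysicalDisk(*)\Avg. Disk sec/Write' -SampleInterval 5 -MaxSamples 12
```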
How to examine IO subsystem latencies from within SQL Server (SQLSkills)
Most SQL Servers today are I/O bound – that’s a generally agreed-on statement by DBAs and consultants in the field. This means that the major factor in server performance is its ability to perform I/O operations quickly. If the I/O subsystem cannot keep up with the demand being placed on it, then the SQL Server workload will suffer performance problems.
Now, saying that, one trap that many people fall into is equating increased I/O subsystem latencies with poor I/O subsystem performance. This is often not the case at all. It’s usually the case that the I/O subsystem performs fine when the designed-for I/O workload is happening, but becomes the performance bottleneck when the I/O workload increases past the I/O subsystem design point. The I/O workload increase is what’s causing the problem, not the I/O subsystem – if you design an I/O subsystem to support 1000 IOPS (I/O operations per second – and making sure you’re using the right I/O size and the workload characteristics make sense to be defined in terms of the number of random IOPS) and SQL Server is trying to push 2000 IOPS, performance is going to suffer.
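To check how many IOPS SQL Server is actually pushing against that design point, one option (my sketch, using the same sys.dm_io_virtual_file_stats DMV; the 60-second interval is arbitrary) is to sample the cumulative counters twice and divide the delta by the interval:

```sql
-- Rough IOPS per database: sample cumulative I/O counts twice, 60 s apart,
-- then divide the delta by the sampling interval.
DECLARE @before TABLE (database_id INT, file_id INT, io BIGINT);

INSERT INTO @before
SELECT database_id, file_id, num_of_reads + num_of_writes
FROM sys.dm_io_virtual_file_stats(NULL, NULL);

WAITFOR DELAY '00:01:00';

SELECT DB_NAME(vfs.database_id) AS database_name,
       SUM(vfs.num_of_reads + vfs.num_of_writes - b.io) / 60 AS iops
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN @before AS b
  ON b.database_id = vfs.database_id
 AND b.file_id     = vfs.file_id
GROUP BY vfs.database_id
ORDER BY iops DESC;
```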