
I have a task where I need to benchmark two servers: one with a RAID10 SATA configuration and a second with a RAID10 SSD configuration.

Both servers will be deployed as gateway mail servers, and the benchmark should give us an approximate idea of how much mail traffic they can handle and what happens under heavy load. To test this I'm sending in 100-500 emails per second and monitoring CPU usage, disk I/O and queue size.

The I/O wait values are similar on both servers (SATA peak: 10%, SSD peak: 11%), but the SSD server processes emails out of the queue faster. I'm wondering whether the I/O wait percentage means something different on SATA and SSD drives because of the difference in read/write performance.

Could you suggest the best way to compare the two drive configurations (servers), and is monitoring I/O wait the best course of action?

Looking forward to your suggestions!

Tabiko
  • What operating system are you using? What RAID controller and hardware are you using? Which SSD makes/models? How is the RAID controller configured? Etc, etc... Lots of details missing here. – ewwhite Aug 27 '13 at 12:05
  • Not to state the obvious, but SSD is leaps and bounds faster than spinning drives for a number of reasons...the I/O wait is more controller/driver bound than anything. Definitely need more information. – Nathan C Aug 27 '13 at 12:19
  • The servers and hardware are as follows: SSD server - OS: Debian 6; Drives: 12 x Intel S3700 Series SSDSC2BA800G301 800GB; RAID controller: Adaptec 71605 SATA/SAS RAID. SATA server - OS: Debian 6; Drives: 12 x Western Digital WD RE4 WD1003FBYX; RAID controller: Adaptec 5805Z SATA/SAS RAID. – Tabiko Aug 27 '13 at 12:40
  • SSDs are about 100 times faster in their IO budgets. – TomTom Aug 27 '13 at 12:46

1 Answer


It's not easy to say where the problem is, or whether the disks are a problem at all, without more information.

I'm using the same series of Adaptec cards (mine is the Q model with MaxCache 3.0 enabled). The 7 series requires an updated driver version to work properly, and there are issues with old firmware as well. I've tested both the 5 and 7 series with many drives and backplanes and it can be very problematic, so make sure everything you use is on the compatibility list from the PMC site. There are also additional kernel options for the aacraid module that provide a specialized caching mode in the Linux kernel (the option is cache=6 as I remember - check the docs). Both of these cards are very good.
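For reference, a module option like that is normally set through modprobe configuration; this is only a sketch, and the exact parameter name and value should be verified against the aacraid documentation for your kernel before using it:

# persist the aacraid caching option mentioned above (value unverified - check the docs)
echo 'options aacraid cache=6' > /etc/modprobe.d/aacraid.conf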

In general, I recommend running tests with iozone before pushing new hardware into production - that way you know its limits beforehand.
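As an illustration only, an iozone run could look like this - the paths, file size and thread count are placeholders, and the test files should be sized to exceed the controller cache and page cache:

# -I uses O_DIRECT so the array is measured rather than RAM
# -i 0 -i 1 -i 2 selects sequential write/read and random read/write tests
# -r 16k record size, -s 4g file size per thread, -t 4 parallel threads
iozone -I -i 0 -i 1 -i 2 -r 16k -s 4g -t 4 \
  -F /mnt/array/t1 /mnt/array/t2 /mnt/array/t3 /mnt/array/t4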

What I would recommend checking:

cat /proc/meminfo

Check how the RAM is being used:

  1. Cached - disk read cache
  2. Buffers - RAM allocated for write operations
  3. Dirty - pages required to be synced to the disk

Disk cache is important - you need it.
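To watch just those fields over time, a simple filter works (the field names below are as they appear in /proc/meminfo):

# refresh every second; Dirty growing faster than it is flushed points at write pressure
watch -n 1 "grep -E '^(MemFree|Buffers|Cached|Dirty|Writeback):' /proc/meminfo"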

Check page faults:

sar -B 1 100

Or use top, type "F" and then enable MPF and MnPF options.
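If sar is not installed, the raw counters are also available in /proc/vmstat; a quick alternative sketch:

# pgmajfault increasing rapidly means pages are being read back in from disk
watch -d -n 1 "grep -E '^(pgfault|pgmajfault) ' /proc/vmstat"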

Check IO activity to measure IOPS:

iostat -x 1

Also, divide rkB/s by r/s (and wkB/s by w/s) to get the average request size - this tells you what type of activity is hitting the disk: a low value means mostly random I/O, a high value means sequential I/O.
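The same ratio can be cross-checked from the raw counters in /proc/diskstats (totals since boot rather than per interval); a sketch only, assuming the array shows up as sda:

# $4 = reads completed, $6 = sectors read, $8 = writes completed, $10 = sectors written
# (1 sector = 512 bytes); low kB per request = random I/O, high = sequential
awk '$3 == "sda" && $4 > 0 && $8 > 0 {
  printf "avg read req: %.1f kB, avg write req: %.1f kB\n",
         $6 * 512 / 1024 / $4, $10 * 512 / 1024 / $8
}' /proc/diskstats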

I've run a single Intel SSD and had 0.0 iowait where an 8-disk RAID10 of 7.2K SAS drives had 50.

To say more, I'd need to know more: the full hardware configuration, process list, array types, chunk sizes, kernel version, filesystems, etc.

GioMac