I'm renting two dedicated servers from a hosting company. Here are the specs:
server1:
HP ProLiant DL165 G7
2x AMD Opteron 6164 HE 12-Core
40 GB RAM
HP Smart Array P410 RAID controller
2x Samsung 830 256 GB SSD
server2:
HP ProLiant DL120 G7
Intel Xeon E3-1270
16 GB RAM
HP Smart Array P410 RAID controller
2x Samsung 830 128 GB SSD
Setup is the same on both servers:
- Debian 6.0.
- No swap.
- File systems use ext3 with no special mount options (only rw) and I'm quite certain the partitions are properly aligned.
- Using noop scheduler (see the verification commands right after this list).
- RAID 1.
- RAID controller has BBU.
- Drive Write Cache has been enabled in the RAID controllers.
- Read / Write cache ratio is 25% / 75% on both RAID controllers.
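For completeness, this is roughly how I verify those settings. The device name is an assumption: with the P410 the array may show up as /dev/sda (hpsa driver) or /dev/cciss/c0d0 (older cciss driver), so adjust accordingly:

# check / set the I/O scheduler for the array device
cat /sys/block/sda/queue/scheduler
echo noop > /sys/block/sda/queue/scheduler

# print partition boundaries in sectors to check alignment
fdisk -lu /dev/sda

# show controller cache settings (HP's hpacucli tool, if installed)
hpacucli ctrl all show config detail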
I'm currently trying to figure out how to get the most out of the disks in these servers, starting with sequential reads/writes. Here are the speeds I'm seeing at the moment (conv=fdatasync makes dd include the final flush to disk in the timing, so the numbers aren't just page-cache speed):
Writes:
server1:~# dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 5.05089 s, 213 MB/s
server2:~# dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 4.09768 s, 262 MB/s
Reads:
server1:~# echo 3 > /proc/sys/vm/drop_caches
server1:~# dd if=tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 4.14051 s, 259 MB/s
server2:~# echo 3 > /proc/sys/vm/drop_caches
server2:~# dd if=tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 3.33901 s, 322 MB/s
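One caveat with this method: as far as I know, drop_caches only discards clean pages, so any dirty pages should be flushed first or the read numbers can be skewed. A safer sequence is probably:

sync
echo 3 > /proc/sys/vm/drop_caches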
First of all, can anyone explain the big difference between these servers?
Second, should I expect more than this? In reviews of the Samsung 830 SSD I've seen write speeds of over 300 MB/s and read speeds of over 500 MB/s using the same benchmarking method (dd), but without a RAID controller involved. Is the RAID penalty really this high, or is it a config issue?
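If dd itself is the limiting factor (it keeps only a single 1 MB request in flight at a time, as far as I understand), something like fio with a deeper queue might be more representative. A sketch of what I have in mind, with job parameters that are just a guess (fio isn't installed by default, e.g. apt-get install fio):

# sequential read, 1 MB blocks, 32 requests in flight, bypassing the page cache
fio --name=seqread --filename=tempfile --rw=read --bs=1M --size=1g --ioengine=libaio --iodepth=32 --direct=1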
UPDATE:
I've done some tests using iozone instead of dd and the results make a lot more sense. There's no big difference between the two servers (server1 is slightly faster now) and I'm getting quite close to the rated speeds of these drives. So I guess I shouldn't have used dd. Lesson learned!
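The iozone runs were roughly along these lines (exact sizes varied; -e includes the flush in the timing and -I uses O_DIRECT to bypass the page cache):

# sequential write (-i 0) and read (-i 1), 1 MB records on a 1 GB file
iozone -e -I -a -s 1g -r 1m -i 0 -i 1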
I'll be using noop with nr_requests and read_ahead_kb left at their defaults (128 and 128) to start with. Setting read_ahead_kb higher seems to hurt random read performance too much on server2. Hopefully I'll get time to revisit this once the servers have been in production for a while and I have a clearer picture of the usage patterns.
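For reference, these are the knobs in question, again assuming the array is /dev/sda:

# defaults I'm starting with; raising read_ahead_kb hurt random reads on server2
echo 128 > /sys/block/sda/queue/nr_requests
echo 128 > /sys/block/sda/queue/read_ahead_kb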