
I have an Ubuntu 12.04 host running Linux 3.2.0-24-generic, libvirt 0.9.8-2ubuntu17, and qemu-kvm 1.0+noroms-0ubuntu13. The host uses elevator=deadline; the guests use elevator=noop. All KVM guests use virtio, no caching, the default io mode, and LVM logical volumes as storage. I use bonnie++ 1.96 to evaluate IO performance.
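For reference, a guest disk configured this way (virtio bus, caching disabled, raw LVM volume) corresponds roughly to a libvirt disk element like the following — the volume path /dev/vg0/guest-root is a made-up example:

```xml
<disk type='block' device='disk'>
  <!-- raw format, host page cache bypassed (cache='none') -->
  <driver name='qemu' type='raw' cache='none'/>
  <!-- LVM logical volume used directly as the guest disk -->
  <source dev='/dev/vg0/guest-root'/>
  <!-- paravirtualized virtio block device -->
  <target dev='vda' bus='virtio'/>
</disk>
```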

Hardware:

  • Supermicro X8SIL-F
  • Intel(R) Core(TM) i7 CPU 870
  • 4 x Kingston 4GiB DIMM DDR3 Synchronous 1333 MHz (0.8 ns)
  • 2 x WDC WD10EACS-00D (WD Caviar Green). I have disabled IntelliPark (the 8-second idle timer) on the hard disks using the wdidle3 tool.

Both hard drives are partitioned as follows:

  • 20 GB, in md RAID-1 for the host root filesystem
  • 640 GB, in md RAID-1, with LVM for guest filesystems
  • 330 GB, in md RAID-0, with LVM for guest filesystems
  • 4 GB, swap for host

Fdisk output:

# fdisk -b 4096 /dev/sda
Note: sector size is 4096 (not 512)

Command (m for help): p

Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 15200 cylinders, total 244190646 sectors
Units = sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048    39070000   156271812   fd  Linux raid autodetect
/dev/sda2        39070080  1297361280   738197508   fd  Linux raid autodetect
/dev/sda3      1297361288  1945712472  2593404740   fd  Linux raid autodetect
/dev/sda4      1945712480  1953525160    31250724   82  Linux swap / Solaris

Observations:

  • When I run bonnie++ on the host, on an md RAID-1 backed filesystem, the system load goes up to about 12 during the "writing intelligently" phase, and all systems (host and guests) become unusably slow.

Output:

Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
hostname         2G  1132  91 21439   1 21741   2  5131  86 +++++ +++  1747   8
Latency             10093us     459ms     128us    3928us     113us      83us
Version  1.96       ------Sequential Create------ --------Random Create--------
hostname            -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 23851  19 +++++ +++ 28728  17 28138  21 +++++ +++ 31239  19
Latency              1017us     602us    1144us     323us      61us    1196us
  • When I run bonnie++ on a guest, on an md RAID-1 backed filesystem, the host system load goes up to about 25 during the "writing intelligently" phase, and all systems (host and guests) become unusably slow.

Output:

Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
hostname         2G   965  88  9244   0  7981   1  2595  74 54185   4 248.4   4
Latency             16439us   13832ms    4195ms     126ms     280ms     236ms
Version  1.96       ------Sequential Create------ --------Random Create--------
hostname            -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  4433   5 +++++ +++  8005   4  8373   8 +++++ +++  7325   4
Latency               101ms    1003us     494us     298us      64us     419us
  • When I run bonnie++ on a guest, on an md RAID-0 backed filesystem, the host system load goes up to about 50 during the "writing intelligently" phase, and all systems (host and guests) become unusably slow.

Output:

Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
hostname         2G  1019  93 16786   2 12406   2  1747  30 39973   2 659.2   6
Latency             18226us    7968ms    2617ms     445ms     212ms    1613ms
Version  1.96       ------Sequential Create------ --------Random Create--------
hostname            -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ 16709  14
Latency             28112us     440us     442us     166us      96us     123us

Questions:

  • Are such high loads normal when running bonnie++?
  • I have the feeling that IO is really slow on both the host and the guests. Would you confirm this based on the results, or am I just expecting too much?
  • What could be the cause of this behavior? (Are the Caviar Green disks simply as bad as you can read all over the Internet?)
  • Is there anything that I can tune to improve the IO speed/load?
  • Is there a way to "isolate" the effects of heavy IO in a single guest, so that one guest cannot significantly affect the performance of the other guests?
Tader

1 Answer


It sounds like the 4k block alignment problem. Did you use the -b option with fdisk?
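One way to verify alignment, independent of fdisk's -b option, is to read each partition's start sector from sysfs: the kernel always reports it in 512-byte sectors, so a 4 KiB-aligned partition starts on a sector divisible by 8. A minimal sketch (the helper name and the example value are illustrative):

```shell
# A partition is 4 KiB-aligned when its start offset, expressed in
# 512-byte sectors, is divisible by 8 (8 * 512 B = 4096 B).
check_aligned() {
    if [ $(( $1 % 8 )) -eq 0 ]; then
        echo "start sector $1: aligned"
    else
        echo "start sector $1: MISALIGNED"
    fi
}

# On a live system, feed in the kernel's view of each partition, e.g.:
#   check_aligned "$(cat /sys/block/sda/sda1/start)"
check_aligned 2048   # the common 1 MiB starting offset is aligned
```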

rnxrx
  • I have made sure that all partitions start at a sector divisible by 8. Or... since we start counting at 0, should I have started at something divisible by 8, minus 1? Did I make the classic off-by-one error? – Tader May 22 '12 at 17:15
  • I'm not sure these drives really have 4K blocks; hdparm says 512 bytes... But then again, S.M.A.R.T. on these drives also lies about the Load Cycle Count. – Tader May 22 '12 at 17:17
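Whether these drives really use 4K sectors can be cross-checked against the kernel's view in /sys/block/sda/queue/logical_block_size and physical_block_size (the WD10EACS generation generally predates WD's Advanced Format drives). A small sketch that interprets the reported pair of values — the helper name is made up:

```shell
# classify_sectors LOGICAL PHYSICAL - interpret the sector sizes (in
# bytes) that the kernel reports for a drive, e.g. from
#   /sys/block/sda/queue/logical_block_size
#   /sys/block/sda/queue/physical_block_size
classify_sectors() {
    if [ "$1" -eq 4096 ]; then
        echo "4K native"
    elif [ "$2" -eq 4096 ]; then
        echo "512e: 4K physical sectors behind a 512B interface"
    else
        echo "512n: true 512B sectors - no 4K alignment penalty"
    fi
}

# With the 512-byte value hdparm reported for these drives:
classify_sectors 512 512
```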