Background: I'm building a quiet HTPC + NAS that will also serve as a general-purpose computer. So far I'm generally happy with it; I was just expecting somewhat better I/O performance, and I have no idea whether my expectations are unrealistic. The NAS serves as general-purpose file storage and as a media server for XBMC and other devices. ZFS is a requirement.
Question: Where is my bottleneck, and is there anything I can do configuration-wise to improve performance? I suspect the VM's disk settings could matter, but I really have no idea where to start, since I'm experienced with neither FreeNAS nor VMware Workstation.
Tests: When I copy files from the host OS (from the SSD) to the CIFS share, I get around 30 MBytes/s read and write. From my laptop, wired to the network, I get about the same figures. The tests I've done used a 16 GB ISO and about 200 MB of RARs, and I've tried to avoid the RAM cache by reading different files than the ones I'm writing (> 10 GB apart). Oddly, fewer CPU cores seems more efficient: Windows Resource Monitor reports lower CPU usage with fewer cores. With 4 cores assigned in VMware, CPU usage was 50-80%; with 1 core it was 25-60%.
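To separate raw disk throughput from SMB and VM overhead, it can help to benchmark the disks directly first. Below is a minimal sketch using `dd`; the file path and sizes are placeholder assumptions, and the same idea run in the FreeNAS shell against the pool would test the virtual disk itself without Samba in the way:

```shell
# Write test: stream zeros into a file; dd reports throughput when done.
# 64 MB here just illustrates the command; to defeat RAM caching,
# scale count up past the machine's RAM (e.g. count=20000 for ~20 GB).
dd if=/dev/zero of=/tmp/dd-testfile bs=1M count=64

# Read test: stream the file back to /dev/null.
dd if=/tmp/dd-testfile of=/dev/null bs=1M

# Clean up the test file.
rm /tmp/dd-testfile
```

If `dd` inside the FreeNAS VM already falls far below the ~150 MBytes/s the host sees on the RAID, the bottleneck is in the virtual disk layer rather than in Samba/CIFS.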
EDIT:
HD ActiveTime was quite high on the SSD, so I moved the page file, disabled hibernation, and enabled the Windows disk cache on both the SSD and the RAID. This made no real difference for a single file, but transferring two files at once raised the total to ~50 MBytes/s vs ~40 before. Average ActiveTime also dropped a lot (to ~20%), though it now has higher bursts. Disk I/O averages ~30-35 MBytes/s, with bursts around 100 MBytes/s. The network runs at 200-250 Mbits/s with ~45 active TCP connections.
Hardware
- Asus F2A85-M Pro
- A10-5700
- 16GB DDR3 1600
- OCZ Vertex 2 128GB SSD
- 2x generic 1 TB 7200 RPM drives as RAID0 (in Win7)
- Intel Gigabit Desktop CT
Software
- Host OS: Win7 (SSD)
- VMware Workstation 9 (SSD)
- FreeNAS 8.3 VM (20 GB VDisk on SSD)
- CPU: I've tried 1, 2 and 4 cores.
- Virtualisation engine, Preferred mode: Automatic
- 10.24 GB RAM
- 50 GB SCSI VDisk on the RAID0; the VDisk holds a ZFS pool that FreeNAS exposes as a CIFS share.
- NIC Bridge, Replicate physical network state
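Since the guest's memory handling may be part of the problem, a few VMware Workstation .vmx tuning options are commonly suggested for I/O-heavy guests. These option names are assumptions on my part, so verify them against your Workstation version; they go in the VM's .vmx file while it is powered off:

```
# Keep guest memory in host RAM instead of a backing file on disk.
mainMem.useNamedFile = "FALSE"

# Disable memory trimming, which can cause periodic I/O stalls.
MemTrimRate = "0"

# Disable page sharing (pointless overhead when running a single VM).
sched.mem.pshare.enable = "FALSE"
```

With 16 GB of host RAM and ~10 GB given to the guest, keeping guest memory from being backed by a file on the SSD is particularly relevant here.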
Below are two typical `top` print-outs taken while transferring one file to the CIFS share.
last pid: 2707; load averages: 0.60, 0.43, 0.24 up 0+00:07:05 00:34:26
32 processes: 2 running, 30 sleeping
Mem: 101M Active, 53M Inact, 1620M Wired, 2188K Cache, 149M Buf, 8117M Free
Swap: 4096M Total, 4096M Free
PID USERNAME THR PRI NICE SIZE RES STATE TIME WCPU COMMAND
2640 root 1 102 0 50164K 10364K RUN 0:25 25.98% smbd
1897 root 6 44 0 168M 74808K uwait 0:02 0.00% python
last pid: 2746; load averages: 0.93, 0.60, 0.33 up 0+00:08:53 00:36:14
33 processes: 2 running, 31 sleeping
Mem: 101M Active, 53M Inact, 4722M Wired, 2188K Cache, 152M Buf, 5015M Free
Swap: 4096M Total, 4096M Free
PID USERNAME THR PRI NICE SIZE RES STATE TIME WCPU COMMAND
2640 root 1 76 0 50164K 10364K RUN 0:52 16.99% smbd
1897 root 6 44 0 168M 74816K uwait 0:02 0.00% python
I'm sorry if my question isn't phrased right; I'm really bad at this kind of thing, and it's my first post here on SU. I'd also appreciate any other suggestions about anything I may have missed.
Is that software/BIOS RAID-0? How much I/O is occurring on the host OS's disks, i.e. the RAID0 volume? RAID in software (or the BIOS) is terrible; you shouldn't use RAID without a hardware RAID controller, pretty much end of story. It can cause mostly invisible CPU usage and I/O storms on the host that bog down the entire system. – allquixotic – 2012-11-30T17:16:33.070
Copying data from the SSD to the RAID drive (outside the VM) runs at 150+ MBytes/s with CPU usage rising only about 5%. I think I have hardware RAID (how do I check?): I set the SATA ports to RAID in the BIOS and defined the array in the controller's pre-boot screen. As far as the VM is concerned, I think it just sees the RAID as one really fast disk. Copying from the host to CIFS shows ~35 MBytes/s SSD reads and ~30 MBytes/s RAID writes. Not sure if that helps. – maka – 2012-11-30T17:47:45.453
If you don't know for a fact that you have specifically purchased a Hardware RAID card (for example an Adaptec 6405E), you aren't using hardware RAID. BIOS RAID is just as bad as software RAID. The CPU is used for coordinating the striping. – allquixotic – 2012-11-30T17:50:52.107
Is this specific to VMware I/O? I get 150+ MBytes/s in Windows copying SSD --> RAID0 with almost no CPU usage, and I'm happy with the performance there; maybe VMware can't handle the type of RAID I'm using? – maka – 2012-12-01T19:18:57.550
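On the "how do I check?" question above: one quick way to see what Windows itself thinks it is talking to is `wmic` (a sketch; output format varies by storage driver). A true hardware controller typically presents the array as a single device under the controller's own model name, while BIOS/fake RAID shows up through a vendor storage driver (e.g. AMD RAIDXpert on this chipset):

```
wmic diskdrive get model,interfacetype,size,status
```

If the RAID0 volume appears as one disk whose model names the motherboard chipset's RAID driver rather than a dedicated controller card, it is BIOS/fake RAID, and the striping work is done on the CPU.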