
I'm running RHEL 5.3 over vSphere 4.0U1. I configured multiple LUNs on my NetApp (Fibre Channel) storage and added them as RDMs to two (Linux) VMs, using the Paravirtual SCSI adapter. One LUN is 100GB in size and is successfully mapped to /dev/sdb on both VMs; five more are 500MB in size (mapped to /dev/sd{c-g}). I also created one partition per device.

I have encountered two issues. First, writing directly to /dev/sdb1 gives me ~50MB/s, while writing to any of /dev/sd{c-g}1 gives me ~9MB/s. There is no difference in the configuration of the LUNs apart from their size. I am wondering what causes this, but it is not my main problem, as I would settle for 9MB/s.

Second, I created raw devices using udev, which was pretty straightforward:

ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N"

with one such rule per device.
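
For completeness, the whole rule set looks roughly like this (only the sdb1 line above is verbatim from my setup; the raw1-raw6 numbering for the remaining devices is just how I'd sketch it):

# One rule per RDM partition, mapping it to a raw device node.
# Only the sdb1 line is exactly as configured; the raw2-raw6
# numbering for sdc1-sdg1 is illustrative.
ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="sdc1", RUN+="/bin/raw /dev/raw/raw2 %N"
ACTION=="add", KERNEL=="sdd1", RUN+="/bin/raw /dev/raw/raw3 %N"
ACTION=="add", KERNEL=="sde1", RUN+="/bin/raw /dev/raw/raw4 %N"
ACTION=="add", KERNEL=="sdf1", RUN+="/bin/raw /dev/raw/raw5 %N"
ACTION=="add", KERNEL=="sdg1", RUN+="/bin/raw /dev/raw/raw6 %N"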

Writing to any of the new raw devices dramatically slows down performance to just over 900KB/s.

Can anyone point me in a helpful direction?

Thanks in advance,

-- jifa

  • Is there a specific reason why you need the Paravirtualized adapter rather than the LSILogic SCSI adapter? Also, why do you need to use RDMs here - is this part of some solution that requires direct access to the raw disk? – Helvick Mar 17 '10 at 10:27
  • I read somewhere it gives better performance. I tried switching now, and it drops the small RDMs to ~4MB/s and the raw devs to ~250KB/s. – jifa Mar 17 '10 at 10:51
  • A bit of research leads me to think udev is not the problem, as the raw devices are created. – jifa Mar 17 '10 at 12:29

1 Answer


It turns out my performance assessment was wrong to begin with. The great article at http://www.informit.com/articles/article.aspx?p=481867 explains I/O performance, and from it I figured out that writing with small block sizes substantially degrades throughput. Increasing the block size used for the measurement showed normal read/write speeds - problem solved.
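
As a rough illustration (not my exact benchmark commands, and note that these overwrite whatever is on the target device), the effect shows up clearly with dd against one of the raw devices:

# Tiny blocks: every 512-byte write is its own I/O, so the test is
# bound by IOPS and the measured bandwidth looks terrible.
dd if=/dev/zero of=/dev/raw/raw1 bs=512 count=20000

# Large blocks: the same data moves in far fewer I/Os, so the measured
# bandwidth gets much closer to the LUN's real sequential throughput.
dd if=/dev/zero of=/dev/raw/raw1 bs=1M count=100

The raw devices bypass the page cache, so every small write hits the storage directly, which probably explains why they looked so much slower than the plain block devices in my original test.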

  • It's all about the IOPS; the drive heads (wherever they are) are a pretty restrictive bottleneck. Unless you're using SSDs or an array of hundreds of spinning disks, small I/O sizes == low bandwidth. Good to see you figured it out on your own. – Helvick Mar 21 '10 at 13:37