
I've noticed my new EMC VNX5200 SAN tends to perform better after "burning in" a partition, by which I mean filling the partition with junk data and deleting everything, a few times over. I asked my SAN admin about this, but he can't be bothered. He mumbled something about IOPS, stating that MB/s is irrelevant. That may be so, but it matters to me that my nightly SQL backups and restores (big contiguous chunks) run at the current 80 MB/s instead of the 15 MB/s I started with. I pushed for and got my own LUN for a SQL server, but I have no insight into how many disks are on the backend, the RAID config, whether there are other LUNs on those disks, or whether the SAN does any caching or moves frequently used data to faster storage (which could explain the "burn in"). I could dig into the admin console, since I have access, but I wouldn't know how to interpret all that information.

I'd be happy to learn that everything runs well and to accept the current performance as the expected level. But again, the maximum read I see on this expensive SAN at the moment is 80 MB/s, with writes at about half that: well under my three-year-old desktop lab with 7200 rpm drives (no RAID), which can sustain a constant 110 MB/s backup.

The question is: how do I get consistent performance from the SAN? What should I ask my SAN admin for? What's the best way to check the performance? Or is there a trusted online resource where I can look up performance baselines and point him to them, given the SAN manufacturer, disk models, configuration, and load-type scenarios? (Something like CPU+GFX scores for identical gaming platforms.)

Razvan Zoitanu

1 Answer


Your storage admin isn't giving you the whole story. IOPS are what limits you during normal I/O (random, seek-heavy reads and small-block writes); however, you're correct that backups should be one big sequential read, which can't be measured helpfully in IO/s.

When you do a backup, are you reading contiguously, or are you doing lots of small reads all over the disk? If it's random reads, that will put your limit back onto the seeking ability of the disks, which is measured in IO/s.
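The gap between those two cases is easy to quantify: seek-bound throughput is simply IOPS × block size. A minimal back-of-envelope sketch, where both input figures are illustrative assumptions for a single 7200 rpm spindle, not measurements from this SAN:

```python
# Why random reads collapse throughput: MB/s = IOPS x block size.
# Both figures below are assumed typical values, for illustration only.
seek_limited_iops = 100      # assumed seek-limited IOPS of one 7200 rpm disk
block_kb = 64                # assumed read size per I/O
random_mb_s = seek_limited_iops * block_kb / 1024
print(f"seek-bound: ~{random_mb_s:.1f} MB/s per spindle")  # roughly 6 MB/s
```

At roughly 6 MB/s per spindle, a backup that seeks all over the disk will never come close to sequential speeds, no matter how fast the link is.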

Assuming it's contiguous reads, the speed will be capped by the transfer layer first: if you're using 1 Gb/s iSCSI, you have a maximum speed of something like 80 MB/s in practice. If you're sharing the network and have less than a full link, even less. If your storage front-end port is serving backups for more than one client, that can also limit you. Lastly, if the disks you use are shared with other clients that are also doing heavy reads or writes, that can cause this too.
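That 1 Gb/s ceiling can be sanity-checked with quick arithmetic; the overhead figure here is an assumption (Ethernet/IP/TCP/iSCSI framing eats a slice of the raw line rate):

```python
# Theoretical best case for sequential transfer over a 1 Gb/s iSCSI link.
link_gbps = 1.0
raw_mb_s = link_gbps * 1000 / 8           # 125 MB/s on the wire
protocol_overhead = 0.10                  # assumed framing/protocol overhead
best_case_mb_s = raw_mb_s * (1 - protocol_overhead)
print(f"best case: ~{best_case_mb_s:.0f} MB/s")
```

A real single-initiator workload usually lands well under this best case, which is consistent with the ~80 MB/s reported in the question.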

That said, your storage guy should be able to at least tell you why you are going so slowly so you can work out a way to improve performance. You could try moving your backup window to avoid competing for shared resources, or you could separate your storage out so that it's not competing for whatever resource you're currently being choked on.
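As a first sanity check before (or while) involving the admin, you can time a large sequential read yourself. A minimal sketch follows; the file path is hypothetical, the OS page cache can inflate results on a re-read (use a file larger than RAM or a freshly written one), and a dedicated tool such as fio or DiskSpd is the better choice for real numbers:

```python
import time

def sequential_read_mb_s(path, chunk_mb=8):
    """Read `path` front to back in large chunks; return observed MB/s."""
    chunk = chunk_mb * 1024 * 1024
    total = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while True:
            data = f.read(chunk)
            if not data:
                break
            total += len(data)
    elapsed = time.perf_counter() - start
    return total / (1024 * 1024) / elapsed

# Usage (hypothetical path):
# rate = sequential_read_mb_s("/mnt/sqlbackup/full.bak")
```

Comparing that number at different times of day is a cheap way to spot contention from other clients during your backup window.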

Basil
  • Not sure if it's 1 Gbps, as this was deployed late last year (I would expect at least multiple 10 Gb ports, with some aggregation). But if so, it's quite an issue. Will look into it, thank you. – Razvan Zoitanu May 14 '15 at 15:33
  • If you come back with some details, I can refine my answer. There are a number of layers that could be choking you. I figured we'd start with the top. – Basil May 15 '15 at 16:49