We are experiencing slowness, and since I am fairly new to SANs I would like some help with this. I don't think this will fully solve our issue, but I'd like to focus on our iSCSI connectivity for now. We have three PowerEdge R710 servers and two PowerVault MD3220i arrays. They are connected through two gigabit PowerConnect switches (one carrying the 192.168.130.x subnet, the other the 192.168.131.x subnet), and each host has a spare NIC.
All three ESXi hosts have essentially the same setup:
vSwitch1 (bound to NIC1)
vmk1 IP: 192.168.130.1
vSwitch2 (bound to NIC2)
vmk2 IP: 192.168.131.1
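For reference, each vmk is bound to the software iSCSI adapter. The binding looks roughly like this on each host (vmhba33 is just a placeholder here; the actual adapter name comes from `esxcli iscsi adapter list`):

```shell
# Sketch of the iSCSI port binding on one host (vmhba33 is a placeholder
# for the software iSCSI adapter name on that host).
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2

# Confirm both vmkernel portals are bound to the adapter.
esxcli iscsi networkportal list --adapter vmhba33
```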
Each PowerVault has two controllers with four NICs each (eight NICs total). The two arrays are configured nearly identically as follows:
Controller 0/1: 192.168.130.101
Controller 0/2: 192.168.131.101
Controller 0/3: 192.168.132.101 Unused
Controller 0/4: 192.168.133.101 Unused
Controller 1/1: 192.168.130.102
Controller 1/2: 192.168.131.102
Controller 1/3: 192.168.132.102 Unused
Controller 1/4: 192.168.133.102 Unused
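In case it's relevant, this is roughly how I've been checking paths from a host, and I've seen round robin recommended for these arrays (the naa.* identifier below is a placeholder for one of our MD3220i LUNs):

```shell
# List the multipathing view of each device to see how many active
# paths each LUN has across the two subnets.
esxcli storage nmp device list

# Sketch: set the round-robin path selection policy on one LUN
# (naa.60012345 is a placeholder device identifier).
esxcli storage nmp device set --device naa.60012345 --psp VMW_PSP_RR
```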
Is this the ideal configuration? It seems like we could get more throughput if we put everything on the same subnet, like so:
vSwitch1 (bound to NIC1)
vmk1 IP: 192.168.130.1
vmk2 IP: 192.168.130.2
vmk3 IP: 192.168.130.3 (adding the unused NIC)
For the PowerVaults:
Controller 0/1: 192.168.130.101
Controller 0/2: 192.168.130.102
Controller 0/3: 192.168.130.103 (adding unused NIC)
Controller 0/4: 192.168.130.104 (adding unused NIC)
Controller 1/1: 192.168.130.105
Controller 1/2: 192.168.130.106
Controller 1/3: 192.168.130.107 (adding unused NIC)
Controller 1/4: 192.168.130.108 (adding unused NIC)
I would want to alternate the ports, putting every other one on switch 1 and the rest on switch 2.
Would that provide enough redundancy? Would it speed things up? Is there a better way?