
I have a database server sitting directly underneath a virtual machine host in the rack. The VM host primarily runs guests for a couple of different web sites and app servers, all of which talk to databases on the other server. Right now both servers are connected to the same switch, and I'm pretty happy with the pathing. However, both servers also have an unused network port.

I'm wondering about the potential benefits of using a short crossover cable (or a regular cable, relying on auto-MDIX) to connect these two servers together directly. Is this a good idea, or would I be doing something that won't show much benefit and is just likely to trip up a future admin who isn't looking for it?

The biggest weakness I can see right now is that this would likely require a code change for each VM app to point to the database server's new IP on this little private network. And if I have a problem with the virtual machine host and have to spin up its guests elsewhere while I fix it, I'll have to change this back before things will work (a sketch of one way around that is below).
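For what it's worth, one way to avoid touching code at all would be to have the apps reference the database by a name or an environment variable rather than a hard-coded IP, so only the resolution changes per environment. A minimal sketch, assuming the apps can read their connection settings from the environment (the `DB_HOST` variable and the addresses in the comments are just illustrations, not from my actual setup):

    import os

    # Resolve the database host from the environment so the same code works
    # whether the guest runs on the VM host (private cross-connect) or has
    # been moved elsewhere and reaches the database through the switch.
    DB_HOST = os.environ.get("DB_HOST", "db.example.internal")

    def connection_settings():
        """Build connection settings without hard-coding the private-link IP."""
        return {
            "host": DB_HOST,  # e.g. the cross-connect address on the VM host,
                              # or the switch-side address after a move
            "port": int(os.environ.get("DB_PORT", "5432")),
            "dbname": os.environ.get("DB_NAME", "app"),
        }

    if __name__ == "__main__":
        print(connection_settings())

Then moving a guest would only mean setting a different `DB_HOST` for it, not editing application code.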

Joel Coel

2 Answers


It's not at all worth the effort. You won't notice any difference in performance (assuming you stick with the same transport protocols), and, as you already know, you'll be creating a more complex, non-standard configuration that has to be maintained. Dual-homing brings its own considerations, and simply bypassing the switch gains you nothing in return.

squillman

Connecting directly rather than through the switch shouldn't make a noticeable difference for latency. You could connect the second ports to the switch as well and aggregate the links if you need additional throughput.
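If you want to sanity-check the latency claim before rewiring anything, a rough measurement like the sketch below will show the round-trip cost of a TCP connect over whichever path is currently in use. The host and port here are placeholders, not your actual database; point it at whatever the database actually listens on:

    import socket
    import time

    HOST = "192.0.2.10"   # placeholder address for the database server
    PORT = 5432           # placeholder port; use whatever the DB listens on
    SAMPLES = 20

    def connect_time(host, port):
        """Time a single TCP handshake to the given host and port."""
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass
        return (time.perf_counter() - start) * 1000  # milliseconds

    if __name__ == "__main__":
        times = [connect_time(HOST, PORT) for _ in range(SAMPLES)]
        print(f"min {min(times):.3f} ms, avg {sum(times)/len(times):.3f} ms, "
              f"max {max(times):.3f} ms")

On a single gigabit switch hop the numbers should already be well under a millisecond, which is why a direct cable won't buy you anything noticeable.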

sciurus