
I have 4 Intel 2-socket Xeon servers. What is the fastest network technology these days to tightly couple these together? I'm doing scenario planning, throwing away 'less better' results. The network is used to tell other threads what the best scenario currently is.

And should I use a routing switch to stop Ethernet clashes?

kingchris

4 Answers


In terms of lowest latency, InfiniBand RDMA is unbeatable. We are seeing <3 µs (microsecond) RTT between 2 servers with a switch between them, running CentOS. As far as I know, there is simply no lower-latency solution available at this time. 10GigE certainly offers higher bandwidth, but also significantly higher latency.
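Plain sockets won't get anywhere near RDMA's sub-3 µs figures, but if you want a quick baseline to compare against, an RTT probe can be sketched in a few lines of Python (loopback UDP here as a stand-in; point it at a real peer for meaningful numbers):

```python
import socket
import threading
import time

def echo_once(sock):
    """Echo a single datagram back to its sender."""
    data, addr = sock.recvfrom(64)
    sock.sendto(data, addr)

# Loopback stand-in for the second server; real numbers need real hardware.
srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 0))
threading.Thread(target=echo_once, args=(srv,), daemon=True).start()

cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
t0 = time.perf_counter()
cli.sendto(b"ping", srv.getsockname())
cli.recvfrom(64)                       # blocks until the echo comes back
rtt_us = (time.perf_counter() - t0) * 1e6
print(f"loopback RTT: {rtt_us:.0f} us")
```

Even on loopback this typically lands well above InfiniBand's hardware-assisted figures, which is the point of the comparison.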

Since I don't know your platform or requirements, that is the best I can do.

What do you mean by "Ethernet clashes", and how do they occur?

matt
  • I am led to believe that Ethernet is a non-token system where you broadcast and listen to yourself. If you hear yourself, fine; otherwise someone else broadcast at around the same time you did. I think they call it 'collision detection' or something. But with a switch you sort of don't have all 4 machines sharing the same 'tin can string'. I think I am getting confused from the days of coaxial cable and T-pieces and terminators. – kingchris Jun 30 '09 at 06:28
  • It sounds like you are confusing ancient 10base2 Ethernet with more modern network hardware. Collisions at layer 2 are rare at this point, because each port on a switch is its own collision domain, and only broadcasts and traffic intended for the connected host will reach it. You are thinking of CSMA/CD(CA), and you are correct that Ethernet is a non-token system. – matt Jun 30 '09 at 06:34
  • Thanks. Did physical layer 101 years ago, so the grey matter has gone a bit Swiss-cheesed. – kingchris Jun 30 '09 at 06:40
  • Some hubs are just a resistive load; they are not switches per se. – kingchris Jun 30 '09 at 06:56
  • A hub by definition does not limit collision domains (hubs are effectively repeaters, only with more than two ports). A switch should significantly limit your collision domain sizes (although there are cases where packets will be broadcast out). It becomes even better if you use full-duplex links throughout. – Vatine Jun 30 '09 at 13:34
  • Nowadays you can get InfiniBand up to 120 Gbps (96 Gbps actual) if you use 12x QDR. – Kamil Kisiel Jul 01 '09 at 04:16

10 Gig Ethernet is available from some vendors, but you pay a premium for it. 1 Gig is the norm.

mrdenny

If you're running a clustered application, you've got lots of options beyond Ethernet, but you'll need to figure out what characteristics will best suit your application. Often you'll need to make a trade-off between low communication latency and high bandwidth. In extreme situations, you may want to look at spending more money on a smaller number of higher-powered nodes, reducing latency to memory-access rather than network-access levels. And, of course, you should take a look at your application to see if there are ways of rewriting it to work better with the technologies out there.

Wikipedia offers a handy list of network technologies and nominal bandwidths that you can use to start off your research. It gives nominal speed (the actual throughput you'll get will be lower) and doesn't discuss latency.

Note that if you're not using the latest and greatest servers, you first need to look at what you've got available in terms of internal buses. You can certainly put a 10GigE card on a 64-bit 66 MHz PCI bus and run faster than with a GigE card, but you're not going to get anywhere near the network's nominal rate of roughly 1.25 GB/sec, because the bus can only do about 500 MB/sec.
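As a back-of-the-envelope check of that bus-bottleneck point (the figures are nominal peak rates, not measurements):

```python
# Effective throughput is capped by the slowest link in the data path.
# 64-bit/66 MHz PCI: 64 bits per clock at 66 MHz; 10GigE: 10 Gbit/s on the wire.
pci_64_66_MBps = 64 * 66_000_000 / 8 / 1e6   # ~528 MB/s peak
tengige_MBps   = 10_000_000_000 / 8 / 1e6    # 1250 MB/s nominal

# The NIC can't deliver faster than the bus can carry.
effective = min(pci_64_66_MBps, tengige_MBps)
print(f"bus-limited throughput: ~{effective:.0f} MB/s")
```

So the 10GigE card on that bus tops out around 40% of its nominal rate before you even account for protocol overhead.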

As far as whether you should "use a routing switch to stop Ethernet clashes": if you're talking about using a switch instead of a hub, these days that's pretty much automatic. Hubs are darn hard to find, in fact. However, not all switches are created equal; one that might handle two hosts transferring at 100 Gbps may not handle six doing the same.

cjs
  • Using 2nd hand servers bought at auction (cheap). Wish I could afford the quad socket, Quad MP 7000 series. 'To dream the impossible dream, to ....' – kingchris Jun 30 '09 at 06:37
  • Yes, well, start by looking at your bus bandwidth, then. That's going to be the limit on what you can do. – cjs Jul 02 '09 at 23:30

We use HP BL490c G6 servers, each of which can pump out up to 6 x 10Gbps, at not a bad price either - of course they can't flood all of those ports all the time, not even with dual E55xx Xeons, and certainly not with VMware.

Chopper3
  • We run SUSE with a daemon on each machine holding the best current result. This daemon talks to other instances of itself on other machines to see who holds the 'best' value. Local processes talk to their local daemon. – kingchris Jun 30 '09 at 06:32
  • The network is the bottleneck currently. Each machine generates 10 million solutions, then they test the results. Might be a better idea to let each machine find its best and then, as a final step, compare with the other machines. That should reduce the network traffic. – kingchris Jun 30 '09 at 06:43
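The reduce-locally-then-compare idea from that last comment can be sketched like this (the random scoring stands in for the real scenario evaluation, and "best" is taken to mean lowest cost; both are assumptions for illustration):

```python
import random

def local_best(seed, n=10_000):
    """Each node scores its own candidates and keeps only its best (lowest cost)."""
    rng = random.Random(seed)
    return min(rng.random() for _ in range(n))

# Simulate four nodes reducing locally, then exchanging just four values
# instead of broadcasting every candidate result over the network.
node_bests = [local_best(seed) for seed in range(4)]
global_best = min(node_bests)
print(f"global best found by exchanging 4 values instead of 40,000: {global_best:.6f}")
```

The same answer comes out either way; the difference is that the network carries one value per node per round instead of every candidate, which is exactly why the final-step comparison cuts the traffic.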