7

I am building a home cluster where I am going to have about 16 nodes which can live with 1G ports, but I want to have 10GE on the file server & central node. It's all local, so there's no need for cables longer than 3-5 m. And of course I want to spend as little money as possible (certainly not more than the whole cluster costs) :-)

What are my options?

  1. The legacy solution is to take a 24-48 port 1GE switch and connect the file/central nodes via 4-8 aggregated links. This will work, I guess, and the cost is very acceptable, but I am not sure if it's OK to use that many aggregated links. And of course it would be hard to double the bandwidth when needed... :-D
  2. A switch with several 10GE uplink 'ports'. As far as I can see, they all require modules which cost about $1000 each, so I would need 4 10G modules and 2 10GE cards... Smells like way more than $5000...
  3. Connect the file & central node directly via 2 10G cards, and put 4 quad-port 1GE NICs in the fileserver. That saves 2 10G modules and a switch; the fileserver will have to do packet routing (see the rough sketch after this list), but it's still going to have plenty of CPU left :-)
  4. Any other options? InfiniBand?
  5. Do Myrinet adapters work fine? I guess there are no cheaper options?
  6. Hmm... Scrap the fileserver, put it all on the central node and provide a dedicated 1GE port for each of the nodes... This is sad...
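
For option 3, a rough sketch of the routing setup I have in mind on the fileserver (interface names and addresses below are just placeholders):

    # Let the fileserver route between the 10GE link and the 1GE node subnets
    sysctl -w net.ipv4.ip_forward=1

    # Point-to-point 10GE link to the central node
    ip addr add 10.0.0.1/30 dev eth10g
    ip link set eth10g up

    # One small subnet per 1GE port; the nodes use the fileserver as their gateway
    ip addr add 10.0.1.1/24 dev eth1
    ip addr add 10.0.2.1/24 dev eth2
    # ...and so on for the remaining quad-port NIC ports
    ip link set eth1 up
    ip link set eth2 up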
BarsMonster

2 Answers

12

I don't think you're seeing the entire picture here.

You want to connect a file server at 10Gbps, which may sound like a sexy idea. What you are not considering is whether that server can actually generate that much traffic reading from its disks. Getting 1Gbps out of a file server is, today, a very good achievement. 10Gbps will not only be expensive, as you have realized yourself, but at minimum 90% useless.

Your best option is to start by putting some blazing-fast disks in your file server if it needs to provide that much I/O. I strongly believe the "affordable" (notice the quotes) path to this is SSDs in a fast RAID configuration (that is, RAID 10).
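
If you go the software-RAID route, a minimal mdadm sketch for a four-SSD RAID 10 (device names are placeholders):

    # Assemble four SSDs into a RAID 10 array and put a filesystem on it
    mdadm --create /dev/md0 --level=10 --raid-devices=4 \
        /dev/sda /dev/sdb /dev/sdc /dev/sdd
    mkfs.ext4 /dev/md0
    mount /dev/md0 /srv/storage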

As for networking, a 4x1Gbps aggregate will do the trick fine, and you can even add more links later. Watch out for the fact that internal buses (read: PCI*) are not always capable of handling multi-gigabit speeds. This is especially true if you are not using server-grade motherboards.
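
For reference, a rough sketch of a 4x1Gbps 802.3ad (LACP) bond on Linux using the classic ifenslave tools; interface names and addresses are placeholders, and the switch needs a matching LACP port group:

    # Load the bonding driver in 802.3ad (LACP) mode with link monitoring
    modprobe bonding mode=802.3ad miimon=100
    # Bring up the bond interface, then enslave the four 1GbE ports
    ifconfig bond0 10.0.0.10 netmask 255.255.255.0 up
    ifenslave bond0 eth1 eth2 eth3 eth4

Keep in mind that with LACP hashing a single TCP flow still tops out at 1Gbps; the aggregate only helps when several nodes hit the file server at once.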

I believe this is your only "affordable" option here. InfiniBand cards are not horribly expensive; I believe you can find some for ~$150, but the switch will be very expensive.

Antoine Benkemoun
  • 5
    Have to agree with this. We have a 10Gb infrastructure at work, based on copper SFP+ connections. This was expensive enough, but the real expense was the server infrastructure that would actually use it. – Rob Moir Dec 24 '10 at 11:59
  • Well, even my home fileserver built from junk can read 400Mb/s sequentially. I can easily build large sequential storage out of HDDs able to do 1Gb/s, and tiny random-access storage out of SSDs able to do 1Gb/s (600Mb/s from the onboard controller + a few PCI-E RAID cards). I am more interested in bandwidth between the central node & the 'small' nodes, as traffic will be between pieces of software - not limited by disks. So both bandwidth and latency matter. Does 10GE also mean less latency? – BarsMonster Dec 24 '10 at 12:40
  • 1
    10Gbps will most likely mean less latency, but I don't believe the improvement will really be meaningful. You're transmitting data at higher rates, so it should get there faster. What file sharing protocol are you using to obtain these numbers, or is it just raw local performance? – Antoine Benkemoun Dec 24 '10 at 12:45
  • Yes, surely I mean raw local performance. I don't have anything faster than 1GE yet. – BarsMonster Dec 24 '10 at 13:24
  • You lose a whole lot through network file sharing protocols. – Antoine Benkemoun Dec 25 '10 at 12:41
  • 1
    This is for user "BarsMonster": could you please explain your home fileserver? Hardware? OS? Filesystem, etc.? – JMS77 Dec 27 '10 at 17:56
0

You may want to consider ATA over Ethernet if you want to save on expensive layer-3 switches. It is the protocol of choice for a low-cost but high-performance solution, more so than any other network storage approach I know of today. But there are no 10Gb vanilla switches (without L3 switching).

Consider as a POC: Ubuntu Server 10.x and the aoetools project: http://sourceforge.net/projects/aoetools/files

https://help.ubuntu.com/community/ATAOverEthernet
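
A minimal sketch of what such a POC might look like, with vblade exporting a disk and aoetools on the client (device and interface names are placeholders):

    # On the storage box: export /dev/sdb as AoE shelf 0, slot 1 over eth0
    vbladed 0 1 eth0 /dev/sdb

    # On a client on the same LAN:
    modprobe aoe
    aoe-discover
    aoe-stat                      # should list e0.1
    mkfs.ext4 /dev/etherd/e0.1    # then use it like a local disk
    mount /dev/etherd/e0.1 /mnt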

JMS77