I'm in the process of building my first web API server cluster. The API hosted by this cluster provides a hi-compute resource, where each task request requires between 0.5 and 1.0 seconds of CPU time. The API is transactional, so the traditional cache layers used in web infrastructures do not apply. (Each API call receives unique data and returns unique data that is served once and never a second time.)

The architecture of the server cluster is:

  • Two CentOS boxes running HAProxy as the load balancer; one is active, the second is on standby.
  • Two Windows Server 2008 R2 boxes (to start, with more to be added later) configured with the Uniform Server (a production-ready WAMP stack) and my hi-compute software, which is the basis of the API. Rough capacity math for this tier is sketched just after this list.
  • Two CentOS boxes serving MySQL, with RAID 10 disks; one is active, the second is replicating and on standby.
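
For context, here's the back-of-envelope capacity math I'm working from for the web/compute tier. The core count per box is an assumption on my part (it depends on what the Win64 boxes end up shipping with); the 0.5 to 1.0 second figure is the CPU time per request mentioned above. This is a rough Python sketch, not a benchmark:

    # Rough capacity estimate for the web/compute tier.
    # Assumptions (not measured): 8 physical cores per Win64 box, fully
    # CPU-bound work, and 0.5-1.0 s of CPU time per request as quoted above.
    CORES_PER_BOX = 8                        # assumption
    WEB_BOXES = 2                            # two to start
    CPU_SECONDS_PER_REQUEST = (0.5, 1.0)

    for cpu_s in CPU_SECONDS_PER_REQUEST:
        per_box = CORES_PER_BOX / cpu_s      # requests/second one box can sustain
        tier = per_box * WEB_BOXES           # requests/second for the whole tier
        print(f"{cpu_s:.1f} s/request -> {per_box:.0f} req/s per box, {tier:.0f} req/s total")
    # 0.5 s/request -> 16 req/s per box, 32 req/s total
    # 1.0 s/request -> 8 req/s per box, 16 req/s total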

Perhaps this mixed-OS structure is a bad idea, but I'm primarily a Unix guy, and I can manage this setup myself without hiring a Win64 administrator. Plus, the hi-compute software only runs on Win64, so I'm stuck there.

My questions concern the network connections between these server layers. My hardware vendor tells me I need one 10 Gb switch between the load balancer and the web servers, and a second 10 Gb switch between the web servers and the database servers. Is this necessary? That makes two separate networks, right? I'm also seeing huge price differences between 10 Gb switches when searching Google; what causes them? My vendor is suggesting this 10 Gb switch: http://www.supermicro.com/products/accessories/Networking/SSE-X24S.cfm, priced at $8,350... it must be a hell of a switch at that price. What does it do that a less expensive switch does not?
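
To make my doubt about the 10 Gb requirement concrete, here is the rough bandwidth math I keep coming back to. The ~32 requests/second figure comes from the capacity sketch above, and the 100 KB payload per request is a pure guess on my part, not a measured number:

    # Rough bandwidth estimate behind the "do I need 10 Gb?" question.
    # Assumptions (hypothetical, not measured): ~32 req/s across the web tier
    # (from the capacity sketch above) and ~100 KB of payload per request in
    # each direction between load balancer, web servers, and database.
    REQUESTS_PER_SECOND = 32            # from the capacity sketch above
    PAYLOAD_BYTES = 100 * 1024          # assumed per-request payload, each direction

    bits_per_second = REQUESTS_PER_SECOND * PAYLOAD_BYTES * 8
    print(f"~{bits_per_second / 1e6:.0f} Mbit/s per direction")    # ~26 Mbit/s

If my assumed payload size is anywhere near right, that is a tiny fraction of even a 1 Gb link, which is part of why I'm questioning whether 10 Gb is needed at all; I may well be missing something about replication traffic or bursts.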
