
Ok. So I'm completely new to the world of fiber. I've used Ethernet all my life. We're looking to upgrade some of our infrastructure, and I'm thinking of getting a few of these:

10G Dual SFP+ NIC https://www.fs.com/products/75600.html

for our servers and a couple of these to connect them

PoE Switch with 4x 10G SFP+ Uplinks https://www.fs.com/products/90132.html

But I'm a bit confused about this transceiver stuff. Do these NICs and Switches just have holes in them that I'm supposed to put transceivers in? I'm used to regular switches with regular ethernet, I get a patch cable rated for the speed I want and plug it in at each point and call it a day.

I know I want 10G Fiber to and from my servers, and then 1G Ethernet is fine everywhere else. What am I missing? Am I just worrying over nothing? What kinds of wires do I even need? There are so many, and with Ethernet I just know all this stuff already, but I don't even know what to search for when it comes to fiber.

2 Answers


Do these NICs and Switches just have holes in them that I'm supposed to put transceivers in?

Simply put: yes. Transceivers go into their respective slots.

I'm used to regular switches with regular ethernet, I get a patch cable rated for the speed I want and plug it in at each point and call it a day.

Then you are used to low-end switches, and those are becoming more and more rare.

Look at this:

https://mikrotik.com/product/CSS326-24G-2SplusRM

[Photo: front panel of the CSS326-24G-2S+RM]

This is a pretty low-cost rack switch. Extremely low cost. Anyhow, if you look at the photo, you see 24 (8x3) 1 Gb ports like you are used to. To the right of them are 2 SFP+ slots to plug transceivers into for an uplink.

I know I want 10G Fiber to and from my servers, and then 1G Ethernet is fine everywhere else.

If the switch is close, there is no need to even consider fiber, which is more problematic. Use an SFP+ direct attach cable, done. Those are basically thick copper cables with SFP+ connectors fixed on both ends. Simpler than dealing with fiber.

Example:

https://mikrotik.com/product/xs_da0003

[Photo: SFP+ direct attach cable]

Look at the photo: you can see the SFP+ connectors on both ends. Those are basically directly connected transceivers. I would only use fiber for longer distances or a problematic environment (e.g. vertical cabling that runs close to power lines).

The advantage of the transceiver concept is that it is YOUR decision, not the switch manufacturer's.

Once you get into fiber, it gets really complicated depending on speed, but also length: you can theoretically go a LONG distance (I think 80 km or 100 km with the right optics). I would just use direct attach cables in the data center.
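If you go the direct attach route, it is easy to verify that the link actually came up at 10G once the cable is seated. A minimal sketch (Linux only, reading sysfs; the interface name enp1s0f0 is a placeholder for whatever `ip link` shows on your server):

```python
#!/usr/bin/env python3
"""Check that a NIC negotiated the expected link speed (Linux sysfs)."""
from pathlib import Path

IFACE = "enp1s0f0"  # hypothetical name -- run `ip link` to find yours

def link_speed_mbps(iface: str) -> int:
    """Negotiated speed in Mbit/s; -1 if the link is down or unreadable."""
    try:
        return int(Path(f"/sys/class/net/{iface}/speed").read_text().strip())
    except (OSError, ValueError):
        return -1  # some drivers refuse the read while the link is down

if __name__ == "__main__":
    speed = link_speed_mbps(IFACE)
    if speed == 10000:
        print(f"{IFACE}: 10G link is up")
    else:
        print(f"{IFACE}: speed is {speed} Mbit/s -- check cable/module seating")
```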

TomTom

High-spec network switches may offer two types of physical interface connectors. The first is the standard RJ45, which we also refer to as "copper" and which relatively recently got upgraded to support speeds over 1 Gbps. With the correct switch, cable, and Ethernet card, you can achieve what the industry calls multi-gig speeds (2.5 Gbps, 5 Gbps, and 10 Gbps) over Cat6/Cat7 cables, with some limitations on cable length.

Traditionally, though, switches offered interfaces at speeds of 1 Gbps and above through the use of modular cages (the "holes" you mention). There have been quite a number of port types, but the industry has standardized on two cage form factors: 1) SFP/SFP+ and 2) QSFP. The cages REQUIRE an appropriate module before a cable can be connected. The reason behind this modularity is that it lets the customer select modules according to speed requirements, the type of copper or fiber-optic cable used, cable length, etc.

SFP cages support speeds up to 1 Gbps. SFP+ cages support up to 10 Gbps, and the same physical form factor extends to 25 Gbps as SFP28. The larger QSFP form factor covers 40 Gbps (QSFP+) and 100 Gbps (QSFP28), at the time of writing.

As the cost of modular SFP+ switches plus two modules per connection (one at the switch, one at the server) was high, pre-terminated copper cables were introduced, which we call DACs (Direct Attach Copper cables) or SFP+ Twinax. They come in lengths of up to about 5 meters for passive DACs and 10 meters for active DACs, and they are especially well suited for datacenter deployments.

In your case, the cheapest solution is to get DACs to connect your cards to the SFP+ cages of your switch. With that switch you can connect a maximum of 4 SFP+ Twinax cables.

My experience has shown that, especially when using good-brand NICs such as Intel's, there are no compatibility issues between the vendors of the switch, the DAC, and the NIC. I am using Mellanox DACs on QLogic 10 Gbps adapters with a Mikrotik 10G switch and have had no problems.
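If you ever need to check what a cage has actually detected, for instance when mixing vendors like that, Linux's `ethtool -m` will dump the EEPROM of the seated module or DAC. Here is a small sketch that wraps it, assuming ethtool is installed and the NIC driver supports module EEPROM reads (it usually needs root, and the interface name is a placeholder):

```python
#!/usr/bin/env python3
"""Dump vendor details of the module/DAC seated in an SFP+ cage."""
import subprocess

IFACE = "enp1s0f0"  # hypothetical name -- substitute your SFP+ interface

def module_info(iface: str) -> dict[str, str]:
    """Parse the `key : value` lines of `ethtool -m` into a dict."""
    out = subprocess.run(
        ["ethtool", "-m", iface],  # reads the module EEPROM via the driver
        capture_output=True, text=True, check=True,
    ).stdout
    fields: dict[str, str] = {}
    for line in out.splitlines():
        key, sep, value = line.partition(":")
        if sep:
            fields[key.strip()] = value.strip()
    return fields

if __name__ == "__main__":
    info = module_info(IFACE)
    for key in ("Identifier", "Vendor name", "Vendor PN", "Transceiver type"):
        print(f"{key}: {info.get(key, 'n/a')}")
```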

Panos