11

I am building a server to act as a BGP border router for my 100 Mbps ISP uplink.

I need these features:

1. Dual-stack BGP peering/routing (at least 100 Mbps, possibly more).
2. Potentially a full Internet BGP feed.
3. Some basic ACL functionality.
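For reference, a minimal Quagga `bgpd.conf` covering those three points might look like the sketch below. The ASN, router ID, neighbor addresses, and prefix-lists are all documentation placeholders (RFC 5398/5737/3849 values), not a working configuration for any real peer:

```
! /etc/quagga/bgpd.conf -- illustrative sketch only; the ASN, addresses
! and prefixes below are documentation placeholders, not real values
router bgp 64512
 bgp router-id 192.0.2.1
 neighbor 192.0.2.254 remote-as 64513
 neighbor 192.0.2.254 prefix-list reject-bogons in
 neighbor 2001:db8::ffff remote-as 64513
 !
 address-family ipv6
  neighbor 2001:db8::ffff activate
  neighbor 2001:db8::ffff prefix-list reject-bogons6 in
 exit-address-family
!
! Basic route filtering (control-plane "ACLs"); packet filtering is
! done separately with iptables/ip6tables on the box itself
ip prefix-list reject-bogons deny 10.0.0.0/8 le 32
ip prefix-list reject-bogons deny 192.168.0.0/16 le 32
ip prefix-list reject-bogons permit any
ipv6 prefix-list reject-bogons6 deny fc00::/7 le 128
ipv6 prefix-list reject-bogons6 permit any
```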

The hardware is an L3426 with 8 GB of RAM. The NIC will be the on-board dual-port Broadcom 5716.

I've worked with Linux extensively before, and it seems able to handle 100 Mbps, but I've heard FreeBSD is faster at networking. Which one should I use? And are there any performance benchmark numbers out there?

Cheers.

petrus
  • 5,287
  • 25
  • 42
  • 1
    Any reason why you're not using a Cisco BGP router? Unfortunately, most ISPs who let customers run BGP specify this requirement for 'compatibility'. – The Unix Janitor Mar 06 '11 at 11:50
  • 9
    Em, first time I've heard of that restriction, and I work on a network that started with Quagga/Debian on Dell PowerEdge, up to Juniper and Cisco kit now. Also dealing with a LOT of different transit providers and exchanges. If an ISP is putting such a restriction in place, replace them with someone competent. – Niall Donegan Mar 06 '11 at 12:24
  • 1
    Side note, since it's a router I would **highly suggest** putting a NIC *card* in there as a backup. If the onboard one goes bad, you're replacing the mobo instead of swapping out a quick PCIe card. – Chris S Mar 06 '11 at 14:33
  • 1
    You are wasting money. A cheap box from MikroTik (a RouterBoard 1100AH, for example) could handle this for a lower price, and it is Linux-based. – TomTom Mar 06 '11 at 14:53
Several people have suggested using a dedicated NIC rather than the on-board Broadcom ones. The Server Fault blog has a couple of interesting posts on this. – ollybee Apr 03 '11 at 11:31

5 Answers

10

We've done exactly this for critical infrastructure for many years. We take three full upstream BGP feeds through Quagga's bgpd, and the whole system uses a whopping 658 MB of RAM. For this purpose Debian has been much more solid in our experience than the other OSs we've tried (its minimal install footprint also means fewer security updates, and therefore far fewer reboots). We use Ksplice, so we only reboot for critical package updates. Don't worry at all about compatibility with other vendors at your ISP: RIPE, the RIR, uses Quagga!

Surprisingly, the hardware isn't that important; it's all about the NICs. Fast CPUs basically just mean the prefixes load quicker when you refresh the sessions (assuming you've got a gigabyte of RAM so they load into memory), so an entry-level quad-core is massively over-specced. We spent a long time trying different NICs, and in our experience the best are the Intel cards that use the igb driver (for about £100 per NIC we use the 82576-based ET Dual Port Server Adapter), with the e1000 cards coming second. There are a few considerations, like how your ingress and egress NICs talk to the mainboard, but for sub-250 Mbps traffic you probably won't notice if you use these NICs. We've repelled a sophisticated UDP DDoS attack using this architecture (it used the tiniest UDP packets, which routers struggle to handle). Bear in mind that what you're most concerned with is the number of packets you can process, not necessarily throughput measured in Mbps. For very little money we've specified a gigabit multihomed router that can handle standard Internet-sized packets, i.e. normal operation, at up to 850 Mbps!
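The packets-versus-throughput point is worth quantifying. A minimal worst-case calculation, assuming standard Ethernet framing overhead (7-byte preamble, 1-byte SFD, 12-byte inter-frame gap):

```python
def line_rate_pps(link_bps: int, frame_bytes: int) -> int:
    """Worst-case packets per second on a saturated Ethernet link.

    Every frame carries 20 extra bytes on the wire: the 7-byte
    preamble, 1-byte start-of-frame delimiter, and 12-byte
    inter-frame gap. These never shrink, so tiny frames maximise PPS.
    """
    wire_bits = (frame_bytes + 20) * 8
    return link_bps // wire_bits

# 64-byte minimum frames flooding a 100 Mbps link:
print(line_rate_pps(100_000_000, 64))   # 148809 pps
# A more typical mixed-traffic average frame size (assumed ~576 bytes):
print(line_rate_pps(100_000_000, 576))  # 20973 pps
```

So a small-packet flood forces the router to handle roughly seven times the packet rate of normal traffic at the same bit rate, which is why the DDoS described above stresses the box far more than the Mbps figure suggests.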

I started with Cisco (bgpd's configuration syntax is near-enough identical, so if you've got experience with Cisco kit it's a really quick transition), but Linux's malleability (e.g. being able to add a few low-resource scripts to your routers to help with reporting and admin) makes it, IMHO, incredibly powerful (and underrated) for this type of setup. You can't go far wrong reading some of the NANOG mailing list archives if you're still in any doubt or need further help.

This should get you started pretty quickly on Debian: Easy Quagga Tutorial

Jonathan Ross
  • 2,173
  • 11
  • 14
The other benefit of running Linux is that you can easily shape your traffic with `tc`, once you're past its initial learning curve. A word of warning, however: from what we've seen, running iptables on your forwarding box significantly reduces kernel performance during attacks. – Jonathan Ross Apr 03 '11 at 07:23
  • I'd love to hear more on the nic <-> motherboard issue. Also, how many pps are you succesfully able to handle? – Joris Apr 03 '11 at 18:37
On our average packet size (HTTP, SMTP, and DNS mostly) we should manage duplexed 850 Mbps. The DDoS was 120,000 pps of 64-byte UDP packets. The effect on performance was negligible, but we weren't pushing that much traffic when it hit. – Jonathan Ross Apr 03 '11 at 18:49
  • We opted for a motherboard with two unconnected fast PCIe slots so the buffers don't bottleneck. I forget the terminology because it's a while since we bought the hardware. One for egress, one for ingress. Fairly standard these days. – Jonathan Ross Apr 03 '11 at 18:55
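Following on from the `tc` comment in this thread, a minimal egress-shaping recipe might look like the sketch below. The interface name and rate are placeholders, the commands need root, and this is a starting point rather than a tuned configuration:

```
# Sketch only: "eth0" and the 100 Mbit rate are placeholder values.
# Cap egress with a simple HTB root class (run as root):
tc qdisc add dev eth0 root handle 1: htb default 10
tc class add dev eth0 parent 1: classid 1:10 htb rate 100mbit ceil 100mbit

# Check the class counters to confirm traffic is being shaped:
tc -s class show dev eth0
```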
5

They're both capable platforms. Run something solid like Debian or CentOS on good server-grade hardware. Make sure you specify servers with Intel server NICs; they're much better than Broadcom for stability.

As far as BSD vs. Linux goes, it's easy: choose whichever you are most competent with.

Tom O'Connor
  • 27,440
  • 10
  • 72
  • 148
  • 5
    +1. FreeBSD can usually inch out Linux in benchmarks, but the difference (if there is any) is so small that you should simply pick the platform you're most comfortable with. – Chris S Mar 06 '11 at 14:30
3

I've seen old Celerons handle 80-90 Mb/s of normal traffic on a Debian/Quagga setup with three full feeds without even breaking a sweat. However, the qualifier there is "normal" traffic: mainly HTTP, SMTP, and DNS. The same machines have fallen flat on their faces during DDoS situations where the packets per second of mainly UDP traffic went to ridiculous numbers.

It's normally not the bandwidth you need to worry about, but the PPS you will be handling.

Unfortunately, I can't help you with the Linux vs. BSD routing-performance part of the question, but it shouldn't make any difference on current commodity hardware for a few hundred-megabit connections.

Niall Donegan
  • 3,859
  • 19
  • 17
0

Quagga (Zebra) works on both Linux and BSD, and Linux's networking performance is no worse than BSD's. So you're left to consider other criteria when choosing the platform.

poige
  • 9,171
  • 2
  • 24
  • 50
0

Data point:

I'm running a pair of Dell R200 servers on Fedora, one of which has seen a 500 Mbps peak with NAT, iptables, LVS, and Quagga's bgpd over a 1 GigE link. At 100 Mbps, any modern hardware ought to do fine. For handling full tables, you should be able to consult the corresponding RAM requirements from Cisco or Juniper and go from there; 1 GB of RAM should be enough even with no filtering. My routers are configured with 2 GB, but I'm only taking default routes.
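As a rough sanity check on the RAM figures in this thread, here's a back-of-the-envelope estimate. Both inputs are assumptions for illustration (table size and per-path overhead vary by era and bgpd version), not measured values:

```python
def rib_estimate_bytes(prefixes: int, bytes_per_path: int, feeds: int) -> int:
    """Very rough BGP RIB memory estimate: each feed contributes one
    path per prefix, and each path costs some assumed overhead."""
    return prefixes * bytes_per_path * feeds

# Assumed: ~400k IPv4 prefixes per full table, ~300 bytes per path.
est = rib_estimate_bytes(400_000, 300, 3)
print(f"~{est / 2**20:.0f} MiB for three full feeds")  # ~343 MiB
```

That lands in the same ballpark as the 658 MB whole-system figure quoted in the top answer, so 1-2 GB of RAM leaves comfortable headroom.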

dmourati
  • 24,720
  • 2
  • 40
  • 69