
My question is more-or-less a newer version of this 3 year old question: Small Business Network Switches/General network configuration

Summary
Everything is Gigabit and we don't have any real complaints about network performance. My real question is: for the 4 or 5 of us here, is there a next step that makes sense?

Network information
My small business's informal network centers around one central 16-port Dell switch and our file server in a star layout. The internal server is a Debian box running Samba, sharing a RocketRaid hardware RAID6 array, on a UPS with shutdown tested and working. Whenever I do dev work, I use the file server for HTTP or MySQL apps. We have our domain email hosted by Google Apps, which we've been using since it was in beta, and we love it. I like what I read in these two Q&As: guidelines for wiring an office and Best Practices for a Network File Share?.

Current network configuration

It wasn't until I started preparing for this question two days ago that I even realized a person can log in to the Dell switch to manage it. (facepalm)

Oh, and it gets better! I wandered around the business taking pictures of everything with an Ethernet cable on it. Turns out I had some legacy haunting me! 6+ years ago, we had a server with 2 NICs, and my IT-helping friend talked me into putting a DMZ on the 2nd port. The switch for that trick is easily from the early 1990s: I remember getting it from the trash at a company that got bought by AOL in 1997! It's so old you can't Google it. So thanks to my recent server reinstall and reading around Server Fault, I got rid of that sadness. Edit to add: I haven't noticed a difference at work yet, but scp-ing files from work to home (initiated from home) is remarkably faster now with that 10/100 switch gone!

20+ year old technology where it doesn't belong

With preparation and luck we have gotten through a few failures. Right now, I'm comfortable with everything but suspect I'll be replacing the firewall and switch soon...

Questions:
Are there any easy ways we can get performance increases?

Would it make sense to get a switch with a fiber port and then put in a fiber NIC on the server? From the reading I've been doing, it seems LANs are holding steady with Gigabit and nothing really new has happened since then for the small folks.

I haven't even Googled how to log into the Dell switch, so I'm assuming it's unmanaged. I was going to visit the switch's web server, so I checked the DHCP server (on the firewall box), and the switch doesn't show up among the clients. I've only scratched the surface of reading up on all that: should the switch and RAID server be using large packets or something?

Network load is normally pretty mellow until the Office computer is working on our videos. Right now, they cannot be served live from the RAID and are copied back and forth. I do all my CAD from the RAID but it uses a local scratch pad and saves of 40M+ take 10 or so seconds.
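To put a number on that save time, a quick back-of-the-envelope check (using the 40 MB size and ~10 second timing mentioned above) shows the effective throughput is only a small fraction of what gigabit Ethernet can carry:

```shell
# Effective throughput of a ~40 MB save taking ~10 seconds (1 MB = 8 Mbit)
file_mb=40
seconds=10
echo "$(( file_mb * 8 / seconds )) Mbit/s"   # prints "32 Mbit/s"
```

32 Mbit/s on a 1000 Mbit/s link suggests the bottleneck is somewhere other than raw link speed.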

Illustrations were made using Inkscape. I tried a few of the network diagramming tools and it was easier to just draw everything by hand. SVGs available upon request


Edit for Update
I was working in the rack, moved the firewall Acer computer, and its HDD died. It disappeared from the BIOS, so it's probably the controller. Yes, literally, touching the computer to move it from the back shelf to the front shelf killed it. For now, the Buffalo WHR-HP-G54 got reconfigured and pressed into firewall duty until the already-ordered dual-NIC new firewall box shows up. SCPing from home seems a tad slower than with the old firewall and its USB->eth adapter. I Googled and found out that its WAN port is 10/100.

The observations made:
1) The legacy 10/100 link from the firewall Acer to the cable modem is slower than a Gigabit link from firewall to modem.
2) The Buffalo WHR-HP-G54's 10/100 WAN link to the modem is slower than when everything is Gigabit.
3) The TU2-ETG's USB 2.0 link is faster than 10/100.
4) Cox Business cable upload is faster than 100 Mbps.

Once the new switch shows up (with profiling), I'm going to review Evan's and Chris's answers, attempt Evan's suggested tests, and then choose between them for "The Accept".


Final result

Getting to "the answer" of this question has been an amazing three week journey. Thank you both Chris and Evan: it was tough to choose whose answer to accept.

Seeing how inexpensive a better switch is now, I bought an HP ProCurve V1910-24G. It cost less than the lesser Dell did 5 years ago. That it is shiny is only half of why I bought it; the Dell is at least 6 years old, and while it's still performing well, I'm going to have to set a rule about retiring hardware that's more than five years old.

Having said that, the ProCurve has spawned a new question (I would appreciate some thought about the functions of what I mention here) but I'm super happy to have eliminated all of the desktop switches. That sounds like another rule, maybe those go in the gun safe?

Below is the revised drawing. Of note is that I moved the Cox Cable coax "T" and the cable modem now resides in the rack. The two CAT5 cables going to the telco corner are now dedicated to feeding the Cisco VoIP boxes, so the telco corner now only serves telephones. Also, the cabling in the drawing reflects the physical reality of the infrastructure, including switch ports. Most cables are paired up and the newest "drop" I created to get rid of the office switch has the three CAT6 cables going to it.

Current updated network drawing





For the first time ever, the cabling associated with my switch and the firewall/router is something I'm happy with! Bottom left is the cable modem, top right is the pfsense mini-ITX firewall/router:

my rack

The switch is set back some: I actually didn't like it mounted "flush" in the rack, so I fabbed up some adapters to set the switch back about 10" from the front of the rack. The HP/Compaq server cabinet has the additional rails, so I took advantage of those. The cabinet has freedom to roll forward enough to allow access to the back doors. The WiFi AP is resting on top of the cabinet, as is the excess coiled-up network cable.

The yellow Ethernet cabling is plenum-rated CAT6 I bought from StarTech on close-out: $7 for a 75-foot crossover cable. That was such a bargain I bought a couple dozen, and I'm very good at fitting jacks (plus I have the T568B wire color order memorized).

This setup is noticeably faster than before! When I ssh -X from home, running a browser window from the server at work feels about 3x as fast as when I'd log in and surf the web from within the LAN (and far faster than I remember 14.4k modems being). At work, opening files is as fast as if the drive weren't networked. If I already have Photoshop CS6 running, opening a 6 MB JPEG from the RAID is instantaneous.

Additionally, I realized the cable from the RAID to the switch was one of those CAT5 cables that come with wireless routers, etc., so I replaced it with a 2' CAT6 cable and could tell a before/after performance boost with my Photoshop experiment. Now it's all CAT6 from cable modem to firewall to switch to server. My desk has CAT5 from the switch for now, but I'll upgrade whenever I open up a wall.

Once I'm settled in and caught up with regular work, I'll try my hand at benchmarking the network's performance. Right now, I'm pretty sure it can't get much better than applying the best-practice advice of getting rid of extra switches and unnecessary hardware. The hardware RAID controller is 6+ years old, so getting a new one is on the horizon. Once that happens, this one will fall back to archival duty.

Krista K
  • I'd like to say this is the most insanely complex network configuration I've EVER seen. I'm sure that there are perfectly good historical reasons for your setup but redoing the wiring (which may also ensure that you are less likely to have problems with it), and flattening down the network topology might be a great idea. – Journeyman Geek Jan 06 '14 at 13:03
  • @JourneymanGeek yeah, right? All cascaded switches (save for the VoIP boxes) are out now. The diagram is a hybrid function & location map, so admittedly chaotic. Re-cabling options are somewhat limited at this point but things are getting sorted. The office computer's switch was actually hanging off the upstairs switch, too. :P Was a temporary drop to see if the new desk location worked & easiest to literally drop it through the warehouse. Now there are 3 CAT6 cables neatly running there from the rack. I'm updating diagram as I work & will put in my answer/report. – Krista K Jan 06 '14 at 22:15
  • Thanks for stopping back and putting an update on this question. I'm glad to hear that you're seeing perceptible improvements in performance, too. – Evan Anderson Jan 22 '14 at 22:26

3 Answers


Are there any easy ways we can get performance increases?

You need to employ a systematic process of identifying bottlenecks and eliminating them. You can shovel money into new gear, services, etc, but if you're not being methodical about it there's really no point. I'll make some specific recommendations at the end of my answer for some things you should look at.

Would it make sense to get a switch with a fiber port and then put in a fiber NIC on the server?

Nope. Your fiber-based Ethernet media choices are gigabit and 10 gigabit. Gigabit fiber and gigabit copper are the same speed, so there's no "win" to using fiber for gigabit speed (though, as @ChrisS says, fiber does excel in some specific use cases). You don't have a server in your office that can even begin to saturate 10 gigabit fiber, so there's no "win" with 10 gigabit either.

I haven't even Google'd how to log into the Dell switch, so I'm assuming it's unmanaged. I was going to visit the switch for a webserver so I checked the DHCP server (on firewall box) and the switch doesn't show up among the clients. I've only scratched the surface of reading up on all that: should the switch and RAID server be using large packets or something?

The PowerConnect 2716 is a low-end "web managed" switch when it's set up in "Managed" mode (which, by default, it isn't, but it sounds like you've figured out you can enable web management). You can get a manual from Dell for that switch that explains the management functionality. They aren't great performers. I've got a couple of them in little "backwater" places, and my experience has been that they won't even do wire-speed gigabit switching.

When you say "large packets" I believe you're referring to jumbo frames. You have no reason to use jumbo frames. Generally you'll only see jumbo frames in use in very specialized, isolated networks-- like between iSCSI targets and initiators (SANs and the servers that "connect" to them). You're not going to see any marked improvement in general file/print sharing performance on your LAN using jumbo frames. You'd likely have headaches and performance problems, actually, because all the devices would need to be configured for jumbo frame support-- and I would suspect that you have at least one device that doesn't have support (just based on the wide variety of gear you have).
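For what it's worth, if you ever did want to check whether a path between two hosts actually handles jumbo frames, one common sketch is a "don't fragment" ping sized to the jumbo MTU (the host name here is a placeholder, and the `-M do` flag assumes Linux ping):

```shell
# Probe for end-to-end 9000-byte MTU support.
# -M do  : set Don't Fragment, so an undersized hop errors out instead of fragmenting
# -s 8972: 9000-byte MTU minus 20-byte IP header and 8-byte ICMP header
ping -c 3 -M do -s 8972 fileserver.example
```

If any device in the path is stuck at a 1500-byte MTU, the ping fails with a "message too long" error instead of silently fragmenting, which is exactly the mixed-support headache described above.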


Here are some things I'd look at doing if I wanted to isolate bottlenecks:

  • Enable web management on the PowerConnect 2716 switch so that you can see error and traffic counters. This switch doesn't have SNMP-based management so you're not going to get any fancy traffic graphing, but you'll at least be able to see if you're having errors.

  • Benchmark the server performance w/ a single client computer connected directly to the server's NIC (for which you should be able to use a regular straight-through patch cable, assuming the client computer you're using has a gigabit NIC). That will give you a feeling for the server's maximum possible I/O throughput with a real file sharing workload. (If I had to hazard a guess I'd bet that the server's I/O to/from the disks is your biggest bottleneck.)

  • Use a tool like iperf (ttcp, etc) to get a feeling for the network bandwidth available between various places in the network.
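The iperf test above can be sketched with the modern iperf3 like this (the host name is a placeholder; run the server end on the file server first):

```shell
# On the file server: start iperf3 listening (default TCP port 5201)
iperf3 -s

# On a client machine: run a 10-second TCP throughput test toward the server
iperf3 -c fileserver.example -t 10

# Same test in reverse (-R): the server sends, the client receives
iperf3 -c fileserver.example -t 10 -R
```

Running the test from different machines and ports helps separate a slow switch port or cable from a slow server.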

The best single thing you can change, from a reliability perspective, is to eliminate all the little Ethernet switches and home-run all the cabling back to a single core switch. In a network as small as the one you've diagrammed there's no reason to have more than a single Ethernet switch (assuming all the nodes are within 100 meters of a single point).

Evan Anderson
  • Thanks very much for answering; your time is greatly appreciated. I saw your answer on my 1st reference then basically stalked your other answers from there. fyi, iperf is flagged as deleted on sourceforge. I also updated the question to say raid is a hardware raid on PCIe (I wasn't clear about that). Yes, nodes all close... cable runs are "scary" at best; wish I would have run 3 or 4 cables everywhere with 3 extra feet at each end. :( – Krista K Jan 01 '14 at 06:27
  • @ChrisK - I dropped on a minor edit to include an updated link for iperf. There's never enough cable... >smile< I'm advising most people to run at least 3 cables to every user workspace for a computer, VoIP phone, and printer (*sigh*) or other future Ethernet device. Lots of little switches is a recipe for unreliability (when users inevitably kick out the power brick on a switch) and out-right disaster when one of the switches goes crazy and starts blasting wire-speed garbage out all its ports (which I've seen happen several times over the years). – Evan Anderson Jan 01 '14 at 06:33
  • Thanks, saw that edit. Yeah, thankfully, I had 2 printers die recently. I'll work on another run for the office computer's printer and the Bookkeeper box. Both of those places have drywall on one side; I just need a ladder. :D – Krista K Jan 01 '14 at 06:36
  • Found iperf as part of jperf on google code: https://code.google.com/p/xjperf/downloads/detail?name=jperf-2.0.2.zip IPERF3 has a project with source code: https://code.google.com/p/iperf/ – Krista K Jan 02 '14 at 01:49

Are there any easy ways we can get performance increases?

Without having any specific pain points you could throw money at everything and get better all around performance, but it'd be little more than a waste of money.

Would it make sense to get a switch with a fiber port and then put in a fiber NIC on the server?

Nope. Fiber is excellent for three things:

  1. Long runs, i.e., over 300 ft.
  2. Dielectric isolation, usually desirable in electrically "noisy" environments and between environments that don't share a common ground (usually between buildings on a campus).
  3. Leading-edge connectivity solutions. 10GbE fiber appeared long before 10GBase-T. There's 40GbE and 100GbE fiber out already, with nothing available for twisted-pair copper (as of 2013).

Gigabit reigns as king of small and medium networks. 10GbE is just gaining real popularity in core networks, but not at the access level. If you're bottlenecked at some point, aggregating links is likely plenty to solve the issue and much cheaper than trying to wedge 10GbE into your setup.
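As a sketch of what link aggregation looks like on the Linux side (assuming a Debian server with two NICs named eth0/eth1, an LACP-capable switch, and a placeholder address — adapt all of these to your network):

```shell
# Bond two NICs into an 802.3ad (LACP) aggregate using iproute2.
# Interface names and the IP address are placeholders.
ip link add bond0 type bond mode 802.3ad
ip link set eth0 down
ip link set eth0 master bond0
ip link set eth1 down
ip link set eth1 master bond0
ip link set bond0 up
ip addr add 192.168.1.2/24 dev bond0
```

The switch-side ports have to be configured as an LACP trunk as well, and note that a single TCP flow still rides one member link; aggregation helps with many concurrent clients, not one big transfer.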

should the switch and RAID server be using large packets or something?

Nope. Jumbo packets are great for storage area networks, high-performance computing, and the like, but have little use in small or access-level networking. With your relatively simple, small network, I'd leave the MTU at 1500 all around. Changing to jumbo can introduce problems if not done carefully.

saves of 40M+ take 10 or so seconds.

40 MB / 10 s => 32 Mbit/s; that's pretty weak if you're running 1000 Mb Ethernet with a reasonably modern server. You likely have a bottleneck somewhere, but without much more information I couldn't say where or start to offer a solution.


Managed switches are good for a variety of things, most commonly setting up VLANs, security (possibly 802.1x), reviewing bottlenecks and statistics, link aggregation, and a variety of other advanced functionality. However, I'm not sure your setup needs any of that, and managed switches tend to cost a pretty penny. (On that note I always recommend "budget mindful" businesses consider buying used ProCurve gear. It comes with a transferable lifetime warranty and you can find a large selection on eBay. And there are official resellers on there too, just add "ReNew" to your search).

Chris S
  • Thanks for your insights! And also for not issuing me a bass-ackwards award. :) I wasn't clear in the question, but raid is PCIe hardware card. + thx for sharing the knowledge of 32Mbps being slowish; what's a number I should hope for? Yours and Evan's answers will definitely get us faster. – Krista K Jan 01 '14 at 06:32
  • Whoa, PRoCurve V1910-24G just got bought. 2 factors led to this: able to save configuration and connectivity (has dhcp, snmp, & "console"). I expect it's ability/speed to help me find bottleneck will earn its $233. :D – Krista K Jan 01 '14 at 08:17
  • Around 80 megabytes per second would be acceptable storage performance - ON THE BACKEND (i.e. server side - obviously with 1G you are limited on the file server to 1G also). Likely you run into disk contention there - not a network issue - if you only get 32 megabit. Your file transfer should go in less than a second. – TomTom Jan 01 '14 at 08:58

I personally would benchmark the speed you're getting from the TU2-ETG. It will probably not be a bottleneck to your service provider, but since you say you got faster speeds after removing the 100 Mbps switch, this might be the next part to replace.

The TU2-ETG claims to have a Gigabit link, but the other side of that link is USB 2.0, which has a maximum signaling rate of 480 Mbit/s (effective throughput up to about 35 MB/s, or 280 Mbit/s). I for one have never been able to get speeds higher than 20 MB/s over a USB connection.
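The arithmetic behind that ceiling, using the signaling and effective-throughput figures cited above:

```shell
# USB 2.0 signals at 480 Mbit/s, but protocol overhead caps real
# throughput near 280 Mbit/s -- well under gigabit's 1000 Mbit/s line rate.
usb_effective_mbit=280
echo "$(( usb_effective_mbit / 8 )) MB/s"              # prints "35 MB/s"
echo "$(( usb_effective_mbit * 100 / 1000 ))% of gigabit"  # prints "28% of gigabit"
```

So even in the best case, a USB 2.0 adapter delivers barely more than a quarter of a true gigabit NIC.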

I could not find any real benchmarks for this device, so try a benchmark or two and see what it gives you. This is probably only relevant if your internet uplink is faster than 100 Mbps, but then again, I see your provider offers 150 Mbps packages.

So be aware this is definitely not a gigabit network card.

Jens Timmerman