
I've read a half-dozen threads here about the pros and cons of hosting in-house, but our situation is a little different than most, so I figured I'd just open a new question.

In short, we're a small software company in the northeast U.S. (not Boston or N.Y., a bit in the hills), with an existing product line. For various reasons related to future development work, we need to have servers in-house one way or the other -- right now, we have a couple of 1U Suns (very nice X2100s that are holding up like rocks, BTW) in a quarter-height rack.

We've been hosting our sites elsewhere for years, but now that we've got a pile of hardware in-house that won't be going anywhere, I'm thinking it might be worthwhile to just do all our public-facing hosting in house as well. My reasoning, in short:

  • the hardware is essentially a sunk cost anyway
  • we're already doing whatever admin work is necessary (though, in my experience, you need to pay through the nose for truly managed hosting, as opposed to just having access to a box, which seems to be what passes for 'managed' in the biz)
  • our problems will be our own, more so than in any hosting situation (i.e., I've experienced far more downtime because some fool kicked over a router or whatever at a hosting provider than I have due to random admin-level issues on our own boxes)
  • we push a lot of large files around, and not having to wait for an upload to our hosted servers sounds very appealing

Obvious cons include:

  • Power. We've got appropriate UPS, but no redundancy.
  • Bandwidth. Right now we have 16d/2u (16 Mbps down / 2 Mbps up) through Comcast. If we move our main site over, we'll need to at least double that, which might require bonding 2+ cable lines; see the rough numbers after this list.
  • A/C. I don't think this is a real issue -- I don't expect that we'll ever have more than 10 servers in here (if we get larger than that, the economics of this decision change a lot).
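For a sense of scale, here's the back-of-the-envelope math on that 2 Mbps upstream (the 100 MB file size below is just an illustrative assumption):

```python
# Rough sketch: how long one visitor waits to pull a large file over a
# 2 Mbit/s upstream link. The 100 MB file size is a made-up example.
upload_mbit_per_s = 2          # current Comcast upload cap
file_size_mb = 100             # hypothetical large download
seconds = (file_size_mb * 8) / upload_mbit_per_s
print(f"{seconds / 60:.1f} minutes for one client at full line rate")
# -> ~6.7 minutes, and that's with nobody else sharing the pipe
```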

Thoughts?

Thanks!


Update: After vacillating some, we've decided to keep hosting offsite. Coincidentally, there was a power outage on our block today, which sorta tipped the scales psychologically (hardly a rigorous process, but losing about 3 hours was enough for me to take the cue that the universe was trying to tell me something ;-).

Not sure what vendor we'll use going forward, but I appreciated the tip-off to the Rackspace Cloud as an alternative to EC2, etc.

cemerick
  • Are you going to have paying customers pay for your in-house hosting services? If so, look into redundant ISPs, power, security, and cooling. If you are not worried about cooling, start worrying. It's very important to keep your machines cool. – xeon Oct 01 '09 at 19:55
  • Keeping them cool yes, but < 10 servers in a 1/4 height rack? Putting them in a filing room or something should be fine, they're not going to generate thousands of BTUs and the heat should dissipate fine in a large enough room. – Mark Henderson Oct 01 '09 at 21:24
  • It's as-yet unclear where the paying customers will be taken care of. And no, we don't have 10 servers in the 1/4-height rack -- that's just the max I can imagine having in our current space. – cemerick Oct 01 '09 at 23:53
  • @xeon: according to Intel and others, most data centers are overcooled by quite a bit. Yes, it's important to keep them "cool", i.e., less than 80 degrees, but most people go much further than that. See http://www.theregister.co.uk/2009/08/31/data_centers_run_too_cool/ – NotMe Nov 12 '09 at 03:44

9 Answers


I just got done moving our public-facing servers to Rackspace Cloud Servers. About a year ago I did what you're thinking about doing, because I wanted complete control over my servers and was a little sour on leasing cheap servers (the typical $100/month "server" that's just a PC).

I finally gave up on the in-house hosting gig because:

  • Local bandwidth is either very expensive, or not high quality, or both, and good-quality routers are also very expensive. So the times when a customer was downloading some huge file while Google, Yahoo, etc. were crawling us were not real nice.
  • Having hardware in-house means that I have to babysit it. I've already got our local servers to worry about; adding more didn't help me out much.
  • Power outages, some jerkwad with a shovel digging in the wrong spot, etc. are suddenly a problem. Having local servers go offline is something we always have to worry about, but why should our public servers go offline with them? There have been times when we've had our lines go down for >24 hours. For in-house workers this is manageable, but having our public sites go away for that long can be bad news.
  • Disaster recovery planning is more complicated. What do you do if the server goes up in smoke? Do you have another one? How fast can you bring it or some other machine online? If your connection goes down and your phone/cable company says it'll be up "in a day or two", can you wait it out or can you throw your sites up online somewhere quickly? How do you get them there if all of your connections are down? I know these sorts of issues have to be addressed no matter what, but what kind of resources do you have available locally?

The one thing I did like, as you also mentioned, was being able to shoot files up to the web servers very quickly (we do a lot of WebDAV). But the way I figure it, better for us to have to wait on uploads than for our customers to have to wait on downloads.

Anyhow, the Rackspace Cloud Server solution addressed just about all of the concerns I've always had with leasing dedicated machines or signing up with a VPS provider (including cost), and it offers a lot of those little tricks that real virtualization promises. But I won't advertise for them; an alternative might be Amazon EC2.

Long story longer, I sleep better knowing that I'm not going to get a call because our webserver is on fire or that the power company dug up the DSL lines again. I'll let some other chump be responsible for that... in a place where they've actually got the resources at hand to handle these things.

So my suggestion is to keep your public stuff in a good datacenter of some sort. Use those extra servers for in-house tasks. There's usually something you can use them for... testing, special projects, backup, etc.

Boden

Bandwidth. Right now we have 16d/2u through Comcast. [...] require bonding 2+ cable lines.

I don't know if your ISP has any special tricks up its sleeve, but in general you cannot bond or merge multiple consumer-type lines. Your lines would terminate in different IP addresses at your premises, and you can't "bond" IP addresses. You could put half your servers on line 1 and the other half on line 2 -- but that has obvious drawbacks in case of a failure.

Next up, consumer-class lines generally have higher latency (Round Trip Time), and this impacts TCP/IP performance. For large file downloads or very simple webpages with a minimum of objects it's probably not a big deal. For VoIP, more interactive or more performance-critical uses it could be significant.
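
To put a rough number on the latency point: a single TCP connection tops out at roughly window size divided by round-trip time. A quick sketch (the 64 KB window and the two RTT values are illustrative assumptions, not measurements of your lines):

```python
# Per-connection TCP throughput ceiling is roughly window_size / RTT.
# 64 KB is a common window without window scaling; RTTs are just examples.
window_bytes = 64 * 1024
for rtt_ms in (20, 100):
    mbit_per_s = (window_bytes * 8) / (rtt_ms / 1000) / 1e6
    print(f"RTT {rtt_ms:3d} ms -> ceiling ~{mbit_per_s:.0f} Mbit/s per connection")
# 20 ms -> ~26 Mbit/s; 100 ms -> ~5 Mbit/s
```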

And lastly, there are benefits to having a "multi-homed network", i.e. an Autonomous System with multiple transit (carrier) providers. You'll enjoy better route diversity, and have a better chance of all your customers being able to resolve a route to you at any given time.

In general, my recommendation would be to co-locate your old servers at a friendly datacenter not too far away. You will still benefit from your existing hardware, and your servers will have nice, low-latency, professional tubes to the Internet.


Obviously, do what you think you need to do, but my opinion would be a resounding "no way". As a small company, you need to focus on more important things than managing servers - do what you do best, develop and sell software - and let someone else with a lot more experience take care of mundane tasks like installing OS patches, backing up servers, and battling DoS attacks.

You will never be able to come close to the power redundancy, A/C capacity, bandwidth capacity, and know-how that a big company like Rackspace can provide you 24 hours a day, 7 days a week, 365 days a year for a few hundred a month.

I ran my own servers for years - moved them all over to the Rackspace cloud - and have never looked back. Now I develop software, and someone else takes care of the infrastructure.

I have to admit I liked the thought of having the servers sitting here right next to me, but the reality is they didn't need to be.

EJB
  • OK, but what hosting provider actually does the admin work for you at a reasonable rate? I was with Rackspace years ago, and left when they told me that I'd have to bump up to a $600/mo plan when I had been paying $200/mo, and that was for zero included service and a relatively bare-bones setup (back when we had just one server). Maybe things have changed... – cemerick Oct 02 '09 at 00:06

Without knowing the usage patterns of your servers, it's hard to say.

However, IMO, the best (maybe only) reason to have servers in a datacenter is bandwidth. If you think you can really get by with 2-4 Mbps upload and you're confident of your ISP's uptime, you should be able to handle any other issues.

Does your current datacenter provide you with bandwidth usage data? I'd take a long hard look at that before deciding to move. Also set up some in-depth monitoring of your current internet circuit at work and see if you're getting the uptime you'll need.
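
If you don't already have that monitoring, even a trivial poller gives you real numbers to base the decision on. A minimal sketch, with assumed target URLs and a one-minute interval (adjust to taste and run it from the office for a few weeks):

```python
# Minimal circuit monitor: poll a couple of well-known URLs once a minute
# and log failures, so you have actual uptime data before deciding.
import datetime
import time
import urllib.request

TARGETS = ["https://www.google.com", "https://www.example.com"]  # placeholders

def reachable(url, timeout=10):
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except Exception:
        return False

while True:
    for url in TARGETS:
        if not reachable(url):
            stamp = datetime.datetime.now().isoformat(timespec="seconds")
            print(f"{stamp} FAILED {url}", flush=True)
    time.sleep(60)
```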

wfaulk

Just chiming in with another note on cooling. If you're renting, check ahead of time to see if the landlord likes to cut the A/C over weekends and holidays. The moderate amount of heat those boxes normally put out becomes a big problem when it's 90°F outside and 98 inside... just sayin'.

Kara Marfia
  • The A/C situation is good here. Other firms have much larger installations than we're contemplating, and the building has happily gone along with special build-outs to support that. – cemerick Oct 01 '09 at 23:57

Sounds like you've thought this through. Go for it.

A couple of comments, which you may or may not have already considered...

  1. You didn't go into much detail about your existing setup and the relationship you have with your hosting provider. I'll assume that you are already responsible for server-level issues. If you're not, then consider that you're going to have to respond to failures in the middle of the night. You'll also need adequate monitoring of the new responsibilities you're taking on, such as environmental monitoring.
  2. You mention that you've got cable Internet through Comcast. Is that going to suffice for hosting your production websites? Bandwidth is one issue, but what about support and reliability? Will they allow bonding two connections into one, or are you going to try to use 2 separate connections with fancy round-robin routing? And your IPs are likely in a block designated for cable Internet, which might be blacklisted by other mail servers (there's a quick way to check; see the snippet after this list).
  3. The hardware may be a sunk cost now, but what if it starts dying and you have to replace it? What if your capacity requirements increase and the X2100s aren't up to the task? Would that change the cost/benefit ratio significantly? For what it's worth, I have had lots of trouble with X2100 servers. 4 out of the 8 I've owned now have failed SATA controllers. :(
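
On the blacklist point in item 2: it's easy to check whether a given IP is already on a DNS blacklist before committing. A hedged sketch using Spamhaus ZEN as one example list (the IP below is a documentation placeholder, not a real address):

```python
# DNSBL check: reverse the octets, append the blacklist zone, and look it
# up. Any A record back means "listed"; NXDOMAIN means "not listed".
import socket

def is_listed(ip, dnsbl="zen.spamhaus.org"):
    query = ".".join(reversed(ip.split("."))) + "." + dnsbl
    try:
        socket.gethostbyname(query)
        return True
    except socket.gaierror:
        return False

print(is_listed("203.0.113.45"))  # placeholder IP from the TEST-NET-3 range
```
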
lukecyca

Just based on what you've said, I wouldn't do it.

  • Network: Asymmetric network connections are not really suitable for hosting, especially if the rest of the office will be competing with the hosted servers for bandwidth. Your performance locally will be really good, but your customers' will not. I'm assuming you have some kind of DSL or cable, since the connection is asymmetrical, and DSL networks are rarely reliable enough to do hosting. A hosting facility should have access to much faster networks, and can usually scale up or down without having to have someone drag a fiber into your premises.
  • A/C: temperature and environment control need to be sorted. While ten computers probably won't get too hot, you still need to think about it.
  • Noise: I have a stack of X2200 M2 servers, and every time I go into my server room I'm glad they are in there and not out here with me.
  • Power: power can be expensive to do right.
David Mackintosh

If you need it, you need it, and that's all there is to it.

Admin work is going to be the big scary: sure, you lose time now because some idiot at the co-lo spilled his coffee on a server, but when you bring it in house and it's your coffee, the problem goes far beyond calling your hosting provider and demanding that they get their butts in gear. What kind of hardware support are you looking into? It can be very expensive, depending on your needs.

Redundant pipe is nice, but the premium is high. We use two sets of bonded T2's and actually had a "moron with a backhoe" incident earlier this year. We stayed up, but it seriously impacted our performance.

I'd also add server hardening, firewall hardware, patching and patch testing, monitoring... all of these things take a lot of time.

As an admin, I'd suggest a slow migration from remote to local, to give yourself plenty of time to make sure everything works right (and to back out, if it turns out to be ugly), but as an experienced admin, I know that the likelihood of being allowed to double your costs for a transitional period is very low.

Good luck, either way.

Satanicpuppy

Well, I've got to chime in here too...

  • Yes, you can't bond two consumer-level connections (I have Optimum Business; it's just repackaged consumer). But a number of cable ISPs are offering FAST connections, or you could do HTTP load balancing between the connections.
  • I've been hosting my personal site, blog, personal projects, and demo site (some freelance programming) in my home on Optimum Business for about 3 years now. I don't keep exact downtime statistics, but I think it's been about a single 22-hour outage (tree down on all lines) plus maybe 40 minutes of unscheduled downtime. 45 minutes of UPS capacity, a good router (actually a ProLiant running a software router), and Cisco switches. You can get an amazing amount of uptime just from monitoring well, paying attention to the hardware, and keeping things simple.
Jason Antman