
I'm an FC SAN kind of guy, and although we use NetApp 30xx filers for lots of non-essential storage, I'm really no expert.

I have a new requirement for a very resilient (i.e. three or four nines) basic static HTTP/S web server that will be shipping out largish (2-8 GB) files from a directory/pool of around 16-20 TB.

There will never be more than about 100 clients, and there won't be more than 2-3 Gbps of bandwidth between this filer/web-server and those clients, so outright performance is not really an issue. My question is therefore: how trustworthy/resilient is a NetApp filer when used purely as an HTTP/S server?

Obviously if I ask NetApp they'll say they can give me the moon on a stick but I'm more interested in real sysadmin usage.

Thanks in advance, feel free to 'comment' me any questions.

Chopper3

4 Answers


The idea gives me the heebie-jeebies.

I can see why you might want to, since it's closest to the source of the data. But I'm a fairly firm believer in separating devices by task and choosing the best tool for each.

You know that NetApps are good at providing storage and that Apache/lighttpd/nginx (er, IIS) are good HTTP servers. The processes for diagnosing, tuning and scaling each are pretty clear cut.

I would be concerned about how you'd broach those same issues with what is essentially an embedded HTTP implementation, short of enlisting the help of NetApp Professional Services.

Somebody might come along and say that they're using it fine for just such a task, in which case maybe it would work fine for you too. My gut feeling, though, is not to.

Dan Carley

While I do have several NetApp filers, unfortunately I don't have any real-world experience with using them as HTTP servers.

I certainly agree with Dan Carley that there are good tools available for tuning/scaling, etc., with Apache and other mainstream HTTP daemons. You could then NFS-mount the shared volume of big files onto a load-balanced web server cluster.
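A minimal sketch of that front-end idea, assuming the filer volume is NFS-mounted at a hypothetical /mnt/filer (e.g. via `mount -t nfs filer:/vol/bigfiles /mnt/filer`). In practice you'd run Apache/lighttpd/nginx on each cluster node rather than Python's stdlib server; this just shows the shape of it:

```python
# Hypothetical front end: serve static files from an NFS mount of the
# filer volume. The mount point and port are illustrative assumptions.
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler

NFS_MOUNT = "/mnt/filer"  # assumed NFS mount point of the NetApp volume

# SimpleHTTPRequestHandler serves files relative to `directory`
# (available since Python 3.7).
handler = partial(SimpleHTTPRequestHandler, directory=NFS_MOUNT)

# Port 0 picks a free port for illustration; you'd bind 80/443 behind
# the load balancer in practice.
server = HTTPServer(("0.0.0.0", 0), handler)
# server.serve_forever()  # blocks; run this on each cluster node
```

Each node in the cluster mounts the same volume, so the load balancer can send a request anywhere and the filer does the actual disk work.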

But I also like kmarsh's option #2 of using the NetApp just for serving the big files; that way you get your choice of front-end HTTP daemon, which can serve dynamic pages etc., while the heavy lifting is done by the NetApp.

From the limited playing around I've done with the NetApp HTTP service, it's certainly not very full-featured, though it does seem optimized for simple tasks like this. I'd ask NetApp for some reference customers you can call who are doing what you're talking about doing. I'm sure you're not the first person to want to do this.

Ausmith1

As I see it, you have 3 choices:

  1. The web server is the file server.

  2. The web server redirects large file requests to the file server's web service, which fulfills the request.

  3. The web server passes on data from the file server backend.

Of these options I like the second best; the first is OK; I don't like the third.
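kmarsh's second option can be sketched in a few lines: the front end handles normal requests itself but 302-redirects anything under a large-files path to the filer's own HTTP service. The hostname and path prefix here are assumptions for illustration, not NetApp specifics:

```python
# Sketch of option 2: the front-end web server redirects large-file
# requests to the filer's HTTP service, which fulfills the transfer.
from http.server import BaseHTTPRequestHandler

FILER_URL = "http://filer.example.com"  # assumed filer HTTP endpoint


class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/files/"):
            # Hand the heavy transfer off to the filer.
            self.send_response(302)
            self.send_header("Location", FILER_URL + self.path)
            self.end_headers()
        else:
            # Dynamic/small content would be served here as usual.
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # keep the example quiet
```

The client then fetches the 2-8 GB file directly from the filer, so the front end never touches the bulk data.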

kmarsh

I assume you are talking about using the NetApp HTTP service? I used it as a web server once; never again. Toss up a Linux server of some type and use the NetApp for storage of the site if you like.