
I just got a dedicated server with Windows Server 2008 Standard Edition and am trying to do the necessary configuration to run my web app on it.

I was wondering: is it a good idea to install an antivirus on the web server? In the app, users can't upload any files except images (and these are checked in the app code to confirm they are images before being saved to the server). I've been encouraged not to install an antivirus so as not to hurt performance or cause any trouble with the app. Will I miss anything by skipping it?

Thanks

Mee
  • You say files are checked for being images in the app code before being saved - this would imply to me that the uploaded file is in some temporary directory before your app gets to see it? In which case it's already on the server *before* you get to check it! Which web server are you using? – Steve Folly Sep 13 '09 at 07:52
  • Actually, the images are checked in memory; they are not saved to any temporary folder. They are only saved to disk if they pass the check in the app. I'm using IIS; it's an ASP.NET app. – Mee Sep 13 '09 at 09:16
  • Besides, even if you save a file in a temporary folder but don't run it, it won't cause any problems. But again, that's not the case here (see the sketch below). – Mee Sep 13 '09 at 09:30
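
For reference, a minimal sketch of the kind of in-memory check described in these comments, assuming System.Drawing is available; the helper name and the accepted formats are illustrative, not the actual app code:

```csharp
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;

static class UploadChecks
{
    // Hypothetical helper: returns true only if the uploaded bytes
    // decode as one of the expected image formats. The decoding
    // happens entirely in memory; nothing is written to disk.
    public static bool LooksLikeImage(byte[] uploadedBytes)
    {
        try
        {
            using (var stream = new MemoryStream(uploadedBytes))
            using (var image = Image.FromStream(stream))
            {
                return image.RawFormat.Equals(ImageFormat.Jpeg)
                    || image.RawFormat.Equals(ImageFormat.Png)
                    || image.RawFormat.Equals(ImageFormat.Gif);
            }
        }
        catch (ArgumentException)
        {
            // Image.FromStream throws ArgumentException when the
            // stream does not contain a valid image.
            return false;
        }
    }
}
```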

4 Answers


A well-run web server should, IMHO, not have a commercial anti-virus (AV) package installed. The kind of Office macro viruses and mass-market trojans that AV packages are optimized for is a poor match for the problems of a web server.

What you should do is:

  1. Absolutely obsess over input validation. Examples: ensuring that users can't upload malicious content to your site (viruses, SQL injection, etc.) and that you're not vulnerable to cross-site scripting attacks. (See the parameterized-query sketch after this list for one concrete instance.)
  2. Keep your server patched with the latest security updates, and configure it according to best practices. Look at things like Microsoft's security toolkit.
  3. Have a separate firewall. It doesn't help you much with regard to intrusions, but it adds another layer of defense against misconfigured network services and helps with simple DoS attacks. It also helps a lot with locking down remote management, etc.
  4. Install a host intrusion detection system (H-IDS) on your server, along the lines of the venerable Tripwire.
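
As one concrete instance of point 1, here is a minimal sketch of a parameterized SQL query in an ASP.NET app; the table, column, helper name, and connection string are all hypothetical:

```csharp
using System.Data.SqlClient;

static class UserLookup
{
    // Hypothetical lookup: the user-supplied name is bound as a
    // parameter, so it is always treated as data, never as SQL.
    public static int? FindUserId(string connectionString, string userName)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "SELECT Id FROM Users WHERE Name = @name", conn))
        {
            cmd.Parameters.AddWithValue("@name", userName);
            conn.Open();
            object result = cmd.ExecuteScalar();
            return result == null ? (int?)null : (int)result;
        }
    }
}
```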

Regarding point 4: there is a lot of confusion about these terms, and the words are often used in different ways. To be clear, what I mean by an H-IDS here is:

  • a service on a computer
  • which continuously check-sums all executable files on the computer
  • and throws an alert whenever an executable file has been added or modified (without authorization).

Actually, a good H-IDS will do a bit more than this, such as monitoring file permissions, Registry access, etc., but the above gets the gist of it; the sketch below shows the core idea.
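
A bare-bones sketch of that core checksum idea, assuming a baseline of known-good hashes was recorded earlier; the method names and the use of SHA-256 are illustrative, not how Tripwire itself works:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Security.Cryptography;

static class ChecksumMonitor
{
    // SHA-256 of a file's contents, as a hex string.
    static string HashFile(string path)
    {
        using (var sha = SHA256.Create())
        using (var stream = File.OpenRead(path))
            return BitConverter.ToString(sha.ComputeHash(stream));
    }

    // Compare every .exe and .dll under 'root' against a previously
    // recorded baseline (path -> hash) and report differences.
    public static void CheckAgainstBaseline(
        string root, IDictionary<string, string> baseline)
    {
        foreach (string path in Directory.GetFiles(
                     root, "*.*", SearchOption.AllDirectories))
        {
            if (!path.EndsWith(".exe", StringComparison.OrdinalIgnoreCase) &&
                !path.EndsWith(".dll", StringComparison.OrdinalIgnoreCase))
                continue;

            string known;
            if (!baseline.TryGetValue(path, out known))
                Console.WriteLine("ALERT: new executable: " + path);
            else if (known != HashFile(path))
                Console.WriteLine("ALERT: modified executable: " + path);
        }
    }
}
```

In practice the baseline itself has to be stored somewhere an intruder can't rewrite it, which is why dedicated tools go to some trouble over their database.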

A host intrusion detection system takes some configuration, since it can raise a lot of false alarms if not set up properly. But once it's up and running, it will catch more intrusions than AV packages. In particular, an H-IDS should detect a one-of-a-kind hacker backdoor, which a commercial AV package probably will not detect.

An H-IDS is also lighter on server load, but that's a secondary benefit; the main benefit is a better detection rate.

Now, if resources are limited and the choice is between a commercial AV package and doing nothing, then I'd install the AV. But know that it isn't ideal.

  • Thanks. I don't really feel encouraged to install an AV, but since this is my first time managing a web server, I was worried that there might be something I'm missing. I think I'd be better off without the AV. I'll just try to be more careful about what I run on the server. – Mee Sep 13 '09 at 09:43

If it's Windows-based, which you said it is, I would. I would also try to find some form of host intrusion detection (a program that monitors/audits files that change on the server and alerts you to the changes).

Just because you aren't changing files on the server doesn't mean that there isn't a buffer overflow or vulnerability that will allow someone else to change files on the server remotely.

When a vulnerability is discovered, there's a window of time between discovery and the fix being distributed, and then another window until you actually get the fix and apply it. In that time there's usually some form of automated exploit available, and script kiddies are running it to expand their botnets.

Note that this also affects AVs: new malware is created, the malware is distributed, a sample goes to your AV company, the AV company analyzes it and releases a new signature, you update the signature, you're supposedly "safe", and the cycle repeats. There's still a window where the malware spreads automatically before you're "inoculated".

Ideally you could just run something that checks for file changes and alerts you, like Tripwire or similar functionality (a bare-bones sketch follows below), and keep the logs on another machine that is somewhat isolated from use, so that if the system is compromised the logs aren't altered. The trouble is that once a file is detected as new or altered you are already infected, and once you're infected or an intruder is in, it's too late to trust that the machine hasn't had other changes. If someone has cracked the system, they could have altered other binaries.
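
A minimal sketch of the "alert on file changes" part, using .NET's FileSystemWatcher; the watched path is hypothetical, and a real setup would forward these alerts to that isolated log machine rather than the console:

```csharp
using System;
using System.IO;

class ChangeAlert
{
    static void Main()
    {
        // Hypothetical path: watch the web root for any change.
        var watcher = new FileSystemWatcher(@"C:\inetpub\wwwroot")
        {
            IncludeSubdirectories = true,
            NotifyFilter = NotifyFilters.FileName
                         | NotifyFilters.LastWrite
                         | NotifyFilters.Size
        };

        watcher.Created += (s, e) =>
            Console.WriteLine("ALERT: created " + e.FullPath);
        watcher.Changed += (s, e) =>
            Console.WriteLine("ALERT: changed " + e.FullPath);
        watcher.Deleted += (s, e) =>
            Console.WriteLine("ALERT: deleted " + e.FullPath);
        watcher.Renamed += (s, e) =>
            Console.WriteLine("ALERT: renamed to " + e.FullPath);

        watcher.EnableRaisingEvents = true;
        Console.WriteLine("Watching; press Enter to stop.");
        Console.ReadLine();
    }
}
```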

Then it becomes a question of: do you trust the checksums, the host intrusion logs, and your own skill at cleaning everything up, including rootkits and alternate data stream files that may be hiding in there? Or do you follow best practice and wipe and restore from backup, since the intrusion logs should at least tell you when it happened?

Any system connected to the Internet running a service can potentially be exploited. If you have a system connected to the Internet but not actually running any services, I'd say you're most likely safe. Web servers do not fall under this category :-)

Bart Silverstrim

It depends. If you are not executing any unknown code, then it may be unnecessary.

If you have a virus-infected file, the file itself is harmless while it sits on the hard drive. It only becomes harmful once you execute it. Do you control everything that gets executed on the server?

A slight variation is the upload of files. They are harmless to your server: if I upload a manipulated image or a trojan-infested .exe, nothing will happen (unless you execute it). However, if other people then download those infected files (or if the manipulated image is used on a page), their PCs might become infected.

If your site allows users to upload anything that is shown to or downloadable by other users, then you might want to either install a virus scanner on the web server or have some sort of "virus-scanning server" in your network that scans every file.

A third option would be to install anti-virus but disable on-access scanning in favor of a scheduled scan during off-peak times.

And to turn this answer completely around: it's usually better to be safe than sorry. If you work on the web server, it's easy to accidentally click a bad file and wreak havoc. Sure, you can connect to it a thousand times over RDP without touching any file, but the 1001st time you will accidentally execute that .exe and regret it, because you can't even know for sure what a virus does (nowadays they download new code from the Internet as well), and you would have to perform some intensive forensics on your whole network.

Michael Stum

Yes, always. Quoting my answer from superuser:

If it's connected to any machines that may be connected to the Internet, then absolutely yes.

There are many options available. While I personally don't like McAfee or Norton, they are out there. There's also AVG, F-Secure, ClamAV (though the Win32 port is no longer active), and I'm sure hundreds more :)

Microsoft has even been working on one - I don't know if it's available yet outside of beta, but it does exist.

And there's ClamWin, mentioned by @J Pablo.

warren
  • Thanks for your answer, but this is not just any Windows server; it's a web server, so performance is of utmost importance here. Note that antivirus software will scan every file that's accessed, i.e., any file requested by a visitor. I need a better reason to install an antivirus and risk performance problems than general security guidelines for computers. – Mee Sep 13 '09 at 09:22
  • It won't scan every file that's accessed if it's configured properly, and there's no reason for it to scan files on access when they're being served; scan just for attacks, etc. – warren Sep 13 '09 at 11:57
  • @unknown: Performance is of utmost importance on the web server? With respect, I would suggest that if you're so low on available overhead that squeezing out a few CPU cycles for a file scan is going to affect the end user's experience over the network, you may have another issue with the way the application is set up. – Bart Silverstrim Sep 13 '09 at 12:29
  • @warren, I know you can exclude any directories you want from on-access scanning, but in that case, what's the point of installing an AV in the first place? If users can still upload files without them being scanned, there's no point in having an AV running on the server. – Mee Sep 13 '09 at 13:49
  • @Bart, with respect, antivirus software takes more than a few CPU cycles per file scan; it slows down your own personal computer, which is used only by you, so what do you expect for a web server accessed by hundreds or thousands of people? – Mee Sep 13 '09 at 13:54