19

I hear that Linux-based systems are better for security. Apparently they don't have viruses and do not need antivirus software. Even my university claims this - they refuse to have Windows on their servers, which is a real shame because we wanted to use the .NET framework to create some websites.

The only reason I can see Linux being safer is because it's open-source, so bugs theoretically would get caught and fixed sooner.

I know a bit about how operating systems work, but haven't really delved into how Linux and Windows implement their OS. Can someone explain the difference that makes Linux-based systems more secure?

Eddie
echoblaze
    I'm not exactly answering your question, but I do want to defend your school's choice a little bit. My school operates both a Windows system and a Linux system which (try to) share a common file system. In practice this can be expensive, because the Windows and Unix domains on the network really don't get along, sadly. Given that Windows users need to reach for some open-source component more often than the reverse (sorry about .NET), it is a respectable choice that they only support Linux on the core foundational hardware like servers. Linux supports most crucial services today – Notmyfault Jul 03 '09 at 18:56
  • thanks for your response - and to the other responders too, it definitely helped clear things up for me. For the record, I was more sceptical than angry at my university's claim. – echoblaze Jul 03 '09 at 19:08

22 Answers

55

I don't think an operating system is "secure". A particular configuration of an operating system has a particular degree of resistance to attacks.

I'm probably going to get flamed for being a "Microsoft apologist" here, but this thread is very slanted toward generalizations about "Windows" that aren't true.

Windows 1.0 - 3.11, 95, 98, and ME are based on DOS. This lineage of operating systems didn't have any security in the formal sense (protected address spaces, kernel / user mode separation, etc). Fortunately, when we're talking about "Windows" today we're not talking about these operating systems.

The Windows NT family of operating systems (Windows NT 3.1, 3.5, 3.51, 4.0, 2000, XP, 2003, Vista, 2008, and 7) has had a very reasonable security system "designed in" since the initial release in 1993. The OS was designed with the TCSEC "Orange Book" in mind and, while not perfect, I do think it is reasonably well designed and implemented.

  • Windows NT was "multi-user" from the beginning (though the functionality of multiple users receiving a graphical user interface simultaneously from the same server didn't happen until Citrix WinFrame in the Windows NT 3.51 era). There is a kernel / user mode separation, with address space protection relying on the underlying hardware functions of the MMU and CPU. (I'd say that it's very "Unix-y", but actually it's very "VMS-y".)

  • The filesystem permission model in NTFS is quite "rich" and, though it has some warts relative to "inheritance" (or the lack thereof-- see How to workaround the NTFS Move/Copy design flaw?), it hasn't been until the last 10 years or so that Unix-style operating systems have implemented similar functionality. (Novell NetWare beat Microsoft to the punch on this one, though I think MULTICS had both of them beat... >smile<)

  • The service control manager, including the permission system to control access to start/stop/pause service programs, is very well designed, and is much more robust in design than the various "init.d" script "architectures" (more like "gentleman's agreements") in many Linux distros.

  • The executive object manager (see http://en.wikipedia.org/wiki/Object_Manager_(Windows)), which is loosely analogous to the /proc filesystem and the /dev filesystem combined, has an ACL model that is similar to the filesystem's and much, much richer than any permission model that I'm aware of for /proc or /dev on any Linux distro.

  • While we could debate the merits and disadvantages of the registry, the permission model for keys in the registry is far more granular than the model of setting permissions on files in the /etc directory. (I particularly like Rob Short's comments re: the registry in his "Behind the Code" interview: http://channel9.msdn.com/shows/Behind+The+Code/Rob-Short-Operating-System-Evolution Rob was one of the main people behind the Windows registry initially, and I think it's safe to say that he's not necessarily happy w/ how things turned out.)

Linux itself is just a kernel, whereas Windows is more analogous to a Linux distribution. You're comparing apples and oranges when you compare them like that. I would agree that Windows is more difficult to "strip down" than some Linux-based systems. Some Linux distributions, on the other hand, ship with a lot of "crap" turned on, too. With the advent of the various "embedded" flavors of Windows, it is possible (albeit not for the general public) to build "distributions" of Windows that differ in their behaviour from the Microsoft defaults (excluding various services, changing default permissions, etc).

The various versions of Windows have had their share of poorly-chosen defaults, bugs that allowed unauthorized users to gain privilege, denial of service attacks, etc. Unix kernels (and plenty of Unix-based applications running by default as root) have had the same problems. Microsoft has done an amazing job, since Windows 2000, of making it easier to compartmentalize applications, run programs with least-privilege, and remove unneeded features of the OS.

In short, I guess what I'm saying is that the specific configuration of a given operating system for your needs, with respect to security, matters more than what type of operating system you are using. Windows and Linux distributions have very similar capabilities with respect to security features. You can apply solid security techniques (least-privilege, limited installation of optional components, cryptographically secure authentication mechanisms, etc) in either OS. Whether you actually do or not-- that's what matters.

Evan Anderson
  • for someone like myself who has no idea how windows and linux systems were built, your post was incredibly informative – echoblaze Jul 03 '09 at 19:26
  • Agreed. Good points. – Kyle Jul 03 '09 at 19:48
    +1 - I'm a Linux user at home and a mainly Windows security professional at work. The configuration matters much more than the OS on its own, and you definitely need to compare Linux distributions to Windows, not just 'Linux' a la the kernel. – romandas Jul 03 '09 at 20:54
  • One thing that really hurt the Windows world for a long time (even if it is now mostly history) is that for a very long time you had to be a local administrator to do a lot of things, whereas in the *nix world you would simply be a sudoer on that machine. The problem obviously was that anything run by a local admin on most machines could do anything with the machine. It would have been an equal threat to Linux/Unix if it hadn't always been well-known practice not to be root but to do sudo/su when needed. I guess it wasn't really a Windows problem but a software one, and with UAC it is mostly fixed. – Fredrik Aug 09 '09 at 20:55
  • Windows was only multi-user in that it supported allowing different people to use the same workstation. It was definitely NOT multiuser in the sense that UNIX systems were, where people were logged into the same machine at the same time from terminals. Windows was also made backwards compatible after the popularity of Windows 3.x took off; decisions made to allow compatibility hinder security efforts in NT. – Bart Silverstrim Aug 10 '09 at 00:00
  • There is a paper that helps illustrate some of the defects in windows security if you Google for something along the lines of "shatter windows api". One site includes http://www.net-security.org/article.php?id=162 . – Bart Silverstrim Aug 10 '09 at 00:02
  • @Bart: re: multi-user Windows - I believe that's essentially what I said already. Win32 wasn't designed to be multi-user, but NT most certainly was. I think that Microsoft's decision to keep the native mode NT API undocumented has been bad from the perspective that people wrongly see Win32 and NT as the same thing. Win32 is a "bag on the side of" NT. The Interix product (which is now part of "Services for Unix") is a good example of the kind of Unix that's hiding inside NT, trying to get out. – Evan Anderson Aug 10 '09 at 03:12
  • Windows also appears to suffer from Microsoft's bug release schedule. Most of the linuxes will release bug fixes as soon as they're ready. MSFT seem to wait till the next patch Tuesday. – Cian Aug 31 '09 at 03:51
  • A major drawback with MS Windows from a security viewpoint is that everything which could be done within the executive **is** done within the executive. Yes, NT is multi-user - but for non-Enterprise systems, until very recently, it required extra effort to set up a user **without** admin rights - and even now the privilege separation is typically divided only into two parts - the admin and the user - not compartmentalised (user, mail, device management, network...). Not to mention all the problems around OLE and macros in MS Office. – symcbean Jun 25 '10 at 13:30
16

One other thing that's not mentioned is that security in Windows is much more opaque than in Linux.

For example, I can look at a couple of text files and see exactly what my web server is running. IIS? Not so much - you can see the results of the configuration through the GUI tool, but there are hidden settings. Then you have to use a different set of tools to review the ACLs on the files, etc.

It's the same with most programs in the Windows world - it's very difficult to quickly understand exactly what's affecting the run-time environment, between the registry and the ACLs.
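To make that contrast concrete, here is a minimal sketch of the "configuration is just text" point. The file name and directives are made up (in the style of Apache's ports.conf), and everything is written to a temp directory so the example is self-contained:

```shell
#!/bin/sh
# Sketch: a plain-text config answers questions with one grep.
# The paths and directives below are hypothetical.
set -eu
tmp=$(mktemp -d)

# Stand-in for something like /etc/apache2/ports.conf
cat > "$tmp/ports.conf" <<'EOF'
# hypothetical web server config
Listen 80
Listen 443
EOF

# One command answers "what ports is the web server configured to use?"
grep '^Listen' "$tmp/ports.conf"

rm -rf "$tmp"
```

There is no special tool to learn: the same grep works on any service whose configuration lives in text files.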

chris
11

Don't know about that file permissions comparison... when I was a UNIX/Linux admin, NT4 had file ACLs that were much more granular than UNIX/Linux traditional '777'-style permissions. Permissions aren't everything, of course, and I'm sure that modern Linux distributions at least make fine-grained ACLs available, even if they're not implemented by default. In my view, the sudo and root concepts have always been around in UNIX, though Windows has been adding these concepts steadily and is probably now on par.

My own interpretation is that since the Linux kernel code, and many of its drivers and utilities, are open, it has likely been reviewed far more extensively, and is fixed far more frequently, for coding mistakes that can lead to remote vulnerabilities that a hacker can exploit. The theory goes, in my head anyway, that since Linux is not owned by a corporation, it can pursue the security goal more fully than a corporation can. Businesses must make money, while open-source groups simply don't have this restriction.

It's much easier to go into a Linux system and simply shut down the entire windowing system, RPC daemons, and so on - you can get a Linux- or BSD-based system down to one or two open ports with a minimum of installed packages, and still have a very useful system, very easily. This probably has more to do with the UNIX heritage as a developer's OS; everything was built to be modular, not overly interconnected. This leads to a much more configurable system where you can simply remove things that are not relevant. I don't think it's as easy to harden Windows servers in this way.

The OpenBSD group has taken this concept to the extreme. The number one goal of the project is to review every line of code for possible security flaws. The proof is in the pudding: an incredibly low number of vulnerabilities have been found in OpenBSD over the years, due to this nearly fanatical (I use the word with respect) attention to detail.

Corporations, while they make wonderful software (MSSQL, Exchange, Windows Server 2003 are all wonderful in my book), just have different goals.

Kyle
    Yes; Windows ACLs are finer-grained than Linux/Unix without ACLs (though most modern versions do have options to use ACLs). The significant difference is that people tend to be logged into Windows as Administrator - that's still the standard setup on company-provided XP laptops - whereas people on Linux/Unix do not do most operations as root. This limits the damage that can be done on Linux/Unix compared to Windows - by default. If someone runs as root the whole time, all bets are off (except that they'll have a regrettable - and regretted - accident, sooner or later). – Jonathan Leffler Jul 03 '09 at 18:47
  • "Its very true that the sudo and root concepts have always been around in UNIX and are only now coming to Windows." What are you talking about? Windows NT isn't as old as Unix, but Windows NT has had very reasonable security "designed in" since it was released in 1993. It's unfortunate that a lot of Windows admins don't deploy users with "limited user" accounts (as they should have been, from the beginning), but that shouldn't damn the operating system. – Evan Anderson Jul 03 '09 at 19:07
  • From the server perspective, granted. But a typical Windows user needed Administrative access to have an even reasonably comfortable environment until Vista. I see Vista's "right click, run as administrator" as comparable to sudo. – Kyle Jul 03 '09 at 19:11
    I wholly disagree. I've deployed thousands of desktops since Windows NT 4.0 with limited user accounts. "RunAs", which is somewhat analogous to "sudo", has been in the operating system since Windows 2000 (the "right-click, Run As" functionality). I will say that User Account Control is a stupid feature and shouldn't have been included in the OS. Microsoft did the wrong thing by making it "safer" to run as an "Administrator" instead of making it more difficult and painful, while encouraging developers to work on writing software that doesn't suck (i.e. require "Administrator" rights). – Evan Anderson Jul 03 '09 at 19:21
  • I guess I mean that the way Vista does it now is easy enough for most Windows users to grasp - whereas the old RunAs you refer to was more great for Power Users. I suppose if I was managing hundreds of Windows PC's in a corporate environment I might not like it. Is there not a way to disable that in the AD policy management tools? (Another tool that Windows has that Linux doesn't IMO) – Kyle Jul 03 '09 at 19:43
    Vista users at my Customer sites never see UAC because they're running as limited user accounts. You'll only ever see UAC if you're running as an "Administrator". You can disable UAC with group policy, but you shouldn't ever need to. – Evan Anderson Jul 03 '09 at 20:04
  • It depends on the tools the customer uses. If the tools need admin rights, then UAC may have to be disabled. – WolfmanDragon Jul 03 '09 at 22:49
  • @Evan Anderson: deploying accounts as limited users would be great, except that in many cases software won't run properly unless users have higher privileges. Perhaps this is an issue with crappy programmers, but sysadmins often don't have time to track down every little change that would have to be made to "fix" or work around crap programming practices when upping privileges fixes the issue... I hate it too, but it's a rock and a hard place. – Bart Silverstrim Aug 10 '09 at 00:05
  • I've never allowed users, corporate nor home, to run as administrators in Windows since the NT4 (or XP for home users) days. I've also never had a real problem with applications running as a standard user either; the few that have - usually have easy fixes (or replacements). The "Windows as admin" thingie is some sort of mass-hypnosis or plain laziness IMO - if anyone is running Windows interactively as an administrator while accessing network or Internet resources, then they failed. Completely. There's NO excuse, never was. – Oskar Duveborn Jan 21 '10 at 13:28
9

In my opinion, if configured well, Linux-based systems are more secure than Windows systems. Some of the reasons are:

  1. Transparency and abundant simple network tools: For example, it is very easy for a Linux administrator to see the current firewall configuration by typing "iptables -L -n" at a shell. You can also see what ports are open on a machine by running "nmap" from another Linux machine. This makes life so much easier, as you can very accurately specify which ports are allowed to be accessible, from what addresses, etc.

  2. Text log files in one location: Text-based log files in one location, "/var/log", are easy to back up and analyze. Tools like logwatch, which can monitor these log files and email you important lines, make things very easy. We can even write our own tools to analyze the log files and find the information we are interested in. The logs can also be exported to a remote syslog server in case we do not want logs to be present on the same server.

  3. Not having to worry about viruses: Whether Linux has fewer viruses because there are fewer Linux-based systems, or because all users love Linux, or because Linux is more secure - the reason does not matter. If in the end Linux faces less of a virus threat, then that is a good thing about Linux. I have personally seen people install two antivirus products, plus anti-spyware and anti-adware, on the same machine. All these protection tools eat a lot of CPU and memory.

  4. Support for many programming languages: It is very easy to code on Linux. C, C++, Python, Perl, Java, etc. just work without needing to install any additional package. (This is the case if you install a big distribution like Fedora, which comes on DVD.) This adds to security, as we can perform repetitive tasks with code. If we make a mistake and there is a problem, it will be the same across all accounts, and therefore easy to detect and fix. If we had to make the same changes to a large number of accounts/directories by hand, we might make a mistake in one or two, and it might take a long time to find such mistakes. We can also correct the mistakes, and look for simple mistakes, using code. Since all configuration files, user information files, log files, etc. are text, it is very easy to code whatever we want to achieve, and there are many ways of getting the same thing done. Abundant, authoritative information is also available in the man pages, which usually warn us about the security risks of configuring services in an insecure manner.

  5. Open-source code: Since many people have probably seen the code, it is very rare that some spyware/adware is part of the applications that come with Linux. You can also read the source code of a service, if security is very important for it, and see how it works. If you know exactly how it works, then you know its limitations and when it will break. In fact, well-known security limitations tend to be documented in the man pages, on the package website, and in comments in the configuration files. The developers have nothing to lose by telling you that using their tool in a given scenario is risky. It may not be attractive for organizations that sell software to state their software's limitations, as it would make the software look bad and might reduce sales/profit.

  6. Free and interoperable: Although this is not related to security: for a university, where costs matter, Linux-based systems are much more economical than Windows-based systems, and there is no need to purchase licenses for the OS or for additional software installed afterwards. As far as interoperability is concerned, we can connect from Linux machines to other OSes and share files easily. In Linux we can mount many file systems, including FAT, NTFS, and HFS+. We can share things using ftp, http, ssh, samba, nfs, etc., and all of these come installed or can be installed with one command. Other OSes generally provide only one option for sharing things.
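As a concrete illustration of point 2, a logwatch-style summary can be a single pipeline over a text log. The log lines below are invented, in the style of a typical auth.log, and written to a temp file so the sketch is self-contained:

```shell
#!/bin/sh
# Sketch: summarize failed logins from a plain-text log.
# The log contents, host names, and addresses are made up.
set -eu
log=$(mktemp)
cat > "$log" <<'EOF'
Jul  3 18:56:01 srv sshd[101]: Failed password for root from 10.0.0.5
Jul  3 18:56:04 srv sshd[102]: Failed password for root from 10.0.0.5
Jul  3 18:57:12 srv sshd[103]: Accepted password for alice from 10.0.0.9
Jul  3 18:58:30 srv sshd[104]: Failed password for admin from 10.0.0.7
EOF

# Count failed attempts per source address - the whole "tool" is one pipeline.
grep 'Failed password' "$log" | awk '{print $NF}' | sort | uniq -c | sort -rn

rm -f "$log"
```

Because the log is text, the same pipeline works on any service that logs to /var/log, and it can be dropped into cron and mailed out, which is essentially what logwatch does.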
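Point 4's scripted-changes argument can be sketched the same way: apply one change identically to many accounts with a loop instead of by hand. This example uses a temp directory standing in for /home, and the user names are invented:

```shell
#!/bin/sh
# Sketch: one scripted change applied uniformly to many accounts.
# A temp tree stands in for /home; user names are hypothetical.
set -eu
home=$(mktemp -d)
for user in alice bob carol; do
    mkdir -p "$home/$user"
    touch "$home/$user/.profile"
done

# The repetitive task: make every .profile private, identically.
for f in "$home"/*/.profile; do
    chmod 600 "$f"
done

# Because it was scripted, verifying the change is just as easy
# (stat -c is the GNU coreutils form):
for f in "$home"/*/.profile; do
    [ "$(stat -c '%a' "$f")" = "600" ] && echo "ok: $f"
done
rm -rf "$home"
```

If the loop has a bug, it has the same bug everywhere, which is exactly the "easy to detect and fix" property the answer describes.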

But if not configured properly, Linux-based systems can cause more problems than one can imagine. Many users can log into a machine at the same time and do almost everything just from the shell. It is very easy to leave backdoors and trojans if the firewall is not configured properly. An attacker can delete log files, or tamper with them, to hide his tracks. An attacker can write code on the attacked machine, as editors, compilers, and debuggers are all readily available once the attacker has shell access. All servers - ftp, http - can be run from a user account, just not on privileged ports (below 1024). So an attacker can download http server code, compile it, and run an http server on port 6000 to make it look like an X server.

So Linux systems are more secure, provided the administrator knows what he is doing, or at least bothers to look up information in the man pages and documentation before making a new change.

Saurabh Barjatiya
6

Transparency

  • Run ps auxf and you know what services are running, under which account.
  • Run netstat -lnp and you know what programs have which TCP ports open.
  • Run iptables -L and you know what rules your firewall has.
  • Run strace or lsof to inspect process activity.
  • Run ls -lah or tree -pug and you know exactly what ownership and permissions a complete folder has.
  • Logs are in /var/log and can be inspected with a simple "search through files".
  • No hidden settings. Everything is human readable in /etc. Searching through text files, or archiving them, or applying versioning control (subversion/git) is really easy.

Clear permission system

  • At the base, there are only file permissions. No "permissions on registry keys", inherited ACL permissions, security contexts per process, or other hidden features.
  • Permission bits are simple:
    • Write on files = edit file contents
    • Write on folders = create/rename/remove file nodes.
    • Sticky on folders = users may delete/rename only their own files (as in /tmp).
    • Files with execute or setuid permissions are highlighted (in ls color mode).
  • A simple "find all files" reveals what permissions a user has.
  • Additionally, ACLs can be used only where this is needed.
  • User accounts have only two places to write files by default: their $HOME and /tmp.
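The permission bits above can be demonstrated in a few lines, run against a temp directory (stat -c and the file names here are just for illustration; stat -c is the GNU coreutils form):

```shell
#!/bin/sh
# Sketch of the basic permission bits, on a throwaway temp tree.
set -eu
d=$(mktemp -d)

# Owner: read+write, group: read, other: nothing.
touch "$d/report.txt"
chmod 640 "$d/report.txt"
stat -c '%a %n' "$d/report.txt"

# Sticky bit on a world-writable shared directory (the /tmp pattern):
# everyone may create files, but only owners may delete their own.
mkdir "$d/shared"
chmod 1777 "$d/shared"
stat -c '%a %n' "$d/shared"

rm -rf "$d"
```

The octal number printed by stat is the complete permission story for that node; there is nothing else to inspect.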

Advanced security options

  • SELinux / AppArmor can restrict processes to access a specific set of files only (on top of file permissions)
  • A chroot jail enables admins to run a program completely isolated from the rest of the system. As if it's installed on an empty hard drive, with only the files it really needs.
  • With sudo, users and processes can be given permissions to run only a few administrative commands.

Single points for entrance and privilege elevation

  • A process can't gain more privileges on its own. The only way is by running another "SetUID root" program, like sudo, or by contacting a DBus service which checks PolicyKit first. Those SetUID programs can be found with a single "search all files" command.
  • IPC between processes is fairly restricted, reducing attack vectors.
  • Accessing the system (text console, remote desktop, RPC, remote command invocation, etc.) all happens through ssh. That's an encrypted tunnel with public/private key checking.
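That "single search" for SetUID entry points boils down to one find invocation. This sketch demonstrates it on a temp tree with made-up file names; on a real system you would search / as root (e.g. find / -type f -perm -4000):

```shell
#!/bin/sh
# Sketch: enumerate setuid binaries - the privilege-elevation entry points.
# Demonstrated on a temp tree; the file names are hypothetical.
set -eu
tree=$(mktemp -d)
touch "$tree/normal-tool" "$tree/elevated-tool"
chmod 755  "$tree/normal-tool"
chmod 4755 "$tree/elevated-tool"   # setuid bit on

# Only the setuid binary shows up:
find "$tree" -type f -perm -4000

rm -rf "$tree"
```

Auditing that short list is the whole job of reviewing local privilege-elevation paths, which is the point being made above.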

Secure background processes

  • Background services run with lower privileges as soon as possible. Services like Apache, Dovecot and Postfix hand over the incoming connection to a low-privileged process as soon as possible.
  • Locked down by default. Microsoft has adopted this approach in Windows Server 2008 now too.

Good auditing tools

  • Tools like nmap, ncat make security auditing easy.
  • Background services can be audited from the command line.
  • Log auditing tools are common.
  • Coding a secure service is easier because it can be done in a modular way.
  • There are plenty of free Intrusion Detection tools available.
  • The command line tools are designed to be scriptable, so admins can automate tasks.

Good security updates

  • Every part of the operating system receives security updates. When Apache, Python or PHP are installed through the package manager, they will get updates too.
  • There is much openness in what a security update fixes, so you can figure out how that affects you.
  • Software packages all share the same libraries. They don't ship separate copies, leaving exploitable versions around.
  • No Patch Tuesdays, waiting for a fix when hackers are already exploiting the bug in the wild.
  • It is easy for developers to test security updates and deploy them.
  • No reboot has to be scheduled to do an update. Files can be replaced while the existing processes keep accessing the old data on disk. Afterwards you can find out what services need a restart (lsof | grep =).
  • The entire operating system can be upgraded without reinstalling!

Everything mentioned here is delivered by every mainstream Linux distribution, e.g. Red Hat, Debian, openSUSE, or Ubuntu.

vdboor
6

Server security is more than just the OS. I would say a greater factor in server security is the person running the server, and how careful they have been about locking things down.

That said, if the university is a Linux shop, they will not let you use a Windows Server regardless of what data you find on Windows server security. I would investigate using Mono (www.mono-project.com) if you want to use the .Net framework.

Adam Brand
5

Linux was designed to be a multi-user system from early on, so it has a much stronger permissions system than Windows does. It was also designed with the expectation that you would not be running with administrative rights (root access), so programs are designed not to need those rights. This means that if your account gets compromised, the entire system isn't.

Part of it also probably comes from the fact that people running Linux are (generally speaking), more technical, and thus less likely to make the stupid mistakes that lead to computers getting hacked.

Dentrasi
    Some differences between multi-user and single user operating systems: http://jdurrett.ba.ttu.edu/courseware/opsys/os01a.htm – moshen Jul 03 '09 at 18:34
    OK, I've been using Linux for 12+ years and UNIX-like operating systems for even longer. As much as I like Linux, you cannot say that it has a stronger *permissions* system than Windows. It has a better security model than early Windows versions (i.e., don't always be admin), but WinNT and later has a strong permissions system that just wasn't used to good effect. Recent Linux versions have selinux, which is even stronger, but this is a relatively recent (if very powerful) addition. – Eddie Jul 03 '09 at 18:57
5

'Security is about control'

From my point of view, in Windows you have less control than in Linux. Hardening Windows is... harder :). Although any tool depends on the wielder's skills, I would consider the following:

  • Windows has more high-risk vulnerabilities and more automatic exploitation (viruses, botnets)
  • Windows admins are (or should be) paranoid (because of the fear of intrusion) and have done some kind of hardening
  • Linux sysadmins sometimes trust too much in the operating system's security and forget about hardening
  • Once hacked, on a Linux system you can do more than on a Windows system, as there are more powerful command-line tools

So although I do prefer Linux over Windows, I think that you should not trust default installs.

chmeee
3

Most of the previous posts have focused on intrusion, and a good job has been done covering that point; but one part of your question was about viruses. The biggest reason that Linux distros have fewer issues with viruses is that there are more Windows boxes out there than Linux and Mac put together. Virus writers want to get the biggest bang for their buck, therefore they write for Windows.

All systems are capable of being intruded upon and getting infected. Anyone who tells you different, be it your instructors or others, is either a fool or has oceanfront property in Utah to sell you.

3

Judging from security fixes on ALL software these days, I think the issue is not the software but the number of desktops running Windows. This is the target, to create botnets. If Linux ever really grows in the desktop space, then it will be attacked more as well. I think Mac OSX is already seeing this.

JamesR
2

There is one very important reason why Linux and OpenBSD have the potential to be more secure than windows. That is the ability of the operating system to firewall itself from network attacks.

On Windows, incoming network packets are exposed to significant parts of the operating system long before the Windows firewall can reject them. On Linux, using IPTables, or on OpenBSD, using PF, you can discard rogue packets much earlier in the OS's processing of a new network packet - reducing the exposure.
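To illustrate the kind of early, in-kernel filtering described here, a minimal default-deny IPTables ruleset (shown in iptables-restore format; the single allowed port is just an example) might look like:

```
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
# allow replies to our own connections, and loopback traffic
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -i lo -j ACCEPT
# the one service we deliberately expose (example: SSH)
-A INPUT -p tcp --dport 22 -m state --state NEW -j ACCEPT
COMMIT
```

With a policy of DROP on INPUT, anything not explicitly matched is discarded in the kernel's netfilter hooks, before it ever reaches a userland service.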

However, once you open up a port and run a service on it - i.e. make a networked computer useful - you are only as secure as the code that runs that service.

Michael Shaw
2

There is no such thing as an OS that is more secure than another. It all depends on the knowledge of the people who administer the system.

I've met and worked with some extremely talented *nix admins over the years and they could configure an extremely secure *nix server. However stick them in front of a Windows host and they'd have no idea how to lock the machine down. The same goes the other way, I know a decent amount about securing a Windows host, but put me in front of a *nix box and I'd have no idea what I was doing.

Neither OS is more or less secure than the other. Sure, we could talk about the history of the platforms, and use that to debate which one has been more secure over time, but we aren't talking about *nix OSs from 10 years ago, and we aren't deploying Windows NT 4 into production environments, are we? We are talking about modern OSs (or at least we should be) and which ones can be better secured.

I saw someone say something in an answer about packets coming to the Windows firewall touching more parts of the OS than the Linux firewall. My question to him becomes: who the hell trusts a software firewall running on the host? That's what endpoint/front-end firewalls are for: to protect the network. A host which is running a service has a service exposed. It's the host's job to ensure that that service doesn't become compromised. It's the job of the network device in front of it to prevent other packets from getting from the Internet to the host's other services.

Once the network is properly secured, it all depends on how well secured the application running on the host is. Does that application have any buffer overflows that can be exploited? Are there any ways within the exposed application to get to the OS and in some way gain a higher level of permissions? If not, then it's a well-secured application. If there are, then you have a problem which needs to be addressed.

If someone won't consider another OS in their data center, that's a sign of ignorance (this goes for an all-Linux shop as well as an all-Windows shop). Both OSs have their uses and should be used as such. Neither is any better or worse than the other. (And yes, we've got a couple of Linux machines in our environment handling production services.)

mrdenny
    I would differ in opinion that it all depends on the knowledge of the administrators. If you are asked to defend a fort from attack versus a tent, I think you have a bit of an advantage with a fort. If the two being compared here are Linux and Windows, then they were designed with two different philosophies for handling multiple users and simultaneous access to the system. While good administrators can help correct for deficiencies, there are still advantages from one over the other as a starting point. – Bart Silverstrim Aug 10 '09 at 00:11
1

There is no need to curse your university for using Linux servers; for your specific requirements, as AdamB said, use Mono (www.mono-project.com). Usually professors with an interest in operating systems prefer Linux - indeed, any OS enthusiast would prefer Linux, out of simple curiosity about how things work in practice, beyond books.

Now, regarding security:

Linux follows DAC (discretionary access control), a smart model for access control. As mentioned in other answers, Linux was multi-user from way back, and its access-control system matured accordingly.
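
To make the DAC point concrete, here is a minimal sketch; the temp file and the 0o640 mode are my own illustrative choices, not anything from the answer above.

```python
import os
import stat
import tempfile

# Unix DAC in miniature: every file carries an owner, a group, and a
# 9-bit permission mask, and the kernel checks those bits on each
# access attempt by a non-owner.

def dac_string(path):
    """Return the ls-style permission string for a path."""
    return stat.filemode(os.stat(path).st_mode)

with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name

os.chmod(path, 0o640)      # owner: read/write, group: read, other: none
print(dac_string(path))    # -rw-r-----
os.unlink(path)
```

The same bits govern directories, sockets, and device nodes, which is what makes the model uniform to administer.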

But the security you are referring to sounds like server security, which is not so much an OS issue as a whole server-network issue: firewalls, access control lists, routers, and so on. Updates are free, for life. The code is open, so it gets tested a lot, which is very important.

Apart from security, economic viability makes Linux the best option for servers, which are supposed to run only a few applications or hosted services, and those applications are very well ported. Apache, for example.

I think it was not security alone, but these other factors as well, that make your university, like most others, opt for Linux on its servers.

Vivek Sharma
  • 465
  • 2
  • 5
  • 14
1

While there are many great answers here, I just want to also add that there is no such thing as a secure operating system.

It's well known that if a human can create a "secure" platform, another human can, given time, find holes in it.

I agree that Evan's first two sentences sum up OS security best:

I don't think an operating system is "secure". A particular configuration of an operating system has a particular degree of resistance to attacks.

So it does not matter if we compare GNU/Linux, The BSD systems (Free/Open/Net), Microsoft, Windows, Mac OSX, Symbian, PalmOS, Cisco IOS, AIX, QNX, Solaris, z/OS or any of the other "operating systems" that run things like your TV, MP3 player, microwave oven etc, etc.

Each of these has some component that a determined individual can exploit.

For this reason most vendors have whitepapers on how to set up their systems to be as secure a configuration as possible. This means using other technologies to minimize the surface area.

eg:

  • NAT
  • reverse proxy
  • firewalls
Wayne
  • 3,084
  • 1
  • 21
  • 16
  • 1
    i'd still put my money on openbsd any day for "least likely to be remotely vulnerable". – Kyle Jul 05 '09 at 22:27
  • I will not put my money up against yours either! Unless we talk about DOS <=4 (no NDIS drivers before 5.5 I believe?) – Wayne Jul 06 '09 at 09:15
1

Operating-system security, as it relates to frameworks, is a bit more than just a kernel issue. Each framework has its own compliant security mechanisms. The multi-user account model in Microsoft Windows allows a bit more flexibility for mass deployment, while with Linux you can control the ins and outs of permissions and delegation down to a tee.

The .NET Framework's security mainly comes down to your Group Policy, PowerShell, and netsh console settings; the reason is the kernel's telemetry on low-level access parameters and dynamic access requests in memory. Linux frameworks often require a similar level of attention, but mainly through the flags you specify when configuring the language runtime. Linux, when properly configured, has proven more secure than comparably configured Windows; at a merely "decent" level of configuration, though, tools can slip straight through your IIS and dip right into your services by using a specific GUID. Overall, Linux allows more aspect-level control than Windows.

Major Points:

  • inodes, NTFS index primers, and permissions in Windows (including the registry) are easier to sift through than on a hardened ext-based Linux system.

  • Protocol-traversal exceptions within Linux are easier to find than in a solidly configured Windows Firewall.

  • Cache indexes within ASP.NET are easier to violate than the cache-management technologies handled within GNU and C++ libraries, which are practically built for parallel systems now.

  • On SQL query parsing, it has been shown over and over that MySQL is faster than MSSQL, though Oracle has been pushing the belt. Transactional security is proven stronger on Windows, but for performance and sheer flexibility MySQL should be used, or something along the lines of iSQL or NSQL (not SQLAB like Berkeley SQL, which MSSQL is based on).

  • Gateway permissions: Linux has an amazing ability to manipulate packets and tiny details that Windows can only put into sorting bins. That said, if you run a Windows network you get more network auditing than on a Linux network, because packages are easier to apply walls to than DLL files and protocol requests.

  • Surface-layer GUI: the .NET Framework offers strict field definitions, while Linux allows intense PCRE and other regular expressions.

Government Statures:

OWASP demonstrates over and over that it is harder to crack a hardened Linux server than a hardened Windows server. Why? Because the firewall and Group Policy do not allow as finely tuned a key for aspects of the closed-source framework within ASP.NET; Linux will let you choose a color for every letter in your command.

NIST shows over time that SQL management permissions are harder to parse on Windows, while Linux PCRE makes SQL queries harder to bypass, whether through a GUI or a web interface.

Carnegie Mellon shows that ASP.NET can satisfy higher regulations because it is built in a more modular context, employing MVC frameworks, and so can potentially enforce tighter restrictions. Meanwhile, PHP and Java show that they are incredibly robust in their obfuscation and encapsulation methodologies.

Personal Opinions:

Each operating system has the potential to be more secure than the rest out of the box. Taking a raw comparison of frameworks that operate at a higher security level on Linux or Windows, I would say the main part of web security is using the most incompatible but efficient framework. That way it becomes much harder to latch onto the native hard-drive access permissions and the library handles; you end up with something of a welded bowl on top of your operating system, as Evan said regarding NTFS and /proc or /dev permissions. If you use something that can't talk to it, it's harder to crack.

What I have learned from web development is: never underestimate your framework. .NET has permissions to create shared mounted volumes and control mechanisms for SQL Server clusters, while Apache source can do the same with operating systems running Linux. It is a decent question, though. I would say Linux allows more security through individual aspect control and multi-language restrictions and monitoring, while Windows has extensive auditing and logging power with a high-level logic debug interface. The two are comparable; it eventually narrows down to "how well do you lock it down?" and "how many bells and whistles are there?" within the framework. Apache has more add-in security boosts; IIS has dead-stop or all-run permissions with ASP, which makes module programming a give and take (SharePoint, for example).

Comparing PHP on Linux and Windows at the moment, it is quite obvious that more extensions are available on a Linux operating system; Windows applies a different permission-management level to PHP, which makes directories and file access harder to manage. Within Apache (for example XAMPP, LAMPP, or WAMP), I would say Windows is a little less secure, considering that its firewall restrictions are easier to violate because they share the same tunneling rules as your web browser. Linux, on the other hand, can use app pools and further packet-level security mechanisms that are much more complicated to emulate. Windows would require you to use all aspects of the operating system to make the networking more secure; Linux can do it using about 60% of the operating system, with distributions or flavors like CentOS and Ubuntu.

IIS (On a Microsoft Server, not Windows Client) on Windows with ASP.NET with the latest SEC_ATL mixes can also be very secure.

With just Apache, you may want to run it on Linux to enable the higher- and lower-level driver, SMIME, codec, and packet-level protections, while Windows would require you to install overlaying security mechanisms that would clog your traffic a little more than you might like when it comes to running thousands of servers.

With Linux, the more slimline the kernel and the more optimized for network security it is, the better (like fusing Apache in with NSLUG).

With Windows, you had better like programming PowerShell modules and additional overlaying security for your ASP.NET framework, and configuring your Group Policy to USGS, because most of the time it really does need all that to shut out the kind of traffic that Linux will automatically deny without a second thought.

Equally, they can both be strong. Out of the box, a live Linux distribution will be stronger than an unconfigured Microsoft Windows Server set up with just the wizard.

Over time, Linux will outrun Windows in the security game. Debian 3 servers out of the box are still stronger today than Microsoft Server 2008 R2 out of the box, and they can support the same technologies without a kernel rebuild. Debian can still smoke it; I have seen this with my own eyes.

Though, as I'm sure has been said before, it comes down to the staff you work with and your eye for detail. That always makes the biggest difference when working in a large server network.

VLi
  • 9
  • 2
0

Now that NT has caught up to Unix in many of the previously deficient areas, file permissions and memory protection aren't huge differentiating issues anymore.

But...

a. In Unix systems, all access to all devices goes through files, for which security can easily be administered. For example, do you know how to prevent user X from accessing the sound card in Windows while still allowing user Y? In Unix, that kind of thing is easy.

b. The directory structure is much saner. (For example, a user application needs write access only to your home folder, etc.) This has improved in Windows as well in the last few years, however.

d. This is a biggie: SELinux (and Trusted Solaris and Mac OS's "Seatbelt" sandbox feature). This is known as NDAC (Non-Discretionary Access Control). If you are running an OS distribution with these features, then there are essentially two layers of security going on simultaneously, the normal DAC (permissions system) that Unix has always had, and modern versions of windows have - and on top of that, the "application firewall" that SELinux and similar systems impose. For example, you can set a policy that says Apache web server is allowed to write to /tmp and read from /var/www and /etc/apache. All other disk access will be denied regardless of permissions. Likewise, you can specify that it can accept incoming connections on port 80 only and make no outgoing connections. Then, even if there is a bug that would allow someone to do something very bad, and even if apache was running as root, it wouldn't matter - the policy would prevent it. This requires a (very minor) speed penalty and can be a pain if you are using an unusual configuration, but in the normal case can increase your security level by a huge amount over both old-style Unix and Windows.

e. Layers - Unix systems are composed of many more discrete layers and services that can be swapped out. This means they can be individually analyzed for correctness and security, swapped out, etc. For almost any of these, there is no need to reboot. This is a big plus on server type systems. Also, it is easier to disable (and uninstall) things you don't need on a Unix system. For example, why be running a GUI on a web server box? It increases the attack surface and takes up RAM.

f. For those who say that Windows NT was designed from the ground up for security: that's true; the kernel was designed from the start with advanced security and multi-user features. But there are two main problems: 1. Microsoft's poor track record with security, and 2. the OS as a whole was designed for compatibility with legacy Windows applications, which meant many compromises. Unix has always been multi-user, so applications face no big surprise when security is enforced, which means fewer compromises.
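
Point (a) above can be sketched in a few lines. Since creating real device nodes requires root, a temp file stands in for a node like /dev/snd/pcmC0D0p; the path and helper name are illustrative assumptions, not from the answer.

```python
import os
import stat
import tempfile

# On Unix, devices are plain file-system nodes, so the same permission
# bits (or POSIX ACLs via setfacl) decide who may use them. Stripping
# group/other access on the node denies the device to everyone but its
# owner, with no driver-specific mechanism needed.

def deny_everyone_but_owner(path):
    """Strip group/other access, leaving rw- for the owner only."""
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)   # mode 0o600

with tempfile.NamedTemporaryFile(delete=False) as f:
    fake_device = f.name

deny_everyone_but_owner(fake_device)
mode = stat.S_IMODE(os.stat(fake_device).st_mode)
print(oct(mode))   # 0o600
os.unlink(fake_device)
```

In practice you would chown the node to a dedicated group (e.g. "audio") and add user Y, but not user X, to that group.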

Noah
  • 171
  • 2
  • Security accessing "device files" in Windows NT is done through ACLs applied in the executive object manager. It has roughly the same ACL model as the filesystem. Re: your point "a": Logo'd applications that comply w/ Microsoft development guidelines don't need write access outside the user's home directory either. re: "b": I'll agree that there's limited MAC functionality in Windows. Integrity-levels, added in Vista, is a form of MAC. The "advanced firewall" (also added in Vista) can limit outgoing traffic in the way you describe if you choose to configure it that way. – Evan Anderson Jan 21 '10 at 13:27
  • re: "e": I agree, in principle, that less software means less chance of failure. There are internal builds of Windows that don't have a GUI, but Microsoft has chosen not to release them. re: "f": 3rd party developers have been more of a problem re: getting sane default security policies set in Windows than Microsoft. Personally, I think Microsoft should be more hard-line about poorly-behaving applications, but they live in a different "space" than the developers of free and open source operating systems when it comes to making sure that their Customers' applications run. – Evan Anderson Jan 21 '10 at 13:30
  • re: "The OS as a whole ... designed to have compatibility ..." Win32 is a kernel subsystem-- it isn't NT. If Microsoft wanted to (or would let somebody else) you could build a "distribution" of Windows NT that had no Win32 subsystem, no GUI, and booted with, say, the "Interix" POSIX kernel subsystem. Virtually all of the UI in an NT OS is Win32-based, but the kernel is perfectly capable of supporting a non-Win32 environment. – Evan Anderson Jan 21 '10 at 13:33
0

There are several reasons why Linux-based systems are often considered more secure than Windows systems.

One is the skill of the owner. If you walk into Best Buy or Wal-Mart (here in the US) and buy a computer without thinking much about it, it will run Microsoft Windows. That means there are immense numbers of Windows systems run by people who have no clue. Since almost nobody buys a Linux computer by accident (at least since Microsoft counterattacked on the netbooks), most Linux users either know something about computers or have had their computer set up by somebody who does. This applies to every environment that includes people who don't know what they're doing: those who don't know are running Windows, and those who do run various different OSs.

One is the number of attackers. Microsoft Windows is a much more attractive target, because of all the badly administered machines out there. There are plenty of high-value Linux targets, but they're generally well administered (along with lots of the high-value Windows targets). To a reasonable approximation, nobody targets Linux computers in general.

One is the culture. In any Unix/Linux environment, there is a clear distinction between root and user accounts, and in almost all cases people work in their user accounts when they don't need to be root. The distinction is not, in my experience, as strong in Windows environments, in that each user will normally have one account, with whatever privileges are associated. I'm on my work computer now, where I have one account, an admin account. Everything I do is done by an account with high privileges. When I go home to my Linux box, I'll do almost everything in a limited-privileges account, and escalate when I need to. My wife had to argue hard to get two accounts on her computer at work, one her normal admin account and one a limited-privileges account so she could see if regular users have the privileges to run what she writes.
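
The habit this paragraph describes can even be enforced in tooling. A POSIX-only sketch (the function name is mine, not from the answer):

```python
import os

def require_unprivileged():
    """Refuse to run routine tooling as root.

    Mirrors the Unix habit of doing everyday work in a limited
    account and escalating (su/sudo) only for the steps that
    actually need it.
    """
    if os.geteuid() == 0:
        raise RuntimeError(
            "running as root; switch to a regular account and use "
            "sudo only where required")
```

Many real tools do the opposite check too (refusing to run *without* root); either way, the point is that effective UID is a single integer the program can inspect.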

One is backward compatibility. While Unix didn't start out as a secure OS, it got security early on. Programs for Linux do not require root unless they actually perform root functions. Windows, on the other hand, runs a large number of programs that require admin accounts because that's what the developers ran and tested on (see the previous paragraph), those developers were usually not security-conscious, and it used to work just fine. That's the big problem Microsoft was trying to solve with UAC. I don't think it's a particularly good solution, but to be honest Microsoft isn't going to find a good one here (and I don't think dropping backward compatibility would be, either).

These lead to the fact that most large-scale security problems will be on Microsoft systems, regardless of the merits of the security models, and the perception that Microsoft gets the big security problems. By the availability heuristic, the fact that people can think of more Microsoft security problems biases their judgment.

Those are, in my opinion, the valid reasons. I haven't touched on actual OS security, since I don't know that either Windows or a Linux distro is more vulnerable than the other when run by a knowledgeable admin. Linux has the advantage of open source, in that anybody can find and fix bugs, while Microsoft has instituted security practices that may or may not work better. (If I wanted to run a really secure OS, I'd pick OpenBSD, an open source OS that strives to be secure.) Both OSs have good permissions systems (my preference is the Unix one, but other reasonable people disagree).

There are, of course, bad reasons for considering OSs less secure. Some people have a favorite OS, and waste no opportunity to badmouth other ones. Some people dislike Microsoft or Richard Stallman or some other person or organization, and denigrate the associated OSs. Some people haven't noticed that Microsoft has changed over the years, since it wasn't all that long ago that Microsoft really didn't care about security, and Windows really was less secure than Linux.

David Thornley
  • 181
  • 1
  • 1
  • 4
0

Predominantly, I believe Linux is seen as the safer choice because of its ubiquitous use of open-source software.

The at-easeness comes from the idea that "the community" will notice if something fishy gets added somewhere; if, say, OpenSSH suddenly started phoning home with passwords, it wouldn't stick around for long.

But I can't reiterate enough what others above have already said: security depends largely on configuration. Who cares whether OpenSSH is phoning home if you have no firewall, a null root password, and PermitRootLogin enabled in sshd? ;)
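
That last point lends itself to a quick check. A sketch of auditing an sshd_config for PermitRootLogin — the helper is my own, not part of OpenSSH, and it deliberately ignores Match blocks and the server's compiled-in defaults:

```python
# Decide whether an sshd_config text would allow direct root logins.
# This handles only the simple case: comments and a single global
# PermitRootLogin directive.

def root_login_permitted(sshd_config_text):
    """Return True if the config explicitly sets PermitRootLogin yes."""
    for raw in sshd_config_text.splitlines():
        line = raw.split("#", 1)[0].strip()    # drop comments
        if line.lower().startswith("permitrootlogin"):
            parts = line.split()
            return len(parts) > 1 and parts[1].lower() == "yes"
    return False  # recent OpenSSH defaults to prohibit-password

print(root_login_permitted("PermitRootLogin yes"))     # True
print(root_login_permitted("# PermitRootLogin yes"))   # False
```

The same pattern (scan config, flag the dangerous directive) is how many hardening scanners such as Lynis approach these checks.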

msanford
  • 1,427
  • 15
  • 27
0

Short answer: initially, UNIX was designed to be secure, while Windows was designed to be simple. Now descendants of UNIX try to appear simpler to their users, and Windows tries to appear more secure.

They haven't met yet.

dmityugov
  • 756
  • 4
  • 5
  • 2
    Bah! Windows NT had more forethought to security in its initial design than Unix did. Unix had security bolted on later as an afterthought. Modern Unix-like operating systems (such as Linux, which isn't really a "Unix" operating system since it's a completely new code-base) have improved greatly from the original Unix, but Windows NT was designed, from the beginning, to meet US DoD "Orange Book" security requirements. – Evan Anderson Jul 06 '09 at 02:27
0

Previous versions of Windows ran applications in the same address space, so they could walk pointers through each other's memory. They also relied on cooperative multitasking, and applications sometimes didn't cooperate.

Even very early versions of Linux/Unix had partitioning between applications, and between the OS and the application layer. Time slicing, while not always ideal, was at least fair.

Hence the legacy of Unix (or Linux) for more robust systems that need higher availability.

Does all this still apply today? That's another question.

kmarsh
  • 3,103
  • 15
  • 22
  • Of course it doesn't. A lot of the negative press that Windows gets from the Linux community is actually aimed at those previous versions, and fails to take into account that things have moved on. The last version of Windows to use coop m/t was 3.1, and the last version of Windows that had DOS beating at its heart was ME. – Maximus Minimus Aug 29 '09 at 21:04
-1

There's no real reason other than inertia. I've seen a lot of Linux advocates make arguments against Windows (and not just on the security side) which seem valid on the surface but, when you dig a little, apply only to Windows 3.1 or 95/98.

I've said it before: while Windows may have more patches and so on, these are fixes for security vulnerabilities that have been identified, and it's not the ones that have been fixed that you have to worry about, is it? I don't believe that being open source is intrinsically more secure either. Rolling your own patches may be fine for a home user, but a corporate user or admin will always want the Real Thing that is certified to work (and to have been fully tested) with a variety of apps, and certified not to break on the next kernel update. The same applies to fixes from the FOSS community.

So, in my book, it's a mixture of inertia, prejudice and being embedded in the UNIX culture to the exclusion of alternatives.

Maximus Minimus
  • 8,937
  • 1
  • 22
  • 36
-2

Once a process on Linux/Unix has been assigned an unprivileged user, it cannot change its privileges; a process on Windows can change its user privileges, and even its user, mid-process.

This is the essence of why Windows is less secure than Linux/Unix.
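
A minimal sketch of that one-way door, assuming POSIX setuid() semantics (the helper name is mine):

```python
import os

# Per POSIX setuid() rules, only a process whose effective UID is 0
# (or, on Linux, one holding CAP_SETUID) may switch to an arbitrary
# user; an unprivileged process gets EPERM instead. Once a process
# runs as an ordinary user, it cannot escalate this way.

def try_become(uid):
    """Attempt to switch this process's user ID; report success."""
    try:
        os.setuid(uid)
        return True
    except PermissionError:
        return False

# For an ordinary (non-root) process, try_become(0) returns False:
# there is no way back up once privileges have been given away.
```

As Evan's comment notes, Windows impersonation is likewise gated (by SE_TCB privilege), so the contrast is about defaults and habits more than a hard capability gap.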

Fire Crow
  • 235
  • 2
  • 9
  • 2
    That's only true if the context the process is running as has "Act as Part of the Operating System" (SE_TCB PRIVILEGE) rights. If you start a process with a security context that doesn't have SE_TCB PRIVILEGE then the process can't just randomly "impersonate" (the NT-ism for assuming another security context) another user. If you're running an application with SE_TCB PRIVILEGE for no good reason then you get what you deserve, IMO. – Evan Anderson Jul 06 '09 at 02:25
  • So the security is enforced by a "context" as opposed to a system-wide rule, which is why Windows is less secure than Unix/Linux. – Fire Crow Jul 06 '09 at 20:52