
Overall question

What is preventing the uptake of MAC systems such as SELinux/AppArmor in corporate and desktop computing environments?

Why do you think it isn't already widespread?

I do not count "available in the operating system" as "widespread". Windows actually has a native POSIX emulation layer, but very few Windows systems have it installed and running. Many Linux distributions have packages for AppArmor and SELinux, but to my knowledge only Fedora (and subsequently RHEL) ship with these enabled, and the default behaviour is only to constrain system services. Fedora, for example, supports unconfined_t; no prizes for guessing what that does.
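
You can see this split for yourself on a Fedora/RHEL system with the standard SELinux userland tools:

```
getenforce   # prints Enforcing, Permissive, or Disabled
id -Z        # prints your own security context; an ordinary desktop
             # login typically lands in the unconfined_t domain
```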

In addition, many commercial vendors of Linux products say they "do not support SELinux" and I've seen frequent forum references in my pre-Stack Overflow days and indeed many blog posts suggesting that "fixing" SELinux basically involves turning it off.

Background

I categorise access control into two types:

  • User-only access control systems such as DAC and RBAC. In these models, each user has a set of privileges, and these are inherited by any software or application run as that user.
  • Mandatory access control (MAC) systems, where each application has its own set of privileges which may or may not be combined with the user level privileges as appropriate.

The reason I ask is this: if I want to compromise a system under the first model, it is pretty much a two-step process. First, find a vulnerable entry point that allows me to execute arbitrary code; second, find a vulnerable privileged process reachable from that starting point that lets me escalate my privileges. If the vulnerable entry point is also privileged, so much the better.

But this raises the question: why do these applications need access to everything the user has access to? Take Firefox, for example. It has some shared libraries (or DLLs, if you're on Windows) it needs to load, and it needs to be able to read profile information and any plugins, but why should it be able to read my entire /usr tree, or enumerate all processes my user is currently running? It might well want write access to /home/ninefingers/Downloads, but it doesn't need access to /home/ninefingers/Banking, for example. More to the point, it doesn't need to be able to start a new instance of a privileged process with corrupt input, or to send messages to a setgid process via local sockets.
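
To make that concrete, here is roughly what such a per-application policy could look like as an AppArmor profile. This is a minimal hypothetical sketch, not a complete working profile; the paths and abstractions are illustrative:

```
# /etc/apparmor.d/usr.bin.firefox (illustrative sketch only)
#include <tunables/global>

/usr/bin/firefox {
  #include <abstractions/base>

  /usr/lib/firefox/** mr,          # its own libraries: read and map only
  owner @{HOME}/.mozilla/** rwk,   # profile data and plugins
  owner @{HOME}/Downloads/** rw,   # downloads are allowed...
  # ...while ~/Banking is simply never listed, so access is denied by default
}
```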

Now, to some extent we have a semi-working solution. On Linux, many system daemons (services) actually drop privileges and run as separate users which cannot log in interactively (their shells are set to /bin/false or /sbin/nologin). This works to an extent, except that any file can only carry owner, group and other permissions (unlike Windows, which has full ACLs).
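
The pattern itself is simple enough; a sketch with a made-up daemon name:

```
# Create a dedicated system account that cannot log in interactively,
# then give the daemon's state directory to it (names are hypothetical):
useradd --system --shell /sbin/nologin --home-dir /var/lib/exampled exampled
chown exampled:exampled /var/lib/exampled
```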

I realise also that there are some technical challenges to MAC, including the current X11 security model. Many Linux distributions do offer SELinux or AppArmor configuration and constrained daemons, but there doesn't appear to be much impetus for the desktop. Windows Vista supports Integrity Levels, but these are not particularly fine grained.

I am not so concerned with the idea of privilege levels within a domain - see this question asking for practical usage of such techniques and strategies, but more the idea that applications, just like users, should be subject to the principle of least privilege. The Invisible Things Lab blog post "The MS-DOS Security Model" makes many of the points I am concerned with, particularly with regards to desktop security.

I also think shipping MAC rules with each application would encourage better software development - almost like test-driven development: if a rule is triggered that you aren't expecting, you know you potentially have a bug (or your rules are wrong).
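
AppArmor already supports this workflow to some degree: a profile can be switched into "complain" mode, where violations are logged rather than blocked, which is exactly the unexpected-rule-trip signal described above. A rough sketch (the profile path is illustrative):

```
aa-complain /etc/apparmor.d/usr.bin.firefox   # log violations instead of blocking
# ...exercise the application and its test suite...
dmesg | grep -i apparmor                      # unexpected denials hint at bugs or bad rules
aa-enforce /etc/apparmor.d/usr.bin.firefox    # enforce once the profile is quiet
```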

Potential sub-questions that might help answer the overall question

  • Have you ever tried to implement Mandatory Access Control/appsec in a corporate environment? Was it ever discussed and then abandoned, and if so, why? What I'm looking for is "what stopped you from using it" if it was considered.
  • Am I right? I clearly think MAC systems as I've described them help protect entry points against malware intrusion, but I'm open to arguments for other solutions to the problem or arguments that actually the current system works well enough.
  • What are the impacts for usability (there are clearly some) and can they be mitigated?
  • Can the hurdles with desktop integration be overcome?
  • Are there any other middle-ground systems I may have overlooked?
  • Why is there no concerted industry-wide effort to improve the appsec situation, either via this method or via an alternative?
  • Why do you say it's not widespread? A very tight system-wide SELinux policy is provided with every Android installation. – forest Dec 16 '17 at 02:20
  • @forest The question was asked in July 2011. At the time, Android was at [3.*](https://en.wikipedia.org/wiki/Android_version_history), but SELinux was introduced into Android at [4.3](https://source.android.com/security/selinux/). Btw, AppArmor is now enabled by default on [debian testing](https://wiki.debian.org/AppArmor/Progress#Enabling_AppArmor_by_default.3F) (still subject to change though). – Alex Vong Jul 31 '18 at 12:35

6 Answers


Unfortunately I think you already answered your question with the "fixing SELinux basically involves turning it off" comment. It is hard work.

In an ideal world, specific requirements for access would be detailed up front and every application would be coded to enforce MAC or another control paradigm. In the real world, however, the following issues arise:

  • Requirement for rapid release of new functionality
  • Limited budget
  • Poor understanding of security requirements at project lead or board level
  • Acquisition of companies with widely differing security policies

and many others.

Security, while often saving money and reducing risk in the long term, is generally a cost in the short term, and in order to meet business requirements (which are often short term, focused on a project or on shareholder return in a fixed period) it is often run at the bare minimum required, or below.

Implementing MAC correctly is quite an intensive, high overhead requirement, and maintaining it in the face of rapid change is hard work.

I would love it if organisations did do it correctly, as it would reduce or remove a whole host of attack types, but I'm not holding my breath.

Rory Alsop

I want to quote Casey Schaufler (SMACK creator):

From the middle of the 1980's until the turn of the century Mandatory Access Control (MAC) was very closely associated with the Bell & LaPadula security model, a mathematical description of the United States Department of Defense policy for marking paper documents. MAC in this form enjoyed a following within the Capital Beltway and Scandinavian supercomputer centers but was often cited as failing to address general needs.

Around the turn of the century Domain Type Enforcement (DTE) became popular. This scheme organizes users, programs, and data into domains that are protected from each other. This scheme has been widely deployed as a component of popular Linux distributions. The administrative overhead required to maintain this scheme and the detailed understanding of the whole system necessary to provide a secure domain mapping leads to the scheme being disabled or used in limited ways in the majority of cases.

Smack is a Mandatory Access Control mechanism designed to provide useful MAC while avoiding the pitfalls of its predecessors. The limitations of Bell & LaPadula are addressed by providing a scheme whereby access can be controlled according to the requirements of the system and its purpose rather than those imposed by an arcane government policy. The complexity of Domain Type Enforcement is avoided by defining access controls in terms of the access modes already in use.
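
To make the contrast concrete: a Smack rule is just a subject label, an object label and an access string, loaded through the smackfs filesystem. A hypothetical example (the labels are made up):

```
# Deny the browser any access to banking data, but let it
# read and write downloads (recent kernels mount smackfs here):
echo "Browser BankData -"   > /sys/fs/smackfs/load2
echo "Browser Downloads rw" > /sys/fs/smackfs/load2
```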

So basically SELinux is not designed for common use at all. And, generally, current MAC systems are unreasonably harder for a layperson to understand than common Unix permissions. Most importantly, I think, they are incomplete, in the sense that there is no ready-to-grasp policy you can follow (as there is with the Unix DAC model). For example, your home directory generally has your user and group ownership, /tmp has well-known permissions, and so do the files in /bin; all of that has already been worked out. But for MAC you usually have to develop the policy yourself, like a scientist, except you never learned that science. By what criteria and methodology should some non-security person develop it?

Autogenerated policies for restricting /usr/bin/date and the like are laughable: why protect them? Who is going to hack /usr/bin/date? So another important question is what exactly we want to protect, and why. Something where POSIX permissions are not enough? Against what type of attack? There is no common model of such attacks; no understanding, no goal, no meaning.

catpnosis
  • If `date` is ever run privileged and ever parses environment variables or accesses files writable by a less privileged user, then there would exist situations where a less privileged user can compromise the system through it. Obviously this is not a huge deal in most cases, but it may be an issue. A more realistic example is `ping`, which is setuid or at the very least setcap. – forest Mar 07 '18 at 02:45

@Rory-Alsop is right on. Robust, effective security has a very high cost.

what is preventing the uptake of MAC systems such as SELinux/AppArmor in corporate and desktop computing environments?

I believe the answer is cost. Security, like any other aspect of a system, has cost. Consciously or not, those responsible for purchasing IT, be it home user or CIO, have decided that currently they are not willing to spend much on security. We may dismiss them as not having carefully considered the problem, but as much as I want them to be wrong, they may be right.

Edited: thanks to D.W.'s comments and my fuzzy memory

Prior to designing Unix, Thompson and Ritchie worked on an operating system called Multics. Multics was a security-focused operating system that attempted extensive error handling. The Unix designers decided to forgo the extensive error handling, and saved themselves an enormous effort. As told by Tom Van Vleck: "half the code I [Van Vleck] was writing in Multics was error recovery code. He [Dennis Ritchie] said, 'We left all that stuff out.'"

Extensive error recovery requires roughly twice as much design and coding as handling only some errors, not all of them.

Why do you think it isn't already widespread?

Because you can handle a lot of security problems without MAC, and MAC is very expensive. MAC is a mechanism for implementing policies. To use MAC you have to model your system and design policies to provide the controls you want, and your policies won't be perfect the first time. MAC causes a lot of pain in existing systems, because those systems were built under the assumption that security would be of the discretionary type. I mean that, at the conceptual level, critical protocols assumed they could access critical system resources.
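
To give a feel for that modelling cost: granting even a single extra access in SELinux means writing a policy module. A sketch with hypothetical type names:

```
# exampled.te -- hypothetical module granting one extra access
module exampled 1.0;

require {
    type httpd_t;
    type user_home_t;
    class file { read open };
}

allow httpd_t user_home_t:file { read open };
```

That file then has to be compiled with checkmodule, packaged with semodule_package, installed with semodule, and revisited whenever the application's behaviour changes.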

why do these applications need access to everything the user has access to?

They don't. However, it is easier to implement a lot of applications if they act just like the user. The user is a mostly understood quantity. Making the application behave differently from the user takes a lot of thinking and designing. When balancing a large cost to the developer (and a higher retail price for systems or software) against less security, managers usually choose less security. Oh, we have a security issue? There's a patch for that...

applications, just like users, should be subject to the principle of least privilege

Except usually they're not. Yes, in some IT systems and some environments they are, but I think you would find that a lot of database administrators have access to capabilities they could live without, network administrators have accounts with the capacity to change far more than server addresses and protocol options, and system administrators can modify just about anything.

With due respect to Joanna Rutkowska, modern operating systems do not use the same security model as MS-DOS. The MS-DOS security model did not have multiprocessing, concurrent users, network communication, or multiple processors.

From The MS-DOS Security Model: "Does anybody know why Linux Desktops offer the ability to create different user accounts?"

Yes, because Linux attempts to emulate Unix and provide equivalent capabilities. Since Unix is fundamentally a shared system with many users, Linux provides the features to allow for multiple users. I know that services have been given user-style accounts, and that making services users has been a weak attempt at improved security, but that was not the original reason for the multi-user capability.

What are the impacts for usability (there are clearly some) and can they be mitigated?

They are huge, and some can be mitigated, but I don't want to be the one who has to calm down the database admin when their update script stops working, or the network engineer who wants to know why the internal NTP servers stopped working...

this.josh
  • You seem to imply that Multics was provably, mathematically guaranteed not to crash and not to have security vulnerabilities. That is not the case. Multics had crashing bugs and had security vulnerabilities. Also, Thompson and Ritchie weren't lazy. – D.W. Jul 11 '11 at 17:48
  • Oops, my mistake on Multics. The crack about Thompson and Ritchie was supposed to be a joke. I guess it didn't parse well, I'll remove it. Do you know the last commercial formally verified kernel before Unix? – this.josh Jul 11 '11 at 19:07
  • High-assurance kernels (many of which have been verified to some degree or another) include PSOS, Blacker, SCOMP, Boeing MLS Lan, Gemini Trusted Network Processor, and possibly the XTS-400. I don't know which came before or after the Unix OS. I'm not too sure about the claims about history made here. – D.W. Jul 11 '11 at 19:27
  • My memory was faulty. Just read ["Mathematics, technology, and trust: formal verification, computer security, and the U.S. military"](http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=601735&isnumber=13172) The Multics contract was awarded in 1965. In April of 1969 Bell Labs team (includes Thompson and Ritchie) withdraws. October of 1969 Multics goes into operation. It looks like formal verification did not get going until 1972 with "Proof of Correctness of Data Representations" C.A.R. Hoare – this.josh Jul 12 '11 at 01:21

Re: "but to my knowledge only Fedora and subsequently RHEL ship with these enabled"

Ubuntu has shipped with AppArmor enabled by default since 8.04 (2008), and I hear that SUSE has had it since release 10. You can find out what it is doing on your system via `sudo apparmor_status`.

One of my current, relatively vanilla Ubuntu desktop systems has 12 profiles enabled, including the PDF viewer (evince) and some daemons like dhclient3 (DHCP), libvirtd (virtualization) and cupsd (printing). Some examples of protected server services include MySQL, named, smbd and ntpd, and apache2 has a profile which is, however, disabled by default. See more details at SecurityTeam/KnowledgeBase/AppArmorProfiles - Ubuntu Wiki
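
Inspecting and toggling profiles is a one-liner with the apparmor-utils tools (the apache2 profile path below is the one Ubuntu ships; adjust to taste):

```
sudo apparmor_status                              # list loaded profiles and their modes
sudo aa-enforce /etc/apparmor.d/usr.sbin.apache2  # e.g. enable the shipped apache2 profile
```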

And the google-chrome browser does its own sandboxing, which I suspect provides some similar benefits.

AppArmor has been in the mainline Linux kernel since 2.6.36 (October 2010). Hopefully that will make it more attractive for more apps to integrate support for it.

nealmcb

SELinux already has achieved considerable deployment. For instance, it is enabled by default on Fedora Linux installs.

I suspect the main thing holding back broader deployment of SELinux is its impact on system administration. It is not unusual for something to stop working, and for system administrators to spend hours trying to figure out why, only to discover that they ran into some obscure SELinux denial. In that kind of situation, it is understandable if the sysadmin shuts off SELinux, or starts disabling it on future systems. In general, SELinux adds extra complexity that can make it harder to understand the system and troubleshoot failures, which I suspect is the leading reason it gets turned off (or doesn't get installed).
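
For what it's worth, the triage loop on a Red Hat style system usually looks something like this (standard audit/policycoreutils tooling):

```
ausearch -m avc -ts recent    # find recent SELinux denials in the audit log
audit2why -a                  # explain why those accesses were denied
audit2allow -a -M mylocal     # generate a local policy module from the denials...
semodule -i mylocal.pp        # ...which is often how the "fix" actually ships
```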

D.W.

Focus and Comprehension

The developer/architect has a job to do, the job they are paid for.

They must understand the whole operating system and how it runs with their software on top. They have only so much mental bandwidth, and it is oriented toward the functionality dimension.

They don't have to understand it all, but if they don't then there are consequences/bugs.

Understanding their problem space is a hard enough task as it is, without adding other layers like MAC, which, from their point of view, can make their application behave randomly or inconsistently, and stop some features from working, if it is not fully understood.

Most other features, even security features, have direct benefits; MAC's benefits aren't obvious or directly applicable.

The work that Red Hat has done with SELinux in order to confine certain at-risk applications is great, because for certain classes of application it can seriously raise the bar.

MAC will never work on the desktop, because of the serious GUI disconnect it would impose on users. One limited use case could be special-purpose browsers for your bank and suchlike, but with limitations on copying files and on copy/paste between zones/labels. Look to the hypervisor level as well, as hypervisors can provide some limited isolation which might be useful for some desktop cases.

A few further points:

  1. Look to desktop virtualization to provide corporations with most of the benefits of MAC.
  2. Separating applications into different silos also provides similar security benefits without the cost of MAC (see 3-tier architecture, Service Oriented Architecture, queueing infrastructure). They are testable, verifiable and provide good value.
Andrew Russell
  • *"MAC will never work on the desktop"* - This is ambiguous and might lead some people to believe that SELinux will never work on the desktop. Well, "never" might have come earlier than expected. Fedora Linux enables SELinux by default, including on desktop systems. SELinux is not full-fledged MAC, to be sure, but it does provide some value, including on desktops. – D.W. Jul 11 '11 at 17:51