15

First-time asker/commenter, long-time reader.

I'm someone who's currently doing a lot of thinking and writing about measures that might foundationally improve computer security (i.e., not just the kind of evolutionary, fairly modest steps that most tech makers are focusing on right now, but "big leap" changes that might break backwards compatibility yet would make systems much more secure). I am very taken by the idea of using robust general process isolation to try to prevent user-mode programs--or, anyway, what we today call "user-mode" programs--from being able to do nasty things like reading/stealing data being used by other user-mode programs or hitting the OS with privilege-escalation attacks.

Now, there are certainly companies/organizations out there who have tried or are trying to implement robust privilege isolation schemes in software. For example, Microsoft Research's Singularity OS put almost everything into "sealed, isolated processes" that could only communicate with each other and the OS through restrictive message-passing "contracts". (There are certainly others, including some in use with governments for military/intelligence high-security scenarios, I understand.) However, I suppose I'm a person who's reluctant to put tremendous trust in security defenses that aren't rooted, at the end of the day, in some sort of direct hardware enforcement. Which brings me to my two (closely related) questions:

First, are there any non-government, commercially produced microprocessors out there today--intended for general-purpose computing, not smart-card chips or the like--whose instruction sets/architectures are specifically designed to enforce strong process isolation/separation? (Such that not even an attack exploiting a deep flaw in the kernel of the OS running on the device would be enough to let malicious code in one process break out of isolation.)

Second, what sort of changes would you need to make to the x86-64 instruction set and corresponding chip architecture to make it capable of providing hardware-enforced support for strong isolation/separation of individual processes?

(FYI, I do know that Intel has added some proprietary security capabilities to some of their chips over the last 5-10 years, with Skylake's SGX this year supposed to bring some ability to isolate a given program with high-security needs from the rest of the system. But much larger, further steps would be needed to achieve hardware-enforced isolation of, say, at minimum, all processes that today run in user mode. Or am I wrong about that?)

halfinformed
  • 153
  • 4
  • I think DEP with Mandatory Integrity Control does what you ask https://en.wikipedia.org/wiki/Data_Execution_Prevention – makerofthings7 Sep 09 '15 at 21:36
  • Perhaps DEP helps @LamonteCristo, but it doesn't solve the whole problem. DEP is aimed at protecting a process from injection. I think that the OP's question is broader and includes things such as protecting the OS and other processes from a malicious binary. DEP doesn't really help there. – Neil Smithline Sep 09 '15 at 22:51
  • 2
    @thomas-pornin You state: The trouble begins when you begin to understand that complete isolation is useless: application processes must, at some point, be able to interact with the hardware, to save files or send data over the network or display images. I don't understand why that is a limitation. To interact with hardware, each application would interact with the kernel, in turn granting access to hardware. With the MMU architecture, why not logically split the file system as well (making a memory/file system "container"). Also, I think windows user OSs are maybe a bad example ... I would lo – user1325457 Sep 10 '15 at 06:15
  • @thomaspornin So implementing an OS design where most processes would be sandboxed/robustly isolated would take a major transformation of the number of system calls and the scope of what they can do, vs. what Windows desktop/OS X/general-computing Linux distros do now? – mostlyinformed Sep 10 '15 at 22:30
  • @halfinformed I am trying to convert your comments to comments for you - keep submitting answers and I'll convert until you get your rep back – schroeder Sep 10 '15 at 22:35

2 Answers

14

Actually, almost all of the CPUs on the market, save for the very small ones meant for low-power embedded devices, offer "hardware-enforced isolation". This is called an MMU (Memory Management Unit). In short, the MMU splits the address space into individual pages (typically 4 or 8 kB each; it depends on the CPU architecture and version), and whenever some piece of code accesses a page, the MMU enforces access rights and also maps the access to a physical address (or not -- this is how "virtual memory" works). At any time, the CPU informs the MMU whether the current code is "user code" or "kernel code", and the MMU uses that information to decide whether the access shall be granted.

The access rights and mapping to physical addresses of each page are configured in special tables in memory that the kernel shows to the MMU (basically by writing the start address in physical RAM of the main table into a dedicated register). By switching the MMU configuration, the kernel implements the notion of a process: each process has its own address space, and when the kernel decides that the CPU shall be granted to a process, it does so by making the MMU configuration for that process the active configuration.
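
As a minimal sketch (assuming the standard x86-64 page-table-entry layout; the physical address used below is hypothetical), here is how a single page-table entry encodes the per-page rights that the MMU checks on every access:

```c
/* Minimal sketch: decoding the permission bits of an x86-64
 * page-table entry (PTE). Illustration only, not kernel code. */
#include <stdint.h>
#include <stdio.h>

#define PTE_PRESENT  (1ULL << 0)   /* page is mapped */
#define PTE_WRITABLE (1ULL << 1)   /* writes allowed */
#define PTE_USER     (1ULL << 2)   /* user-mode code may access */
#define PTE_NX       (1ULL << 63)  /* no-execute (with EFER.NXE set) */

static void describe_pte(uint64_t pte)
{
    if (!(pte & PTE_PRESENT)) {
        puts("not mapped: any access faults");
        return;
    }
    printf("phys frame:  %#llx\n",
           (unsigned long long)(pte & 0x000ffffffffff000ULL));
    printf("user access: %s\n", (pte & PTE_USER)     ? "yes" : "kernel only");
    printf("writable:    %s\n", (pte & PTE_WRITABLE) ? "yes" : "read-only");
    printf("executable:  %s\n", (pte & PTE_NX)       ? "no"  : "yes");
}

int main(void)
{
    /* hypothetical kernel-only, read-only, non-executable page */
    describe_pte(PTE_PRESENT | PTE_NX | 0x1234000ULL);
    return 0;
}
```

A kernel-only page like the one above is exactly what keeps user code away from kernel data: the MMU sees the user bit cleared and faults the access.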

This is about as hardware-enforced as these things can get. If you wanted software-only isolation enforcement, you would have to look at things like Java or C#/.NET: strong typing, array bounds checks and garbage collection allow distinct pieces of code to cohabit with isolation, without the help of an MMU.


MMU-based process isolation works well in practice -- processes cannot alter or even see the pages of other processes. The last major operating systems where this was not done properly were those of the Windows 95 family (up to and including the infamous Windows Millennium Edition, released in 2000).

The trouble begins when you understand that complete isolation is useless: application processes must, at some point, be able to interact with the hardware, to save files or send data over the network or display images. Therefore, there must be some specific gateways that allow some data to flow in and out of the isolated address space of each process, under strict control of an arbitration system that maintains coherence and allocation of hardware resources to processes; that arbitration system is exactly what is known as an "Operating System". The "gateways" for escaping isolation are often called system calls.
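
To make the "gateway" concrete, here is a minimal sketch in C (Linux-specific, using the raw `syscall()` wrapper; illustration only, not how real programs usually call write()):

```c
/* Minimal sketch of a "gateway": the only way this process can
 * push bytes out of its MMU-isolated address space is to trap
 * into the kernel with a system call (here, raw write(2)). */
#define _GNU_SOURCE
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
    static const char msg[] = "hello from an isolated address space\n";
    /* syscall() executes the trap instruction; control passes to
     * the kernel, which arbitrates access to the actual hardware. */
    syscall(SYS_write, STDOUT_FILENO, msg, sizeof msg - 1);
    return 0;
}
```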

Right now, the OS is software, and has bugs, because every significant piece of software has bugs. Some of these bugs allow a maliciously written process to impact other processes in bad ways; these are known as "security holes". However, making a "fully hardware OS" would not solve anything; in fact, it would probably make things worse. Hardware has bugs too, and the source of bugs is that what the developer is trying to do is complex. Doing it in hardware only makes bug-fixing a lot harder, so it does not improve the security situation at all.

Thus, to achieve better isolation between processes, the solution is not to throw more hardware at the problem. There is already enough of the stuff (and maybe too much). What is needed is a reduction in complexity, which really means a thorough pruning and redesign of the list of system calls. A basic Linux kernel offers more than 300 different system calls! This makes for a lot of work when trying to prevent security holes. Unfortunately, removing system calls breaks compatibility with existing code.
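
As a small taste of what such pruning looks like in practice, here is a minimal sketch using Linux's seccomp "strict" mode (illustration only; real sandboxes such as Chrome's use the more flexible filter mode):

```c
/* Minimal sketch of shrinking the system-call surface with Linux
 * seccomp "strict" mode. After the prctl() call, only read(),
 * write(), sigreturn() and exit() are permitted; any other system
 * call terminates the process with SIGKILL. */
#define _GNU_SOURCE
#include <linux/seccomp.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
    if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) != 0)
        return 1;                       /* kernel lacks seccomp */

    static const char msg[] = "write() is still allowed\n";
    write(1, msg, sizeof msg - 1);

    /* getpid(), open(), etc. would now be fatal. Even libc's
     * exit() is killed (it uses exit_group), so leave through
     * the raw exit(2) system call instead. */
    syscall(SYS_exit, 0);
    return 0;  /* not reached */
}
```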

Thomas Pornin
  • 320,799
  • 57
  • 780
  • 949
  • 1
    Do you think that the number of system calls is a useful measure of complexity? You can reduce the number of calls without reducing kernel size or functionality, albeit at the cost of making the remaining function calls very messy. – Neil Smithline Sep 09 '15 at 23:01
  • 3
    It is not a measure, it is a symptom. – Thomas Pornin Sep 09 '15 at 23:12
  • @thomas-pornin gives a great answer on what's available today. The work that I think is most promising to add additional security is [Capsicum](https://www.cl.cam.ac.uk/research/security/capsicum/ "Capsicum"). – Adam Shostack Sep 09 '15 at 22:44
  • In addition to the MMU, most CPU architectures also have [protection rings](https://en.m.wikipedia.org/wiki/Protection_ring). The protection ring defines what instructions are available to the running code, so that user-mode code cannot modify the MMU or access the hardware without going through the kernel. The protection ring is enforced by the hardware. – Lie Ryan Sep 10 '15 at 23:47
  • 1
    The "protection ring" is the x86 name for the duality between kernel mode an user mode. For historical reasons, x86 CPU offer _four_ distinct modes, called "rings"; for equally historical reasons, operating systems use only two modes (rings 0 and 3), mostly because nobody really knows what to do with the intermediate modes. Other CPU types (e.g. ARM, Mips...) offer only two modes. – Thomas Pornin Sep 11 '15 at 00:44
  • Unfortunately, popular OSes (Windows and Linux at least) provide ways for processes to access other processes' memory (at least those of the same user). Why they allow this for unprivileged users is beyond me... – billc.cn Sep 11 '15 at 15:14
  • @billc.cn Guess how your favourite debugger works? – user253751 Jan 11 '16 at 00:58
  • 3
    @immibis Don't really want to start a debate on an old post, but: supporting debuggers, which are used by a minority of users, is not a good reason to compromise such an important security feature. There can be more secure ways to enable debuggers, like requiring the debugger to launch the process being debugged with a special flag. Indeed, in Windows, debugging already-running processes is a special privilege not granted to normal users. – billc.cn Jan 11 '16 at 10:38
  • 2
    @billc.cn For the record, I can attach to existing processes just fine, as a non-elevated administrator (which is supposed to be equivalent to a normal user). Obviously I can only attach to my own processes. Although you're right, it would be quite sensible for attaching to a process to invoke a UAC-style secure user prompt. – user253751 Jan 11 '16 at 10:39
  • @thomas-pornin I agree with your point concerning complexity. However, I cannot agree with your suggestion on complexity reduction. The number and complexity of system calls is a secondary problem, to my mind. The primary problem lies inside the OS: it is the lack of isolation between OS modules that handle different functions, and the absence of practical (hardware) means to implement such isolation. – Master Nov 25 '16 at 07:36
  • `At any time, the CPU informs the MMU whether the current code is "user code" or "kernel code"` Is this really how it works? I thought it could partition virtual memory, but the separation between kernelspace and userspace did not require the MMU, just a check to see if `CPL == 0`. – forest Jan 03 '18 at 02:48
1

First, the present implementation of "hardware-enforced isolation" looks insufficient and weak. Here are just a few "nasty" questions:

  1. Why does a process running in "privileged" mode have access to the data of all other processes?
  2. Should device drivers run in "privileged" mode or in "user" mode? Both choices lead to serious problems: drivers running in "privileged" mode have too much access to sensitive data, while drivers running in "user" mode must access devices, which means device access must be granted to applications.
  3. Do we really trust all device driver suppliers?
  4. All modern OSs originate from the times when what we now call "good programming practices" were plainly unknown. One can check the code of any open-source OS and find a lot of (sorry) mess there. Do we believe that commercial OSs are far better - just because we cannot see their source?
  5. Any modern OS is a really complex piece of software. The common way to improve the reliability of complex software is structuring, and this structuring must be supported by enforced "borders" between the various modules. There are no means to support/enforce such isolation in OS programming, so we rely only on the good intentions of the people who design, support, and code OSs.

This list can be continued.

I have recently searched for CPU designs that help solve these problems. I found none.

Master
  • 111
  • 1