11

Today, PCs (laptops, desktops, etc.) generally work under what I'll call the "open PC" security model. Users have full, system administrator/developer-level access to their own machine. Users can install arbitrary software of their choice onto their PC. That software can do anything, or at least anything that the user can do. Software applications are not sandboxed; they can freely access all of the user's data and interact or tamper with all other applications (*). Essentially, every user is God on their own machine, and can grant that God status to any software application they choose to install.

You could view malware as one consequence of the "open PC" security model. If a user has the ability to install software of their choice, and if that software gets full access to their PC, then all an attacker needs to do is persuade the user to install a malicious piece of software, and the user is toast. Similarly, if applications aren't sandboxed, then all an attacker needs to do is compromise one of the user's applications, and then the user is toast (the attacker gains access to all of the user's data and can compromise all of the user's applications).

Currently, the "open PC" security model is deeply baked into the way that PCs work, and the way that PC operating systems work.

What other alternatives to the "open PC" model are there? If the industry wanted to move away from the "open PC" model over the next 5-10 years, what are some possible alternative paradigms that might be worth considering? What are their advantages and disadvantages?


For example, one competing paradigm is the "app" security model. In the "app" security model, users generally do not have full sysadmin/developer-level access to their own machine (unless they take some special step, which most users don't take). Users can install apps, but for most users, the selection of apps is limited to some list that is curated in at least some minimal sense (there may be ways to sideload apps from other sources, but most users mostly don't do that). Before installing an app, there's some way to get a feeling for how safe or risky that choice is (e.g., by perusing reviews, the permissions the app requests, or other information). Apps are sandboxed: one app cannot access all of the user's data or interfere with other apps.
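
To make the sandboxing idea concrete, here is a minimal sketch (in Python, with entirely made-up app IDs and permission names, not modeled on any real OS API) of the two ingredients this model combines: each app declares a set of permissions up front, and a broker confines every file access to the app's own private silo.

```python
# A minimal sketch of the "app" model's core mechanics (illustrative only,
# not any real OS API): each app declares its permissions up front, and a
# broker confines file access to the app's own private data silo.
from pathlib import Path

class AppSandbox:
    def __init__(self, app_id, granted_permissions, data_root="/home/user/appdata"):
        self.app_id = app_id
        self.granted = set(granted_permissions)          # e.g. {"network", "camera"}
        self.silo = (Path(data_root) / app_id).resolve() # this app's private directory

    def check_permission(self, permission):
        # Default deny: anything not explicitly granted is refused.
        if permission not in self.granted:
            raise PermissionError(f"{self.app_id} was not granted '{permission}'")

    def open_private_file(self, relative_name, mode="r"):
        # Refuse any path that escapes the app's silo (e.g. "../other.app/...").
        target = (self.silo / relative_name).resolve()
        if self.silo != target and self.silo not in target.parents:
            raise PermissionError("path escapes the app's private directory")
        return open(target, mode)

# A mail app granted only network access cannot use the camera, and cannot
# name files belonging to another app:
mail = AppSandbox("com.example.mail", {"network"})
mail.check_permission("network")                         # fine
# mail.check_permission("camera")                        # raises PermissionError
# mail.open_private_file("../com.example.bank/tokens")   # raises PermissionError
```

The point is that the default is "no access": anything not granted, or outside the app's own directory, is refused, which is exactly the property the "open PC" model lacks.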

The app model is arguably more resilient to malware: it makes it harder for an attacker to persuade users to install malicious software, and it limits the damage that a malicious or compromised app can do.

So, we could think of the "app" security model as one alternative to the "open PC" security model. Much of the mobile world has moved to an "app" security model, and we've even started to see some movement in this direction in the desktop space (e.g., Windows 8).

Another possible alternative might be the "appliance" model, where your PC is no longer a general-purpose computer and users no longer have full God power over their PC. Instead, system administration is outsourced to someone else (possibly your employer's sysadmin, or some other third-party company that does system administration). Some basic software applications might come pre-installed (e.g., a web browser, some office/productivity software), and you might not be able to install anything else, or you might be limited in what software applications you can install (e.g., you can only install applications that are on some whitelist of permitted applications). I'm calling this the "appliance" model, but other reasonable names might be the "whitelisting" or "outsourced system administration" model. This model might not be right for everyone, but you could imagine it might be suitable for some fraction of users.

Are there other, radically different security models that are worth considering? If we could completely change the security paradigm underlying computers and operating systems and computer architecture (starting over from scratch, if need be), are there other paradigms/security models that might enable significant benefits to security?


(*) Footnote: OK, I know I'm simplifying my description of the "open PC" model a little bit. I realize that modern desktop operating systems do draw some distinction between the user account and Administrator/root. However, in some sense, this is a detail. For instance, the user/root separation does not provide any isolation between applications. Most of the software we run runs at the user level, so in desktop OS's, any user application can still interfere with any other user application.

D.W.
  • If admin powers are outsourced, malware will attack the new admin. – Deer Hunter Jul 09 '13 at 04:53
  • The new threat model seems to be application writers planting backdoors in their code or stealing user data through the network. I am afraid that outsourcing system administration can do nothing to prevent this kind of attack (with enough money, outsourced admins/walled garden curators will be bribed). – Deer Hunter Jul 09 '13 at 05:41
  • The other model is the jailed model, common on cell phones. – Gilles 'SO- stop being evil' Jul 09 '13 at 15:28
  • I think what you're calling the "appliance" model is really conflating two different models: "whitelisting" like in GPO-controlled networks, and "appliance" like some specialized (or minimized) platform such as those provided by vendors of some network products. I think the two have different characteristics... – AviD Jul 09 '13 at 16:52
  • How about the "no actual OS" model: think 1985 Nintendo. – tylerl Jul 09 '13 at 17:37
  • How does phone phreaking prior to the advent of the PC fit into your assumptions? Systems from padlocks to nuclear centrifuges can be hacked based on their weak points. Claiming usability is a core weakness may be missing the true weakness. – zedman9991 Jul 10 '13 at 12:08
  • https://en.wikipedia.org/wiki/Qubes_OS Look at Joanna's xen jails for that task. – trankvilezator Sep 22 '13 at 20:01

4 Answers

6

To some extent, all the Web/Cloud hype is about a new model (or, maybe, an old model with a new layer of paint). With "apps", applications are quite contained and isolated from each other. With the "apps" model as employed on iOS/Android systems, a further twist is applied in that only "allowed" apps can be installed. The user can still choose which apps to install, but only within the list of apps maintained (and signed) on the dedicated Store.

The "appliance" model is one step further: the user no longer chooses which applications are installed on his hardware. Or maybe the difference between this model and the "signed app" model is the size of the list of "allowed apps". The boundaries between these security models are a bit fuzzy. In a Store with more than 100000 apps, it would be overly optimistic to believe that all of them are benign.

Arguably, with the "appliance" model, the user's computer is no longer his computer. Then, let's go to the end of this logic: if that's not his computer, why should he be able to touch it? Let's put it somewhere else, reachable by network only. What remains in the physical presence of the user is just a display device, but the actual code runs on a remote server. That's the Cloud model (or, simply, the Web model; the difference between a Cloud and a Web site is at best quantitative, not qualitative).

So one could argue that there is only one model and the differences are only of a quantitative nature. That would be a rather extreme point of view.


A rather long time ago (about 20 years), I frequently heard the ravings of a student at the same school as me; in his view, the future was provable security. Applications ought to describe what they do in a model sufficiently precise to capture interesting properties, but also such that the operating system would be able to "prove" that these properties will be effectively enforced. Partial incarnations of these ideas are used in practice, for instance in the list of "permissions" of Android apps, and also (for the same thing at a different level) in the Java bytecode (bytecode runs within the constraints of the Java type model, and the VM can make sure of it with static analysis and only minute runtime checks).
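
As a toy illustration of that "declare, then verify" idea (nothing like a real verifier, and with a made-up capability table), one can imagine a static pass that rejects an app whose code imports modules its declared manifest doesn't cover:

```python
import ast

# Which declared capability each sensitive top-level module implies
# (the table and the capability names are made up for the example).
CAPABILITY_OF_MODULE = {
    "socket": "network",
    "urllib": "network",
    "subprocess": "spawn_processes",
}

def verify(source_code, declared_capabilities):
    """Accept the app only if every sensitive import is covered by its declaration."""
    tree = ast.parse(source_code)
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            modules = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom):
            modules = [node.module or ""]
        else:
            continue
        for module in modules:
            needed = CAPABILITY_OF_MODULE.get(module.split(".")[0])
            if needed and needed not in declared_capabilities:
                return False    # the code does more than it declared
    return True

app_source = "import socket\nsocket.create_connection(('example.org', 80))\n"
print(verify(app_source, {"network"}))   # True: the declaration covers the import
print(verify(app_source, set()))         # False: undeclared network capability
```

A real verifier would have to deal with dynamic imports, reflection and so on; the sketch only shows the shape of the idea: the declaration is machine-checkable before the code ever runs.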

Then, what we need would not really be a qualitatively new "model" (as I tried to explain above, all the models are actually variants on a single continuum), but rather a richer framework for describing what security characteristics we want to enforce (as usual, the tricky point is defining what we want).

This framework should also be understandable by whoever is responsible for the administration of the machine, i.e. the end user. Thus this looks like a Holy Grail quest.

Thomas Pornin
  • I disagree that these models are on the same continuum, with only quantitative differences. What @DW is calling the "appliance" model is qualitatively different from the "apps" model - in the first, the list of programs is strictly controlled (apparently by a 3rd party), whereas in "apps" the list of program *privileges* is strictly controlled. Saying these are on the same scale, is like saying that DAC and MAC are on the same scale. Yes, there are some similarities, and it might be possible to implement some subset of one with the other, but there are more than just quantitative differences. – AviD Jul 09 '13 at 16:49
3

The greatest problems of today's desktop PC environment:

  • allowing arbitrary applications access to all the data that a user has;
  • giving applications unlimited access to the Internet and to sensitive hardware (the microphone, camera, GPS chip, Bluetooth, etc.);
  • proliferation of idiot users;
  • emergence of a business culture where stealing and selling users' personal data is the norm.

I strongly object to curated walled gardens, since their raison d'être is commercial, and it's not that far-fetched to see their profit trumping my security. The same goes for cloud services and sealed appliances: I buy the computing power, and I feel free to buy, install and use whatever I choose, without fear of having my data held hostage to someone else's commercial interests.

I'm afraid there's no silver bullet to wash away users' ineptitude and the industry's greed, only some evolutionary steps to make application security usable and continued pressure on the industry (a futile endeavor, I know).


Some solutions:

  • hardware virtualization (VT-x) and sandboxing;
  • establishing fine-grained sets of permissions that separate various roles:

    • system administration hat/role (as in OS tweaking) should be separate from the hat the user puts on when installing (and trying out) applications;
    • my banking browser should be separate from the browser I use for e-mail (a minimal sketch of this separation appears after this list);
    • e.g. videoconferencing should be isolated from my office applications (please remember that there are workflows where there is supposed to be 'seamless integration' - there were days when this kind of interoperability was the selling point of many software vendors);
  • moving as many security functions as possible into the hardware (yeah, microcode can be updated, but implementing access control in software is vulnerable and costly).
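
As promised above, here is a minimal sketch of the "separate hats/browsers" item: every role gets its own browser profile directory, so the banking compartment never shares cookies, cache or extensions with the e-mail compartment. The browser name and its `--user-data-dir` flag are assumptions (they match Chromium-style browsers); the directory layout is made up for the example.

```python
# Illustrative sketch: one browser profile per role, so state (cookies, cache,
# extensions) is never shared across compartments. The browser name and the
# --user-data-dir flag are assumptions (Chromium-style); adapt as needed.
import subprocess
from pathlib import Path

COMPARTMENTS = {
    "banking":  Path.home() / "compartments" / "banking",
    "email":    Path.home() / "compartments" / "email",
    "browsing": Path.home() / "compartments" / "browsing",
}

def launch_browser(role, url, browser="chromium"):
    profile = COMPARTMENTS[role]
    profile.mkdir(parents=True, exist_ok=True)
    # Application-level separation only; running each compartment in its own
    # VM or container (hardware virtualization, as listed above) is stronger.
    return subprocess.Popen([browser, f"--user-data-dir={profile}", url])

# launch_browser("banking", "https://bank.example")
# launch_browser("email", "https://mail.example")
```

Qubes OS (mentioned in the comments above) takes this to its logical conclusion, giving each role a full virtual machine.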

This is not reinventing the wheel; all of these are tried recipes that will eventually percolate down into the wide world of consumer desktop OS/hardware.

...unless idiocy and greed happen to win the war against prudence and wisdom.

Deer Hunter
0

On your personal PC, you are "god" but this is not usually the case on a corporate PC. Your sys-admin may force all your web traffic to go through the corporate web proxy, and you have no ability to override this (short of hacking). Linux and Windows (and Mac) have pretty good multi-user security, which they have effectively inherited from multi-user mainframes.

So I think the "appliance" model you mention, which is more commonly called a "managed system" already exists and is widely used. But perhaps it could be more widely used. For example, my Dad owns his own PC, but is not technical. He currently uses it in "god" mode, but "managed system" would be more appropriate for him. The question is: who would be his sysadmin? I'm not volunteering :-)

The big security weakness of PCs is exactly what you point out - that apps can do everything you do. And I think you're right, we do need to consider a radically different security model, where apps are sandboxed. But this model has a precedent: it's the "app" model that we see on iOS and Android. This is an excellent technical innovation, and I feel the teams who created this should get more credit.

By the way, it's important to separate the idea of apps being sandboxed from the idea of apps being controlled (as in the iOS App Store). Both help with security, but controlling apps creates all sorts of social issues. Sandboxing, on the other hand, is not particularly controversial.

So, the next step is sandboxing on the desktop. We are seeing baby steps towards this, e.g. Chrome runs in a sandbox. However, it is a hard problem. Desktop apps need to do more than mobile apps. On mobile, each app tends to have its own data in a silo. On the desktop, multiple apps work together on a shared file system. And of course on the desktop, power users want to do things like scripting.
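
One way to reconcile sandboxing with a shared file system is a user-mediated broker (roughly the "powerbox" or file-portal idea): the app never browses the file system itself; it asks a trusted broker, the broker shows the user a file picker, and only the file the user actually chose is handed back. A toy model follows, with every name made up and nothing resembling a real desktop API:

```python
# Toy model of user-mediated file access for sandboxed desktop apps.
# Nothing here is a real desktop API; it only shows the control flow.
import tempfile
from pathlib import Path

class FileBroker:
    def __init__(self, ask_user):
        self.ask_user = ask_user        # ask_user(app_id, purpose) -> Path or None

    def request_file(self, app_id, purpose):
        chosen = self.ask_user(app_id, purpose)
        if chosen is None:
            raise PermissionError(f"user declined to give {app_id} a file")
        return open(chosen, "rb")       # the app gets this one handle, nothing more

# Stand-in for a real GUI file picker: create a file and pretend the user chose it.
def fake_picker(app_id, purpose):
    print(f"{app_id} wants a file to {purpose}")
    tmp = tempfile.NamedTemporaryFile(delete=False, suffix=".txt")
    tmp.write(b"pretend this is the document the user picked\n")
    tmp.close()
    return Path(tmp.name)

broker = FileBroker(fake_picker)
with broker.request_file("com.example.editor", "open a document") as f:
    print(f.read())
```

The user keeps the familiar "any app can open any of my files" experience, but only through an explicit choice each time, which is much closer to the mobile sandbox than to the open PC model.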

I don't think any radical new security model is needed - but the process of implementing sandboxing on the desktop will take many years.

paj28
-1

This "question" is biased because there may not be a definitive answer. The term "Open PC" is equivocal: There's no "Open PC" model within the area of computer architecture/security models.

Users have full, system administrator/developer-level access to their own machine

From an IT security and system administration perspective, giving full administrative rights to a user is not acceptable. A "best practice" is to audit the different roles within the organisation and map their levels of permissions/access accordingly. The principle of least privilege must apply when it comes to user permissions. Such policies need to be enforced at the system level and at the network level. Roles and permissions tied to systems & applications must be reviewed on a regular basis. A software & hardware inventory must be maintained.
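
As a minimal illustration of that role-to-permission mapping (role and permission names are invented for the example), an RBAC-style check grants an action only if one of the user's audited roles includes it:

```python
# Illustrative-only sketch of mapping roles to least-privilege permission
# sets and checking every action against them.
ROLE_PERMISSIONS = {
    "accountant": {"read:ledgers", "write:ledgers"},
    "developer":  {"read:source", "write:source", "deploy:staging"},
    "helpdesk":   {"read:tickets", "reset:passwords"},
}

USER_ROLES = {
    "alice": {"accountant"},
    "bob":   {"developer", "helpdesk"},
}

def is_allowed(user, permission):
    """Least privilege: allowed only if one of the user's roles grants it."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(is_allowed("alice", "write:ledgers"))    # True
print(is_allowed("alice", "deploy:staging"))   # False: not in her role
print(is_allowed("bob", "reset:passwords"))    # True, via the helpdesk role
```

Regular review then amounts to auditing the role-to-permission and user-to-role tables, as described above.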

The topic is broad; different aspects might be discussed:

  • Access Control model(s) to implement (MAC, DAC, RBAC, etc...)
  • Platform type (PC, dumb terminal, appliance, server...)
  • OS architectures & limitations (Permission management, sandboxing mechanisms, trust model for software...)
  • Software distribution (in-house, SaaS, Classic vendor channel, "app-store", distributed repositories...)
  • Information System management (Policies, technical controls, audit, monitoring...)

In order to define what should be an alternative to the so-called "open PC" security model, the former must be defined: the mentioned "Open PC" security model appears to be a use case where security of the platform is completely unmanaged. This use case cannot be defined as a "model".

Platform types fall into the following categories:

  • A Personal computer (PC) is intended to be operated by its end-user. Strictly speaking, a PC is a general-purpose computer for home use.
  • A workstation is a computer in the corporate/professional environment (single-user or multidesk).
  • A terminal (usually dumb) is a purpose-built system that exposes a constrained user interface to perform specific tasks only (Cash register, ATM, Thin client...).
  • An appliance is a "turn-key" system made of tailored hardware and/or software. It is not intended for end-users. Usually an appliance is used in a networked environment for a specific purpose (security, storage, ...)
  • A server is a system that responds to client requests in a networking environment (client/server model is also used as a communication framework between components, applications and processes). Clients can either be PCs, Workstations, terminals, appliances, application processes, other servers...

Security models are abstract models intended to help in developing security policies, systems & software, as well as in managing information flow within an organisation. Security and access control models are mainly high-level. They are well defined within the official infosec literature.

Trying to define a new security paradigm as an alternative to a particular use case is a dead end. Nevertheless, the question is likely more about software provisioning and access control implementation within computer systems.

The software provisioning model depends on:

  1. Computer system category/platform type
  2. Operating system features
  3. Environment (single-user, multidesk, distributed, networked...)
  4. Vendor business model

The access control model implementation depends on:

  1. Risk acceptance regarding the data to be protected. A cost/benefit analysis must be performed to keep the balance between security and the cost of security. Note that cost might be money, time, expertise...
  2. Environment (single-user, multidesk, distributed, networked...)
  3. Operating systems features available to enforce the security controls
  4. Governance model (user-driven, role-based, organisation-policy-based, outsourced...)
g0lem