8

Inspired by: Why don't OSes protect against untrusted USB keyboards?
Related: What can a hacker do when he has physical access to a system? (I address the points of its main answers below.)

There seems to be an old adage: "if the bad guy gets physical access, the computer is no longer yours". My question is: is that true just because hardware manufacturers suck at security, or is there an intrinsic reason it holds? Can you create systems that are secure against physical access?

Now, keeping your data secure is easy: encryption and/or hashing. What I'm talking about is keeping the device secure such that you can still trust it.

Really, it doesn't seem much different than software security. All you have to do is:

  • Require any hardware add-ons to be approved by the user before being trusted:
    • Use privilege separation. Just like the OS tells you what permissions an app needs when you download it, the computer could tell you what permissions a piece of hardware needs. That way, if a thumb drive requests keyboard access, you'll know something is up.
    • Establish public-private key encryption between the computer and its hardware peripherals. This would defeat key loggers and other such threats (see the sketch after this list).
    • Just like we have certificates for websites, you could have certificates for hardware (keyboards and monitors in particular).
    • As an extra layer, put all ports in the front, where the user can see them.
  • Make the system physically durable, so the attacker can't rewire/insert malicious components inside the system.
    • You could also make it shatter completely if it does break, so it's obvious something is up. Basically, make it tamper evident.
  • Glue it/lock it to the ground so the system can't be replaced.
    • Just as an extra layer, make the computer prove itself to the user's phone somehow. Perhaps the user's phone and computer have set up a system that can thwart man-in-the-middle attacks (for example, the phone tells the user the password, but signed with the computer's public key).
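
To make the pairing idea concrete, here is a minimal sketch (Python, using the `cryptography` package) of what signed keyboard input could look like. The report format and the names used here are invented for illustration; a real design would also need a secure pairing ceremony and replay protection.

```python
# Sketch only: the keyboard holds a private key enrolled with the OS at pairing
# time, signs each input report, and the OS drops anything that doesn't verify.
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# Pairing time: the keyboard generates a key pair, the OS stores the public half.
keyboard_key = ed25519.Ed25519PrivateKey.generate()
enrolled_public_key = keyboard_key.public_key()

def keyboard_send(report: bytes) -> tuple[bytes, bytes]:
    """What the (hypothetical) keyboard firmware does for each input report."""
    return report, keyboard_key.sign(report)

def os_receive(report: bytes, signature: bytes) -> bool:
    """What the OS input stack does: accept input only from the paired device."""
    try:
        enrolled_public_key.verify(signature, report)
        return True
    except InvalidSignature:
        return False  # e.g. a thumb drive pretending to be a keyboard

report, sig = keyboard_send(b"key_down: A")
assert os_receive(report, sig)  # paired keyboard: accepted

attacker_key = ed25519.Ed25519PrivateKey.generate()
forged_sig = attacker_key.sign(b"key_down: A")
assert not os_receive(b"key_down: A", forged_sig)  # unpaired device: rejected
```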

Are there any other reasons I'm missing, particularly ones that make physical security completely impossible, or do hardware manufacturers just suck at security? (Even if the above aren't completely foolproof, they seem like a step in the right direction.) A few more ideas:

  • To prevent spy cams, make the monitor a pair of VR goggles with an encrypted connection to the computer (again, resistant to man-in-the-middle attacks). (Okay, this is a little fanciful, but so are spy cams.)
  • Actually, one way to eliminate a lot of attack vectors is to make the room soundproof and X-ray proof (if you are worried about that stuff), and then only allow authorized persons in. You run into all the same problems as above, but now it's a lot simpler, and it's an extra layer.
  • Even just putting a lock on the keyboard can prevent physical key loggers.

I believe my question rules out the answers to Methods for protecting computer systems from physical attacks and What can a hacker do when he has physical access to a system?

Bonus Question: Are there any systems that are secure against physical access?

PyRulez

4 Answers

5

The fundamental answer is that the market doesn't really want to pay for general purpose computers that are secure against physical attacks. You propose making tradeoffs, and those tradeoffs involve engineering and operational costs which only a small number of buyers care about.

Some computer systems try to make these tradeoffs (for example, the Xbox) because there's a market reason (reducing game piracy). ATM makers and voting machine makers have generally chosen to put their machines in locked cabinets.

Of course, most ways of mitigating threats are subject to further attacks. Locks can be picked, glue can be scraped or dissolved, and so an attacker can perhaps modify the system even after all this work is done, which puts additional pressure on the economics.

Another solution is to go for tamper evidence over tamper resistance, and that's the approach that credit card terminal makers generally use. Of course, tamper evidence raises the question of "evident to whom, and how are they trained?" Tamper evidence is another point along the economic spectrum.

Adam Shostack
2

I think some of the things you propose are doable but conflict with expected usability, flexibility or freedom of use. Others take ideas from the software world and try to adapt them to hardware, but fail to address the problems we already have on the software side. But some of the ideas are already used in practice.

Just a few examples:

  • Seal the hardware, glue it to the table, destroy it when somebody tries to open it... Such self-destroying hardware exists, e.g. you find tamper resistance in smart cards and maybe also in computers used in special highest-security environments. But for general use it just hurts usability, because how do you know that the hardware was really tampered with in a malicious way and not just dropped accidentally on the floor? In the latter case most users would hope that they can still recover the data from the broken system, but this would not be possible if the system detects an attack and destroys the data.
  • Cryptographically secure pairing between hardware components with public-private keys, certificates, etc... Just look at the mess we have with the hundreds of certificate authorities inside the browsers, with compromised CAs, etc. It will not get better if we move this to hardware. In reality such pairing already exists, but mostly to enforce digital restrictions (DRM), like with HDCP.

In summary: you would gain something and lose something at the same time. Mostly you cannot gain more security without giving up parts of usability, flexibility or freedom of use. The aim is to find the best balance between these requirements, but this of course highly depends on the use case.

Steffen Ullrich
  • Is there any particular reason why this would be different for hardware vs. software? – PyRulez Feb 06 '16 at 15:51
  • @PyRulez: it's not different. With software you have the same problem of security vs. freedom/usability/flexibility. In fact you have this in real life too, i.e. speed limits, checks at the airport... – Steffen Ullrich Feb 06 '16 at 15:54
  • I mean, why does Hardware lean towards usability and Software lean more towards security, relatively? Is it because the user can often physically protect systems anyways, and so hardware security is redundant? – PyRulez Feb 06 '16 at 15:57
  • "Software lean more towards security..." - Are you real? I would not consider software leaning to the security side. Most software is on the flexibility side (one might say feature bloat) and only considers security when something goes really wrong. – Steffen Ullrich Feb 06 '16 at 15:58
  • Relatively, compared to hardware I mean. – PyRulez Feb 06 '16 at 15:59
  • The hardware does not really have to destroy itself. It just needs to destroy the data. If you trust your cryptography you just need to destroy a master key. And if you have a strong password that can be used to reconstruct the master key, this does not even cause any permanent loss of data for the legitimate user (see the sketch after these comments). But most users don't want security, which is why such a scheme is rarely seen implemented. – kasperd Feb 06 '16 at 16:01
  • @PyRulez: The general assumption with hardware is that the user is in control of it, and thus it provides only the security needed for this case. And the goal is not to affect flexibility too much. In cases where this assumption is not valid you will already find tamper-resistant hardware, i.e. smart cards, HDCP or similar. – Steffen Ullrich Feb 06 '16 at 16:02
  • Security and usability are not opposites. An unusable system cannot be secured. – Adam Shostack Feb 06 '16 at 16:56
  • @AdamShostack: My statement was not about usability alone but about freedom/flexibility/usability. And it said "mostly opposite", not always opposite. You might have a usable system which is secure but does not do all that you want. Adding flexibility and freedom often impacts security. Just look at an HTML-only browser vs. a browser with Javascript, a single-task system vs. a general purpose computer, DRM vs. watching a movie on any system you like. – Steffen Ullrich Feb 06 '16 at 17:11
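
To make kasperd's point concrete, here is a minimal sketch (Python, using the `cryptography` package; names and parameters are illustrative): the master key is derived from a strong password with scrypt, so the device can zeroize its in-memory copy on a tamper event and the legitimate owner can later re-derive the same key from the password.

```python
# Sketch only: derive the master key from a strong password, keep it in RAM,
# zeroize it on tamper detection, and re-derive it later from the password.
import os
from cryptography.hazmat.primitives.kdf.scrypt import Scrypt

def derive_master_key(password: bytes, salt: bytes) -> bytes:
    # scrypt cost parameters here are illustrative, not a recommendation.
    return Scrypt(salt=salt, length=32, n=2**14, r=8, p=1).derive(password)

salt = os.urandom(16)  # the salt can be stored in the clear on the device
master_key = derive_master_key(b"correct horse battery staple", salt)

# ... master_key encrypts everything stored on the device ...

master_key = None  # tamper detected: zeroize the only plaintext copy

# The legitimate owner, later, with the same password and salt:
recovered = derive_master_key(b"correct horse battery staple", salt)
# `recovered` equals the original master key, so no data is permanently lost.
```
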
2

It depends on what you consider a "secure computer". Security is typically defined by the CIA triad of confidentiality, integrity, and availability. Once you allow physical access to a device, you lose the ability to maintain availability. It's just too hard to make something indestructible against a motivated and funded adversary when they have physical control of it.

But we can consider confidentiality and integrity. For example, a device with strong tamper detection and resistance that destroys its data when tampering is detected could be said to be secure even when physical security is lost. That is not only achievable, but there are standards for such products, such as FIPS 140-2 Level 3:

FIPS 140-2 Level 3 adds requirements for physical tamper-resistance (making it difficult for attackers to gain access to sensitive information contained in the module) and identity-based authentication, and for a physical or logical separation between the interfaces by which "critical security parameters" enter and leave the module, and its other interfaces.

But meeting such criteria is expensive. You can find FIPS 140-2 Level 3 compliant storage media and cryptographic processors, but not entire computing systems. It just ends up being cheaper to lock the computer up in a room, put an alarm on it, and hire security than to mass produce computers that are secure against physical attacks.
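
To make this concrete, here is a minimal sketch (Python, using the `cryptography` package) of tamper response by key zeroization: keep the data encrypted under a master key that lives only in volatile memory, and destroy that key when the tamper sensor fires. The class and method names are invented, and real FIPS-validated modules do this with dedicated circuitry rather than application code.

```python
# Sketch only: destroying the in-memory master key renders the (still present)
# ciphertext useless to everyone, attacker and owner alike.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

class TamperRespondingStore:
    def __init__(self) -> None:
        self._key = AESGCM.generate_key(bit_length=256)  # lives only in RAM

    def seal(self, plaintext: bytes) -> bytes:
        nonce = os.urandom(12)
        return nonce + AESGCM(self._key).encrypt(nonce, plaintext, None)

    def unseal(self, blob: bytes) -> bytes:
        if self._key is None:
            raise RuntimeError("tamper event: key zeroized, data unrecoverable")
        return AESGCM(self._key).decrypt(blob[:12], blob[12:], None)

    def on_tamper_detected(self) -> None:
        # In real hardware this is done by dedicated tamper-response circuitry.
        self._key = None

store = TamperRespondingStore()
blob = store.seal(b"critical security parameters")
assert store.unseal(blob) == b"critical security parameters"
store.on_tamper_detected()
# store.unseal(blob) would now raise: the data is gone.
```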

Neil Smithline
-1

There are three problems.

  1. By definition, someone having physical access to a system already breaks security.
  2. Hardware is made to be compatible.
  3. Hardware has many more, and much more severe, attack vectors.

My question is: is that true just because hardware manufacturers suck at security, or is there an intrinsic reason it holds? Can you create systems that are secure against physical access?

By its very nature, physical access rarely means that an unauthorized subject merely has shared access to a system; it usually amounts to exclusive control.

So let's define physical access as having exclusive control over a system. The very definition of information security relies on the confidentiality, integrity and availability of data:

"Preservation of confidentiality, integrity and availability of information." (ISO/IEC 27000:2009)

If data is stored inside a physical system and this system is taken into the possession of someone not authorized, the availability, and thus the security, of the system is already lost. That is the de facto definition of information security. If someone controls a server exclusively, its information is no longer secure. This is just one aspect, though.

do hardware manufacturers just suck at security?

No, but most of them prioritize ease of use and quick development over including features which might not make the components themselves better.

Apart from the fact that hardware is designed to be compatible and to use open protocols, physical access allows for many more attack vectors, such as cold boot attacks, tapping input (hardware keyloggers) or output (side-channel attacks like Van Eck phreaking). While it is possible to defend against such attacks, the effort is out of all proportion to the benefit. Hardware-based attacks are not the usual way computers get compromised, and for high-security areas there are security policies covering physical access.

So even if completely impractical, is a secure physical system imaginable?

Yes, but it would be specialized and limited by its incompatibility. All its hardware must be driven by closed source protocols, and data must be encrypted at runtime in a way that makes it inaccessible even to people possessing the system physically. The system also must not have an HCI, because every interface that humans interact with is unsafe per se: every input not controlled by the system itself can be intercepted as it is entered, and there is no way to control which subjects are able to see, for example, a visual output. Needless to say, such a system would have to be completely inaccessible internally, which means indestructible, to prevent it from being disassembled and reverse engineered.

This also doesn't conflict with Kerckhoffs's principle: for a physical system to be 100% secure it must be impossible to reverse engineer anyway, and under that constraint Kerckhoffs's principle doesn't apply.

In the end, it would be a purely theoretical and limited system with zero usability.

AdHominem
  • "By definition, someone having physical access over a system already breaks security." - This seems like a poor definition. – PyRulez Feb 06 '16 at 15:53
  • "All its hardware must be driven by closed source protocols" - Why? Requiring it to be closed source seems to be a violation of [Kerckhoff's principle](https://en.wikipedia.org/wiki/Kerckhoffs%27s_principle). Open source protocols can be secure as well. – PyRulez Feb 06 '16 at 15:55
  • Also, being indestructible is not necessary to prevent it from being reverse engineered. If its open-hardware, you can't reverse engineer it by virtue of there being nothing left to reverse engineer. If it isn't open-hardware, it should still be secure in face of reverse engineering. – PyRulez Feb 06 '16 at 15:58
  • Updated the post. tl;dr: For a physically safe system, you need to make assumptions which lead to logical fallacies. You can approach physical security, though with exponentially increasing effort, and that is the ontological proof that such a system would be prohibitively impractical. – AdHominem Feb 06 '16 at 16:32