16

My question regards whether or not the mitigations I use are appropriate for my threat model. Please don't jump to conclusions and say "you need to use locks" or "you can't leave your computer unattended" without at least reading my threat model first. I'm not defending against a janitor who was bribed $200 to nab a hard disk from a cheap 1U server. Additionally, exploitation of most network-facing software such as browsers is out of scope; I have sufficient protections on the software level for that to be a non-issue (strong privilege separation, with custom seccomp filters in place to reduce kernel attack surface). If the answer turns out to be "there is absolutely no solution which does not involve custom-designed hardware", I will be disappointed, but I will accept it.

I have a workstation computer which I must leave on 24/7, so it is unattended for many hours a day. It also has a high computational demand, so I cannot replace its components with significantly lower-power versions (e.g. a self-contained Intel Edison is surely more resistant to memory acquisition, but it is far too weak for my purposes). Most people who look into physical server or workstation security assume an attacker with very brief or intermittent access, where physical locks could keep them out. Unfortunately, in my situation that is almost completely useless, though I do lock my doors, of course. Recently I've been thinking of some more paranoid solutions, and I'd like some advice to make sure they cover my threat model correctly, so I can be sure that I am not putting too much effort into an area I do not need to worry about, or ignoring an area which I have left wide open. Yes, I am aware of risk assessment, and this is hypothetical for the most part. I would not be putting myself in a risky situation unless I were already familiar enough with the subject that I would not need to ask here.

Threat model

My adversary is capable of:

  • Bypassing all physical deterrence measures, given no hard limit on time.
  • Denying me access to my own hardware immediately at the onset of the attack.
  • Transporting my hardware to a remote facility for extended analysis, without it losing power.
  • Possessing state-of-the-art forensic hardware such as bus analyzers and high-grade freezing spray.
  • Potentially accessing trade secret design documents or datasheets for hardware components I may use in order to look for bugs which may be used to gain access (who knows how secure that ASPEED chip is?). I do not know how likely this is.
  • Observing my public and online behavior for extended periods to customize their attack.

However their limitations are:

  • I will always be aware when they attack, so they have one shot. As a result, I do not need tamper evidence.
  • They cannot force me to provide them access (no $5 wrenches allowed).
  • If the computer is powered on but locked, they cannot guess the password. If the computer is powered down, they cannot break the encryption key.
  • They are not quite at the level of the NSA, so attacks which are not yet practical, such as power analysis attacks or hardware backdoors, are also not allowed.

Their goals are to:

  • Obtain a partial or complete but forensically useful memory image of the running system.
  • Obtain the full plaintext contents of all attached storage devices.

My goals are ANY of the following:

  • Harden the system such that physical access is not sufficient for my adversary to accomplish their goal.
  • Have the system shut itself down within several seconds of unauthorized physical access, resulting in the physical memory vanishing.
  • Have the system introduce massive corruption to memory upon unauthorized physical access, making any subsequent memory dump forensically useless.

Examples of possible methods of memory acquisition, should physical access be attained without the system noticing and shutting itself off:

  • Sniffing exposed buses such as PCI, QPI, etc.
  • Exploiting the exposed GPU hardware to gain DMA over PCI (e.g. resetting the GPU processor and then using JTAG?).
  • Getting the JTAG SDK from Intel and then directly hijacking my motherboard (so far, I cannot think of any solid mitigations for this other than de-soldering it, but I will try to find some).
  • Exploiting peripherals which I have not confined and which I do not know are at risk.
  • Somehow hard restarting the CPU such that the debug registers are not cleared, and reading them (to steal TRESOR keys). I believe the standard states that in all resets, an Intel CPU should clear debug registers, but there may be some exceptions which I do not know about.

In other words, they are a state-level adversary, but not quite at the level of the NSA. I have a few mitigations in place. If you don't want to read the following wall of text, here is a tl;dr with potentially inaccurate simplifications:

  • I am protected from DMA attacks from most compromised PCI devices.
  • My storage encryption key is protected from cold boot attacks.
  • Certain high-risk processes have their memory partially encrypted, with the key located outside of RAM as well.
  • The entire memory is lightly scrambled, although it is probably easy to break (Edit: yup, the scrambling uses an LFSR, which is broken).
  • The system will power down if the chassis is opened.
  • If I am removed from the system while it is unlocked, it will shut down.
  • The memory will wipe itself if it is hit with freezing spray.
  • If the system is shut down improperly in an emergency, the encryption key will become harder to crack.
  • The hard drives can in theory be modified to detect hardware write-blockers and wipe themselves when one is used on them.
  • Live BIOS modification will be detected and defeated.
  • The computer watches itself with a camera, and shuts off if it detects motion.

These are the mitigations in more detail, along with what they are supposed to mitigate:

DMA protection with VFIO

Because the attacker will only get one shot, I don't have to worry about them taking out some PCI device and replacing it with a malicious one which would mount a DMA attack. However, they may be able to exploit an existing and trusted PCI device. Because of this, I've confined most of the sensitive PCI devices using VFIO. Essentially, I've bound an IOMMU group containing untrusted PCI devices to a very small live system in QEMU, and had QEMU forward all communication to the host. In the case that one of those PCI devices is compromised, it will only be able to see the 32 MiB which has been allocated to QEMU.

So far, all USB controllers are isolated this way. The network goes through USB as well, instead of Ethernet, so going through the Intel Management Engine is mostly avoided. The LPC's DMA ability is disabled too, though on many motherboards, its ability to become bus master is disabled in hardware. Other PCI devices are simply disabled as well. SATA controllers and the GPU are not yet protected, though it's possible in theory and I'm working on it. While the GPU is pretty much safe (it's only exposed through /dev/dri/*, unless EDID headers and such are parsed by the GPU's own hardware at all), the SATA controllers really should be protected, considering they are so complex and NCQ does support client->host DMA, if the host allows it.

If many types of peripherals are inserted at runtime (excluding some harmless ones like PS/2 and serial ports), a custom kernel patch triggers a kernel panic, and a pseudo-hardware (BIOS) watchdog shuts the system down shortly after.
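
For reference, the rebinding step looks roughly like the sketch below. This is an illustration rather than my actual scripts: the PCI address and vendor/device ID pair are placeholders, and the real setup does this for every device in the IOMMU group before handing the group to the small QEMU stub.

```python
#!/usr/bin/env python3
# Rough sketch of rebinding one untrusted PCI device to vfio-pci via sysfs.
# The address and ID pair below are placeholders for illustration only.
import pathlib

DEV = "0000:00:14.0"   # hypothetical USB controller
IDS = "8086 8c31"      # hypothetical vendor/device ID pair

dev = pathlib.Path("/sys/bus/pci/devices") / DEV

# Detach the device from whatever host driver currently owns it.
if (dev / "driver").exists():
    (dev / "driver" / "unbind").write_text(DEV)

# Tell vfio-pci to claim devices with this ID.
pathlib.Path("/sys/bus/pci/drivers/vfio-pci/new_id").write_text(IDS)
```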

TRESOR

Disk encryption keys are stored in the x86 debug registers using a Linux kernel patch called TRESOR. This ensures that the key itself never hits RAM, which completely mitigates cold boot attacks and passive DMA attacks. Access to the debug registers is disabled with this patch to complete the protection. The downside is that a hard reset, such as one triggered by a triple fault, may preserve the debug registers such that the operating system being booted into can access them. And of course, ring 0 can access them as well. Unfortunately, only the encryption keys are protected; unencrypted process data, kernel data, filesystem cache, etc. still exist, so TRESOR is far from a complete solution. I suppose I could create /dev/ram0, encrypt it with TRESOR, then format it with a filesystem that supports DAX (direct access, a filesystem feature which completely bypasses the page cache), but that would not be a complete solution either.

RamCrypt

A modified version of TRESOR was created recently called RamCrypt, which encrypts most of a target process' memory, leaving by default only 4 pages unencrypted. While 4 pages is only 16 KiB of unencrypted memory on most hardware, which is quite good, pages which are marked VM_SHARED, VM_IO, or VM_PFNMAP are not encrypted either. This means that information which may be forensically useful can still remain unencrypted. Additionally, RamCrypt only encrypts individual processes, not their metadata, their task_struct in the kernel, or anything else like that. So while Firefox may be mostly encrypted, the slabs in the kernel dealing with TCP may still give away which websites have been viewed, especially since the networking-related slabs are the ones whose destruction is deferred to RCU, so they linger around the longest. If that weren't bad enough, RamCrypt also suffers from a severe performance impact in its default and most secure configuration.

Memory scrambling

Modern DDR3 and DDR4 memory controllers support a feature called memory scrambling, which is designed to reduce excessive di/dt on adjacent lines in memory (in other words, it prevents successive 1s or 0s from causing electromagnetic interference in the memory bus). The scrambling seed is re-initialized at every boot, probably by UEFI. It is strong enough that the reverse engineer Igor Skochinsky apparently could not trivially crack it, but I don't know if he even tried. Memory scrambling may mitigate simple cold boot attacks, but the seed is likely not cryptographically secure, especially considering the goal is only to increase the distribution of 1s and 0s. If memory serves correctly, a quick read through part of the Coreboot source code made it seem like it may be only 32 bits anyway. It looks like there are no full memory encryption solutions on the market, sadly. PrivateCore claims to have VPSes which fully encrypt memory (their vCage product line), and Xbox supposedly encrypts its memory to frustrate RE, but that's about all.

Edit: Just as I thought, the scrambling is not cryptographically secure. It does seem like the steps for recovering the seed are rather complex, especially due to interleaving, which increases the amount of lost data and may provide a small amount of protection. And there is no analysis yet on DDR4 memory, which may use a stronger seed. Hopefully, in the future, Intel will use a very fast cipher such as Simon in their MCH for memory scrambling.

On-line chassis intrusion detection

My BIOS and hardware have chassis intrusion detection built in. While I have not implemented this yet, I believe it may be possible to poll /dev/nvram once every 0.5 seconds or so, parse it for whatever value stores the chassis intrusion count, and shut down the system immediately upon detection of an intrusion event. If it's not possible for the operating system to obtain that information, then I might have to actually modify the hardware and have it use GPIO or something, but I'm not so familiar with that.
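
If the intrusion count really is readable that way, the polling loop itself is trivial. A sketch of the idea follows; the byte offset is a made-up placeholder, since the actual NVRAM layout is board-specific and would have to be worked out first.

```python
#!/usr/bin/env python3
# Sketch of the /dev/nvram polling idea. INTRUSION_OFFSET is a placeholder;
# the real location of the chassis intrusion count is board-specific.
import os
import time

INTRUSION_OFFSET = 0x20   # hypothetical byte holding the intrusion count

def intrusion_count() -> int:
    with open("/dev/nvram", "rb") as f:
        f.seek(INTRUSION_OFFSET)
        return f.read(1)[0]

baseline = intrusion_count()
while True:
    if intrusion_count() != baseline:
        os.system("poweroff -f")   # unclean shutdown, by design
    time.sleep(0.5)
```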

Wrist strap

Tinfoil time! I plan to make a wrist strap connected to a device on my desk which can be pulled out with only a small amount of force. In the case that I am forcibly removed from that area, the strap would be yanked out and the system would shut itself down. While this seems like overkill, it would allow me to be almost completely safe during the most sensitive times: my computer unlocked with a root prompt sitting in front of me, just waiting for someone to insmod ./crashdev.ko and read all the physical memory from /dev/crash. During all other times when the wrist strap is not in use, the system would be locked using vlock, which is designed in a way that makes it almost impossible for it to have bugs. If the vlock program crashes, you are simply locked out of your computer, unlike most graphical lock screens, where a crashed lock process gives you your session back.
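
On the software side, this could be as simple as polling for whatever dongle the strap plugs into and powering off the moment it disappears from the bus. A sketch, where the device path is a placeholder for an unspecified USB or serial dongle:

```python
#!/usr/bin/env python3
# Dead-man's-switch sketch for the wrist strap: if the dongle it plugs into
# vanishes from the bus, power off immediately. The path is a placeholder.
import os
import time

STRAP = "/dev/ttyUSB0"   # hypothetical device node for the strap's dongle

while True:
    if not os.path.exists(STRAP):
        os.system("poweroff -f")
    time.sleep(0.2)
```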

RAM temperature polling

As far as I am aware, cold boot attacks can be conducted in two ways: 1) A system can be reset, and made to boot into a live system which extracts memory contents, or 2) memory modules are cooled to a low temperature, removed (and optionally cooled further), then inserted into a different motherboard or bus analyzer to be refreshed and directly read. The former can be partially mitigated with a BIOS password, but that can be fairly easily defeated by removing the CMOS battery, or just shorting the right pins. The latter may be defeated by repeatedly polling the DMI table for memory module temperature, and wiping memory then shutting down if a sudden, inexplicable drop in temperature is experienced. I currently do this with a simple C program. In the future, I may have it directly wipe the key from memory by calling crypt_wipe_key() from a kernel module, and issuing HLT.
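
For illustration, the logic of that program boils down to something like the following sketch. The hwmon sensor path and the drop threshold are placeholders, and my actual program is written in C and polls the DMI table instead.

```python
#!/usr/bin/env python3
# Sketch of the sudden-temperature-drop check. The sensor path and the 15 C
# threshold are placeholders; the real program polls the DMI table instead.
import os
import time

SENSOR = "/sys/class/hwmon/hwmon2/temp1_input"   # hypothetical DIMM sensor
MAX_DROP_MILLIC = 15000                          # >15 C drop between polls

def read_temp() -> int:
    with open(SENSOR) as f:
        return int(f.read().strip())

last = read_temp()
while True:
    now = read_temp()
    if last - now > MAX_DROP_MILLIC:
        # Eventually this should call crypt_wipe_key() from a kernel module
        # and issue HLT; for now, a hard power-off.
        os.system("poweroff -f")
    last = now
    time.sleep(0.5)
```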

Hardening from improper shutdowns

The system should never shut down improperly, unless an attack is occurring. I can take advantage of this with two LUKS keyslots, both with the same password. The first keyslot takes 5 seconds of PBKDF2 time to process, and the second takes an obscenely long time (e.g. 72 hours). When the system boots, an init script copies the first keyslot to tmpfs and wipes it. When the system shuts down properly, it writes the keyslot back to the LUKS header. If the system is ever shut down in an emergency, that keyslot is lost for good, and the only one remaining is the one which takes an obscenely long time to hash. The worst case scenario is I accidentally type poweroff -f or something, and I have to wait 72 hours before I know if I made a typo in my password. The best case scenario is my adversary will be almost completely unable to attack the system, because any time it is on, the physical disk will be encrypted with a key that can be guessed at a rate of one try every few days.

On a side note, I might also be able to make use of the ephemeral nature of /dev/nvram, assuming it is true NVRAM (which should be the case if /dev/nvram has a size of 144 bytes) and not CMOS EEPROM or some sort of emulated NVRAM. Much of its memory is not utilized, so it could be (ab)used as a sort of poor man's SED, instead of relying on the poorly designed SED inside the closed source firmware of "enterprise" nearline SATA drives.
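
One way to approximate the keyslot juggling described above with stock cryptsetup is to back up the whole LUKS header into tmpfs and kill the fast slot at boot, then restore the header only on a clean shutdown. The following is a sketch of that idea, not my actual init script; the device, slot number, and backup path are placeholders.

```python
#!/usr/bin/env python3
# Sketch of the boot/shutdown halves of the two-keyslot scheme using stock
# cryptsetup. DEVICE, FAST_SLOT, and BACKUP are placeholders.
import subprocess
import sys

DEVICE = "/dev/sda2"              # hypothetical LUKS device
FAST_SLOT = "0"                   # slot with the 5-second PBKDF2 cost
BACKUP = "/run/luks-header.img"   # on tmpfs, so it is lost on power loss

def boot():
    # Save the full header (fast slot included) to tmpfs, then destroy the
    # fast slot on disk so only the 72-hour slot survives an unclean shutdown.
    subprocess.run(["cryptsetup", "luksHeaderBackup", DEVICE,
                    "--header-backup-file", BACKUP], check=True)
    subprocess.run(["cryptsetup", "-q", "luksKillSlot", DEVICE, FAST_SLOT],
                   check=True)

def clean_shutdown():
    # Put the fast slot back only when shutting down properly.
    subprocess.run(["cryptsetup", "-q", "luksHeaderRestore", DEVICE,
                    "--header-backup-file", BACKUP], check=True)

if __name__ == "__main__":
    boot() if sys.argv[1:] == ["boot"] else clean_shutdown()
```

The usual caveat applies: anything like this should be tested on a throwaway volume first, since a mistake locks you into the 72-hour slot.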

Defeating hardware write-blockers

One common way to obtain a forensically-sound disk image is to use a hardware write-blocker, which is a small device that attaches a hard drive to a computer and drops all writes going from the computer to the drive. Normally, there would be no way to prevent this. However hard drives contain multiple powerful CPUs, and most of the boards they are on support JTAG, which is a method to control a CPU like a puppet. This means that a small device could be put inside a hard drive and attached to the JTAG interface, injecting code into the hard drive's memory to change its behavior. Injecting into memory this way would be preferable to writing to the hard drive's persistent firmware because that would require closed source SDKs which I do not have access to. The behavior could be modified so that the drive could initiate ATA Security Erase if a certain threshold of sectors are read in a row, which would indicate a hardware write blocker. Or alternatively, the drive could initiate erasure if a certain combination of sectors aren't directly read from (a sort of analogue to port knocking... sector knocking?). This is a bit tinfoil hat, but would make an interesting project to harden hard drives from forensic analysis. This isn't a new idea and people have done interesting things with hard drives over JTAG.

Continuously scanning the BIOS for tampering

Cold boot attacks are becoming impractical, especially when many other mitigations are in place. However, modifying the BIOS on a running system and then resetting the system into the new BIOS can have interesting consequences. In the case just linked, the BIOS was modified directly over SPI, then the system was warm reset over LPC into the new BIOS, which promptly began to export the entire contents of memory slowly over serial to the investigator's computer. A mitigation to this would be to have the OS scan the BIOS in a continuous loop and verify that it has not been modified since the last read. As writing to the BIOS is much slower than reading from it, this will likely detect any tampering as it is occurring. The computer can then take defensive action, like shutting down before the write is complete. I've heard someone mention that EEPROM apparently cannot handle millions of reads (not a typo, I said reads), but luckily most modern BIOSes live on SPI NOR flash, which can handle a theoretically unlimited number of reads, so the system should be able to read in a continuous loop indefinitely, making this mitigation practical.
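
A userspace approximation of that loop, assuming flashrom's internal programmer can dump the flash chip from the running system (whether it can depends on the chipset); the dump path is a placeholder and lives in tmpfs:

```python
#!/usr/bin/env python3
# Sketch of the continuous BIOS-scan loop using flashrom. The dump lives in
# tmpfs; whether "-p internal" works at all depends on the chipset.
import hashlib
import os
import subprocess

DUMP = "/run/bios-dump.bin"   # placeholder path on tmpfs

def flash_hash() -> bytes:
    subprocess.run(["flashrom", "-p", "internal", "-r", DUMP],
                   check=True, stdout=subprocess.DEVNULL)
    with open(DUMP, "rb") as f:
        return hashlib.sha256(f.read()).digest()

baseline = flash_hash()
while True:
    if flash_hash() != baseline:
        # Flash writes are slow; power off before a rewrite can finish.
        os.system("poweroff -f")
```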

Motion sensitive camera

Pretty simple, but superior to generic chassis intrusion detection. I have a camera pointing at the workstation, hooked up to the workstation itself, monitored with the motion program. In essence, the computer is watching itself to make sure no one gets near it. If anyone does, it will take a predetermined action, such as shutting down. This is much harder to circumvent than chassis intrusion detection switches, because it requires fooling the camera. The only way to defeat this would be to freeze the camera on the image it currently has in view.
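
The motion daemon handles this for me, but the core idea fits in a few lines of OpenCV as well. Here is a sketch that compares every frame against a fixed reference frame; the camera index and pixel thresholds are arbitrary placeholders.

```python
#!/usr/bin/env python3
# Sketch of camera self-surveillance with OpenCV frame differencing. The
# thresholds are arbitrary; my real setup uses the motion daemon instead.
import os
import cv2

CHANGED_PIXELS = 5000   # how many changed pixels count as "someone is there"

cap = cv2.VideoCapture(0)
ok, reference = cap.read()
if not ok:
    raise SystemExit("camera not available")
reference = cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        os.system("poweroff -f")   # a dead or blinded camera counts as an attack
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, reference)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    if cv2.countNonZero(mask) > CHANGED_PIXELS:
        os.system("poweroff -f")
        break
```

Comparing against a fixed reference rather than the previous frame means slow movement or a frozen feed still registers as a change, at the cost of false positives from lighting changes.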

To reiterate, my question is: what other or more effective methods for protecting an unattended workstation under this threat model have I not thought of, specifically in the domain of detecting unauthorized access and making chassis intrusion detection more robust?

forest
  • If you go through all these measures, they are going to assume you messed with JTAG and reprogram via JTAG, wiping out your code. – cybernard Apr 02 '16 at 15:31
  • They could also just open the drive and remove the device which injects via JTAG. All it would do is screw over the technician whose job it is to image a dozen drives a day, and whose most in-depth understanding of storage devices involves SMART data and perhaps hidden data in fake bad sectors or the DCO. – forest Apr 03 '16 at 02:50
  • Another concern is that cloud compute is getting cheaper and cheaper; the DROWN attack only cost the researchers approximately $500 in Amazon compute time, and the people you are defending against could afford significantly more compute time. Assuming they don't already have this capability on site. – cybernard Apr 03 '16 at 16:10
  • I'm not sure what the DROWN attack or cloud computing has to do with this. – forest Apr 04 '16 at 02:17
  • How about blinding lights, smoke, dust, etc. to subvert your cameras so they see all black, all white, or useless images, and then subverting the system while blinded? Regarding cloud computing, in your example your adversary would seem to have enough money to leverage tens of thousands of computers (the entire Amazon, Google, etc. clouds) to help brute force, or use a combination of attacks to chip away at your encryption. The point of mentioning DROWN is that it is a side-channel attack used to break much stronger encryption. – cybernard Apr 04 '16 at 03:06
  • If the cameras are blinded by smoke or light, I will assume that my system is compromised. Smoke screens are not particularly stealthy. I see what you mean by DROWN attack though. I don't believe there are any known similar attacks that work against AES-NI (due to being implemented in individual instructions) or Serpent (due to having very tiny S-boxes that fit in the smallest of caches). – forest Apr 04 '16 at 03:20
  • Ok, so you assume you are compromised, then what? How would you ever be sure you were safe again? I could embed bugs in the walls, ceilings, floors, rafters, and who knows what else. Got a drop ceiling? Maybe I did an exact-match replacement filled with micro bugs. Here, to me, is the scary part of DROWN: the researchers performed **2^50** offline work in **under 8 hours** using Amazon EC2, at a cost of $440. They weren't even using the whole cloud, nor did they combine their efforts with Google's compute and others. If your attacker had 100k, that would buy a ton of compute time. – cybernard Apr 04 '16 at 03:46
  • If I have to assume I am compromised, I will just buy new hardware and restore from authenticated backups. I don't much worry about 2^50 operations, because my encryption key is the output of "head -c 32 /dev/urandom | base64", so it has a keyspace of exactly 2^256. I don't worry about pure computing power. Directly attacking crypto is rarely the weak point in a secure system. More likely, some method of PCIe hotplugging I had not thought of would allow my running system to be compromised, or something along those lines. This is why my threat model is so limited. – forest Apr 04 '16 at 07:14
  • Not really an answer to your question, but it seems cold boot attacks against scrambled DDR3 memory are again possible: Lest We Forget: Cold-Boot Attacks on Scrambled DDR3 Memory ( paper | pres ) Johannes Bauer, Michael Gruhn and Felix Freiling http://www.dfrws.org/2016eu/proceedings/DFRWS-EU-2016-7.pdf –  Apr 06 '16 at 20:45
  • If you are going that far with a threat model, have you considered building the workstation using components *explicitly* designed to provide the functionality you are looking for, like processors which are designed to be hardened against physical access to the busses, or even the JTAG port? – Cort Ammon Apr 07 '16 at 07:29
  • I don't know of any processors which are like that, although I can do that myself to a limited extent with epoxy resin or by destroying the Intel JTAG port (which is supposedly also LPC or something unless you sign an NDA to get their closed debug specifications, I'm not sure). – forest Apr 16 '16 at 08:26
  • I start reading through this lot and at some point I wonder if it's just easier to do a destructive erase with C4.... – ewanm89 Jun 17 '16 at 13:42
  • Does the wrist strap have some way of tracking your heartbeat? AKA, what happens if you are hit with a tranq dart or the room is filled with ether? (Or does this qualify as the $5 wrench attack?) – CaffeineAddiction Jun 21 '16 at 20:06
  • I'm not qualified to address all your points, but at least as far as intrusion detectors go, they are easily bypassed. Generally it's just a button that is held in a depressed state by the panel. When the panel is removed, the button returns to its "normal" state, which the BIOS marks as an intrusion. If the attacker is aware of the presence of a standard intrusion sensor, it is easily evaded. Intel has patented other sensors that might be harder to evade: https://www.google.com/patents/US6388574 but I've never seen them. – Jesse K Jun 21 '16 at 20:48
  • You need explosives. Lots of explosives. – KristoferA Jul 12 '17 at 16:04
  • Explosives are a meme. They're flashy and fun but not actually useful (or often legal) for data destruction purposes. – forest Nov 29 '17 at 21:06

6 Answers

3

Bypassing all physical deterrence measures, given no hard limit on time.

Install cameras with a monitored security solution off premises, e.g. ADT calls the cops when something is out of the ordinary. If you are using keycards, you could create a script that does something along the following lines: "If this mission-critical system stops responding to probes, lock all doors and allow no one out."

Denying me access to my own hardware immediately at the onset of the attack.

This would have to be physical. Do you mean barricading themselves in a room you couldn't get into? If not, the same scripting logic applies: "If system X stops responding to probes, log into the switch and shut down the port entirely."

Transporting my hardware to a remote facility for extended analysis, without it losing power.

This would be addressed by the door-locking measure above.

Potentially accessing trade secret design documents or datasheets for hardware components I may use in order to look for bugs which may be used to gain access (who knows how secure that ASPEED chip is?). I do not know how likely this is.

Minimize access to documentation on what you have deployed on this system, following the principle of least privilege.

Observing my public and online behavior for extended periods to customize their attack.

How would they go about doing this in a "risk aware" environment? They'd need to compromise your machine/credentials, or be in a buggy networked environment to accomplish this (sniffing). To minimize this, you could install a SIEM and convert it to extrusion detection. Make a rule to the tune of: "Here is my username, here is my machine. If you see anything associated with my username that IS NOT coming from my machine, immediately alert me."

Obtain a partial or complete but forensically useful memory image of the running system.

In order to do so, they'd need local access. To accomplish this, they'd have to be trusted. If the machine is locked in a controlled environment (key cards, monitoring, etc.), it would be difficult to pull this off. The solution would be a deterrent measure: "Anyone caught doing anything will be fired/prosecuted/etc." along with cameras that are maintained off-site.

Harden the system such that physical access is not sufficient for my adversary to accomplish their goal.

How do you propose to do this? Given all you stated, you would need a titanium case rigged to deliver a lethal shock when the device is unplugged or tampered with.

Have the system shut itself down within several seconds of unauthorized physical access, resulting in the physical memory vanishing.

You could perform this with scripting (Perl, Python, PowerShell). E.g.: "System X: probe this address (router, switch, etc.); if you do not get a response in under 500 ms, power off." The problem would be that if your network hiccupped, your machine would be in a constant state of rebooting.
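
A sketch of such a probe; the gateway address is just an example, and the timeout is rounded up to one second:

```python
#!/usr/bin/env python3
# Sketch of the probe-and-power-off idea. The address and timing are
# illustrative only, and a flaky network will power the machine off.
import subprocess
import time

GATEWAY = "192.168.1.1"   # example probe target (router, switch, etc.)

while True:
    alive = subprocess.run(["ping", "-c", "1", "-W", "1", GATEWAY],
                           stdout=subprocess.DEVNULL).returncode == 0
    if not alive:
        subprocess.run(["poweroff", "-f"])   # no response: assume the worst
    time.sleep(1)
```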

Have the system introduce massive corruption to memory upon unauthorized physical access, making any subsequent memory dump forensically useless.

Difficult to accomplish since, so far, you have implied you don't trust even trusted users.

--- tl;dr version now.

You are concerned with someone stealing a system and want to protect memory, and or physical tampering.

  1. Memory wiping could be accomplished with the scripting examples I have stated: “If I don’t get a response to a probe I am sending in X amount of time, power off/run this command, etc”
  2. Data wiping script: “If I am powered on for N amount of time, and I do NOT receive responses from my probes, then erase the disk”
  3. Facility lockdown: script something to seal all doors (if they are controlled by, say, key cards) if the machine does not respond within millisecond ranges
  4. Off-premises cameras with 24x7 monitoring that call the authorities WHILE the doors are being locked. THEN create a message to ALL employees/users accessing this machine bragging about the level of security you have on it. Put it to the test (disaster recovery mode) for all to see: "Authorities WILL COME to this facility if this machine goes offline" (treat it similarly to a jewelry store)
techraf
munkeyoto
  • The question pretty explicitly said `Bypassing all physical deterrence measures, given no hard limit on time.`, so unless you know of a lock which is rated to survive an attack of unlimited duration, much of this answer is worthless. And rigging a case to lethally electrocute someone is 1) illegal, 2) easily circumventable, and 3) only workable if I assume an adversary composed of exactly one person. With two or more people, one dies, the second uses protection, and I go to jail for life. Additionally, such keepalive connections must _not_ be used for security... far too easy to bypass. – forest Dec 16 '17 at 07:12
1

A potential attack vector you haven't covered so far is something called "acoustic cryptanalysis", although I don't know if it is relevant in your case. Some experts could actually determine an RSA key using the sound the CPU makes (which humans aren't able to hear).
http://www.cs.tau.ac.il/~tromer/acoustic/
I haven't thought of a practical solution as of yet (maybe a soundproof case with liquid cooling instead of air cooling?).

Speaking of sounds: This is also notable, I think:
https://www.insidescience.org/content/computers-can-be-hacked-using-high-frequency-sound/1512
But long story short: Make sure there are no "unauthorised" devices near the workstation when you are logged in.

You said that there is no "hard limit on time". They could, however, connect the workstation to a portable battery and take it with them. I guess there are far more possibilities for an attacker if they can bring the workstation into their own laboratory.
Possible solutions:

  • Detect minor fluctuations in the power supply
  • Detect if there are any changes to peripheral devices. I figure it's much harder to move a computer with the monitor(s) attached to it.

[Would normally use a comment for clarification, but I can't do that here yet]
You stated that the attacker has one shot. Does this also apply to a breach of the room where the workstation is located?

  • The acoustic cryptanalysis attack against RSA was caused by a problem with the implementation. Modern OpenSSL and GnuPG are not affected, and modern symmetric ciphers in the kernel are not affected either. And yes, a breach of the room counts as an attack. They cannot place bugs in the room to sniff my encryption key without me being aware of it. – forest Apr 03 '16 at 02:51
  • Regarding detecting minor power fluctuations, they can't do much with that once it's in the lab. It is encrypted with TRESOR, which uses AES accelerated with AES-NI, which is constant-time and has almost no variation in power. It also uses Serpent, which has tiny S-boxes, so there will be little flushing of the cache that would lead to variations in power usage. As for moving it with the monitor connected, that would be trivial. They could move it even if it were attached to a washing machine. – forest Apr 03 '16 at 02:55
  • And that insidescience.org link is misleading. It talks about the ability of multiple systems which have already been compromised to communicate even when not connected over the network. It has nothing to do with compromising a "clean" computer using pure sound. – forest Apr 03 '16 at 02:57
  • Acoustic attacks are in their infancy and you can only expect them to get better/more effective over time. Also, "cannot place bugs" in the room seems a bit optimistic, as very clever people have developed bugs that are so small/hidden that detecting them is almost impossible. They could even be embedded in a power cord. Researchers have even used high-speed cameras, 20,000+ fps, aimed at an object in a room through a window, and been able to use the detectable vibration of that object to hear what is going on in the room. If you speak anything useful (passwords, etc.), I've got it. – cybernard Apr 03 '16 at 16:05
  • It may seem optimistic, but it's part of this threat model because those issues are easier to mitigate. Cameras can detect physical bugs being placed, Van Eck phreaking does not pick up passwords, detecting vibration only works through windows, and virtual keypads defeat all that anyway. I imagine that having almost-microscopic cameras attached to insects flown into the area is above the budget of my adversary, so I do not include anything that extreme in my threat model. That's why I don't consider it overly optimistic. – forest Apr 04 '16 at 02:16
  • What protections do you have against your cameras being attacked (or subverted), locally and remotely? Maybe a 0-day camera exploit to loop or turn off your cameras. Maybe I walk (or send a robot) in with a full-size mirror in front of me (all sides if necessary), and place small mirrors in front of the cameras so they record themselves or the wall. Place bugs and leave. The cameras did not record me, nor do you have any idea where the bugs are located. – cybernard Apr 04 '16 at 02:46
  • I don't know of any way a 0-day could work against the cameras, because they are directly connected to my workstation. I'm not sure how effective something like a mirror would be. They would have to use multiple mirrors so the light goes around them and the camera's line of sight does not get cut off, which sounds almost impossible to pull off. Has there been any research on that, or any demonstrations? This is exactly the kind of thing I'm looking for. As far as I am aware, there are no practical ways to make a person effectively invisible to cameras without blinding them, but I may be wrong. – forest Apr 04 '16 at 03:26
  • It's a shame articles like this do not get more upvotes; actually, a sticky system would be good, as these questions get mentioned almost daily. Nice to have every scenario, plus lots of extra ideas, in one place. – k1308517 Apr 06 '16 at 14:45
  • @forest In terms of fooling motion detection, it's actually easier than you think. MythBusters did it with an acoustic motion detector https://www.youtube.com/watch?v=x8vmd3DkzDg but video motion detectors also work on the same principle of having a movement threshold. – CaffeineAddiction Jun 21 '16 at 20:46
  • Wouldn't that be easy to solve by adding both a temporal and an absolute spatial movement threshold? E.g. if a certain number of pixels change in a given time period, or a certain (large) number of pixels change from the original image _ever_, motion could be detected. I wouldn't be surprised if the _motion_ utility can do that. – forest Nov 29 '17 at 23:08
0

I take it this is a theoretical scenario? If you have real near-NSA-level attackers doing targeted attacks against you, and they have physical proximity, then you have bigger problems than I can help you with. And in that case, I suspect variants of the $5 wrench would work.

As you will be aware when they attack, we can exclude "evil maid" attacks. The class of attacks of real concern seem to be cold boot attacks, and live "bus injection". Everything else you seem to have well covered.

You mention two main approaches in your question: encrypt memory, and detect intrusion.

The problem with trying to detect intrusion is that a smart attacker can bypass your detectors. If you have a chassis open switch, they can drill through the side of the case. Memory temperature sensors - heat the sensor while cooling the memory. Even if they don't know what detectors you have, they can X-ray the system to get an idea. I don't think intrusion detectors can be trusted for this purpose.

Encrypting memory with TRESOR / RamCrypt fits your use case very well. The theory is that the CPU doesn't trust any external system: all bus access is moderated by the IOMMU, and the memory is encrypted. There is the inherent risk that the decryption keys are in registers, so potentially someone could analyse the CPU in an advanced manner to extract them, but you have discounted that attack as "not quite the level of the NSA". I'm more concerned that there are implementation bugs that let someone bypass RamCrypt. For example, there could be an Intel Management Engine vulnerability that lets you get in when you tamper with particular bytes in memory. Given that RamCrypt is relatively new, and there is a large attack surface, it's completely plausible that attackers of the capability you are concerned about could find and exploit such flaws.

There's one part of your threat model I think is counter-productive:

Bypassing all physical deterrence measures, given no hard limit on time.

This is not realistic: strong enough physical deterrence can stop them. In a real-world scenario, what you would want to do is create physical security by putting the computer in a safe. There are safes designed to hold running computers; I've seen them in highly sensitive facilities, although I don't have model numbers to hand. You could potentially combine this with an intrusion detection system, so if someone tries to break into the safe, it alerts the computer and the memory is wiped. I would trust intrusion detection on a safe much more than I would on a computer case.

paj28
0

Camera to Record Login

An adversary of this caliber could install a camera to record your keyboard login. A low-tech solution would be a keyboard cover, though a thermal imaging camera could still be used, so choosing the right fabric is key. The technology solution would be 2-factor authentication (i.e. smart card, biometrics, etc.). In your case it seems best to use a combination of 2-factor methods.

Duplicate System

Your adversary is likely capable of building a duplicate system with the exact hardware you're currently using. They would have no problem triggering an improper shutdown to image the disk and bring it up on the duplicate system. They would have to wait the 72 hours thanks to LUKS, but this isn't a big deal since your login was recorded by their camera.

Network Communications

Having physical access also means your adversary has access to all of your network communications. A successful MitM attack could plant their trojan on your system. This makes physical access moot, since your system would then be imaged/examined/etc. over the wire.

user2320464
0

Re coldboot vs RAM:

Using the old data [1], DRAM loses its contents in 1-10 s at room temperature. Once your memory is cold, it retains data long enough for extraction.

Suppose your machine detects a physical intrusion (camera, self-built sensors, whatever). Time < 1 ms: your debug registers are cleared. Decision time: do you attempt to wipe memory, or do you just power off?

Wiping takes about amount-of-unencrypted-RAM / speed-of-memcpy. If you keep most of your memory unencrypted, power-off should be faster at normal operating temperature. Regardless, the attacker can force this behaviour by shooting a gun at your power supply.
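
A rough back-of-the-envelope, with a made-up 32 GiB of unencrypted RAM and a made-up 10 GiB/s of wipe throughput:

```python
# Illustrative numbers only: how long a full wipe of unencrypted RAM takes.
ram_gib = 32            # assumed amount of unencrypted RAM
memset_gib_per_s = 10   # assumed single-threaded memset throughput
print(f"wipe time ~ {ram_gib / memset_gib_per_s:.1f} s")   # ~3.2 s
```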

Now, how fast can an attacker cool your memory? I guess your worst-case scenario is them pressure-pumping a bottle of liquid nitrogen into your computer. A sturdy case is unlikely to help, since the attacker is willing to physically destroy non-RAM parts of your computer anyway.

Due to these considerations, the essential factors are the thermal insulation of your RAM, as well as the specific curve of data loss on power-off for the RAM you built in.

For the thermal properties, you want high latency and okay bandwidth for heat conduction between your RAM and its surroundings. A naive guess at a good system would be a water cooler plus insulation foam (hardware store). But maybe standard systems are good enough.

Re acoustic side-channel: The documented attacks were against public-key, not symmetric crypto. Depending on your setup this may or may not matter.

Re securing DMA etc.: I would plan for the main defense to be a big cage for your computer, where you want to detect any tampering with the cage in order to trigger the shutdown (the emergency shutdown should just wipe the debug registers, possibly the processor cache, and power down the RAM).

Really bad attacks: I don't know whether a cold boot attack against the debug registers and processor cache is possible. If yes, then you need the same considerations as for the RAM.

Really, really bad possibility: research what happens to RAM when you cut power and then, immediately, power up again. When does the periodic refreshing of the RAM resume? Can the BIOS influence this? A nightmare would be that your system can be defeated by a brown-out triggering a reboot and ending up at your hard disk password prompt, keeping all your valuable data in RAM refreshed and giving ample time for either cooling it or DMA-extracting its contents.

[1] https://www.usenix.org/legacy/events/sec08/tech/full_papers/halderman/halderman_html/index.html#sect:effect

anon
-2

The solution for you does exist, I presume: use onboard memory; better yet, take a look at SoC-integrated memory. By the way, the crucial part of memory acquisition, or even of a cold boot attack, is the ability to ACTUALLY eject the memory, or to boot from a pendrive, for example. U-Boot + NAND will save you, I suppose. Start looking at the Cubieboard 1 or 2 as an example of what I mean.

Alexey Vesnin
  • I'm not sure I understand what you're saying. Are you suggesting that I switch to a SoC platform to improve tamper-resistance? Unfortunately my system has high computational requirements, so I can't really switch to much smaller, weaker systems. I could however use tamper-resistant epoxy on the memory modules, of course. I updated my question to reflect that. – forest Apr 07 '16 at 01:36
  • @forest Correct, I propose you switch to a SoC. You can use a cluster of SoCs to fulfill the computational requirements and increase the complexity of merging memory images in the case of a cold boot. – Alexey Vesnin Apr 07 '16 at 03:43
  • That would only work for embarrassingly parallel computational tasks. They would have to be in a cluster connected over a network, which would be far slower than what normally connects multiple CPUs together, like QPI. – forest Apr 22 '16 at 01:15