11

I've seen OWASP Top 10 guides for web apps, native apps, etc., but never anything for embedded systems or hardware devices. These usually involve microcontrollers (e.g. ATmega / PIC) or small microprocessors that execute code and accept input from various data sources. Many implementations of a wide range of interfaces (including HDMI, HDCP, 802.11b/g/n and even IR remote controls) in physical devices have been shown to be vulnerable to DoS and more nefarious exploits.

Are there any guidelines for these kinds of devices, especially from a mitigation and testing point of view?

Polynomial
  • 132,208
  • 43
  • 298
  • 379
  • I'm not aware of a similar set of top-10 guidelines for hardware devices. Perhaps you might care to take a stab at it, by answering your own question with a candidate list of top-10 concerns? – D.W. Nov 18 '12 at 23:03
  • Might do later. I can only think of 3 or 4 that really apply directly to hardware implementations, but I'll put some research in and try to dig up more. – Polynomial Nov 19 '12 at 08:31
  • Buffer overflows protection, validating inputs, etc. All secure coding policies should apply to embedded systems. – schroeder Nov 19 '12 at 16:29
  • @D.W. I've added an answer based on some research. – Polynomial Nov 20 '12 at 09:58
  • 1
    “Embedded systems or hardware devices” covers a huge swathe of different expectations and environments, from elevator controllers to smartphones, from network routers to credit cards… You should pick one (the same way OWASP concentrates on client/server web applications). Also, are you targeting the device as a whole, or the hardware design and construction (I guess the device, from your answer)? – Gilles 'SO- stop being evil' Nov 20 '12 at 17:59
  • @Gilles it's mainly about the specific vulnerabilities that apply to custom hardware products that contain embedded software. I'm not really looking at any type of device in particular. – Polynomial Nov 20 '12 at 21:53

3 Answers

15

1. Backdoor testing accounts.

Engineers often include backdoor mechanisms and testing accounts in hardware for debugging purposes, with trivial or no security measures put in place to protect them. Unfortunately, a large number of devices make it to market without having these mechanisms and accounts disabled, allowing attackers to gain illegitimate access to the device.
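As a minimal sketch of the anti-pattern (hypothetical function and made-up credentials), the backdoor is often nothing more than a hard-coded debug login left in the production firmware:

```
#include <string.h>

/* The device's real authentication routine (hypothetical). */
extern int check_user_database(const char *user, const char *pass);

/* Anti-pattern: a hard-coded "factory" debug account left in production
 * firmware. Anyone who dumps or disassembles the image will find it. */
static int check_login(const char *user, const char *pass)
{
    if (strcmp(user, "factory") == 0 && strcmp(pass, "test1234") == 0)
        return 1;                             /* backdoor bypasses real auth */
    return check_user_database(user, pass);   /* normal path */
}
```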

2. Unsecured network management protocols.

Many devices implement SNMP or UPnP for remote management, but fail to implement even basic levels of security around them. Office equipment, networking hardware, industrial control systems, etc. often leak sensitive data (such as the routing table) via SNMP, and may provide various control functions via UPnP. Many UPnP-enabled devices implement the standard network interface management functions, which allow an attacker to disable a network interface. Further functions may allow an attacker to permanently damage the device, or cause severe problems with functionality.

3. Buffer overflows.

Hardware implementations are notorious for failing to check target buffer sizes, which can allow complete corruption of the memory state. This may result in code execution, but more commonly simply results in a denial-of-service condition, where the device must be hard-reset before it can be used again.
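A sketch of the classic pattern (all names are illustrative): the payload of an incoming packet is copied into a fixed-size buffer using an attacker-controlled length, so an oversized packet corrupts adjacent memory.

```
#include <stdint.h>
#include <string.h>

#define CMD_BUF_LEN 64

/* Vulnerable: trusts the length field from the wire. */
void handle_packet_bad(const uint8_t *payload, uint16_t len)
{
    uint8_t cmd[CMD_BUF_LEN];
    memcpy(cmd, payload, len);   /* len may exceed CMD_BUF_LEN */
    /* ... parse cmd ... */
}

/* Safer: reject anything larger than the buffer. */
int handle_packet_ok(const uint8_t *payload, uint16_t len)
{
    uint8_t cmd[CMD_BUF_LEN];
    if (len > sizeof cmd)
        return -1;               /* drop oversized packets */
    memcpy(cmd, payload, len);
    /* ... parse cmd ... */
    return 0;
}
```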

4. Integer overflows / underflows.

These occur when an unsigned integer input is treated as signed, or a larger integer type is directly cast to a smaller integer type. Both of these issues may result in cases where a value is outside the expected range, which can cause array range checks and buffer length checks to pass despite the input buffer being too large. These can also result in read-what-where conditions, where a negative array index leads to memory operations accessing preceding memory regions.
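A sketch of how a signedness mix-up defeats a length check (illustrative names, typical 32-bit two's-complement target assumed):

```
#include <stdint.h>
#include <string.h>

#define BUF_LEN 128

/* The length arrives from the network as an unsigned 32-bit value,
 * but is stored in a signed int before the range check. */
void process_record(const uint8_t *data, uint32_t wire_len)
{
    int len = (int)wire_len;            /* e.g. 0xFFFFFF00 becomes negative */
    uint8_t buf[BUF_LEN];

    if (len < BUF_LEN) {                /* a negative len passes this check */
        memcpy(buf, data, (size_t)len); /* cast back to size_t: a huge copy */
    }
}
```

Comparing the unsigned value directly (or validating the range before any narrowing or signed cast) avoids the problem.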

5. Insufficient sanity checks.

Input values in hardware implementations are usually considered to have strong integrity, so sanity checks are often not performed. This might allow a malicious user to supply an unexpected value and change the behaviour of the device. A common example of this is when a set of bit-flags are used, and conflicting bits are set.
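For example (hypothetical flag names), a command that accepts a bit-field should reject combinations the device cannot actually be in:

```
#include <stdint.h>

#define MODE_TX     (1u << 0)   /* transmit */
#define MODE_RX     (1u << 1)   /* receive  */
#define MODE_SLEEP  (1u << 2)   /* low-power: must not be combined with TX/RX */

/* Hypothetical HAL call that programs the hardware. */
extern void apply_mode(uint8_t flags);

/* Sanity-check the flags before touching the hardware. */
int set_mode(uint8_t flags)
{
    if ((flags & MODE_SLEEP) && (flags & (MODE_TX | MODE_RX)))
        return -1;               /* conflicting bits: refuse the command */
    apply_mode(flags);
    return 0;
}
```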

6. Shared, hard-coded secrets embedded in non-volatile memory.

Hardware designers are sometimes under the delusion that because something is stored in an EEPROM chip on a board, the data within it will not / cannot be read by someone who wishes to interfere with the device. It is common to find hard-coded cryptographic keys and other credentials stored in the firmware of a device. These might be used to compromise the device or its communications, and are often the same across all devices of the same model.
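The anti-pattern is typically as simple as a key baked into the firmware image; dumping the flash or EEPROM of any one unit reveals the secret for every unit of that model (the key bytes below are obviously made up):

```
#include <stdint.h>

/* Shared across every device of this model, and recoverable with an
 * off-the-shelf flash/EEPROM reader or from the published firmware image. */
static const uint8_t AES_KEY[16] = {
    0x00, 0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77,
    0x88, 0x99, 0xAA, 0xBB, 0xCC, 0xDD, 0xEE, 0xFF
};
```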

7. Poor quality, self-designed or missing crypto.

It is often complicated to implement proper cryptographic algorithms in embedded systems, especially when working with less widespread microcontrollers and microprocessors. Reference implementations often assume x86-like or ARM-like architectures, which increases the work factor of porting them. Engineers also often lack the knowledge and experience to implement cryptography properly. As such, cryptographic algorithms found in embedded systems tend to be poorly implemented, or are home-brew designs with serious security flaws. Many systems completely lack any form of hashing on passwords and other credentials, and this type of data can usually be extracted from non-volatile memory using off-the-shelf tools.

8. Lack of strong integrity and authenticity checking on firmware upgrades.

The integrity of device firmware is often checked via a CRC32 or MD5 hash, but is rarely authenticated via any strong means. Many manufacturers rely on obscurity as a security measure. Whilst the cost of reverse engineering the firmware of a device may be considerable, it is becoming less so with the advent of common microprocessor architectures (e.g. ARM) in embedded systems. Once an attacker gains the ability to upload a firmware image, they may entirely subvert the system.
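A sketch of why a CRC alone does not help (hypothetical header layout and crc32() routine): an attacker who modifies the image can simply recompute the CRC, whereas a signature over the image requires a private key the attacker does not have.

```
#include <stddef.h>
#include <stdint.h>

/* Hypothetical CRC-32 routine provided elsewhere in the firmware. */
extern uint32_t crc32(const uint8_t *data, size_t len);

struct fw_header {
    uint32_t length;
    uint32_t crc32;    /* detects accidental corruption only */
    /* a stronger design would also carry a signature, e.g.:
       uint8_t signature[64];  verified against a public key stored in ROM */
};

/* Integrity check only: anyone can recompute the CRC after tampering,
 * so this proves nothing about who produced the image. */
int firmware_ok(const struct fw_header *hdr, const uint8_t *image)
{
    return crc32(image, hdr->length) == hdr->crc32;
}
```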

9. Various web-application flaws in control panels.

Many of the OWASP Top 10 web application flaws are disturbingly common in web control panels for embedded devices. CSRF, XSS and session theft feature prominently in the list of common vulnerabilities, though SQLi is less common due to the relatively small number of systems that implement a full relational database.

10. Improperly bound network services.

Many network services on embedded devices are configured to bind to 0.0.0.0, rather than directly to a LAN IP address. This may allow an attacker to communicate with the device from outside the LAN. Appropriate segregation and firewall configuration may help to mitigate this, but it is still a concern.
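A sketch in BSD-socket terms (the port and the management address 192.168.1.1 are just examples): binding to INADDR_ANY exposes the service on every interface, whereas binding to the LAN-facing address keeps it off the WAN side.

```
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>

int bind_mgmt_service(int sock)
{
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(8080);                    /* example mgmt port */

    /* Bad: listens on every interface, including the WAN side:
       addr.sin_addr.s_addr = htonl(INADDR_ANY);      i.e. 0.0.0.0 */

    /* Better: bind only to the LAN-facing address. */
    addr.sin_addr.s_addr = inet_addr("192.168.1.1");  /* example LAN IP */

    return bind(sock, (struct sockaddr *)&addr, sizeof addr);
}
```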

Polynomial
  • 132,208
  • 43
  • 298
  • 379
8

Information security is a systems discipline, in that it requires expert knowledge of an entire information system (including the people) to be successful. Hardware and software can never be more than pieces in a larger system. Doing security analysis on an embedded system requires understanding how it will be used by people in the system. A good system analysis will always be better than a checklist.

1. Easy physical access to security critical parts or components.

Some parts of an embedded device will be critical to its operating in a secure manner and other parts will not. Well-designed devices make it difficult to access those components through a variety of techniques. Simple techniques include avoiding screws and using solid plates to cover the exterior of the device. Testing requires a disassembly expert.

2. Putting too much trust in a unique component of the design.

From custom connectors that 'no one else can obtain' to unique frequencies that 'no one else uses', these design features usually hurt authorized users more than unauthorized ones, effectively making the device more expensive to use. There is no way to directly test concentration of trust, but analysis of the system's failure modes and states may help identify single points of failure.

3. Not protecting the blueprints, schematics, and maintenance manuals.

These documents and the information they contain help an attacker find the physical soft points as well as identify the security-critical components of the device. Mitigation involves document and equipment accounting, and regular auditing.

4. Keying all devices of the same design with the same key.

This way, when one device gets compromised, they all get compromised. Depending on the total number of devices and the threats in the environment, keys could be changed per device or per lot. The number of devices in a lot is a compromise between efficiency of key use and the damage done when a lot's key is compromised.
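One common approach (a sketch only; the hmac_sha256() routine is assumed to be whatever keyed hash the platform already provides) is to derive each unit's key from a master key and a per-device identifier at provisioning time, so extracting one unit's key does not expose the fleet:

```
#include <stddef.h>
#include <stdint.h>

/* Assumed to exist on the platform, e.g. HMAC-SHA-256 from the crypto lib. */
extern void hmac_sha256(const uint8_t *key, size_t key_len,
                        const uint8_t *msg, size_t msg_len,
                        uint8_t out[32]);

/* Derive a device-unique key from the manufacturing master key and the
 * unit's serial number. The master key never leaves the provisioning system. */
void derive_device_key(const uint8_t master[32],
                       const uint8_t *serial, size_t serial_len,
                       uint8_t device_key[32])
{
    hmac_sha256(master, 32, serial, serial_len, device_key);
}
```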

5. Not designing a rekeying system.

Almost any secure device will require one or more cryptographic keys. Over time, as devices are exposed to a threat environment, one or more keys will be compromised. When that happens, it's nice to be able to re-key a device instead of scrapping it for parts. It is especially important to test the failure modes of the rekeying functionality.

6. Not building in tamper detection.

Without tamper detection you cannot build in any active mechanisms against tampering. Such mechanisms include sanitizing sensitive data, destroying keys, or 'bricking' the device.
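A minimal sketch of an active response (the GPIO and flash routines are hypothetical HAL calls): a case-open switch or tamper-mesh break triggers key destruction before anything can be read out.

```
#include <stdbool.h>

/* Hypothetical HAL: a GPIO wired to the case-open switch or tamper mesh. */
extern bool tamper_line_triggered(void);
/* Hypothetical: erase the flash/OTP sector holding the device keys. */
extern void erase_key_storage(void);

void tamper_poll(void)
{
    if (tamper_line_triggered()) {
        erase_key_storage();   /* destroy keys before they can be read */
        /* optionally set a non-volatile "tampered" flag, or brick the unit */
    }
}
```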

7. Improper RF shielding.

This may expose out-of-band signals to the environment which can be used to infer important information about the secure operation of the device. A simple example is determining the success or failure of a secure request: many systems take longer when a request is partially successful than when no part of the request is successful. This is often described as a side-channel attack.
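The timing aspect can be illustrated in code (a generic sketch, not any particular device's design): an early-exit comparison leaks how many leading bytes of a secret were correct, whereas a constant-time comparison does not.

```
#include <stddef.h>
#include <stdint.h>

/* Leaky: returns as soon as a byte differs, so the response time reveals
 * how far the attacker's guess matched. */
int compare_leaky(const uint8_t *a, const uint8_t *b, size_t len)
{
    for (size_t i = 0; i < len; i++)
        if (a[i] != b[i])
            return 0;
    return 1;
}

/* Constant-time: always touches every byte; the result depends only on
 * whether all bytes matched. */
int compare_ct(const uint8_t *a, const uint8_t *b, size_t len)
{
    uint8_t diff = 0;
    for (size_t i = 0; i < len; i++)
        diff |= a[i] ^ b[i];
    return diff == 0;
}
```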

this.josh
  • 8,843
  • 2
  • 29
  • 51
4

You could have a look at the Common Criteria Protection Profiles. The list is of course non-exhaustive, but it does provide requirements for many different kinds of systems, and some profiles may cover what you're developing.

Each protection profile (PP) follows a format that introduces the target of evaluation (TOE) and the objectives for securing the TOE. A PP is thus a formal statement of requirements for Common Criteria evaluation.

An example PP is the US Government's PP for general purpose networking OSes. A specific requirement, here for auditing, is presented as follows:

The TSF [the OS's trusted security functions] shall provide authorized administrators with the capability to read all audit information from the audit records.

Also, threats to the system are presented and formalized:

[Threat T.CRYPTO_COMPROMISE:] A malicious user or process may cause key, data or executable code associated with the cryptographic functionality to be inappropriately accessed (viewed, modified, or deleted), thus compromising the cryptographic mechanisms and the data protected by those mechanisms.

In conclusion, the threats are considered:

Each of the identified threats to security is addressed by one or more security objectives. [Provided is a] mapping from security objectives to threats, as well as a rationale that discusses how the threat is addressed. Definitions are provided (in italics) below each threat and security objective so the PP reader can reference these without having to go back to sections 3 and 4 [, which are sections discussing the security environment and objectives].

Henning Klevjer
  • 1,815
  • 15
  • 20