
It's clear that keyless entry fobs, smart entry cards, NFC, etc. are vulnerable to relay attacks.

Suppose they were improved by limiting their period of activity, e.g. let them be active only when they are shaken, only when sound is detected, only when today is Tuesday, etc. These measures reduce the window of opportunity for attacks to occur.

Do these improvements violate any security principle? Specifically, could such improvements be conceived to be futile defences?

John M.
I think that many NFC payment apps already support "tap and go" http://www.techradar.com/us/news/phone-and-communications/what-is-nfc-and-why-is-it-in-your-phone-948410 – Neil Smithline Sep 26 '15 at 03:54

2 Answers


What you are describing is multi-factor authentication, the first factor being the proximity of the two devices. It does not violate established principles; on the contrary, it is becoming standard practice rather than merely a response to a high-profile attack. The devices involved need to know that the communication is happening only when the user wants it to, and that they are indeed the devices they claim to be. This is achieved through secure protocols and through methods for detecting legitimate user interaction.

Proximity can be faked on some devices with improperly designed protocols through relay attacks. The most publicized examples involve keyfobs (signal amplification) and garage door openers (jam and capture two codes, replay the first, and store the second for later use).

Adding a second factor is not difficult; it is usually a matter of cost. The jam/steal attack jams the signal from the transmitter to the device while making a copy, does this twice, then sends the first captured signal to the device, activating it. The second copy is then used later to activate the device under the attacker's control. This bypasses both rolling-code transmitters and those with cryptographically enhanced incrementing counters. A synchronized time code on both devices, included as part of the code, generally defeats this attack. Synchronizing a timecode and keeping it synchronized requires high-accuracy (and expensive) clocks on both devices, more power, and additional user effort.
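The timecode defense can be sketched as follows. This is a minimal illustration, not any vendor's actual scheme: the key, the 30-second window, and the drift tolerance are all assumptions, but it shows why a jam/steal code stored for later use stops working once its time slot passes.

```python
import hashlib
import hmac
import struct
import time

SECRET = b"example-preshared-key"  # hypothetical pre-shared key
WINDOW = 30  # seconds per time slot (assumed tolerance for clock drift)

def make_code(counter: int, now: int) -> bytes:
    """Fob side: bind the rolling counter to the current time slot."""
    msg = struct.pack(">QQ", counter, now // WINDOW)
    return hmac.new(SECRET, msg, hashlib.sha256).digest()

def verify(code: bytes, counter: int, now: int) -> bool:
    """Receiver side: accept the current slot or the previous one (drift)."""
    for slot in (now // WINDOW, now // WINDOW - 1):
        expected = hmac.new(SECRET, struct.pack(">QQ", counter, slot),
                            hashlib.sha256).digest()
        if hmac.compare_digest(code, expected):
            return True
    return False

t = int(time.time())
code = make_code(42, t)
assert verify(code, 42, t)            # fresh code: accepted
assert not verify(code, 42, t + 120)  # stolen code replayed later: rejected
```

The stored second copy from a jam/steal attack is exactly the "replayed later" case: its MAC covers a time slot that no longer matches the receiver's clock.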

A time code, however, does not usually help against a signal amplification attack. A better way to detect proximity helps, such as an accurate measurement of the time it takes to complete the device handshake, in order to estimate the distance. An excessive response time may indicate an active attack. My father's 20-year-old Mercedes-Benz had an IR transmitter on the keyfob in addition to RF, so the car could make sure the keyfob was pointed towards it and not being pressed in a pocket by accident. It also unlocked only the specific door (or trunk) at which it was pointed, to prevent someone from sneaking into the car through one of the other doors. This also prevented RF-only relay attacks, increasing an attacker's equipment cost and effort.
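The round-trip-time idea behind distance bounding can be illustrated with a few lines of arithmetic. This is a back-of-the-envelope sketch, not a real distance-bounding protocol; the 10-metre threshold and the processing-time figure are assumptions:

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def estimate_distance(rtt_s: float, processing_s: float) -> float:
    """Upper-bound the fob's distance from the challenge/response round trip.

    Signal travels out and back, so divide the time of flight by two.
    """
    flight = rtt_s - processing_s
    return (flight * SPEED_OF_LIGHT) / 2

def looks_relayed(rtt_s: float, processing_s: float,
                  max_metres: float = 10.0) -> bool:
    """Flag a handshake whose apparent distance exceeds the expected range."""
    return estimate_distance(rtt_s, processing_s) > max_metres

# Assume the fob needs 5 microseconds to compute its response.
# 60 ns of flight time is about 9 m: a fob genuinely near the car.
assert not looks_relayed(rtt_s=60e-9 + 5e-6, processing_s=5e-6)
# A relay adds latency; 2 extra microseconds already looks like ~300 m.
assert looks_relayed(rtt_s=2e-6 + 5e-6, processing_s=5e-6)
```

This is why relays are hard to hide from a tight timing bound: at the speed of light, every extra microsecond of relay latency appears as roughly 150 metres of distance.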

Additional methods to detect legitimate user interaction include time-fencing (as you suggested), geo-fencing, hand detection (using capacitive touch), gesture sensing, and temperature sensing (to detect attacks that alter the speed of clocks and processors).

A better device protocol would include cryptographically authenticated handshakes with large keys, timecodes, counters, and message receipt acknowledgements. This type of protocol uses a large amount of power for an embedded device, so better power efficiency and higher-capacity batteries are required before something like this can be adopted on a large scale. Implementing it on a phone is easy, but not so on ultra-low-power devices (BLE, IoT, smartcards, etc.). At least not at the moment, but efficiency is constantly increasing.

The cost of implementation must be weighed against the effort of a successful attack during the lifetime of the device, and against the possible costs of such an attack, in the form of bad PR, lawsuits, etc.

I failed to keep this short and went a little off topic, but hopefully all the information is relevant to the question.


Here is a basic protocol I use for IRC services control, it is short and simple, and can be applied to many use scenarios.

A: Hello B, I am A, I would like you to do X
A: B + A + H_1 + X + T_A + C_A + MAC_A(msg)

B: Hello A, you are authorized for X
B: A + B + H_2 + X + T_B + C_B + MAC_B(msg + Z_A)

A: Thanks B, please do X now, over
A: B + A + H_3 + X + T_A + C_A + MAC_A(msg + Z_B)

B: Ok A, I will do X now, over and out
B: A + B + H_4 + X + T_B + C_B + MAC_B(msg + Z_A)

Where A and B are serial numbers or device identifiers, C_x is an incrementing counter specific to device x, T_x is the current timecode of device x, H_x is an identifier for part x of the handshake, and Z_x is the last MAC response of device x. The messages are encrypted and authenticated with large preshared keys, but for something like a keyfob or garage door opener only the authentication is necessary.
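The four-message handshake above can be sketched in Python. This is my own illustrative rendering, not the answerer's actual implementation: the JSON encoding, the key, and the `send`/`verify` names are assumptions, but the structure follows the protocol, with each MAC chained to the peer's previous MAC (Z_x) and each message carrying its handshake step (H), timecode (T_x), and counter (C_x).

```python
import hashlib
import hmac
import json
import time

# Hypothetical large pre-shared key; in practice each device pair has its own.
KEY = b"example-large-preshared-key-0123456789abcdef"

class Device:
    """One endpoint of the four-message handshake."""

    def __init__(self, ident: str):
        self.id = ident
        self.counter = 0  # C_x: per-device incrementing counter

    def send(self, peer: str, h: int, x: str, z_prev: bytes = b""):
        """Build one handshake message and its MAC, chained to the peer's last MAC."""
        self.counter += 1
        body = {"to": peer, "frm": self.id, "h": h, "x": x,
                "t": int(time.time()), "c": self.counter}  # T_x and C_x
        msg = json.dumps(body, sort_keys=True).encode()
        mac = hmac.new(KEY, msg + z_prev, hashlib.sha256).digest()
        return msg, mac

def verify(msg: bytes, mac: bytes, z_prev: bytes = b"") -> bool:
    """Recompute the chained MAC and compare in constant time."""
    expected = hmac.new(KEY, msg + z_prev, hashlib.sha256).digest()
    return hmac.compare_digest(mac, expected)

A, B = Device("A"), Device("B")
m1, z1 = A.send("B", 1, "open")        # A: I would like you to do X
assert verify(m1, z1)
m2, z2 = B.send("A", 2, "open", z1)    # B: you are authorized for X
assert verify(m2, z2, z1)
m3, z3 = A.send("B", 3, "open", z2)    # A: please do X now
assert verify(m3, z3, z2)
m4, z4 = B.send("A", 4, "open", z3)    # B: Ok, I will do X now
assert verify(m4, z4, z3)
# A message replayed outside its chain position fails verification:
assert not verify(m2, z2)
```

The chaining of each MAC over the previous one is what ties the four messages into a single session: an attacker cannot splice a recorded message into a different handshake, because its MAC covers a Z value that will not match.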

Richie Frame

No, it does not violate any security principle, and such improvements would not, in general, be considered futile defences.

The point of all this is that a secure system is often designed as a kind of onion: each security layer brings its own advantages and weaknesses. The goal when building a secure system is for the weaknesses of one layer to be counter-balanced by the advantages of another layer.

I highlighted the term "in general" above because what is called security theater happens often: adding supplementary layers which provide no advantage beyond the already existing layers. These superfluous layers give a false sense of "better security" without adding anything (except uselessly increased complexity).

However, as you described in your question:

  • You have keyless entry fobs which provide some security advantages, but you identified some weaknesses,
  • You propose to add another security layer whose properties are complementary with the already existing one.

So, all in all, the security of your system will be improved.

WhiteWinterWolf