
I'm probably just thick and fairly new to security-related issues in software development, but I can't find information on this and have been stuck Googling it for a while.

Recently I came across a book, Dynamic Secrets in Communication Security (also available on Amazon), and now wonder: does this technique have an equivalent in socket programming? There's also the paper Creating Shared Secrets out of Thin Air that gives a good description of the idea.

Briefly, as I understand it, the idea is to dynamically generate communication secrets from transmission properties of wireless channels (e.g. faults) that are difficult for an adversary to guess unless it is present at many places in the wireless transmission medium. The communication channel would thus be secure even if the adversary had a great deal of computational power at its disposal. It would also allow the participating parties to detect whether the traffic has been hijacked.

Is there a known method to generate dynamic secrets based on, say, every message exchanged, so as to detect traffic hijacking and make the communications resistant even to an adversary with a lot of computational power? I'm not after Diffie-Hellman, since it's applied only at initiation, but maybe something like IPsec Encapsulating Security Payload?
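For illustration, the kind of per-message key evolution I have in mind might look like this rough sketch (my own toy code, not an existing protocol):

```python
import hashlib
import hmac

class RatchetingChannel:
    """Toy per-message key ratchet: every message advances the shared
    key, so a hijacker who misses even one message falls out of sync."""

    def __init__(self, initial_secret: bytes):
        self.key = initial_secret

    def send(self, message: bytes) -> tuple[bytes, bytes]:
        tag = hmac.new(self.key, message, hashlib.sha256).digest()
        # Ratchet: the next key depends on the current key and the message.
        self.key = hashlib.sha256(self.key + message).digest()
        return message, tag

    def receive(self, message: bytes, tag: bytes) -> bool:
        expected = hmac.new(self.key, message, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, tag):
            return False  # keys diverged -> possible hijack
        self.key = hashlib.sha256(self.key + message).digest()
        return True
```

Both ends would start from the same initial secret and ratchet in lockstep, so an adversary who did not observe every message could not compute the current key.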

Veksi

3 Answers


The second article (Safaka et al) describes a protocol which is roughly similar to the Cachin-Maurer protocol that I describe at the end of this answer. The premise is that there is a broadcast, unreliable communication channel between the involved parties, so that when one party emits a list of "packets", the others each see only some of the packets, and not all of them see the same packets. So Alice and Bob, wishing to establish a shared secret, just have to emit a lot of packets, record what they receive, and then tell each other which packets they received (packets being referenced by some conventional ID). With high probability, the attacker could not see all the packets which both Alice and Bob recorded, so the concatenation of all those packets, suitably hashed, is a good shared key.
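To make the reconciliation step concrete, here is a minimal sketch (my own illustration, not code from the paper), assuming each party has recorded its received packets as an ID-to-payload map and the parties exchange only the IDs in the clear:

```python
import hashlib

def shared_key_from_received(my_packets: dict[int, bytes],
                             peer_received_ids: set[int]) -> bytes:
    """Hash the payloads of the packets that both parties received.
    Only packet IDs cross the wire during reconciliation, never payloads."""
    common_ids = sorted(set(my_packets) & peer_received_ids)
    h = hashlib.sha256()
    for pid in common_ids:
        h.update(pid.to_bytes(8, "big"))
        h.update(my_packets[pid])
    return h.digest()
```

Alice sends Bob the set of IDs she recorded, Bob does the same, and each side runs the function on its own payload map; both arrive at the same digest.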

In the Cachin-Maurer protocol, the broadcast comes from a high-bandwidth random source, and the unreliability of reception is due to the impossibility of recording all of the data, because of its sheer size. In the Safaka protocol, the transport medium itself is assumed to be unreliable, which is a bit optimistic, because the attacker may invest in a good antenna, something much better at picking up radio waves than the basic WiFi adapters of honest people.

Transporting that principle to the application level looks hard, because the really basic characteristic of the "application level", the reason why we call it "application", is its inherent reliability. For instance, raw IP packets are unreliable: they can get lost, duplicated, sometimes altered (I have seen it: a Cisco router with bad RAM), and they arrive out of order. However, the first thing applications do is to apply TCP, which brings reliability (through acknowledgements and retransmissions). When transport is reliable, it is reliable for everybody, including the attacker.

This is a generic trend: the kind of key exchange protocol we are talking about must rely on some physical process which provides some unpredictability; in the Safaka protocol, the physical process is radio noise disrupting reception of some packets. The computer world, on the other hand, is mathematical rather than physical: it lives and thrives in an abstract world where a bit is a bit and does not flip randomly. Indeed, when a RAM bit is flipped (this is said to occur about once every three months on average for a given machine, because of cosmic rays), the machine can crash, depending on where the said bit was. The whole principle of computerization is to run away from the physical world and keep it as far away as possible. This inherently prevents efficient usage of Safaka-like protocols, and even more so when we go higher up the layer stack, i.e. "at the application level", as you put it.

A secondary point to make is that these protocols are key exchange, not authentication. They can provide security only against passive attackers, who observe but do not interfere. This is not a realistic assumption nowadays: a lot of network attacks involve positive actions from attackers, and some low-power attackers can be described as "active-only": it is often a lot easier to send fake packets to a server than to eavesdrop on packets between a server and an honest client.

Thus, some authentication is needed: you don't want to exchange a key in general, but a key with a specific client or server. To do that, you need some authentication mechanism which happens sufficiently early in the process, e.g. one based on public keys or on a PAKE, and then you are back to "normal cryptography", which makes the Safaka-like protocols rather pointless.
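As a minimal sketch of what "happens sufficiently early" can look like (my own illustration using the pyca/cryptography library, error handling omitted): sign an ephemeral X25519 public key with a long-term Ed25519 identity key, verify the peer's signature, then derive a session key.

```python
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey)
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey, X25519PublicKey)
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def make_offer(identity: Ed25519PrivateKey):
    """Generate an ephemeral key and sign it with the long-term identity."""
    eph = X25519PrivateKey.generate()
    eph_pub = eph.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)
    return eph, eph_pub, identity.sign(eph_pub)

def accept_offer(eph: X25519PrivateKey, peer_identity: Ed25519PublicKey,
                 peer_eph_pub: bytes, peer_sig: bytes) -> bytes:
    """Verify the peer's signed ephemeral key, then derive a session key."""
    peer_identity.verify(peer_sig, peer_eph_pub)  # raises on forgery
    shared = eph.exchange(X25519PublicKey.from_public_bytes(peer_eph_pub))
    return HKDF(algorithm=hashes.SHA256(), length=32,
                salt=None, info=b"session key").derive(shared)
```

The signature binds the ephemeral key to an identity, which is precisely what the Safaka-like protocols lack.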

Tom Leek
  • A really great write-up! You didn't just answer my question, but (probably) also most of the lingering side-tracks and digressions I had in my mind! – Veksi Jul 21 '13 at 11:35

Tom's made some good points. This scheme is meant for a lossy network. At the application level you can use something like UDP, a ridiculous number of packets, and statistics on sent vs. received vs. timing. It's not as good as in the intended domain, lossy networks like wireless, so it starts to sound hard to do. But that's what I specialize in, so why not take a stab anyway. ;)

(Important note: it's best to use existing crypto libraries and techniques for important work. Much safer, more practical, etc. This is for educational and entertainment purposes only.)

The main assumption is that attackers have limited presence: they're not at every endpoint, so they miss packets. If they miss packets, we can exploit that at the application level. Here are a few possibilities.

  1. Use several communication channels at once: send UDP, email, pings and maybe DNS amplification packets all at once.

  2. Use a third-party relay that allows SSL connections and doesn't log traffic. Split the key among several relays if you don't want to trust just one (see the sketch after this list). In practice this has about the same effect as the other protocol.

  3. When using relays or unreliable packets, send them along a specific route that goes through countries with only one major undersea fiber connecting them. Also, send at peak time; the unpredictability of congestion should help.

  4. Split pieces between UDP and email sent via Mailinator.
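As a toy illustration of the splitting in options 2 and 4 (my own sketch, not a vetted construction): XOR-split the key into n shares and send each share over a different channel; an eavesdropper who misses even one channel learns nothing about the key.

```python
import os
from functools import reduce

def split_key(key: bytes, n: int) -> list[bytes]:
    """XOR secret splitting: all n shares are required to recover the key."""
    shares = [os.urandom(len(key)) for _ in range(n - 1)]
    # The last share is the key XORed with all the random shares.
    last = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)),
                  shares, key)
    return shares + [last]

def recombine(shares: list[bytes]) -> bytes:
    """XOR all shares back together to recover the key."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), shares)
```

Each of the n channels (UDP, email, a relay, ...) carries one share; only an attacker present on all of them recovers the key.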

Honestly, schemes that rely on obfuscation of the exchange, both protocol and data, are better than ones that rely on the noisiness of the channel: that channel property can't be relied on in our quite optimized Internet. As said before, though, this is all just for kicks.

Nick P

Dynamic secrets are, in general, a modification of the traditional (a.k.a. Kerckhoffs) model of key-cipher cryptosystem design. Kerckhoffs's principle asserts that a cryptosystem should assume that everything except the key is known to the adversary. Such an assumption is valid and close to reality in a battlefield scenario, where the soldiers carry cipher machines and keep secret codes in their brains (if they can remember them clearly enough...).

Such a model is not 100% practical in today's world, especially for secure civilian communications. On one side, it is not reliable to assume that the key always stays secret: adversaries steal usernames, passwords and cryptographic keys from time to time, using Trojan programs for example. On the other side, it is a strong assumption that the adversary knows everything else: for example, it is practically impossible to eavesdrop on every transmitted bit of a wireless communication. These thoughts motivated the UMass people (Xiao et al.) to come up with the idea of dynamic secrets and write an INFOCOM paper and the book "Dynamic Secrets in Communication Security". They introduced the idea to their European colleagues, who then followed up on the research and produced the new INFOCOM paper.

cocoon