The second article (Safaka et al) describes a protocol which is roughly similar to the Cachin-Maurer protocol that I describe at the end of this answer. The premise is that there is an unreliable broadcast communication channel between the involved parties, so that when one party emits a list of "packets", all others see only some of the packets, and not all of them see the same packets. So Alice and Bob, wishing to establish a shared secret, just have to emit a lot of packets, record what they receive, and then tell each other which packets they received (packets being referenced by some conventional ID). With high probability, the attacker could not see all the packets which both Alice and Bob recorded, so the concatenation of the packets they both recorded, suitably hashed, is a good shared key.
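To make the idea concrete, here is a minimal Python sketch of that packet-intersection principle. It is a toy simulation under my own assumptions (names, loss rates and packet sizes are invented, and this is not the actual Safaka et al construction): each party, attacker included, independently misses some of the broadcast packets; Alice and Bob publicly compare packet IDs and hash the packets they both captured.

    import hashlib
    import os
    import random

    # Toy model, not the real protocol: illustrative parameters only.

    def broadcast_packets(n=1000, size=32):
        """Emit n random packets, identified by their index."""
        return {i: os.urandom(size) for i in range(n)}

    def lossy_receive(packets, loss_rate):
        """Each receiver independently misses a fraction of the packets."""
        return {i: p for i, p in packets.items() if random.random() > loss_rate}

    packets = broadcast_packets()
    alice = lossy_receive(packets, 0.3)
    bob = lossy_receive(packets, 0.3)
    eve = lossy_receive(packets, 0.3)   # attacker with a comparable antenna

    # Alice and Bob publicly compare packet IDs (IDs only, not contents).
    common_ids = sorted(alice.keys() & bob.keys())

    # The shared key is a hash of the concatenated common packets.
    key = hashlib.sha256(b"".join(alice[i] for i in common_ids)).digest()

    # Security rests on how many of those common packets the attacker missed.
    missed_by_eve = [i for i in common_ids if i not in eve]
    print(len(common_ids), "common packets,", len(missed_by_eve), "unknown to the attacker")

The whole security argument boils down to that last count: if the attacker reliably captures everything, it is zero, and the "shared key" is shared with her too.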
In the Cachin-Maurer protocol, the broadcast is from a high-bandwidth random source, and the unreliability of reception is due to the impossibility of recording all of the data, because of its sheer size. In the Safaka protocol, the transport medium is assumed to be unreliable, which is a bit optimistic because the attacker may invest in a good antenna, something much better at picking up radio waves than the basic WiFi adapters of honest people.
Transporting that principle to the application level looks hard because the really basic characteristic of the "application level", the reason why we call it "application", is its inherent reliability. For instance, raw IP packets are unreliable: they can get lost, duplicated, sometimes altered (I have seen it: a Cisco router with bad RAM), and arrive out of order. However, the first thing applications do is to apply TCP, which brings reliability (through acknowledgements and retransmissions). When transport is reliable, it is reliable for everybody, including the attacker.
This is a generic trend: the kind of key exchange protocol we are talking about must rely on some physical process which enforces some unpredictability; in the Safaka protocol, the physical process is radio noise disrupting reception of some packets. The computer world, on the other hand, is mathematical rather than physical: it lives and thrives in an abstract world where a bit is a bit and does not flip randomly. Indeed, when a RAM bit is flipped (this is said to occur about once every three months on average for a given machine, because of cosmic rays), the machine can crash, depending on where that bit was. The whole principle of computerization is to run away from the physical world and keep it as far away as possible. This inherently prevents efficient usage of Safaka-like protocols, and even more so when we go higher up the layer stack, i.e. "at application level", as you put it.
A secondary point to make is that these protocols are key exchange, not authentication. They may provide security only against passive-only attackers, who observe but do not interfere. This is not a realistic assumption nowadays. A lot of network attacks involve positive actions from attackers; and some low-power attackers can be described as "active-only": it is often a lot easier to send fake packets to a server than to eavesdrop on packets between a server and an honest client.
Thus, some authentication is needed: you don't want to exchange a key in general, but a key with a specific client or server. To do that, you need some authentication mechanism which happens sufficiently early in the process, e.g. with public keys or some PAKE, and you are back to "normal cryptography", making the Safaka-like protocols rather pointless.