
Objective

A secure Arrow of Time with unpredictable future behaviour, to stop attackers or DDoS botnets from pre-calculating proof-of-work and other gateway protections for P2P/F2F networks. A distributed solution, insofar as there aren't enough free centralised timestamp servers to pick randomly amongst them to foil attackers.

Due Diligence

I was looking for a distributed timestamp server, as centralised solutions are often commercial (example) and easily coerced in any case. The only functional distributed one I could find was an implicit behaviour of the Bitcoin network, which, while worthy, is too heavy for a distributed timestamp service and has different design goals.

So, in frustration at the (apparent?) absence of distributed solutions, it seems time to homebrew...

Homebrew

We want network consensus on a hash value / key pair per time interval, which a randomly selected node will use to sign as an ad-hoc TSA server for that time interval only. The bigger the distributed network, the harder it is to subvert; assuming 50% subversion is required to control the network result. (A sketch of one consensus round follows the list below.)

  1. Each node generates a random 32-byte network hash candidate as an ECC or RSA public key.
  2. Each node hashes the candidate through an expensive scrypt() call, using the previous network hash as the salt. A new network starts with a zero bit salt.
  3. All nodes send each other both their network hash candidate and the (32 byte) scrypt() result.
  4. Each node sorts all received entries in ascending order of scrypt result and picks the median* entry. Each node then publishes (votes) the hash candidate of that entry to other nodes**.
  5. Each node collects votes. Multiple votes and votes from IPs that didn't publish a Step 3 result are dropped.
  6. The most voted hash candidate is run through scrypt() again to check if the hash candidate was honest. If not, the next most voted candidate is checked.
  7. The most voted valid hash candidate is the next network hash value.
  8. Any node may store or publish each authoritative network hash as they occur to some external directory for non-repudiation if desired. Each peer also supplies authoritative hashes to new nodes on request.
  9. The node that supplied the successful hash candidate signs any TSA-style requests with the associated ECC or RSA private key during the appropriate network time interval; then discards the private key. (Optionally, if a winning node is liable to disappear if it wins, include a Step 3B where each node must send its private key to a few other redundant nodes.)
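
A minimal Python sketch of one round, under loud assumptions: `hashlib.scrypt` stands in for the expensive scrypt() call, random bytes stand in for a real ECC/RSA public key, the transport that delivers peers' entries is elided, and the cost parameters are illustrative only:

```python
# Hypothetical sketch of Steps 1-4 of one consensus round. Transport,
# vote tallying (Steps 5-7) and private key handling are elided.
import hashlib
import os

def scrypt_hash(candidate: bytes, prev_network_hash: bytes) -> bytes:
    # Step 2: expensive scrypt() call, salted with the previous network
    # hash. Cost parameters here are placeholders, not a recommendation.
    return hashlib.scrypt(candidate, salt=prev_network_hash,
                          n=2**14, r=8, p=1, dklen=32)

def pick_median(entries):
    # Step 4: sort (candidate, proof) pairs by proof, ascending, and take
    # the median, or the nearest neighbour below it for an even-sized set.
    entries = sorted(entries, key=lambda e: e[1])
    return entries[(len(entries) - 1) // 2][0]

def consensus_round(prev_network_hash: bytes, peer_entries):
    # Steps 1-3: generate our candidate (stand-in for an ECC/RSA public
    # key), compute its scrypt proof, and pool it with the peers' entries.
    my_candidate = os.urandom(32)
    my_proof = scrypt_hash(my_candidate, prev_network_hash)
    entries = peer_entries + [(my_candidate, my_proof)]
    # Step 4: this node's vote; Steps 5-7 (tally votes, re-check the
    # winner's scrypt proof for honesty) would follow.
    return pick_median(entries)
```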

An attacker attempting to control the next network hash value would need to:

  • Have many nodes or the ability to spoof extra IPs for dispatching colluding hash candidates that outcompete other alternate medians.
  • Have a beefy supercomputer capable of ~2^254 * scrypt_cost processing power to create colluding hash entries within the network time interval. Pretty unlikely.

Question

Is this sane, or have I (re?)invented a square wheel?

* Or nearest neighbour below the median if an even set of values.
** The entries a given node received, and thus its median, may differ somewhat due to lost packets, dropped nodes, attacker interference, and latency or congestion at the edge of the network - hence the voting stage.

LateralFractal
  • I don't see where bitcoin or namecoin goes wrong. Just publish a hash of your data to the blockchain. If you want to save money you can also group 100 hashes together, calculate a hash over them, and then post that new hash to the blockchain. – Christian Oct 17 '13 at 18:17
  • Whether Bitcoin or Namecoin will remain competitive (on-blockchain) on a transaction/kWh basis vs new or existing centralised models is unclear, but in any case the timestamp/kWh for the current generation of cryptocurrencies is far too high for a distributed TSA network, as it [wasn't a core design goal](http://goo.gl/a9E0h) for those networks. The [most current](http://bitcoin.stackexchange.com/a/10097/6679) stack exchange answer is approx. 375 kWh/block in late-April 2013 (excluding invalid blocks) and that is already out of date due to the inherent increases of the block target/difficulty. – LateralFractal Oct 17 '13 at 22:18
  • @Christian To put this in perspective - 375 kWh every 10 minutes would power approximately [6000 American homes](http://www.eia.gov/tools/faqs/faq.cfm?id=97&t=3) all year round. – LateralFractal Oct 17 '13 at 22:24
  • A sane distributed TSA network ("d-TSA") would consume a sensible `O(N)` amount of energy. For example - with *scrypt()* being the most energy expensive part - a large 10,000 node d-TSA network with two 1-second *scrypt()* invocations per node per hourly timestamp (assume [50W](https://secure.www.upenn.edu/computing/resources/category/hardware/article/computer-power-usage) headless computing as a d-TSA has no visual component) would consume 0.28 kWh. – LateralFractal Oct 17 '13 at 23:01
  • There are block chain technologies that do not use proof of work to create a new block. The reason being the costs you rightly point out. – DHorse Dec 02 '20 at 17:13

1 Answer


First things first. What's the problem you are trying to solve? What is the attack model? We cannot tell whether a protocol achieves a given goal if the said goal has not been given.

Apparently, you are dissatisfied with Time Stamp Authorities (TSA) as they are commonly used. Let's see what a TSA does and on what security properties it relies.

A TSA emits time stamps, where a time stamp is a signature over a structure containing the signature date and time, and a hash value. By this signature, the TSA asserts that the hash value "existed" at the specified time. "Existence" requires some semantic caution, since we could say that the set of all sequences of 256 bits is well-defined mathematically, and finite, so it "exists" already. What we mean is that, given a message m and a time stamp t which contains h(m) and the time T, then "someone" must have handled m before time T as a sequence of bits in its own right.

A time stamp is a verifiable proof of existence of a given message m at a past date T, and can serve as a basis for constructs which need such proofs (e.g. long-term archival of signed documents, which must be verified long after all the involved certificates have expired). To verify a time stamp, we must have access to the public key of the TSA, and the message, and recompute the hash, and verify the signature from the TSA.
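
As a minimal model in code (illustrative only; a real TSA follows RFC 3161, and the Ed25519 keys and ad-hoc stamp encoding below are assumptions):

```python
# Minimal model of time stamp issuance and verification (illustrative;
# not the RFC 3161 encoding). Uses the third-party `cryptography` package.
import hashlib
import time
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def issue_stamp(tsa_key: Ed25519PrivateKey, msg_hash: bytes):
    # The TSA signs (h(m), T), asserting that h(m) existed at time T.
    t = int(time.time())
    return t, tsa_key.sign(msg_hash + t.to_bytes(8, "big"))

def verify_stamp(tsa_pub: Ed25519PublicKey, message: bytes,
                 t: int, signature: bytes) -> None:
    # The verifier recomputes h(m) and checks the TSA's signature over
    # (h(m), T); raises InvalidSignature if the stamp is forged.
    msg_hash = hashlib.sha256(message).digest()
    tsa_pub.verify(signature, msg_hash + t.to_bytes(8, "big"))
```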

The goal of the attacker is to produce at date T' (after date T) a so-called time stamp t over a message m which did not exist at date T, but such that the verifier will accept it nonetheless as a proof. There are several ways by which this could be done:

  • Tweaking the message m until h(m) matches a hash value for an existing time stamp (for another message) which bears the date T. This method is not feasible as long as h is resistant to preimages (including multi-target second preimages).
  • Subverting a TSA to issue (sign) the fake time stamp.
  • Stealing the TSA private key (by breaking its public key, or purloining the private key from where it is stored) in order to sign the fake time stamp.
  • Convincing the verifier that a given public key is owned by the TSA, whereas it is in reality controlled by the attacker.
  • Tricking the TSA into assuming that the current time is T, so that it unwillingly issues time stamps "in the past".

A very important point to make is that all of this is about security of the involved elements (hash functions, private keys, clocks) at the current date, i.e. not at date T. It is not sufficient to ensure that everything was "secure" at the time the time stamp was issued; security must be enforced as long as the time stamp is to be verifiable. In all generality, since time stamps tend to expire (because security of the private key cannot be maintained indefinitely, because of possible cryptanalytic advances), time stamps have to be chained by regularly re-timestamping (e.g. as is described in ERS).
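
As a sketch of the chaining idea (a simplification; the real encoding is ERS, RFC 4998, and `request_timestamp` below is a hypothetical stand-in for a full TSA exchange):

```python
import hashlib
from typing import Callable

def renew(message: bytes, old_stamp: bytes,
          request_timestamp: Callable[[bytes], bytes]) -> bytes:
    # Fold the old stamp into the data the new stamp covers: the fresh
    # stamp then proves the old one existed while its key and hash
    # function were still trustworthy.
    return request_timestamp(hashlib.sha256(message + old_stamp).digest())
```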

What you describe seems to be a protocol by which a number of "nodes" may come up with a shared agreed-upon designation of one of them. Apparently you want that selection to be somewhat "random", i.e. such that none of the nodes may arrange to be selected more often than the others. However, as far as I can tell, this is not achieved: a dishonest node may invest in some CPU in order to compute several hash candidates, then keep the one which is closest to the middle-point of the hash space, thus improving its chances of being selected.
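
For instance, a node with a budget of k scrypt calls per interval could grind as follows (a hypothetical sketch, with the same scrypt stand-in and illustrative cost parameters as above):

```python
# Hypothetical grinding attack: spend k scrypt calls and keep the
# candidate whose scrypt result lands closest to the midpoint of the
# 256-bit hash space, improving the odds of becoming the median entry.
import hashlib
import os

def grind_candidate(prev_network_hash: bytes, k: int) -> bytes:
    midpoint = 1 << 255
    best, best_dist = None, None
    for _ in range(k):
        cand = os.urandom(32)
        proof = hashlib.scrypt(cand, salt=prev_network_hash,
                               n=2**14, r=8, p=1, dklen=32)
        dist = abs(int.from_bytes(proof, "big") - midpoint)
        if best_dist is None or dist < best_dist:
            best, best_dist = cand, dist
    return best
```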

But, more importantly, whatever you do at that point may only offer some protection at a given date against "subversion". This does not do what we want with regards to time stamps. For instance, suppose that there is one (only one) TSA which is corrupt. All that TSA needs to do is to wait for being the "TSA du jour" and be allowed to issue valid time stamps in a time range including the time T. The corrupt TSA faithfully follows the protocol, except for one point: at the end of the day, it does not destroy its private key; instead, it keeps a copy. From the outside, this is completely undetectable. But this allows the attacker to produce afterwards as many fake time stamps as he wishes to, all for alleged date T (any date within the range during which the corrupt TSA was the TSA).

Also, nothing is said about how the verifier is supposed to learn the correct public key for the agreed-upon TSA at date T. All games about selection of one TSA have any worth only as long as this is verifiable afterwards. I don't see anything in your protocol about that point.

Summary: you don't say what your protocol tries to achieve but I am pretty sure that:

  • this is not what we want,
  • and it fails to do it properly anyway.

What can be done to make time stamping more reliable and, as you say, "distributed", is to use several TSA in parallel. When you need to time stamp a message m, go ask for time stamps over h(m) from several TSA. Say that there are n TSA in use, and TSA number i returns a time stamp bearing date Ti. Then consider the set of time stamps to prove the existence of m at date T = max(Ti): indeed, when a time stamp states that a given message exists at a given date, it implicitly also states that the message has continued to exist since that date. By taking T to be the latest of all dates in the n time stamps, you are simply using all time stamps to individually prove (independently of each other) that the message existed at date T (and possibly before, but definitely at date T).
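
A sketch of this construction, with each TSA modelled as a callable returning (Ti, signature) — an assumption standing in for n independent RFC 3161 exchanges:

```python
from typing import Callable, List, Tuple

Stamp = Tuple[int, bytes]  # (date T_i, TSA signature over (h(m), T_i))

def multi_tsa_stamp(msg_hash: bytes,
                    tsas: List[Callable[[bytes], Stamp]]):
    # Ask each of the n TSAs, independently, for a stamp over h(m).
    stamps = [ask(msg_hash) for ask in tsas]
    # Each stamp proves existence at its own T_i, and existence persists,
    # so the whole set proves existence at T = max(T_i).
    proven_date = max(t for t, _sig in stamps)
    return proven_date, stamps
```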

Under these conditions, your time stamping is secure until the attacker succeeds at subverting/breaking/framing all the n TSA. Now THAT is robust. And it is decentralized and efficient; the TSA do not need to talk to each other or even to be aware of each other's existence.

Tom Leek
  • Yes, Step 8 needs to highlight the directory aspect more clearly. The ability to collude a hash-space median is extremely expensive, as the birthday paradox is cancelled out by being equally available to all nodes - you really do need 50% of 2^256 for one spy node; halved for every additional colluding node using a 'haircomb' attack over the hash space. You are perfectly correct that the protection against retroactivity (deleting private keys) was jury-rigged on top of a secure hash consensus model. Point well taken - the ascending hash list should be cut into, say, 10 blocks each with ... – LateralFractal Oct 17 '13 at 13:19
  • (cont.) a local median; to allow and require 10 TSA multi-sigs for each time interval. In a network of 1000 nodes with 10 corrupted (the minimum), the chances of the attacker having all ten du jour TSA nodes at the same time would be the falling factorial *(1000)₁₀* or [9 E29](http://www.wolframalpha.com/input/?i=solve+FactorialPower[1000%2C+10]). Regarding parallelism: Unfortunately the range of free centralised TSAs isn't large enough for a P2P or F2F network to use with confidence; otherwise yes, I'd certainly just have the app use a random roulette of existing TSAs. – LateralFractal Oct 17 '13 at 13:20
  • NB. Added missing objective/context to question – LateralFractal Oct 17 '13 at 14:15