Is there existing work, or are there algorithms, that explore (or exploit) imposing an I/O cost to disadvantage hashing hardware while still using the SHA-2 family of hashes?

Is this even possible, or would any such effort be trivial to circumvent?

My goal is to put dedicated or lightweight hashing hardware (i.e. ASIC/FPGA) at a disadvantage compared to general-purpose desktop or server hardware. (I've not considered parallel GPU hardware at this point, but please do feel free to discuss this as well.)

I'm aware of other work such as Argon2 and scrypt, which are designed to impose a tunable RAM cost. My understanding is that these algorithms benefit honest participants by requiring each invocation of the algorithm to dedicate a chunk of RAM, so the maximum number of instances running in parallel is bounded by the RAM available on the host. Having said that, there are constraints imposed on this design that limit me to SHA-2, HMAC, and PBKDF2 or HKDF, as well as AES-256 in various modes.
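
For context, a minimal sketch of that tunable memory cost, using Python's standard-library hashlib.scrypt (the parameter values are illustrative assumptions, not recommendations):

    import hashlib

    # scrypt's working memory is roughly 128 * r * n bytes per invocation,
    # so n and r tune how much RAM every parallel guess must hold.
    n, r, p = 2**15, 8, 1   # roughly 32 MiB per invocation (illustrative)
    key = hashlib.scrypt(b"correct horse", salt=b"example-salt",
                         n=n, r=r, p=p, maxmem=64 * 1024**2, dklen=32)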

I was considering hashing the output of PBKDF2 (with SHA-384) together with a large input file to generate a key for a symmetric cipher, e.g.:

    key = SHA-384( PBKDF2(pwd, salt, iters=lots) || REALLY_BIG_FILE )
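
A minimal sketch of that construction in Python, using only hashlib (the file path, salt, and iteration count are placeholder assumptions):

    import hashlib

    def derive_key(pwd: bytes, salt: bytes, path: str, iters: int = 1_000_000) -> bytes:
        # CPU-bound step first: stretch the password with PBKDF2-HMAC-SHA-384.
        stretched = hashlib.pbkdf2_hmac("sha384", pwd, salt, iters)
        # Seed the hash state with the PBKDF2 output, then stream the large
        # file through it, so the I/O cost is paid after each password guess.
        h = hashlib.sha384(stretched)
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.digest()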

I expect that I could also use HKDF instead of SHA-384 directly, i.e.:

    key = HKDF( hkdf_salt=PBKDF2(pwd, salt, iters=lots) , ikm=REALLY_BIG_FILE )
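
A sketch of the HKDF variant, implementing HKDF-Extract/Expand (RFC 5869) directly with the standard library so that the big file can be streamed in as the IKM (names and parameters are again placeholder assumptions):

    import hashlib
    import hmac

    def derive_key_hkdf(pwd: bytes, salt: bytes, path: str, iters: int = 1_000_000) -> bytes:
        # HKDF-Extract: PRK = HMAC-Hash(hkdf_salt, IKM), keyed with the
        # PBKDF2 output and fed the large file as the IKM.
        hkdf_salt = hashlib.pbkdf2_hmac("sha384", pwd, salt, iters)
        extract = hmac.new(hkdf_salt, digestmod=hashlib.sha384)
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                extract.update(chunk)
        prk = extract.digest()
        # HKDF-Expand, first block only (empty info): T(1) = HMAC(PRK, 0x01).
        return hmac.new(prk, b"\x01", hashlib.sha384).digest()[:32]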

In either case, I reckon that the initial state of the KDF would need to be set by the output of PBKDF2; otherwise, a clever enough design would be able to incur the I/O cost only once per node and then test PBKDF2 outputs without the I/O cost being imposed on every guess.
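
To illustrate the concern: if the order were reversed, i.e. SHA-384( REALLY_BIG_FILE || PBKDF2(...) ), an attacker could hash the file once, checkpoint the midstate, and amortise the I/O cost across all guesses. Python's hashlib exposes exactly this shortcut via .copy() (a hypothetical attacker's loop, for illustration):

    import hashlib

    # Pay the I/O cost exactly once: hash the big file, keep the midstate.
    midstate = hashlib.sha384()
    with open("really_big_file", "rb") as f:  # placeholder path
        for chunk in iter(lambda: f.read(1 << 20), b""):
            midstate.update(chunk)

    # Each guess still pays for PBKDF2, but the file is never re-read:
    # a cheap copy of the midstate replaces the I/O on every attempt.
    for guess in (b"hunter2", b"password1"):
        h = midstate.copy()
        h.update(hashlib.pbkdf2_hmac("sha384", guess, b"salt", 1_000_000))
        candidate_key = h.digest()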

Is this reasoning flawed?
