
I'm creating some software that essentially enforces a specific process for creating certain documents, which protects them from being challenged later. Afterwards, the file is signed with the user's key (a USB token unrelated to the software). Recently a stakeholder suggested also signing it with the software's key, to prove the process was followed.

But anyone could pull the key out of the software and sign anything with it. So how could I include something in the file that confirms (or allows one to disprove) that it was created with an authorized build of the software? It doesn't matter whether the copy was obtained legitimately, just that it hasn't been modified.

To avoid the gory details, imagine the software calculates Pi, and we want to sign that Pi as calculated by a proper algorithm and not randomly generated past 3.1415.

I've narrowed down the threat model to two cases:

  • A. Someone (an average solo developer) alters the software, or creates a replacement, and then blames the software when things go wrong. I need a way to show "it wasn't our build."
  • B. Some lawyer claims the document is meaningless because anyone could have done A. The good guys need to show that there were protections against that.

The software needs to work offline, so online authentication for every signature is not acceptable. An online setup step is acceptable if it can be done easily. The project is far too small for anything as monstrous as Denuvo, and security through obscurity is not normally accepted. But I still feel there's got to be a simple solution that I'm too blind to see.

I have considered using the user's token to encrypt/decrypt the signing certificate, but we'd still need a way to ensure it's only supplied to a proper build. I've also thought a bit about what code signing could do, but it doesn't seem to help in this case. What am I missing?
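For concreteness, here's a minimal sketch of the naive layout I mean (Python with the `cryptography` package; the field names are made up for illustration). The flaw is baked in: `software_key` has to ship inside the binary, so anyone can extract it and produce a valid `software_sig` over arbitrary output, which is exactly case A.

```python
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Stand-ins: the real user key lives on the USB token and never leaves it.
user_key = Ed25519PrivateKey.generate()
# The problem child: this key would have to ship inside the binary.
software_key = Ed25519PrivateKey.generate()

document = b"3.1415926535..."  # the output whose provenance we want to prove

# "The process was followed" -- but only as trustworthy as the binary itself.
software_sig = software_key.sign(document)

signed_file = {
    "document": document.hex(),
    "software_sig": software_sig.hex(),
    # The user countersigns the document together with the software signature.
    "user_sig": user_key.sign(document + software_sig).hex(),
}
print(json.dumps(signed_file, indent=2))
```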

ZOMVID-21

1 Answer


You're correct in saying that embedding a key in the software is no good, because an attacker could read out the same key.

One thing that stands out as a worthwhile place to look is the (newish) hardware modules called Trusted Platform Modules, or TPMs. There are published schemes that use TPMs to, for example, detect cheating in online games by ensuring the game binary and memory state have not been externally modified. Without inspecting them in detail it's hard to be sure, but that sounds like something you could use.

The fundamental idea here is that you're asking for confirmation not from your software, but from the computer's hardware. The hardware can, say, hash your binary and sign the result with a private key stored only in that hardware: the key never even reaches the CPU. That does imply that Intel or whoever makes the chip could, were they so inclined, forge such an attestation for a modified build. They might have back doors, up to and including copies of the private keys embedded in the chips. Nevertheless, protecting yourself from everyone short of those with the leverage to bully or bribe security hardware manufacturers probably covers your use case.
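For a sense of what that looks like in practice, here is a rough sketch built on the tpm2-tools command-line suite, driven from Python. I'm not vouching for the exact flags (they differ between tpm2-tools versions), and the PCR selection is purely illustrative; treat it as an outline of the flow, not a recipe.

```python
import secrets
import subprocess

def run(*args):
    subprocess.run(args, check=True)

# One-time setup: derive an attestation key (AK) under the TPM's endorsement
# key (EK). The AK's private half is generated inside the TPM and never leaves it.
run("tpm2_createek", "-c", "ek.ctx", "-G", "rsa", "-u", "ek.pub")
run("tpm2_createak", "-C", "ek.ctx", "-c", "ak.ctx", "-u", "ak.pub")

# Per document: a verifier-supplied nonce stops old quotes from being replayed.
nonce = secrets.token_hex(16)

# Ask the TPM to sign the current PCR values -- the accumulated measurements
# of firmware, bootloader, OS, and (given a measured-boot setup) your binary.
run("tpm2_quote", "-c", "ak.ctx", "-l", "sha256:0,1,2,3",
    "-q", nonce, "-m", "quote.msg", "-s", "quote.sig", "-o", "quote.pcrs")

# The relying party checks the quote against the AK public key and the
# expected PCR values for the authorized build.
run("tpm2_checkquote", "-u", "ak.pub", "-m", "quote.msg",
    "-s", "quote.sig", "-f", "quote.pcrs", "-q", nonce)
```

The quote, plus the AK's certificate chain back to the TPM manufacturer, is the kind of artifact you could embed in the document alongside the user's signature. The hard part, as the comments below point out, is getting your binary's hash into a PCR in the first place: that takes a measured-boot chain from the firmware up.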

Josiah
  • Most of our users don't care about their hardware, but maybe in the future, when TPMs become common. Could you elaborate on how to ensure the user's TPM signs the correct binary, and how to get from that to signing the documents with proof that the build was an authorized one? – ZOMVID-21 Jun 22 '19 at 18:00
  • I'm afraid I don't know enough of the details. You would be better served looking for a published protocol. – Josiah Jun 22 '19 at 18:43
  • TPMs are common, AFAIK, but you still need a chain of trust all the way from the boot loader - you'd have to be running a verified BIOS, verified bootloader, and verified operating system, as well as your verified program. – user253751 Jun 24 '19 at 22:58