The problem with static software whitelisting is that in the real world, employees with versatile jobs need to run unexpected programs. The company sets up a whitelist to limit what programs can run - cool, security! But the next thing you know, someone needs to run something for a time-sensitive task, they can't, deliverables are late, the company loses money, and the whitelist is dead.
I'm thinking of a more dynamic mitigation strategy:
Imagine a scenario where we can verify, on a wide scale, all programs that run on monitored hosts in our network. Let's say it's fine for unverified files to be downloaded by some hosts on our network, but if we detect that same file on, say, 20% of the hosts on our network, we block it unless (sketched below):
- we can call out to its origin through some protocol,
- that origin is a publisher we trust (whitelisted),
- the protocol allows us to verify the authenticity of the hash we scanned against the one they published, and
- all of this is done in a cryptographically secure way, e.g. over HTTPS.
So a few machines could still easily be hit by a zero-day, but we're just trying to mitigate saturation of our whole network by this potential threat.
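To make that concrete, here's a rough Python sketch of what a central policy service might do. The 20% threshold, the host count, the trusted-publisher list, and the `/published-hashes/<sha256>` lookup are all assumptions for illustration, not a real product or API:

```python
# Minimal sketch of the prevalence-based blocking idea.
# Endpoint paths, thresholds, and the publisher lookup are hypothetical.
import requests  # third-party HTTP client

PREVALENCE_THRESHOLD = 0.20          # block once 20% of hosts have seen the file
TOTAL_MONITORED_HOSTS = 500          # size of the monitored fleet (assumed)
TRUSTED_PUBLISHERS = {"google.com", "mozilla.org"}  # whitelisted origins (assumed)

hosts_seen = {}                      # sha256 -> set of host IDs that reported it

def report_file(host_id: str, sha256: str, origin: str) -> str:
    """Called whenever a monitored host first sees a file. Returns 'allow' or 'block'."""
    hosts_seen.setdefault(sha256, set()).add(host_id)
    prevalence = len(hosts_seen[sha256]) / TOTAL_MONITORED_HOSTS

    if prevalence < PREVALENCE_THRESHOLD:
        return "allow"               # unverified files are fine on a few hosts

    if origin in TRUSTED_PUBLISHERS and publisher_confirms_hash(origin, sha256):
        return "allow"               # origin is trusted and vouches for this exact hash

    alert_admin(sha256, prevalence)
    return "block"                   # widespread and unverifiable: stop further executions

def publisher_confirms_hash(origin: str, sha256: str) -> bool:
    """Ask the origin (over HTTPS) whether it published this hash.
    The /published-hashes/<sha256> path is a hypothetical API shape."""
    try:
        r = requests.get(f"https://{origin}/published-hashes/{sha256}", timeout=5)
        return r.status_code == 200 and r.json().get("hash") == sha256
    except requests.RequestException:
        return False                 # treat network failure as "not confirmed"

def alert_admin(sha256: str, prevalence: float) -> None:
    print(f"ALERT: {sha256} is on {prevalence:.0%} of hosts and can't be verified")
```

The point is just that nothing gets blocked until a file is both widespread and unverifiable.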
You could say that files downloaded via HTTPS, such as JavaScript, are already protected against MITM alteration by HTTPS itself, but what if:
- The attacker compromises the trusted publisher and falsifies the file they're publishing, and victims around the world implicitly trust the file because it's delivered over HTTPS from a trusted source
- Or (in the case described above, where a hash-validation API is used) the attacker compromises the trusted publisher and falsifies both the hash and the file they're publishing, so victims reach out to verify the hash, get a good match, and trust this malicious file.
So I'm imagining another requirement:
Let's imagine we trust a publisher, but want to prepare for the eventuality that they're breached.
- The protocol involves a global publisher blockchain, in which many trusted publishers jointly maintain a ledger of verified file hashes.
Now even if an attacker breaches a trusted publisher, as long as the publisher is careful to verify the integrity of the hash it submits to the blockchain, the attacker won't be able to get maliciously modified files past the hash check.
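As a rough illustration of what I mean by the ledger, here's a minimal Python sketch. The quorum size, the entry fields, and the hash-chaining are all assumptions, and real signing/mining is omitted entirely:

```python
# Minimal sketch of the shared hash ledger, assuming (hypothetically) that each
# entry records a file hash plus the other members that independently confirmed it.
import hashlib
import json
from dataclasses import dataclass, field

QUORUM = 3   # assumed: how many independent publisher confirmations an entry needs

@dataclass
class LedgerEntry:
    file_sha256: str
    publisher: str                                    # who published the file
    confirmations: set = field(default_factory=set)   # other members who verified it
    prev_entry_hash: str = ""                         # hash chain linking entries

    def entry_hash(self) -> str:
        payload = json.dumps(
            [self.file_sha256, self.publisher, sorted(self.confirmations), self.prev_entry_hash]
        )
        return hashlib.sha256(payload.encode()).hexdigest()

class HashLedger:
    def __init__(self):
        self.entries: list[LedgerEntry] = []

    def append(self, entry: LedgerEntry) -> None:
        # Link each new entry to the previous one so history can't be silently rewritten.
        entry.prev_entry_hash = self.entries[-1].entry_hash() if self.entries else ""
        self.entries.append(entry)

    def is_trusted(self, file_sha256: str) -> bool:
        """A file hash counts as published only if enough independent members confirmed it."""
        return any(
            e.file_sha256 == file_sha256 and len(e.confirmations) >= QUORUM
            for e in self.entries
        )
```

The property I care about is that a single compromised publisher can't unilaterally get a hash accepted; other members have to confirm it before clients treat it as published.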
Is there anything wrong with this scheme? Some vulnerability or logistical issue I'm missing?
In case I wasn't clear, an example:
- A nation-state actor hacks Google
- Google, following a standard API, sends an HTTPS POST to trustedpublishers.org containing the hash of the file they're about to publish, with a mandatory human validation step to sign off that the file is untainted and secure. trustedpublishers.org forwards this new transaction to Google and every other trusted publisher with a membership in the trust org, who each do the work, similar to the "mining" done with crypto-currencies, to propagate the change into the blockchain.
- Google pushes an update to the JavaScript running on Google.com
- For the first time on the network, an employee of Company C opens Google Chrome, a malicious version of this new JavaScript file is downloaded, and Company C's anti-virus does some investigation.
- The user's host executes the JS; no latency is experienced, nor is execution of the script halted.
- Company C's AV reaches out via HTTPS GET to the API at trustedpublishers.org, and also checks with a few endpoints mirrored by members of trustedpublishers.org, to make sure everyone agrees on the hash presented (sketched at the end of this example).
The hash can't be validated:
Depending on the network admin's config choice:
- The network admin is alerted and the file is immediately blocked from running
or, imagining this wasn't a trusted publisher and there's no validation that can be done:
- Time passes; 20% of the hosts on Company C's network have now executed this file
- Further executions of the file are blocked and the network admin is alerted to investigate and either whitelist the hash or not.
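For the AV lookup step in the example, I'm imagining something like the following Python sketch. The endpoint URLs, the API path, and the example file name are all made up for illustration:

```python
# Rough sketch of the AV-side check described above: ask trustedpublishers.org
# and a few member mirrors whether they all agree on the published hash.
# The URLs and the /hashes/<publisher>/<file> API shape are hypothetical.
import requests

ENDPOINTS = [
    "https://trustedpublishers.org",
    "https://mirror1.example.org",   # assumed member-run mirrors
    "https://mirror2.example.org",
]

def published_hash(endpoint: str, publisher: str, file_name: str) -> str | None:
    """Fetch the hash this endpoint believes the publisher registered for the file."""
    try:
        r = requests.get(f"{endpoint}/hashes/{publisher}/{file_name}", timeout=5)
        return r.json().get("sha256") if r.status_code == 200 else None
    except requests.RequestException:
        return None

def hash_is_agreed(publisher: str, file_name: str, local_sha256: str) -> bool:
    """True only if every reachable endpoint reports the same hash we computed locally."""
    answers = [published_hash(e, publisher, file_name) for e in ENDPOINTS]
    answers = [a for a in answers if a is not None]
    return bool(answers) and all(a == local_sha256 for a in answers)

# Example: the AV computed local_sha256 for Google's new JS file and now checks consensus.
if not hash_is_agreed("google.com", "main.js", "<locally computed sha256>"):
    print("Hash can't be validated: alert the admin / apply the prevalence policy")
```

If no endpoint is reachable, or any of them disagree, the check fails and the file falls back to the prevalence policy above.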