
The problem with static software whitelisting is that, in the real world, employees with versatile jobs need to run unexpected programs. A company sets up a whitelist to limit what programs can run, and at first it looks like a security win. But soon someone needs to run an unapproved program for a time-sensitive task and can't, deliverables slip, the company loses money, and the whitelist is dead.

I'm thinking of a more dynamic mitigation strategy:

A scenario where we can verify all programs that run on monitored hosts across our network, at scale. Let's say it's fine for unverified files to be downloaded by a few hosts on our network, but if we detect that same file on, say, 20% of the hosts on our network, we block it unless:

  • we can call out to its origin through some protocol
  • that origin is a publisher we trust (whitelisted)
  • the protocol allows us to verify that the hash we scanned matches the one they published
  • all of this is done in a cryptographically secure way, e.g. over HTTPS
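The verification step above could look something like the following minimal sketch. The endpoint path, response format, and timeout are all assumptions, since the question doesn't specify the protocol:

```python
import hashlib
import json
import urllib.request

def sha256_of(path):
    """Hash the local file we scanned, streaming to handle large files."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def publisher_confirms(path, origin, trusted_origins):
    """Return True only if the file's claimed origin is whitelisted AND
    confirms, over HTTPS, that it published this exact hash.
    The /api/v1/hashes/<digest> path is hypothetical."""
    if origin not in trusted_origins:
        return False
    digest = sha256_of(path)
    url = f"https://{origin}/api/v1/hashes/{digest}"
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            record = json.load(resp)
    except OSError:
        return False  # origin unreachable: treat as unverified
    return record.get("sha256") == digest
```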

So a few machines could still be hit by a zero-day, but we're only trying to keep this potential threat from saturating our whole network.

You could argue that files already delivered via HTTPS, such as JavaScript, are protected against MITM alteration by HTTPS itself, but what if:

  • The attacker compromises the trusted publisher and falsifies the file being published, and victims around the world trust the file implicitly because it was delivered over HTTPS from a trusted source
  • Or (in the case described above, where a hash-validation API is used) the attacker compromises the trusted publisher and falsifies both the hash and the file being published, so victims reach out to verify the hash, get a good match, and trust the malicious file.

So I'm imagining another requirement:

Let's imagine we trust a publisher, but want to prepare for the eventuality that they're breached.

  • The protocol involves a global publisher blockchain where many trusted publishers maintain a blockchain verifying file hashes.

Now even if an attacker breaches a trusted publisher, as long as the publisher is careful to verify the integrity of each hash it submits to the blockchain, the attacker won't be able to inject malicious code into the served files.
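As a rough illustration of the structure (not a consensus protocol), assuming each ledger entry records a publisher, a file hash, and a link to the previous entry:

```python
import hashlib
import json
import time

def make_entry(prev_entry_hash, publisher, file_sha256):
    """Append-only ledger entry; its own hash covers all fields,
    including the link to the previous entry."""
    entry = {
        "prev": prev_entry_hash,
        "publisher": publisher,
        "file_sha256": file_sha256,
        "timestamp": time.time(),
    }
    raw = json.dumps(entry, sort_keys=True).encode()
    entry["entry_hash"] = hashlib.sha256(raw).hexdigest()
    return entry

def chain_is_valid(chain):
    """Verify every entry hashes correctly and links to its
    predecessor, so no entry can be rewritten after the fact."""
    for i, entry in enumerate(chain):
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        raw = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(raw).hexdigest() != entry["entry_hash"]:
            return False
        if i > 0 and entry["prev"] != chain[i - 1]["entry_hash"]:
            return False
    return True
```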

Is there anything wrong with this scheme? Some vulnerability or logistical issue I'm missing?


In case I wasn't clear, an example:

  • A nation-state actor hacks Google
  • Google, following a standard API, sends an HTTPS POST to trustedpublishers.org containing the hash of the file it is about to publish, with a mandatory human validation step to sign off that the file is untainted and secure.
  • trustedpublishers.org forwards this new transaction to Google and every other trusted publisher with a membership in the trust org, each of which does work, similar to the "mining" done with cryptocurrencies, to propagate the change into the blockchain.
  • Google pushes an update to the JavaScript running on Google.com.
  • For the first time, an employee of Company C opens Google Chrome; a malicious version of this new JavaScript file is downloaded, and Company C's antivirus begins investigating.
  • The user's host executes the JS; no latency is experienced, nor is execution of the script halted.
  • Company C's AV reaches out via HTTPS GET to the API at trustedpublishers.org, and also checks a few endpoints mirrored by members of trustedpublishers.org, to make sure everyone agrees on the published hash.

The hash can't be validated:

Depending on the network admin's config choice:

  • The network admin is alerted and the file is immediately blocked from running

or, if this weren't a trusted publisher and no validation could be done:

  • Time passes, 20% of the hosts on Company C's network have now executed this file
  • Further executions of the file are blocked, and the network admin is alerted to investigate and decide whether to whitelist the hash.
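The client-side policy described above (a prevalence threshold plus a quorum of trusted mirrors) could be sketched as follows; the threshold value, quorum size, and data shapes are all assumptions for illustration:

```python
PREVALENCE_LIMIT = 0.20  # the 20% saturation threshold from the question

def mirrors_agree(file_hash, mirror_responses, quorum):
    """mirror_responses: the hash each mirror reports for this
    publication (None if unreachable). The file is verified only if at
    least `quorum` mirrors report the hash we computed locally."""
    matches = sum(1 for h in mirror_responses if h == file_hash)
    return matches >= quorum

def should_block(file_hash, hosts_seen_on, total_hosts,
                 mirror_responses, quorum):
    """Allow unverified files on a few hosts, but block once the file
    nears network saturation and the mirrors cannot confirm it."""
    prevalence = hosts_seen_on / total_hosts
    if prevalence < PREVALENCE_LIMIT:
        return False
    return not mirrors_agree(file_hash, mirror_responses, quorum)
```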
J.Todd
  • "human personnel validation step to sign off that the file is untainted and secure" -- oh my sweet summer child ... – schroeder May 30 '21 at 16:55
  • 1
    " a malicious version of this new JavaScript file is downloaded" -- from where? Is your whole assumption that bad JS comes from a malicious actor who somehow got access to the CI/CD pipeline and bypassed controls? But, if they could enter the pipeline, then they would also get a valid hash on the chain... – schroeder May 30 '21 at 17:01
  • @schroeder Alright, fair enough, but that's a hurdle, not a barrier: Let's say instead this trusted publisher API requires publishers to submit hashes in a progression-based version format, so that if the client requests a previous version, they're sent a link to where the publisher hosts that previous version, like requesting a specific version with `NPM` or `apt` or `yum`. So `trustedpublishers.org` says "this file hash was submitted 7 days ago" and the client says "OK, 7 days would be enough time for the publisher to likely notice a breach and recall/update an unsafe publication" or (1/2) – J.Todd May 30 '21 at 17:03
  • @schroeder (2/2) "I'm not comfortable with this recent of a publication, give me a URL to a version X days old" so the client can choose to run a prior more trustworthy version – J.Todd May 30 '21 at 17:04
  • 1
    I'm getting the sense that you have not worked in development. You can't allow the end user to randomly choose a version of JS. There are dependencies that need to be verified, too. And it's the *developer's* site, not the users', and the developers get to determine the version of the site that gets presented to the user. Can you imagine the usability, versioning, and vulnerability nightmare that would exist if people could do what you suggest? – schroeder May 30 '21 at 17:07
  • 1
    OP, after reading your question and following the ensuing comments, I'm not sure how your solution would benefit from a blockchain any more than it would benefit from a traditional distributed database (such as DNS). What does blockchain provide that other distributed databases don't, which makes blockchain more suited for this application than a traditional distributed database? – mti2935 May 30 '21 at 17:08
  • @mti2935 meh, blockchain or distributed database, I'm not fussed either way. this approach will not accomplish the desired goal. – schroeder May 30 '21 at 17:13
  • @mti2935 as a mechanism to allow multiple trusted organizations and the client to all validate collectively the hash value, else: Attacker penetrates Trusted Companies A and B, updates their registry and game over. With a blockchain system, the client can verify the hash validations expressed by Company A and B at 5:01PM are the same as they were a minute earlier, before they were breached and started lying about the hash. – J.Todd May 30 '21 at 17:14
  • @J.Todd DNS and distributed databases hash their entries and sync with each other to maintain integrity. So a blockchain is not unique in that respect. – schroeder May 30 '21 at 17:16
  • OP and @schroeder, OK so as I understand it, OP wants to leverage the *immutability* property of the blockchain, which makes it very difficult for anyone (including an attacker) to change or delete records in the blockchain once they have been written to it. That makes sense. However, I still have the same questions as schroeder with regard to other aspects of this solution. – mti2935 May 30 '21 at 17:20
  • @mti2935 [see chat since this became a discussion (mostly due to my question being too broad, really)](https://chat.stackexchange.com/rooms/124874/discussion-between-j-todd-and-schroeder) – J.Todd May 30 '21 at 17:44
  • 1
    I don't fully understand the question at all. You talk about whitelisting, then you jump to running code that has a hash validated via blockchain, then you talk of some mythical beast that can hack Google as well as other companies. It very much feels like you started with a solution and worked backwards to try to fit problems into it – yeah_well May 30 '21 at 17:53
  • Voting to close my own question, it was too unfocused and I don't think I can focus it down enough to any one problem after some consideration. – J.Todd May 30 '21 at 19:29
  • @yeah_well Firstly, if you think a nation state couldn't hack Google, you should reconsider your position. Resources: [1](https://zerodium.com/program.html), [2](https://www.usenix.org/system/files/conference/woot16/woot16-paper-blackthorne.pdf). Those two resources in combination mean every company in the world is really, really vulnerable. The only question is how many zero-days the attacker is willing to buy. There's no need to work backwards to find a problem that exists in every company in the world. – J.Todd May 30 '21 at 19:39
  • 2
    You don't need 20% of hosts to execute malicious software. A **single** execution can be sufficient for successful attack like it was in case of SolarWinds. – mentallurg May 30 '21 at 20:47
  • @mentallurg Only because remote code execution solutions make lateral movement so easy. Based on the white papers and DefCon / Blackhat presentations I've gone through on the way lateral movement is done, there doesn't seem to be a good solution to stopping that lateral movement. Maybe we can re-think the way remote code execution solutions are done, perhaps in a way that decentralizes the normally very centralized process (a network admin propagating changes to a network with Windows Remote Management and similar). Something to break the effectiveness of tools like Bloodhound – J.Todd May 31 '21 at 19:02
  • I don't mean remote code execution. I mean the following: a *single* execution of an infected file (an executable, or some script within MS Office, or similar) may infect many other files. It can also download other malicious software that will attack your network from inside. So 20% will not be needed at all. The attacker can reach the goal after a *single* execution of a *single* infected file. – mentallurg May 31 '21 at 20:03

2 Answers


You appear to be unaware of established, more robust options:

  • approve signed apps: that's much, much better than your "more dynamic mitigation strategy"
  • javascript is not saved as an "app" but in browser storage: your strategies are much easier for JS when you realise this
  • global publisher blockchain doesn't mitigate compromised code because the compromised code will have a legitimate entry in the blockchain
  • having a hash of all versions of the app to check against confirmed bad versions is what you are looking for: no blockchain required

It seems like you are trying to shoehorn a blockchain use case onto a faulty notion that released programs can be "verified and trusted beyond reproach". A. that's not how product development works, and B. how can a publisher be so sure themselves?

It is better and more efficient to know when there is a bad version than to know what all the "good" versions are. The former validates a known bad; the latter tries to prove a negative by inference.

So the problem is not with the blockchain, the problem is with the logic. You are assuming that "good" can be empirically determined, then encoded in a distributed ledger. But if we could do that, then we wouldn't need your solution at all.
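As a sketch of the known-bad approach favoured here (the hash value is a placeholder, not a real indicator of compromise):

```python
import hashlib

# Denylist of hashes for versions confirmed to be compromised; there is
# no attempt to enumerate every "good" version.
KNOWN_BAD_SHA256 = {
    "d" * 64,  # placeholder for a confirmed-compromised release
}

def is_known_bad(path):
    """Stream-hash the file and check it against the denylist."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest() in KNOWN_BAD_SHA256
```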

schroeder
  • You're selling me a known-exploit mitigation strategy, when I was asking about an (admittedly rough-draft) mitigation strategy for zero-day attacks. The purpose of the blockchain would be for multiple publishers to agree on the same publication, requiring multiple individual organizations to be simultaneously breached and made to lie about the nature of a hash. Yes, the entry could be falsified, but that's a hurdle, not a barrier. Consider the solutions rather than sidestepping to a less effective solution. How about the publication-date-based client choice? – J.Todd May 30 '21 at 17:09
  • 1
    Your logic is faulty. That's what I'm trying to show you. I'm not trying to "sell" you anything. – schroeder May 30 '21 at 17:09
  • 1
    "the entry could be falsified" -- you haven't read or understood what I said. I'm not talking about a fraudulent entry. – schroeder May 30 '21 at 17:11
  • 1
    Just because I am finding issues with your proposed, specific solution does not mean that I am not willing to consider alternatives to the current schemes. I'm dealing with what you have presented here. If you want a collaborative solution design session, then a Q&A site is not the best place for that. – schroeder May 30 '21 at 17:14
  • I'm not asking for that, but I've seen lots of great answers on SE that look past an imperfectly presented question and see what's really being asked. Which in this case is whether a trusted publication scheme works where the client and multiple orgs would all have to be breached for the trust to be faked. You pointed out the publisher can't be sure it's sending a good hash (maybe the attacker snuck a malicious line into its code), but if we're trusting this publisher as someone over 20% of our hosts are executing code from, we can bank on them noticing a breach within N time. Usually. (1/2) – J.Todd May 30 '21 at 17:17
  • 1
    And some proposed solutions just don't work. Your proposal doesn't suffer from minor implementation details. The basis is flawed. There is no way to get hand-wavy about the issues. – schroeder May 30 '21 at 17:20
  • (2/2) Your argument against the versioning problem: Why not? If the previous version worked, it and all its dependencies were submitted and hashed back then. Did that version of, let's say, Google.com not work? Sure it did, and if we're the publisher there's nothing wrong with letting our client, our customer potentially, access a previous version of our program if they're more comfortable with that, for security's sake. It's not even difficult. I'm not trying to argue for the sake of argument, I just think you might be overlooking a solution to this problem in InfoSec. – J.Todd May 30 '21 at 17:21
  • Let us [continue this discussion in chat](https://chat.stackexchange.com/rooms/124874/discussion-between-j-todd-and-schroeder). – J.Todd May 30 '21 at 17:24

Blockchain is good in exactly one scenario: establishing trust at a peer-to-peer level when there are zero valid options for agreeing upon a mutually trustworthy third party. That's it. In every other case, using a mutually trusted third party (i.e. a Certificate Authority) can provide the trust needed in a much easier, cheaper, and more standardized way.

All a blockchain does is certify that transactions took place in the order specified in the ledger; it doesn't even specify a valid timestamp of "when" the transaction took place, except as bookended by other events. For security purposes, a blockchain transaction is the equivalent of obtaining a trusted timestamp authority's timestamp, plus validating the timestamp. Whatever transaction has the oldest valid timestamp is the legitimate transaction.

The typical scenario for blockchain is digital currency, where people want to securely exchange value without government involvement. Another case might be something like responsible timber harvesting, where the local government regulators are corrupt and issue meaningless logging permits allowing anyone who pays the bribes to chop down protected trees; neither the loggers nor the EU-based consumers trust the official government stamps, but the loggers don't trust the EU authorities either.

Also important for selecting blockchain is that the barrier to mutual trust must be so high that all parties are willing to pay exorbitant per-transaction costs. Mining is deliberately kept inefficient to provide Proof of Work, and that inefficiency costs in terms of hardware and energy consumption. A trusted CA's signature can be verified with a simple cryptographic check that costs virtually nothing to create or execute.

In the case you describe, full trust can be derived from a single agreed-upon authority, such as a CA. Therefore blockchain is no longer the best option.

Instead, you're trying to solve a different problem: the compromise of a party that is not the CA. Blockchain doesn't (and can't) provide additional assurance that isn't already provided by trusting digital signatures.

When such compromises happen in today's code-signing environments, the issuing code-signing certificate is simply revoked. The publisher's clients are protected the next time they check the signature, when the invalidated certificate prevents the compromised code from running.

Is it a 100% perfect solution? Of course not. That doesn't stop it from being trusted by Apple to keep their entire iOS and app store ecosystem essentially free from rogue apps.

John Deters