
Can it be proved to the user that the running code behind a website with security related code is the same as published?

I'm currently looking at a few new project ideas, and one involves secure communication. I would like to make it open source so that it can be reviewed by users and, more importantly, by security experts. That might be enough to establish that the published code is secure, but how can the user be assured that this reviewed code is really what is running on the website?

I have thought of computing a hash of the files doing the security-related work and printing it out, but such a hash could just as easily be printed as a static value, so it provides no real certainty. Doing this with apps that can be compiled from the published source and compared is easy, but it isn't applicable to websites. A certificate only proves who is operating the site; it doesn't tell the user whether the operator is to be trusted.

Can the user be given any kind of assurance without taking a leap of faith?

Sven

7 Answers


In reality, no one can prove that a system is totally secure against all attacks. Provable security can never be perfect because new attacks are developed regularly; security is a young field, and we don't yet know all of the ways software can be abused. Some projects are open source specifically for the sake of security, and there are two major reasons to do this.

Some companies "crowd-source" their security and offer a bug bounty program. Mozilla is one of the most successful with this model, and all of their applications and infrastructure are open source. I have found four critical vulnerabilities in their infrastructure and collected $3,000 apiece for these findings. I am happy that I was able to make Mozilla safer. Collecting these bounties was not easy; bug bounty programs turn hunting for bugs into a fierce competition.

There are also projects that make parts of their application open source so that their users can independently verify that the software is in fact secure. Good examples of this are Whisper Systems (now a part of Twitter) and their mobile product RedPhone. Parts of HushMail are also open source for this reason.

rook
  • Thanks for your answer. Although I knew the first two parts already, I guess my title was kind of misleading; please see my comment to my question. I had something similar in mind as Whisper Systems, but how can the user be assured that the application they offer is the same, especially in web development? Hashing an apk, dll or exe that can be compiled and compared doesn't seem applicable to the web. – Sven Dec 29 '12 at 03:40
  • @BeatMe that's not possible. – rook Dec 29 '12 at 15:23
  • @Rook, this doesn't really answer the question at all... He is asking how to prove to users that he is running the code in question, and not some other arbitrary code. – Pacerier Jun 04 '14 at 17:08

I'm not sure this applies to your specific question or situation, but it applies to the title of the question. I'm posting this as an answer here because the issue of unauthorized code (in general) making its way onto a web server isn't very well covered in any guidance I'm aware of. I'm hoping that this answer will help others who are in a situation more similar to ours than yours.


You cannot be 100% sure, but a Secure Development Lifecycle augmented by tools can go a long way toward helping. It can be labor-intensive, and involve manual checking of logs, or at least some human interaction, but if you've got sensitive data, and the budget to implement any of the approaches, it is worth the effort IMO.

We have a process in place to help assure that code published to our server is approved, reviewed code. I'm not going to list any products, but here is the general outline of our procedures and policies:

Policies:

  1. Developers do not have access to the live system, period. Developers cannot push code to the servers. Code needs to be pushed by a designated team outside of development.
  2. All changes are tracked (source control)
  3. All changes must be reviewed prior to being published. (and the review is documented and signed by the reviewer and developer.)
  4. The code review specifies a specific source code revision # that is approved for publish.
  5. The team that publishes the code publishes it directly from source control via an automated build/deployment script (a minimal sketch follows this list). They simply supply the revision # and tag. The script does the job of taking the code from the right location in source control and pushing it out to the live server. They don't know how to do anything else.
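
For illustration only, such a publish script could look something like the sketch below. The repository URL, target directory, and script name are placeholders rather than our actual setup, and a real pipeline would add logging, rollback, and tighter access controls.

```python
#!/usr/bin/env python3
"""Hypothetical publish script: deploys exactly one approved, tagged revision.

The deployment team supplies only the approved tag; the script does the rest,
so only reviewed code taken straight from source control can reach the server.
The repository URL and target directory are placeholders.
"""
import subprocess
import sys
import tempfile

REPO_URL = "ssh://git@example.internal/website.git"  # placeholder
LIVE_ROOT = "/var/www/site"                          # placeholder

def deploy(tag: str) -> None:
    with tempfile.TemporaryDirectory() as workdir:
        # Fetch only the approved tag from source control.
        subprocess.run(
            ["git", "clone", "--branch", tag, "--depth", "1", REPO_URL, workdir],
            check=True,
        )
        # Sync the checked-out tree to the live web root, removing anything
        # that is not part of the approved revision.
        subprocess.run(
            ["rsync", "-a", "--delete", "--exclude", ".git", f"{workdir}/", LIVE_ROOT],
            check=True,
        )

if __name__ == "__main__":
    if len(sys.argv) != 2:
        sys.exit("usage: publish.py <approved-tag>")
    deploy(sys.argv[1])
```

The deployment team would then run something like `publish.py v2.3-approved`, where the tag name is whatever identifier the code review recorded (the tag shown here is hypothetical).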

Tools that help to enforce this:

  • We have software running on our web server that logs any file changes.
    • changes to code raise a flag for our security administrator, who checks to ensure that the change was approved, reviewed, and implemented by the process
    • This helps us catch code that could make it onto the server either through a rogue insider, or through some code upload.
  • We also have software that can compare files/directories for differences. (possible tools here) These tools can be used to scan a "known-good" set of published code and compare them to our live server. "Known good" in this case is code that's in a reviewed/approved revision directly from source control.
    • Many of these tools can be scripted.
  • We haven't implemented this, but we are toying around with the idea of having a build server configuration that would check out the code from source control and use scripts to do the comparison automatically, emailing specified people if differences are detected (a sketch of such a comparison follows this list). On paper, we've figured out how to do this, but haven't yet gotten around to testing/implementing it.
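
Here is a minimal sketch of what that comparison step could look like, assuming the "known-good" tree has already been exported from the approved revision in source control. The paths, mail relay, and addresses are placeholders, not our actual configuration.

```python
#!/usr/bin/env python3
"""Hypothetical integrity check: compare a known-good export against the live web root.

Hashes every file under both trees and emails a security contact if anything differs.
The paths and mail settings are placeholders.
"""
import hashlib
import smtplib
from email.message import EmailMessage
from pathlib import Path

KNOWN_GOOD = Path("/srv/audit/approved-export")  # placeholder: export of the approved revision
LIVE_ROOT = Path("/var/www/site")                # placeholder: live web root
MAIL_TO = "security@example.internal"            # placeholder

def tree_hashes(root: Path) -> dict[str, str]:
    """Map each file path (relative to root) to its SHA-256 digest."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in root.rglob("*")
        if p.is_file()
    }

def main() -> None:
    approved = tree_hashes(KNOWN_GOOD)
    live = tree_hashes(LIVE_ROOT)
    added = sorted(set(live) - set(approved))
    removed = sorted(set(approved) - set(live))
    changed = sorted(f for f in approved.keys() & live.keys() if approved[f] != live[f])
    if not (added or removed or changed):
        return  # live server matches the approved revision

    # Report any drift to the security administrator.
    msg = EmailMessage()
    msg["Subject"] = "Live site differs from approved revision"
    msg["From"] = "integrity-check@example.internal"  # placeholder
    msg["To"] = MAIL_TO
    msg.set_content(
        f"Unexpected files: {added}\nMissing files: {removed}\nModified files: {changed}\n"
    )
    with smtplib.SMTP("localhost") as smtp:  # placeholder mail relay
        smtp.send_message(msg)

if __name__ == "__main__":
    main()
```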

Of course, this isn't the WHOLE of our development/risk management/deployment process, but it does highlight the pieces that deal with ensuring the code that's approved is actually in place on our live server. I'm leaving out the continuous automated pen tests, and a whole slew of other countermeasures we employ to try to minimize the risk to our customers.

It's also not foolproof. There are holes that a determined attacker could exploit. For example, if our administrator isn't actually reviewing changes to ensure they are approved, the process is not going to work. Also, there's obviously a time lag - the security officer can't monitor changes 24/7, so if he or she isn't around, there's the chance that someone can get unauthorized code on the server and it'll be there until the admin is back on the job.

As an alternative, we've toyed with the idea of simply having a build server automatically publish the entire website every x hours from a pre-determined, pre-approved tag. This way, even if bad code gets out there, it will get erased automatically every x hours. But there are obvious issues with that approach, and we're not ready to go that route yet.

It's quite possible that all our work is foolish and we're doing it wrong, but there's so little guidance out there for this particular problem that we're muddling along the best we can. This process is also based around our specific environment and business model. (maintaining our own websites, self-hosted, entire development team, business units, and Network Admins are all in-house.) It's probably not applicable to anyone in a different situation. Hopefully, someone else out there has other approaches that are more efficient and work better.

So, in agreement with the other answers, there is no 100% foolproof way to ensure that unauthorized code never makes it onto the servers. But that shouldn't dissuade you from doing all that you can. And if you can document and demonstrate to your users that you're doing something, you will go a long way toward building their trust. If you are honest about the potential holes in the plan/process, you will likely gain their trust even further. The point is to do all you can, within the constraints of reason, budget, time, and available tools.

David Stratton

There is no technical tool, like a hash function, which can give the kind of proof that you are looking for. Regardless of what you send to the client, your server could always send the exact values that would be expected from the genuine Web site, then switch to different code immediately afterwards.

What might work is auditors. You publish detailed procedures on how you and your employees make sure that the server really runs your code and not something else. The procedures include things like your password policies, maintenance rules, code reviews, hosting service physical security, screening of employee criminal records and debt status, and zillions of other points. Then you pay an audit firm to send people to inspect your procedures and see whether you are following them; they finally write a report stating that you really are following all your procedures. This does not prove that the Web site is really running the code it should, but it shows that you at least made a substantial effort toward that goal.

Formally, this process is called WebTrust. It is expensive.

Thomas Pornin

The short answer is no.

There is always some hole somewhere, but if your service is secure and you can force the client to run only under certain circumstances, the risk can be minimized. There is always risk; know your risks, because what you don't know can harm you.

happy

You can't.

In principle, you could use remote attestation (a feature of Trusted Computing, relying upon a TPM on the web server) to have the web server "attest to" ("prove") what code it is running, and the client could check this attestation. However, in practice, remote attestation is too difficult to use for this purpose; there are too many practical barriers.

So, there's no way to prove that the code running on the website has not been changed. Instead, you need to secure the web server as well as possible to limit the opportunities for an attacker to change the code in the first place.

In other words: focus on prevention, not on detection/verification, as prevention is more tractable.

D.W.

If you can trust a file integrity monitoring solution (e.g., TripWire) internally, you can try to apply something similar through some trusted third party.

You publish the authorized hashes of your files somewhere on the Internet.

A website admin installs your software on a server. The server is monitored by a trusted third party. The trusted third party installs their file integrity monitoring software.

The monitoring company can certify the initial install matches the approved hashes. The third party monitoring company will control and monitor the FIM tool and report somewhere on their site if the hashes change or are no longer in compliance.
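
As a rough sketch of the mechanics, the vendor could publish a manifest of file hashes with each release, and the monitoring party could regenerate the manifest on the live install and compare the two. The paths and the script name below are hypothetical.

```python
#!/usr/bin/env python3
"""Hypothetical hash manifest for third-party file integrity monitoring.

"generate" builds a manifest (relative path -> SHA-256) that the vendor publishes
with the release; "verify" rebuilds it on the live install and compares it with
the published manifest read from stdin.
"""
import hashlib
import json
import sys
from pathlib import Path

def build_manifest(root: Path) -> dict[str, str]:
    """Hash every file under root, keyed by its path relative to root."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*"))
        if p.is_file()
    }

def main() -> None:
    if len(sys.argv) != 3 or sys.argv[1] not in {"generate", "verify"}:
        sys.exit("usage: manifest.py generate|verify <install-root>")
    mode, root = sys.argv[1], Path(sys.argv[2])
    if mode == "generate":
        # Run by the vendor; the output is published alongside the release.
        print(json.dumps(build_manifest(root), indent=2))
    else:
        # Run by the monitoring party against the live install.
        published = json.load(sys.stdin)
        current = build_manifest(root)
        drift = sorted(
            f for f in published.keys() | current.keys()
            if published.get(f) != current.get(f)
        )
        if drift:
            sys.exit(f"Files out of compliance: {drift}")
        print("Install matches the published manifest.")

if __name__ == "__main__":
    main()
```

The vendor might run something like `manifest.py generate /var/www/site > manifest.json` and publish the output, while the monitoring party runs `manifest.py verify /var/www/site < manifest.json` on a schedule (the file names here are hypothetical).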

If you trust the third party, and they can put reasonable controls in place so that the end user trusts them, you have an outside party doing the monitoring. I assume the end user's fear is that, if you self-report, you have an interest in not reporting a change you made that would hurt them. In a way, I guess this would be somewhat similar to what the AV and pentest companies do when they put those little badges on their customers' sites talking about their last security check.


Another possibility might involve some type of code signing if it were applet based, allowing the end user to confirm that the client they are using is known code from some particular source. You could possibly have some type of check for signed code on the backend, but that might be problematic with non-compiled code, and you would likely still need third-party confirmation, since by definition you do not want to trust the web server itself: it can always lie, or pass the answer through some proxy to man-in-the-middle the evidence coming to you.

Eric G

You could create a list of file hashes, store it on a remote computer, and schedule a comparison weekly or daily. That way you'll notice when files change!

  • But how do you know that the hashes are computed correctly when you verify them? If the server has been compromised, it may report the expected content to the verifier but execute the infected files. – Gilles 'SO- stop being evil' Jan 02 '13 at 09:35