
I want to develop a web service (classic client/server) where the server is not trusted, and so is kept (cryptographically) ignorant of the actual content/messages.
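To make that split concrete, here is a toy Python sketch of the intended architecture: the client encrypts and authenticates everything locally, and the server only ever stores opaque blobs. All the function names are made up for this sketch, and the hash-based keystream is an illustration only - a real client would use a vetted AEAD (e.g. AES-GCM) from a proper crypto library, not a hand-rolled construction.

```python
# Toy sketch of the "cryptographically ignorant server" idea: the client
# encrypts locally; the server only ever stores opaque ciphertext.
# NOTE: the keystream below is a hash-based illustration ONLY -- a real
# client would use a vetted AEAD (e.g. AES-GCM) from a real crypto library.
import hashlib
import hmac
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a keystream by hashing key + nonce + counter (illustrative only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def client_encrypt(key: bytes, plaintext: bytes) -> dict:
    nonce = secrets.token_bytes(16)
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()  # integrity tag
    return {"nonce": nonce, "ct": ct, "tag": tag}  # all the server ever sees

def client_decrypt(key: bytes, blob: dict) -> bytes:
    expect = hmac.new(key, blob["nonce"] + blob["ct"], hashlib.sha256).digest()
    if not hmac.compare_digest(expect, blob["tag"]):
        raise ValueError("ciphertext or tag tampered with")
    ks = keystream(key, blob["nonce"], len(blob["ct"]))
    return bytes(c ^ k for c, k in zip(blob["ct"], ks))

key = secrets.token_bytes(32)          # never leaves the client
blob = client_encrypt(key, b"hello")   # this is what the untrusted server stores
assert client_decrypt(key, blob) == b"hello"
```

The point of the sketch is the data flow, not the cipher: the key lives only in the client, and the server (or anyone who compromises it) sees nothing but `nonce`/`ct`/`tag`, and any tampering with the stored blob fails the HMAC check on decryption.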

Obviously, if you don't trust the server, then you shouldn't trust any client that server hands you. So our client would have "no moving parts" - that is, one composed entirely of static files (no PHP/Ruby/whatever, no database), just a bundle of JS/HTML/CSS to be distributed separately.

A client like this could run locally on people's computers, or as a GitHub page, or via any webserver. The idea is that more people are capable of extracting a ZIP onto their computer (or using simple FTP) than would ever be likely to host their own server or configure a database appropriately.

I'm not trying to guard against being individually targeted - I'm trying to guard against a single centralised point of attack, so every client would have to be compromised individually to get the messages for a particular user/group.

What would be the security implications/drawbacks of a setup like this?


Clarification: the method by which the ZIP of the client is obtained is not the issue I'm interested in. Technologies already exist for that (public-key signatures of hashes, et cetera). The issue of "How do I make sure I have a good copy of the client?" is completely equivalent to "How do I make sure I have a good copy of my browser/Cygwin/antivirus?" - I'm not interested in any concerns about this that could also be applied to installing Firefox, for instance.

What I'm interested in is any security issues with the setup, assuming that a verified version of the client is available.
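For reference, the hash half of that verification step is a few lines of stdlib Python. The signature half would be layered on top - in practice the expected digest would come from a manifest signed with the author's public key (checked with e.g. GPG or signify), not hard-coded; the file contents and names below are stand-ins for illustration.

```python
# Verify a downloaded client bundle against a known-good SHA-256 digest.
# In practice EXPECTED_SHA256 comes from a signed manifest (verified with
# e.g. gpg/signify), not from a constant; this demo uses a stand-in file.
import hashlib
import os
import tempfile

def sha256_of_file(path: str) -> str:
    """Hash a file incrementally so large ZIPs don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Demo with a temporary stand-in for client.zip:
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"pretend this is the client ZIP")
    path = f.name

EXPECTED_SHA256 = hashlib.sha256(b"pretend this is the client ZIP").hexdigest()
digest = sha256_of_file(path)
assert digest == EXPECTED_SHA256  # bundle matches the (signed) manifest
os.remove(path)
```

If the digest matches and the manifest's signature checks out against a key obtained out of band, the bundle is as trustworthy as that key - which is exactly the "same as verifying your browser" equivalence claimed above.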


P.S. - I'm aware that JavaScript crypto is frowned upon (example article here). However, the criticisms seem to be:

  1. You rely on obtaining a secure client from the untrusted server
  2. JavaScript code can't be sure that the VM running it is secure

#1 is exactly what I'm trying to tackle here, and for #2: if your browser's JavaScript VM is dodgy, then all websites are unsafe, as well as your credit-card details.

cloudfeet
    If you don't trust the server providing the client, what is to prevent the server from altering the client to send the information back to it? If they have to get the client elsewhere, why limit it to web technologies when native can do a far more secure and efficient job? – AJ Henderson Nov 04 '13 at 17:57
  • (1) Second paragraph, first sentence - I don't get the client from the server. – cloudfeet Nov 04 '13 at 18:04
  • (2) Web technologies are sandboxed - the user can be fairly confident that however corrupt it is, an HTML file can't do any actual damage to the rest of their system. – cloudfeet Nov 04 '13 at 18:05
  • Also, I do want this to be *possibly* available from anywhere - like checking email/Facebook at a friend's house. For that, you need it to be web-accessible - so I'm also trying to make self-hosting the client as simple as possible (static files, no other requirements/config). – cloudfeet Nov 04 '13 at 18:09
    ok, sure, but you still need to get the client from somewhere and that server ends up being trusted since it is the source of the client. The only way around that would be to use existing technologies and let someone build their own client based on an open protocol. – AJ Henderson Nov 04 '13 at 18:14
  • How the ZIP of the client is obtained and verified is not the issue. SHA-256 hashes with public-key signatures, et cetera - all the same techniques you would use to verify that the *browser you are using* is solid. Verifying a set of files is a solved problem, and not really what I'm asking about here. – cloudfeet Nov 04 '13 at 22:32
  • Now, if the user uploads the (verified) client to their own web-server, then at that point they are trusting their own web-server to not be compromised - but that's the point. Everyone can hitch their security on a different horse. – cloudfeet Nov 04 '13 at 22:34
  • At that point, why not just work with a server that doesn't contain the information needed to access the data, but is still trusted to provide routing, handling, and code? The keys can remain local, but the code itself can be provided from the server, since at some point you have to start trusting. If you don't trust those operating the server to write secure code, then you don't trust them to write code you run locally either, so the code being provided doesn't matter as long as we know the server is authentic. – AJ Henderson Nov 04 '13 at 23:20
  • All that matters is that the server not be able to access the user's data with its level of access. – AJ Henderson Nov 04 '13 at 23:20
  • The people running the server aren't necessarily the people supplying/writing the client (open standard, interoperable clients?). Or maybe the people running the server turn evil later, or the server is compromised. I want to not trust the server at all - and I think I can do that, as long as I have a good version of the client from somewhere. – cloudfeet Nov 04 '13 at 23:32
  • but how do you know which client is good and which is bad if there are both out there? You can assume that the first client is good, but how do you know that it doesn't have bugs that the new client actually needs to fix to remain secure? Fundamentally, if you don't trust the author of the software, you can't trust the software. Distributing it so everyone has a different copy makes things worse, not better since now you have a bunch of different parties with potentially incompatible clients all able to claim to be the secure one. Trusting the wrong party in any case results in compromise. – AJ Henderson Nov 05 '13 at 00:04
  • It is however a valid claim to talk about people running their own servers separate from the system author, but then it could still simply be obtained from the system author's site live. – AJ Henderson Nov 05 '13 at 00:06
  • "If you don't trust the author of the software, you can't trust the software" - this is true. The author, however, might not want to host the client for everyone - or maybe I want to customise the layout, or something. While reviewing changes in HTML/CSS is not completely trivial, I've got a better shot at it than reviewing changes to the underlying crypto code. – cloudfeet Nov 05 '13 at 00:28
  • Except that complex JavaScript can self-mutate, and thus it's pretty trivial to inject vulnerabilities nearly undetectably. – AJ Henderson Nov 05 '13 at 04:32

0 Answers