
So, I know rolling your own security is ill-advised, but for simplistic things like communicating with a home server, say, updating a grocery list, is a custom protocol fine? It won't be doing anything that needs to be secured, so it seems fine in that sense, but I guess someone could reverse engineer packets and send bogus grocery lists... but then "bogus grocery lists" seems hardly an issue.

So, are simplistic protocols not carrying sensitive data still a hazard to create and use?

--- Clarification

OK, so if the data does not need protecting, simplistic custom protocols won't keep the data secure, which is fine for "worthless" data; but how about the server/client that implements these protocols? Will the use of breakable protocols create insecurities on those implementing the code to support them?

user2738698
  • Bogus grocery list not an issue? How am I going to survive film night with THE WRONG BRAND OF NACHO CHIPS??? – Philipp Apr 14 '14 at 18:50
  • 1) If this is a web application, you can't achieve security against active attacks without HTTPS. 2) Why bother designing a custom security protocol when you can easily use an SSL library? While personally I like custom protocols, I also spent a lot of time learning how crypto works and how to design a protocol. If you're not willing to spend that effort, you should use an off the shelf solution. – CodesInChaos Aug 13 '14 at 09:03

4 Answers


By definition, if it doesn't matter whether someone can read or modify your data, then it isn't sensitive and doesn't have to be secured. For things that do have to be secure, it is ill-advised to use a custom protocol unless you can invest the enormous time and resources (easily running into the millions) needed to ensure its security.

Note that there is also a difference between the security protocol and the data protocol. You could use something like SSL for transport and then use any protocol you want for exchanging the data on top of it. Similarly, you could have your own system for verifying user credentials, but the hash algorithms, the encryption algorithms, and anything else that has a standard should use that standard.
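For instance, the split could look like this: standard TLS carries the bytes, and only the message format is custom. A minimal sketch in Python; the host, port, certificate path, and the JSON-over-length-prefix format are all illustrative placeholders, not a recommendation:

```python
import json
import socket
import ssl

def encode_message(items):
    """Custom applicative protocol: one JSON document per message,
    length-prefixed so the receiver knows where it ends."""
    body = json.dumps({"op": "update", "items": items}).encode()
    return len(body).to_bytes(4, "big") + body

def send_grocery_list(items, host="home.example", port=8443,
                      ca_file="server-cert.pem"):
    # Standard TLS handles confidentiality and integrity; nothing
    # security-critical is reinvented here.
    ctx = ssl.create_default_context(cafile=ca_file)
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            tls.sendall(encode_message(items))
```

Only `encode_message` is custom; everything that must be secure is delegated to the TLS library.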

There is a lot more flexibility in how you use the standards together than in trying to make your own standard. Certainly it is preferable to use a complete standard system if possible, but it is far less burdensome to prove the security when you are using components that can be assumed to do their job in a secure and well understood manner.

AJ Henderson

I suspect you answered your own question already.

The mere fact that you want to protect the data implies that it is sensitive and should not be modified or leaked. If this is not the case, why bother protecting it at all?

If the opposite is true (the data should be protected) then the "rule" stands that the use of custom protocols and encryption algorithms is ill-advised.

Edit: Given your clarification - Although slight, there is always the possibility of introducing vulnerabilities on the server/client when using custom protocols (the degree of possibility will obviously depend completely on what you do and how you go about doing it). Since we have now determined that the grocery list is "worthless" data it would seem pointless trying to protect it in a way that could potentially open up additional attack vectors on the client/server.

It would not make sense trying to protect something "worthless" given even the smallest chance that the protection mechanism could expose something valuable.

In the end it's all about your risk appetite and which you deem more risky: the grocery list data being leaked, or potentially creating more vulnerabilities?

This is assuming you can go without the custom protocols.

ilikebeets
  • Ah, so, a custom protocol is insecure in the sense that it may open vulnerabilities to "worthy" data, regardless of its purpose of communicating "worthless" data? – user2738698 Apr 14 '14 at 16:46
  • Exactly, it won't necessarily be the case but as I said, there is always the possibility depending on what you do and how you do it. It would be up to you to determine if the custom implementation puts your valuable data at risk in any way. – ilikebeets Apr 14 '14 at 17:06

You have to make a distinction between the applicative protocol and the transport protocol. SSL/TLS is a transport protocol: it ensures some security-related guarantees (confidentiality, integrity, some authentication) for a bidirectional stream of bytes. What these bytes mean is what the "applicative protocol" defines. E.g. in HTTPS, HTTP is the applicative protocol and SSL is the transport protocol.

Defining your own applicative protocol is completely up to you. You can botch your own implementation and have, say, buffer overflows; but, otherwise, there is nothing in the applicative protocol definition which will endanger the security guarantees offered by the transport protocol: from the point of view of SSL, bytes are bytes. There is a small caveat, though: SSL guarantees confidentiality for byte values, but the length of the encrypted data leaks. Data-length leakage has been a source of issues, e.g. the so-called CRIME attack.
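If that length leakage matters for a given application, one common mitigation is to pad every message to a fixed bucket size before it enters the TLS layer. A sketch; the 256-byte bucket and the 2-byte length prefix are arbitrary illustrative choices:

```python
BUCKET = 256  # every frame handed to TLS is exactly this long

def pad_message(body: bytes) -> bytes:
    if len(body) + 2 > BUCKET:
        raise ValueError("message too large for one bucket")
    # 2-byte real-length prefix, then zero fill up to the bucket size,
    # so ciphertext length reveals nothing about the message length
    return len(body).to_bytes(2, "big") + body + bytes(BUCKET - 2 - len(body))

def unpad_message(frame: bytes) -> bytes:
    n = int.from_bytes(frame[:2], "big")
    return frame[2:2 + n]
```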

Defining your own transport protocol is a bad idea. Either the applicative data does not need any of the security properties of SSL as a transport protocol, in which case the simplest and most efficient method is not to use SSL at all, just raw TCP; or you still need some security, and "rolling your own" is a known recipe for disaster. The underlying point is that, contrary to a widespread belief, there is little room for extra optimization in SSL: very few parts of the protocol can be removed without totally breaking the security.

(What you can do, though, is to strip down functionality: support only one cipher suite on client and server, remove unneeded extensions, and so on. This is still standard SSL/TLS, and still uses existing SSL/TLS libraries, which is the point.)
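With an OpenSSL-backed library this kind of stripping is a few lines of configuration, not a new protocol. A sketch using Python's ssl module; the suite name is just an example and must be supported on both ends:

```python
import ssl

# Still standard TLS: only the negotiation space is reduced.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2     # drop legacy protocol versions
ctx.set_ciphers("ECDHE-RSA-AES128-GCM-SHA256")   # offer a single TLS 1.2 suite
```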

Thomas Pornin
  • Much of the complexity of SSL stems from the need to authenticate strangers. If there exists a trustworthy outside channel that can be used to exchange private keys, even if it's too inconvenient for frequent use, protocols with far less overhead become practical (this may be very relevant when handling small transactions over slow data links). – supercat Apr 14 '14 at 17:24
  • Actually all the certificate processing is X.509, not SSL -- the SSL/TLS standard deliberately declares it as such. Using SSL/TLS with externally provided public keys is _still_ part of the standard. – Thomas Pornin Apr 14 '14 at 17:31
  • Any sort of public-key crypto will establish a minimum transaction size measured in thousands of bits, and require thousands of bits of RAM to process. By contrast, private-key crypto may do a secure transaction with minimal overhead (if data is guarded by a CRC32 before encryption, the only overhead would be that imposed by the CRC32 and/or minimum block size requirements). Some systems have enough bandwidth, RAM, and speed to handle public-key encryption, but some embedded systems don't. – supercat Apr 14 '14 at 17:57
  • TLS also includes pre-shared key cipher suites, with no asymmetric crypto involved: [RFC 4279](https://tools.ietf.org/html/rfc4279). That's still standard. Also, I kind of cringe at the suggestion of using CRC32 as an integrity check in a cryptographic context (I have seen -- and broken -- banking systems which did just that). – Thomas Pornin Apr 14 '14 at 18:07
  • Was the CRC32 used on the encrypted or unencrypted side? How would one attack CRC32 on the encrypted side, and what would you suggest as an alternative if e.g. one is e.g. performing small independent transactions over a wireless data link were byte counts matter? – supercat Apr 14 '14 at 18:10
  • That's a nice exercise used with cryptography students: consider encryption with a stream cipher (e.g. RC4), with the CRC32 encrypted as well. Hint: "it is all linear". Hint 2: WEP made that mistake as well (among others). – Thomas Pornin Apr 14 '14 at 18:13
  • For small transactions, I tend to think in terms of block ciphers. I assume your observation plays on the interaction of CRC32 with certain forms of stream cipher that xor the cleartext with a cryptographically-generated bitstream? For cases where one side can encrypt everything before the other needs to decrypt anything, I would think it desirable to encrypt things forward and backward, so every bit could avalanche to every other; that would pretty well stop the CRC32 attack, right? – supercat Apr 14 '14 at 18:28
  • [note: some years back, I've developed some protocols of that style for use over short-range communications links where denial of service would be trivial, and the processor couldn't handle more than 100 transaction requests per second, if that; I think I used a 16-bit transaction and CRC16 but checked other bit fields for validity, so less than one in four billion random packets would be valid. If packet data is adequately avalanched, do you see any weaknesses with such a scheme if the probability of a random packet being valid is low enough relative to the processing speed? – supercat Apr 14 '14 at 18:45
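The linearity hinted at in the exercise above can be demonstrated in a few lines: under an XOR keystream (as in RC4/WEP), an attacker can flip plaintext bits and fix up the encrypted CRC32 without ever knowing the key, because CRC32 is affine over GF(2). A Python sketch; the keystream here is an arbitrary stand-in for real cipher output:

```python
import zlib

def crc(b: bytes) -> int:
    return zlib.crc32(b)

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

msg = b"PAY ALICE 100"
# Stand-in for RC4 output; any XOR keystream exhibits the same flaw.
keystream = bytes((i * 37 + 11) & 0xFF for i in range(len(msg) + 4))

# WEP-style construction: encrypt message || CRC32 with the keystream.
plain = msg + crc(msg).to_bytes(4, "little")
cipher = xor(plain, keystream)

# Attacker picks a same-length target and computes the plaintext delta.
delta = xor(msg, b"PAY MALLY 900")
# CRC32 affinity: crc(m ^ d) == crc(m) ^ crc(d) ^ crc(zeros)
crc_delta = crc(delta) ^ crc(bytes(len(delta)))
forged = xor(cipher, delta + crc_delta.to_bytes(4, "little"))

# Receiver decrypts the forgery and the CRC still verifies -- no key needed.
recovered = xor(forged, keystream)
new_msg = recovered[:-4]
new_crc = int.from_bytes(recovered[-4:], "little")
assert new_msg == b"PAY MALLY 900"
assert crc(new_msg) == new_crc
```

This is exactly why a MAC (e.g. HMAC), not a linear checksum, is needed for integrity in a cryptographic context.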

I think you're asking if a hacker could cause trouble for your home servers if you're using an insecure protocol for data. This is completely dependent on how you handle the data and how you implement parsing the information packets.

If you're ONLY using the custom protocol for non-sensitive data and implementing good security practices (like blocking too-long requests and properly truncating null-terminated strings, closing and timing out connections, etc.) then you could be sending the data in whatever format you want.
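A sketch of what those practices might look like for a simple framed protocol (Python; the frame format, size cap, and timeout values are illustrative, not prescriptive):

```python
import socket

MAX_FRAME = 64 * 1024   # refuse anything larger than 64 KiB
READ_TIMEOUT = 10.0     # seconds before giving up on a slow or stalled client

def recv_exact(conn: socket.socket, n: int) -> bytes:
    """Read exactly n bytes, or fail loudly if the peer disappears."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed early")
        buf += chunk
    return buf

def read_frame(conn: socket.socket) -> bytes:
    """Read one length-prefixed frame, enforcing size and time limits."""
    conn.settimeout(READ_TIMEOUT)
    length = int.from_bytes(recv_exact(conn, 4), "big")
    if length > MAX_FRAME:
        raise ValueError("frame too large")  # block too-long requests
    return recv_exact(conn, length)
```

The point is that the bounds checking and timeouts, not the wire format, are what keep a hostile client from causing trouble.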

You can also just ask yourself how likely the threat of an attack is. Depending on your location and situation, it might be safe to assume you don't need to worry too much. But I personally wouldn't want to take a chance like that if I didn't have to.

From an engineering standpoint, you would probably save yourself a ton of time by using a standard protocol and decent libraries, but if you want to do your own thing, have fun with it.