27

I recently came across an old blog post by Jeff Atwood which got me thinking.

What could be easier than an EncryptStringForBrowser() method which has security and tamper-resistance built in, that's part of a proven, domain-expert-tested set of code that thousands if not millions of developers already rely on?

Obviously the first rule of cryptography is to never roll your own. However, how far should a developer take this?

Should they build upon the cryptography libraries present in the programming language? Should they use third-party cryptography libraries? Or should they take things one step further and use a library that takes care of everything, such as uLogin for PHP?

If a third-party library is to be used, how can a non-cryptographer verify the security of said library besides going by its reputation?

  • 6
    I would say, don't use uLogin. That would require you to use PHP. – AviD Dec 21 '12 at 11:23
  • 2
    Doesn't Jeff's post cover it? The highest level possible, and no lower than the level at which you are expert. – pipTheGeek Dec 24 '12 at 13:11
  • 2
    @pipTheGeek Because while I certainly respect Jeff's opinions, he is certainly no security expert. Nothing wrong with getting the opinions of the experts over here eh? –  Dec 24 '12 at 14:05
  • @pipTheGeek perfect answer. – K1773R Dec 24 '12 at 22:14

7 Answers

13

Half of cryptography is about the raw algorithms, like SHA-256 or AES. The other half is about assembling these algorithms into complete protocols like, for instance, SSL/TLS; designing a secure protocol is not easier than building a secure algorithm. When a developer meddles with an algorithm, then, by definition, he is in the process of creating his own protocol, i.e. doing his own crypto. Thus, caution is advised.

In an ideal world, a developer would:

  1. analyze the problem he is trying to solve;
  2. find an existing, vetted, thoroughly verified protocol which matches his problem;
  3. use a library with a well-designed API, which implements the protocol (and its underlying cryptographic algorithms) with all the care that is needed for such things.
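As a small illustration of step 3, a library with a well-designed API exposes the protocol at exactly the level the developer needs and no lower. A hedged sketch in Python (one possible language; the point is language-agnostic) using the standard-library ssl module:

```python
import ssl

# A well-designed API hides the protocol's internals: this single call
# returns a TLS context with certificate verification, hostname checking,
# and reasonable protocol versions already enabled. The caller never
# selects cipher suites or handles handshake details.
context = ssl.create_default_context()

# Safe defaults are on without any extra configuration:
assert context.verify_mode == ssl.CERT_REQUIRED
assert context.check_hostname
```

The developer wraps a socket with this context and gets the vetted protocol; everything below that level stays out of reach, which is precisely the property described above.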

Unfortunately, the real world happens to be, as it were, real, i.e. not ideal at all. Available libraries often have a difficult-to-use API, mostly because the library designer was overenthusiastic: he took great pains to understand how the protocol works, and he wants to share. Thus, he makes an API which exposes a lot of how the protocol works internally, and the library user is confused.

Moreover, there are not so many ready-to-use protocols. You would like to find a protocol which applies to your problem; but you often have to squeeze and shove and torture your problem until it looks like something which fits one of the few existing "all-purpose" protocols (namely, SSL or bcrypt). When your only tool is a hammer, all problems are strongly expected to behave like nails.

And, of course, developers do not analyze their problems. They first code it, then they throw cryptography at it as a kind of exorcism against the demons of insecurity.

Developers should not have to think about cryptography except at a very high level of abstraction, using existing implementations of protocols. But since that is not practical, due to the general lack of ready protocols and available implementations, developers must get their hands dirty and understand cryptography down to the minutest details. This is not specific to the field of cryptography, but cryptography has an aggravating characteristic which is that you cannot test security: you can know that your product is secure only by having many people look at it and think real hard. If you are outside of the existing protocols where such scrutiny already took place, then you are doomed to do your own thinking.

Now if you think that my conclusion is that, on average, cryptography is sloppily applied by developers who do not grasp it, leading to countless vulnerabilities and a general state of rampant chaos, well, you guessed well.

Thomas Pornin
  • 320,799
  • 57
  • 780
  • 949
  • Even according to your ideal world, there is still a gap between (step 1) analyze the problem, and (step 2) find an existing protocol which matches his problem. That is, there is a not-insignificant problem of *misapplied* protocols. Using a well-vetted protocol for the wrong problem usually results in additional security vulnerabilities. There is an implicit assumption in your statement, that he understands the protocol's context well-enough to know that it matches his problem, or not. – AviD Dec 30 '12 at 09:24
11

My take-aways would be:

  • Never roll your own cryptography if you can avoid it. (Sometimes you'll have to for tailor-made solutions / embedded platforms.) If you have to roll your own, always build on known working protocols.
  • Don't rely on hiding your implementation. Obfuscation is a valid layer of defense in depth, but should only be used to add to existing security, not be your main source of security.
  • Use third-party products that have some kind of trust and reputation. For C / C++ / Objective-C, I use the cryptographic parts of the SSL library PolarSSL. PolarSSL has very few internal and external dependencies, you can just rip out the parts you need, and it has been vetted by the Dutch government. Checking it yourself is not really an option.
David R.
  • 211
  • 1
  • 3
  • 6
    +1 for the rarely noted, but quite valid "Obfuscation is a valid layer of defense in depth," – MCW Dec 21 '12 at 15:35
  • Obfuscation has its place, but not everywhere is the right place. Equally, obfuscation can decrease internal understanding of a configuration by the people who run it; this increases the risk of error, which decreases security over time and also reduces the ability to detect security events of interest. – Bernie White Dec 25 '12 at 21:41
7

Last night I finished an online course that is offered by Stanford University's Center for Professional Development. Video lectures were by Dan Boneh, a prominent cryptographer and professor at Stanford. He said something that stood out to me that I wanted to share with you, which was:

  1. Don't try to implement your own cryptographic protocols (as you've already said).
  2. Don't use proprietary versions of cryptographic protocols (as you alluded to).
  3. Make sure to use the cryptographic libraries correctly. In other words, if you need authenticated message integrity, even if you have provably secure libraries, MAC'ing then encrypting (instead of encrypting then MAC'ing) can still leak information about your plaintext. Prof. Boneh mentioned that he often sees crypto being used incorrectly in practice.
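Point 3 can be sketched in code. The following is a toy Python illustration of the encrypt-then-MAC ordering only, not anything from the lectures: the keystream construction is a stand-in purely to keep the example self-contained, and a real design would use a vetted AEAD mode (e.g. AES-GCM) from a reviewed library.

```python
import hashlib
import hmac
import os

def _toy_keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    # Stand-in keystream for illustration ONLY -- never roll your own
    # cipher; this exists just so the example runs without third-party code.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt_then_mac(enc_key: bytes, mac_key: bytes, plaintext: bytes):
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in
               zip(plaintext, _toy_keystream(enc_key, nonce, len(plaintext))))
    # The MAC covers the *ciphertext* (and nonce), never the plaintext:
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce, ct, tag

def decrypt(enc_key: bytes, mac_key: bytes, nonce: bytes, ct: bytes, tag: bytes):
    # Verify the tag in constant time BEFORE touching the ciphertext.
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("bad MAC")
    return bytes(a ^ b for a, b in
                 zip(ct, _toy_keystream(enc_key, nonce, len(ct))))
```

The ordering is the whole point: reversing it (MAC the plaintext, then encrypt) is exactly the kind of subtle misuse described above, since the receiver must then decrypt before authenticating.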

I guess the only way, as a non-cryptographer, to be sure that a library is secure is to perhaps ask someone who is a cryptographer? Otherwise, it'd be really hard to prove that a function/library was actually secure.

I know I'm speaking in a more academic sense than a practical one, but I just thought I'd share what I learned. I think he's at the forefront of the crypto community and I was impressed by his lectures.

Joseph K.
  • 71
  • 1
  • 3
    A link to the class would have been nice, and I'm not sure you've really addressed the OP's question. But welcome to the site. – MCW Dec 21 '12 at 13:29
  • Presumably he's talking about the [coursera crypto course](https://www.coursera.org/course/crypto). Seems to fit the description. – tylerl Dec 24 '12 at 20:45
7

Not the best answer, but I'd say developers need to learn enough to make good decisions with respect to the risk to their project.

I have seen that any project needs someone who knows enough about the entire system to be able to:

  • Diagnose and fix cases where parts won't connect - a need that increases with the number of diverse systems you try to connect together.
  • Test to make sure that cryptography is being implemented as expected.
  • Make the judgement calls on "how good" the crypto functions need to be and where they should be placed.

It's a good practice to have 1 guy who really, really knows this stuff, who can set up a pattern for everyone else. In most web development projects I've worked on, we've had a pattern for how to initiate secure sessions and what cases require the pattern, and a second set of patterns relating to secure authentication. This gets written as a reusable tool set, so it can be used by everyone else, who works at the most abstract level.
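As a hedged sketch of what such a reusable pattern might look like (hypothetical helper names, not taken from any project mentioned here), the expert writes one small module and everyone else calls it without thinking about randomness or timing attacks:

```python
import hmac
import secrets

# Hypothetical in-house helper: the one place the team mints and checks
# session tokens, so application code never touches the primitives.

def new_session_token() -> str:
    # ~256 bits from the OS CSPRNG, URL-safe for cookies and headers.
    return secrets.token_urlsafe(32)

def tokens_match(presented: str, stored: str) -> bool:
    # Constant-time comparison avoids leaking match length via timing.
    return hmac.compare_digest(presented, stored)
```

Application developers call `new_session_token()` and `tokens_match()` and work at the abstract level; only the author of this module needs to know why `==` would be the wrong comparison.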

How the technology base is chosen is as much a senior level architectural decision as anything else and it's an area where if you have a big risk, you need to hire people with the expertise to know what to vet. One of the common problems I see is that the average software engineer will focus on a particular type of concern (often the strength of the algorithm) and miss the larger picture that the system is only as secure as the weakest link, so good practices that are totally separate from cryptography (memory handling, input verification, appropriate measures for credential distribution) - are as important (if not more so) as the strength of the crypto apparatus.

In a low-cost, low-risk environment, I favor the idea of slavish devotion to doing security exactly the way the platform provider says. It reduces the complexity of integration and it usually isn't insane... so long as you follow the guidance exactly. So it's taking the "never roll your own" maxim to the nth degree.

As risk and security requirements increase, the need to have someone who can evaluate tradeoffs and try other options increases. In just about every platform I've encountered, there are some severe limitations to what the default crypto libraries can do. Breaking out of the standard guidance can vary from being pretty simple and low-risk to being impossible, highly risky to security, or simply expensive to integrate. The problems in this area can make or break a product in two main ways - either integrating an uncommon scenario can drive product costs into the stratosphere, or you can introduce a security risk you didn't see coming that hits the product way down the line and can cause business catastrophe post-release... Both situations can be mitigated by hiring people with security control integration experience, but those folks cost money, so the costs will go up as you mitigate the risks.

Having worked in a high-end security development shop, I can say that the best practices I've seen have been to have a lot of mentoring in this area where developers that have made previous products mentor newer engineers through vetting the architectural choices and figuring out the level of abstraction required in each case.

There's no right answer on some of the questions above... for example:

Should they build upon the cryptography libraries present in the programming language?

Sometimes. .NET and Java both have perfectly acceptable implementations of SSL, for example, but they both get difficult if you change authentication mechanisms, or try to integrate with specialized devices with very limited options for algorithm choices. In many cases, the default implementation of standard crypto protocols limits your ability to make configuration choices or to bundle more complex add-ons without cracking the cover.

Should they use third party cryptography libraries?

Sometimes - I do this most especially when I need a more complicated option than the programming language provides natively. The challenge here is that in many high end environments, some third party libraries are not sufficiently vetted for customer approval, or you may be at the mercy of an evolving product that is unstable for what you need. That said, over time, developers experienced in this area build up a collection of favorite APIs in a given framework.

Or should they take things one step further and use a library that takes care of everything, such as uLogin for PHP?

If it works, sure. I'll admit, in my experience, I have had to crack the cover too many times, to provide specialized authentication methods or to address the need to integrate with not-regularly-supported equipment - so I rarely think of a total package like this as a viable option - but I'm willing to bet that my experience is the extreme case.

Whenever you give this many choices away, you have to hope that the implementor made the same choices you'd make. It's worth it to investigate hacks in this area and see what the vulnerabilities are.

If a third-party library is to be used, how can a non-cryptographer verify the security of said library besides going by its reputation?

This is the one easy one - there are a few standards out there:

  • FIPS - certifies that a crypto library or device meets specifications for a certain degree of protection. The lowest level includes software-only implementations.

  • Common Criteria - covers a wide number of devices, OSes and other products - it's a test that the product does what the documentation says it does.

Those are my two most common sources, but I'd bet there are others. I also check length of time in the industry and the support model for upgrades. When I look for a crypto product, I look for a certain longevity, a wide range of interoperability testing, and a clear way of distributing patches and responding to detected vulnerabilities. For example, seeing a long lag between US-CERT vulnerability releases and product upgrades would be a bad sign. But not seeing any US-CERT vulnerabilities is not a good sign either - it can be a sign that the product's market share is so low that no one is even trying to break it.

If you ask whether any of these standards are perfect - I'd say no. But they are a good sign that the product has been out long enough to have invested in some of these certifications, which means they've put some serious energy into making sure the security characteristics are up to snuff.

bethlakshmi
  • 11,606
  • 1
  • 27
  • 58
5

If developers shouldn't use cryptography libraries, then perhaps they shouldn't be available to developers. Given that they are, I simply must assume, as a developer, that I should be able to use them.

.NET has many built-in cryptographic algorithms, both in fully "managed" code and as wrappers to the CryptoServiceProviders in the MFCs. Java has a few of the same. Unmanaged languages would use the MFCs. They're all well-documented and any competent developer should be able to use those implementations to build a secure system.

Now, here's the rub; there are a lot of incompetent developers out there. It has come to my attention that even people with years of experience who write very good code can be completely out of their depth when implementing a system that uses cryptographic primitives.

As such, I say, let programmers program, and cryptographers... crypt. Programmers should implement designs that have been vetted by professional cryptographers (this goes beyond simply using the "built-in" primitives), and should have those implementations reviewed, if possible. Someone's gotta write the code that makes one company's product unique, but what's unique about it should not be how it uses AES, or RSA, or even an entire predefined security layer like SSL.

KeithS
  • 6,678
  • 1
  • 22
  • 38
2

A good rule-of-thumb with respect to security is to use software, packages, protocols, and mechanisms that have been thoroughly vetted by respected cryptographers.

So for example:

  • AES: yes
  • TLS: yes
  • OpenSSL: yes
  • MyPHPLoginSoft: no

Even commonly-used products (such as Joomla) frequently contain critical implementation and logic errors. Having a product vetted by PHP developers is not quite the same thing as having it vetted by cryptographers.

That's not to say that all amateur crypto-code should be avoided; it just means that you should trust it about as deeply as you trust code written by Jerry the programmer two cubicles over.

If, on the other hand, there is a heavily-vetted, well-respected library for doing what you want to do, then you're probably best off trying to fit it into your workflow instead of reimplementing it yourself.

In other words, the appropriate level of abstraction is the deepest level to which a trusted solution exists. After that, you have no choice but to be on your own.

tylerl
  • 82,225
  • 25
  • 148
  • 226
0

My opinion is: at every level of abstraction. Protecting information means protecting it at every level of abstraction, from storage, to processing, to transferring, to interpreting it, so there can't really be a "protectAllMyStuff()" method.

I'll give a concrete example to show what I mean: let's say that I'm developing a website and storing passwords in a database. The level at which you need cryptography is every level:

  • You'll need it at the transport level, with a good use of transport security
  • You'll need it at the storage level, with a strong hashing function (e.g. PBKDF2)
  • Depending on your security requirements, you could need to encrypt your swap to prevent attackers from extracting leftovers of unencrypted passwords from it
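The storage-level point can be sketched briefly. A minimal example, assuming Python's standard-library `hashlib.pbkdf2_hmac` (one possible implementation; parameter choices here are illustrative, not a recommendation):

```python
import hashlib
import hmac
import os

def hash_password(password: str, *, iterations: int = 600_000):
    # A fresh random salt per password defeats precomputed-table attacks;
    # the iteration count slows down brute-force attempts.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def verify_password(password: str, salt: bytes, iterations: int,
                    digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(),
                                    salt, iterations)
    # Constant-time comparison of the derived keys.
    return hmac.compare_digest(candidate, digest)
```

Store the salt, iteration count, and digest together; never the password itself.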
miniBill
  • 335
  • 1
  • 8
  • 1
    I think you misunderstood the question. This isn't about at which stage of processing cryptography should be used, but what [level of abstraction](http://encyclopedia2.thefreedictionary.com/level+of+abstraction) most developers should view cryptography from: do they need to understand Merkle-Damgård constructions or can they treat SHA-2 as a black box? Is it enough to view SSL as a secure channel or is a more detailed understanding necessary? etc. – Gilles 'SO- stop being evil' Dec 24 '12 at 16:41
  • Yes, you are right, I completely misunderstood the question :) Should I delete my answer? – miniBill Dec 24 '12 at 20:18