
I'm a Security Architect, and I'm used to defining the security of a project as a specification that gets carried out by others. I have recently been tasked with teaching new coders how to design and program using the principles of "Secure by Design" (and, in the near future, "Privacy by Design"). I have 30-45 minutes (yeah, I know), and the talk needs to be language-agnostic. This means I need to present actionable rules that can be applied by web devs, application devs, and infrastructure devs.

I came up with 5 Basic Rules and a Supplement:

  1. Trust no internal/external input (covers sanitization, buffer overflows, etc.)
  2. Least Privilege for any entity, object or user
  3. Fail "no privilege" (fail closed)
  4. Secure, even if design is known/public
  5. Log so that someone unfamiliar with the system can audit every action

Supplement: If you violate a Rule, prove the mitigation can survive future programmers adding functionality.

Each of those rules can be augmented with examples from any language or application for specific guidance. I believe this handles most of the general principles of "Secure by Design" from a high-level perspective. Have I missed anything?
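
To give a feel for what I mean by augmenting a rule with an example, here's the kind of minimal sketch I'd pair with Rule 3 (Python here, but the point is language-agnostic; the names are hypothetical):

    # Rule 3, "fail no privilege": an error during an authorization check
    # must deny access, never grant it. The names here are hypothetical.
    def is_authorized(user, action, permission_store):
        try:
            permissions = permission_store[user]  # lookup may fail
            return action in permissions
        except Exception:
            # Fail closed: if we cannot prove authorization, deny.
            return False

    print(is_authorized("alice", "read", {"alice": {"read"}}))  # True
    print(is_authorized("mallory", "read", {}))  # False - denied, not crashed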

schroeder
  • Don't leave out AviD's usability maxim, please! – Deer Hunter Jan 06 '16 at 10:49
  • Please do not forget to include a "how do I find out more?" section. It sounds like this will be a quick taster to instill the importance of secure-by-design, and the real learning (of the implementation) will come later, so it is important to signpost to yourself, to colleagues, or to suitable online resources for assistance "later". – kwah Jan 06 '16 at 13:13
  • Also, what is your anticipated 'class' size? – kwah Jan 06 '16 at 13:13
  • @DeerHunter dug deeper. "Security at the expense of usability comes at the expense of security." – Avi Douglen – Trojan Jan 06 '16 at 17:02
  • If you have 30 to 45 minutes, it's no use trying to teach 5 principles. If you can get the audience to actually apply a single principle, that would be a success. – Christian Jan 06 '16 at 19:43
  • I'm not entirely sure what you mean in point 3, but I would also include something like "don't cache permissions". For example, a user could be logged in, and an admin changes their permissions; if the permissions were cached at login, the user could retain access to actions they should no longer have. So my advice would be to check/validate that the user has the necessary permissions before each action. – Keith Hall Jan 07 '16 at 07:57
  • "Secure by design" is quite a vague term, when you really think about it. A question for you: how would "secure by design" prevent SQL injection? If you can explain that, I can better understand what you're trying to do. – paj28 Jan 07 '16 at 11:35
  • @paj28 "Secure by design" is an approach. Your SQLi question doesn't fit. To borrow from the zen thinking, the answer is "mu". – schroeder Jan 07 '16 at 15:37
  • You could take a look at several European projects that have recently dealt with Secure by Design principles and techniques. I worked on one of them in the area of Trust Management. The project is called NESSoS and its focus was on Security Engineering for the Future Internet. You can read the deliverables of the project here: http://www.nessos-project.eu/index.php?option=com_content&view=article&id=104&Itemid=126 – Francis Moy Jan 08 '16 at 12:27
  • Go through the OWASP Top 10 or at least mention it? People have to be aware security is a constantly changing field and they should keep themselves up to date long after the talk. – billc.cn Jan 08 '16 at 16:09

5 Answers


The canonical resource for the concept of secure-by-design is "The Protection of Information in Computer Systems" by Saltzer and Schroeder. The essence is distilled into their 8 principles of secure design:

  1. Economy of mechanism
  2. Fail-safe defaults
  3. Complete mediation
  4. Open design
  5. Separation of privilege
  6. Least privilege
  7. Least common mechanism
  8. Psychological acceptability

These principles, laid out in 1975, are still fully applicable today.
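
To make one of them concrete: complete mediation means every access to every object is checked for authority, every time. A minimal sketch of the idea (Python; the names are invented for illustration):

    # Complete mediation: consult the authority source on every access,
    # so a revocation takes effect immediately, not at the next login.
    class AclStore:
        def __init__(self, grants):
            self.grants = grants  # set of (user, action, object) tuples

        def allows(self, user, action, obj):
            return (user, action, obj) in self.grants

    def read_document(user, doc_id, store, documents):
        if not store.allows(user, "read", doc_id):  # checked on every call
            raise PermissionError("access denied")
        return documents[doc_id]

    store = AclStore({("alice", "read", "doc1")})
    docs = {"doc1": "quarterly numbers"}
    print(read_document("alice", "doc1", store, docs))  # permitted
    store.grants.clear()  # an admin revokes access...
    # ...and the very next read_document() call raises PermissionError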

bonsaiviking
  • 40 years and we still have millions of webapps blindly accepting whatever input is sent their way... – corsiKa Jan 06 '16 at 22:48
  • I came across this, which seems like a useful perspective: http://cryptosmith.com/2013/10/19/security-design-principles/ – Adam Shostack Jan 07 '16 at 00:31

It will be hard to teach design principles in 30 minutes. I agree with others who say that you have to get them excited in some fashion. I developed the "Elevation of Privilege" card game to get people excited about threat modeling; it might be helpful. (https://blogs.microsoft.com/cybertrust/2010/03/02/announcing-elevation-of-privilege-the-threat-modeling-game/)

Teaching people how to think like an attacker is very challenging; it's easier to teach them about a few attacks like SQL injection or cross-site scripting.

Lastly, if you do want to try to teach principles, I did a series of blog posts illustrating Saltzer and Schroeder with scenes from Star Wars: http://emergentchaos.com/the-security-principles-of-saltzer-and-schroeder

Adam Shostack
  • The link http://microsoft.com/security/sdl/eop inside https://blogs.microsoft.com/cybertrust/2010/03/02/announcing-elevation-of-privilege-the-threat-modeling-game/ is not working any more. Can you provide a link to download the card game? – roguesecurity Jan 06 '16 at 04:29
  • @PiyushSaurabh This looks like the new page: https://www.microsoft.com/en-us/SDL/adopt/eop.aspx – bonsaiviking Jan 06 '16 at 05:23
  • Thanks @bonsaiviking! I didn't realize that the link broke, and since I've moved on, it's a challenge to get fixed. – Adam Shostack Jan 07 '16 at 00:32

Rather than focus on rules and "follow these 5 rules, and you're secure", I'd focus on teaching developers about attackers and how they think. You can't really cover 5 different things, each of which requires some in-depth knowledge to implement properly, so why try?

The developers I've talked to seem to think hacking is "really hard", and don't understand how easy it can be. Explaining what attackers actually do to thwart security can be eye-opening.

An example:

A few years ago I was reviewing a third-party web-based reporting product, and we had a developer from the vendor in to create some reports using the product. I asked about security and how it worked in their product. He proceeded to do a "view source" on the report web page and show me how everything was dynamic HTML, and therefore unhackable. I sat dumbfounded for a minute, but told him that this wasn't really workable security, that you can't trust the client end, blah blah blah.

He didn't believe me, and asked how anyone could possibly hack his product. I thought for a minute and said that I'd hook the browser up to a proxy server and examine the requests and responses. (Today I'd just use the Tamper Data plugin.) He then said this would be "the hack of the century!" At this point I just threw up my hands in defeat, since he'd already decided his product was "secure". The only way to convince him would have been to actually hack his product, which wasn't worth my time since I wasn't going to buy it anyway.
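
For what it's worth, showing him would have taken only a few lines. A sketch of the idea (Python with the requests library; the URL and parameter are made up) - the "client" is just HTTP, and anyone can forge it:

    # The server cannot tell a browser from a script; whatever "dynamic
    # HTML" does client-side, the request itself is trivially forgeable.
    # The URL and parameter names below are made up for illustration.
    import requests

    # What the report page sends when a legitimate user runs a report...
    resp = requests.get("https://example.com/report", params={"user_id": "1001"})

    # ...and what an attacker sends thirty seconds later.
    resp = requests.get("https://example.com/report", params={"user_id": "1002"})
    print(resp.status_code)  # if this returns someone else's report, game over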

The point is, you need to start with the need for security and what we're all up against. If they don't understand that, it's game over. At the very least you'll instill a bit of fear in them, which is a good motivator. From what I've seen, many developers don't really "get it", and they need to understand what they're up against first, primarily so they understand WHY they need to develop secure applications.

Get people actually interested in security, and you might get something out of it. Otherwise I fear whatever you present in 35 minutes will just fall on deaf ears.

Steve Sether
  • In my experience, the biggest security risk is not attackers, but poor design. I *need* to teach them about proper design principles. – schroeder Jan 05 '16 at 19:13
  • @schroeder You can't teach good design in 35 minutes. Forget about it. Give them a hook to come back for more. – Steve Sether Jan 05 '16 at 19:14
  • I don't need to teach "good design" - I need hooks for them to work with. Then I can compare and ask them to prove how their design/code meets the rules. – schroeder Jan 05 '16 at 19:15
  • @schroeder If they don't understand the threats, how can they really understand how to evaluate their own code? Security doesn't come from a teacher who asks you to prove your own work is secure. The person with the least knowledge about something will evaluate their own abilities highly; this is known in the social sciences as the Dunning-Kruger effect. Are developers with 35 minutes of training going to be able to know if they've designed secure software, or are they going to vastly overestimate their own abilities? My guess is the latter. – Steve Sether Jan 05 '16 at 19:24

You may want to grab their attention first. A demo of a SQL injection attack is simple, understandable, and might underscore the topic's importance. You can refer back to it throughout the talk as you make points.
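
A demo along those lines fits on a single slide. For instance, a sketch using Python's sqlite3 module (the schema is invented):

    # One-slide SQL injection demo (sketch; the schema is invented).
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (name TEXT, password TEXT)")
    db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

    def login(name, password):
        # Vulnerable: user input is concatenated straight into the query.
        query = ("SELECT * FROM users WHERE name = '" + name +
                 "' AND password = '" + password + "'")
        return db.execute(query).fetchall()

    print(login("alice", "wrong"))        # [] - rejected, as expected
    print(login("alice", "' OR '1'='1"))  # logged in without the password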

I like that you get into trust boundaries. With input validation, I'd hit that in more detail: length validation first, then whitelisting and blacklisting. Do you recommend they try to automatically fix bad data, or should bad input be rejected? Touch on the strategies you'd recommend.
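
For example, a sketch of that ordering - length first, then whitelist, then reject rather than repair (Python; the specific rules are placeholders):

    # Input validation sketch; the specific rules are placeholders.
    import re

    USERNAME_RE = re.compile(r"^[a-z0-9_]+$")  # whitelist: define what IS valid

    def validate_username(raw):
        # Length first: cheap, and it bounds everything that follows.
        if not 1 <= len(raw) <= 32:
            raise ValueError("bad length")
        # Whitelist, not blacklist: reject anything outside the known-good set.
        if not USERNAME_RE.match(raw):
            raise ValueError("invalid characters")
        # Reject rather than "fix": silently repairing input hides attacks.
        return raw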

Regarding least privilege, this might be an opportunity to introduce the idea of role-based access control and its advantages over a user-based system.
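
A minimal sketch of the role-based idea (Python; the roles and permissions are invented):

    # RBAC sketch: permissions hang off roles, not individual users, so
    # policy is defined and audited in one place. Names are invented.
    ROLE_PERMISSIONS = {
        "viewer":  {"report.read"},
        "analyst": {"report.read", "report.create"},
        "admin":   {"report.read", "report.create", "user.manage"},
    }

    USER_ROLES = {"alice": {"analyst"}, "bob": {"viewer"}}

    def has_permission(user, permission):
        return any(permission in ROLE_PERMISSIONS[role]
                   for role in USER_ROLES.get(user, set()))

    print(has_permission("alice", "report.create"))  # True
    print(has_permission("bob", "user.manage"))      # False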

I think there's an opportunity to mention the principle of defense in depth. Input sanitization is critical, but following it up in the code by requiring parameterized SQL would help even if someone misses the boat on the input.
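
Continuing the injection sketch above, the second layer might look like this (sqlite3 again, same invented schema):

    # Defense in depth: even if validation misses something, a parameterized
    # query keeps the input as data, never as SQL. Same invented schema.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (name TEXT, password TEXT)")
    db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

    def login(name, password):
        # Placeholders: the driver binds values, so quoting tricks do nothing.
        query = "SELECT * FROM users WHERE name = ? AND password = ?"
        return db.execute(query, (name, password)).fetchall()

    print(login("alice", "' OR '1'='1"))  # [] - the payload is just a string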

Regarding logging, make sure they understand the delta between an error message displayed to the user and the contents of the log files. And be sure they aren't logging anything sensitive.
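
A sketch of that delta (Python's logging module; the fields and messages are made up, and note that the attempted password is never written anywhere):

    # Terse, generic message to the user; detailed (but not sensitive)
    # record to the log. The fields and messages are made up.
    import logging
    import uuid

    logging.basicConfig(filename="app.log", level=logging.INFO)
    log = logging.getLogger("auth")

    def handle_login_failure(username, source_ip):
        incident = uuid.uuid4().hex[:8]
        # Enough for an unfamiliar auditor: who, where, and a correlation
        # ID - but never the password that was attempted.
        log.warning("login failed user=%s ip=%s incident=%s",
                    username, source_ip, incident)
        # The user learns nothing that helps enumerate accounts.
        return "Login failed (ref %s)" % incident

    print(handle_login_failure("alice", "198.51.100.7"))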

Also consider discussing a development process that helps ensure they stay on a secure track. Make sure peer code reviews, static code analyzers, and dynamic analyzers are part of their development process. That may be outside the scope of the training, but you could teach them how reviews and tools help improve their code.

30-45 minutes ... ouch. You could go back to the organizer and request two or three days, see what kind of reaction that gets. Or maybe a 10 semester program... Anyway, good luck!

John Deters
  • A demo is a great thing to start with - a security guy showed us (years ago) an IIS exploit: he typed some magic into the address bar and had himself a command prompt with admin privileges in return. At that point it was obvious that any security features we might have put on that entire box were worthless. (So separate your app logic onto a different server at least; no direct DB connections from the web server allowed!) – gbjbaanb Jan 05 '16 at 23:43

You've got a tough task, obviously, and all of the considerations you've mentioned will give you a very full plate. But I think some mention of, or tie-back to, everyone's most-favored-buzzword, less-frequently-implemented overarching strategy of defense in depth would be very valuable to work in if you could. Or perhaps phrase it another way: "the need to value and create redundancy of security measures against any given major threat vector in any decent security design."

You're building a web app, and you're pretty confident you've got a robust mechanism in place to sanitize any and all code injection efforts out of your input? That's nice. Now assume a creative attacker finds an implementation flaw you never thought of, one that lets some really nasty, powerful code get in front of your actual application. Are you designing and hardening your app logic with the idea that it might need to face that scenario, or are you going to assume that, because you have what looks like a strong, reliable single defensive mechanism against getting code in front of your application, you can shift your focus to other things? The difference between those two choices is often the difference between a robust security design and an inherently fragile one.

(Note: as I'm writing this, I realize that relying on input sanitization as a perfect defense isn't the best example, because in the real world we live in today, hopefully few competent developers would be tempted to treat something as known-to-be-imperfect as input sanitization as a rock-solid, impenetrable line of defense. Hopefully. But you take my broader point...)
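
Still, to make the redundancy idea concrete, here's a sketch of two independent measures against the same vector, where the second one matters precisely when the first one fails (Python; the names and limits are invented):

    # Two independent layers against injected markup. Layer 2 does not
    # trust layer 1. The names and limits are invented for illustration.
    import html

    def store_comment(db, text):
        # Layer 1 (input boundary): assume richer filtering lives here.
        if len(text) > 2000:
            raise ValueError("comment too long")
        db.append(text)

    def render_comment(text):
        # Layer 2 (output): escape regardless of what layer 1 did. If a
        # creative attacker slips past the filter, the payload is inert here.
        return "<p>" + html.escape(text) + "</p>"

    db = []
    store_comment(db, "<script>alert(1)</script>")
    print(render_comment(db[0]))  # &lt;script&gt;... - displayed, not executed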

mostlyinformed
  • Think of a zero-day exploit in the web server itself that allows an attacker to effectively bypass any input sanitisation. Once they own the webserver, they can "select * from users" and download all those juicy passwords. – gbjbaanb Jan 07 '16 at 10:12