I'm a Security Architect, and I'm used to defining the security of a project as a specification that gets carried out by others. I have recently been tasked with teaching new coders how to design and program using the principles of "Secure by Design" (and, in the near future, "Privacy by Design"). I have 30-45 minutes (yeah, I know), and the talk needs to be language-agnostic. This means I need to present actionable rules that can be applied by web devs, application devs, and infrastructure devs alike.
I came up with 5 Basic Rules and a Supplement:
- Trust no input, internal or external (covers sanitization, buffer overflows, etc.)
- Least Privilege for any entity, object or user
- Fail to "no privilege" (i.e., fail closed: on any error or unexpected state, deny access)
- Secure, even if the design is known/public (no security through obscurity)
- Log so that someone unfamiliar with the system can audit every action
Supplement: If you violate a Rule, prove the mitigation can survive future programmers adding functionality.
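To make the rules concrete for the coders, each one can be shown as a few lines of code. As one possible illustration of Rules 1-3 together, here is a minimal, language-agnostic sketch in Python of a deny-by-default authorization check (all names and roles are hypothetical, invented for this example):

```python
# Hypothetical sketch of Rules 1-3: validate untrusted input,
# grant only explicitly listed permissions (Least Privilege),
# and let every failure path fall through to "deny" (fail closed).

ALLOWED_ACTIONS = {"read", "write"}  # explicit allow-list, not a block-list

# role -> set of permitted actions; anything absent is denied
PERMISSIONS = {
    "viewer": {"read"},
    "editor": {"read", "write"},
}

def is_authorized(role, action):
    """Return True only when the role explicitly grants the action."""
    # Rule 1: treat both arguments as untrusted, even from internal callers.
    if not isinstance(role, str) or not isinstance(action, str):
        return False
    if action not in ALLOWED_ACTIONS:
        return False
    # Rule 3: an unknown role or missing grant falls through to "deny";
    # there is no code path that returns True by accident.
    return action in PERMISSIONS.get(role, set())
```

The shape matters more than the language: the only way to get `True` is an explicit grant, so a future programmer adding a new role or action gets "deny" until they consciously add a permission, which is exactly what the Supplement asks them to prove.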
Each of those rules can be augmented with examples from any language or application for more specific guidance. I believe this handles most of the general principles of "Secure by Design" from a high-level perspective. Have I missed anything?