
If we give a security auditing company a working system and ask them to audit it, and we only do that once during a project because it's expensive, that is basically waterfall.

How can security auditing be integrated into an agile project without turning it into a waterfall project, and thereby introducing audit fail risk?

What we want to do is know the detailed security requirements upfront, so that we can create stories for them (and/or integrate them into existing stories), and write automated tests for them which give us some degree of confidence that the security requirements have been fulfilled. This is the agile way.
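For example, a security requirement such as "passwords must never be stored in plain text" can be captured as an automated test alongside the relevant story. This is only a sketch: `hash_password` and `verify_password` are hypothetical stand-ins for whatever your project actually uses.

```python
# Hypothetical sketch: expressing a security requirement
# ("passwords must never be stored in plain text") as an
# automated test. The functions below are illustrative
# stand-ins, not any particular project's real code.
import hashlib
import hmac
import secrets

def hash_password(password: str) -> str:
    """Store a random salt plus a PBKDF2 digest, never the password."""
    salt = secrets.token_hex(16)
    digest = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), bytes.fromhex(salt), 100_000
    )
    return f"{salt}${digest.hex()}"

def verify_password(password: str, stored: str) -> bool:
    salt, digest = stored.split("$")
    candidate = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), bytes.fromhex(salt), 100_000
    )
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(candidate.hex(), digest)

def test_password_not_stored_in_plain_text():
    stored = hash_password("hunter2")
    assert "hunter2" not in stored        # the requirement itself
    assert verify_password("hunter2", stored)
    assert not verify_password("wrong", stored)

test_password_not_stored_in_plain_text()
```

A test like this runs on every build, so the requirement is re-verified continuously rather than once per audit.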

But the trouble is that on an agile project you don't know exactly what the first production deployment will look like until shortly before you deploy, so you can't tell a security auditing company exactly what it will look like. They may tell you: "the number of possible vulnerabilities in an arbitrary system is extremely large, so we have to know what the system looks like to narrow it down; come back when you know what it's going to look like, and then we will give you the requirements". In that case, you can't do it in an agile way.

Robin Green
  • "How can audits be introduced without introducing audit fail risk?" - tell your auditors to fake the audits? (assuming there's no external entity requiring honest audits) Of course, that's worse than not having audits. – user253751 Oct 07 '14 at 02:55
  • Does the customer require an external audit? (Many of the answers so far seem to be addressing "how to make your software secure", which is important, but not the same question as "how to pass an external audit"). – DNA Oct 07 '14 at 11:08
  • The same way performance and functional audits are. – keshlam Oct 07 '14 at 22:26

6 Answers


Microsoft's Security Development Lifecycle (SDL) for Agile guidelines recommend security practices during the design, implementation, and release phases of a project. Regardless of the development methodology in use, no line of code should make it into production until it has undergone a security review. If financial constraints prevent this level of review by a professional, then a security review must be conducted by peers before a release can be finalized. Finalizing a release can even be made into a fun process: I have seen companies host company-wide hack-a-thons and hand out prizes for interesting bugs.

Microsoft has done a lot of work on the SDL, and its own security has improved as a result.

Whymarrh
rook

The short answer is: integrate security into your software development lifecycle. It should be integrated into every stage: design, implementation, and testing.

There are many resources on how to build security into your software development lifecycle. See, e.g., Cigital's SDLC (the 7 touch points), Microsoft's SDLC, OpenSAMM's SDLC, BSIMM, CERT's Build Security In, or questions here such as Secure Software Development, The Creation of Secure Software Development Environments, What is considered the simplest (or lightest) secure development lifecycle?, and Which Secure Development Lifecycle model to choose?.

A "security review" is not a single thing. There are different forms of security review. Integrating security into your software development lifecycle requires taking security into account at each stage of the way, and those different kinds of security review will have different implications. These elements might include:

  • You should have a security design review, to review your design to understand architectural-level security risks associated with the design and the potential for design-level flaws (this is what Microsoft calls "threat modelling"). You don't need to re-examine this every time you change the implementation, only when you change the design.

  • You should also have security code review, to review your code to identify potential implementation-level code defects that could compromise security. Any time you write new code or change existing code, you do need to review that code for implementation bugs.

  • You should also integrate security into your testing efforts. Before you release a new version, you might test its security, particularly focusing on the features that have changed.

D.W.

If your security model is based around the concept of a point-in-time external audit of your entire codebase, then you're doing security wrong.

...And you're probably using the audit wrong too. But we'll get to that.

Beyond question, all code needs to be audited for security. In many cases, this is actually a legal requirement: no code ships without an audit, period. The traditional wisdom suggests that such an audit be an event at some point in the lifecycle, but a more sensible way to do it is to audit code as you go. That is, all code gets a security audit before it can be checked in to your codebase.

The theory is simple; the repository is already audited, so we don't need to re-audit its components as a standard procedure. But when a new feature or patch or bugfix is proposed, the diff has to be signed off by the appropriate maintainer(s). You can get sign-off for whatever is important to you. For example, the Linux kernel has a pretty involved approval process which requires several endorsements along the way for quality, simplicity, consistency, performance, etc. Your requirements may vary, but a security audit should be part of that approval process.
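As an illustration of enforcing that sign-off automatically, here is a minimal sketch of a server-side check that rejects changes whose commit message lacks a security review trailer, in the spirit of the kernel's Signed-off-by lines. The `Security-reviewed-by:` trailer name is an invention for this example, not an established convention.

```python
# Hypothetical sketch: a check a pre-receive hook or CI gate could
# run on each commit message, refusing changes that lack a security
# review trailer. The trailer name is assumed, not a real standard.
import re

REQUIRED_TRAILER = re.compile(
    r"^Security-reviewed-by: .+ <.+@.+>$", re.MULTILINE
)

def has_security_signoff(commit_message: str) -> bool:
    """Return True if the message carries a security review trailer."""
    return bool(REQUIRED_TRAILER.search(commit_message))

good = (
    "Fix session fixation on login\n\n"
    "Security-reviewed-by: Dana <dana@example.com>\n"
)
bad = "Fix session fixation on login\n"

print(has_security_signoff(good), has_security_signoff(bad))
```

The point is not the regex but the gate: the diff cannot land until someone has put their name on its security review.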

In this case you're not auditing the product end-to-end, you're just auditing the diff. But thousands of tiny audits over the course of the product's development cycle will be far more in-depth and comprehensive than any one end-to-end audit could hope to be.

A full-product end-to-end audit is certainly helpful and shouldn't be avoided. This audit should focus on the product as a whole in a way that isn't as easy to do during the patch-level audits you've been doing. You want to look at the whole forest from time to time, not just the individual trees. The timing of these large-scale audits should probably correspond to major releases, major changes, or compliance certification audits.

But by keeping current on the patch-level auditing, you can ensure that the code is always maintained in a verifiable state, so you can continue to ship on a regular basis with confidence.

About commit-time approvals
If your company isn't doing this, then you're doing everything wrong. There are dozens, hundreds perhaps, of problems that are solved by requiring every code change to be approved by at least one other person, including (and especially) during initial development. You should always have at least two people who understand how every line of code works, and who agree that the code is correct.

This is at least as important as unit tests. If you're not doing this, then stop everything and revisit your policies around quality and security.

Yes, this process does scale. As noted above, the largest software project in the world uses it, as do some of the world's most agile and successful software companies.

tylerl

Add misuse cases.

If there is a behavior that the system must exhibit, write a use case. If there is a behavior that the system must not exhibit, write a misuse case.

"As a competitor, I want to query the database back end for company sensitive data; this must not happen."

"As a hacktivist, I want to use the DMZ to reflect attacks at the government; this must not happen".

The product owner can prioritize these stories along with the others, but they are testable just like any other user story.
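For instance, the first misuse story above can be pinned down (in part) as an automated test against the data access layer. This is only a sketch: the schema and `lookup_customer` function are hypothetical, and one such test covers one attack, not the whole misuse case.

```python
# Hypothetical sketch: one automated test derived from the misuse
# case "a competitor must not be able to query sensitive data",
# here checked against a toy data access layer that uses
# parameterized SQL. Schema and function names are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, ssn TEXT)")
conn.execute("INSERT INTO customers VALUES ('Alice', '123-45-6789')")

def lookup_customer(name: str):
    # Parameterized query: attacker-controlled input cannot alter
    # the structure of the SQL statement.
    return conn.execute(
        "SELECT name FROM customers WHERE name = ?", (name,)
    ).fetchall()

def test_misuse_sql_injection():
    payload = "' OR '1'='1"            # classic injection attempt
    assert lookup_customer(payload) == []   # must not dump the table

test_misuse_sql_injection()
```

The product owner can then treat "this test passes" as part of the story's definition of done.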

(I freely admit that I am not an initiate of the secret mysteries of agile; Agile has become something like the 77th order of the Masons, full of mysteries and commandments that SHALL NOT BE BROKEN, for fear of ineffable horrors).

MCW
  • How would you go about testing for the misuse cases? – paj28 Oct 06 '14 at 20:46
  • Excellent question. Notionally, same as a use case - write a test case to ensure that whatever implementation strategy I've selected is effective. The rigor of my test cases will be proportional to the risk of the mis-use case. – MCW Oct 07 '14 at 13:13
  • It's easy enough to write a test case for one particular attack. But that only tests that specific attack - it doesn't consider what other attacks a sneaky attacker might try. You mention rigor depending on risk, but it would be incredibly hard to maintain any rigorous test suites, even just for one type of vuln (e.g. SQL injection). I think DAST/SAST tools are a more promising approach, as they automatically apply reasonably rigorous tests. – paj28 Oct 07 '14 at 13:45
  • There is truth in what you say, and for a mature security program, I might defer to your advice. But for most of the security programs I have seen, the value in the approach I proposed is that it makes the security analysis explicit. If we fail to incorporate security risk analysis in development, then no amount of tools will prevent the ultimate security failure. If we succeed in defining security "No SQL injection", then we can write a suite of tests that ensure that input is quality checked. – MCW Oct 07 '14 at 13:53

As others have suggested, you can build in stories about security. And I would certainly encourage you to do that.

But if you're talking about an external team coming in and spending several weeks doing an audit ... that, it seems to me, is more about agile than security.

I know that agile places a heavy emphasis on shipping frequently -- after all, how are you going to shorten the feedback cycle if your software isn't in customers' hands? But for many organizations, releasing every 2/3/4 weeks simply isn't an option. Or they have a lengthy QA/QC review process and the QA organization isn't ready to go agile. Or they have other certification requirements (e.g. ISO) which don't fit into the agile lifecycle.

Consequently many agile teams discovered early on that they needed to decouple their releases from their iterations or sprints.
That is, instead of promoting to production, you promote changes to an environment dedicated to security / QA / whatever. When the code has been certified there, you promote it to production (or to the next gate).

If you've been building in "misuse" stories, presumably your defect / issue list should be short.

If a defect is found, it can be put into the backlog or prioritized for an immediate fix, depending on its severity.

Of course the extra environment isn't free ... but in my experience it's cheaper than the alternatives (which typically result in teams stepping on each other's toes).

David

You can do audits for each release just like a waterfall project. Although you noted some problems with doing that, many security companies can work very effectively with agile projects. However, if you release frequently, the cost of this may be prohibitive.

Another approach is to move testing in-house. If you buy a scanning tool, you can perform your own audits. They may not be as thorough as a specialist security firm's, but you can run them as often as you like: perhaps integrated into a nightly build, or even run from a pre-commit hook in continuous integration. There are two main types of these tools: Dynamic Application Security Testing (DAST), which is an automated penetration test, and Static Application Security Testing (SAST), which is automated source code review. A benefit of SAST is that results are reported in the developer's terms: source code file and line number. I do think DAST/SAST tools fit very well with the agile model; in a way, they are unit tests for security.
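To make the "unit tests for security" analogy concrete, here is a deliberately tiny, toy SAST-style check. A real tool (Bandit for Python, for example) does vastly more, but this shows the shape of an automated source scan you could run on every build.

```python
# Toy illustration only: a minimal SAST-style scan that flags
# calls to known-dangerous builtins with file and line number,
# the same style of report a real tool produces.
import ast

DANGEROUS_CALLS = {"eval", "exec"}

def scan_source(source: str, filename: str = "<string>"):
    """Return (filename, line, call) for each dangerous call found."""
    findings = []
    for node in ast.walk(ast.parse(source, filename)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in DANGEROUS_CALLS):
            findings.append((filename, node.lineno, node.func.id))
    return findings

code = "x = eval(user_input)\nprint(x)\n"
for fname, line, call in scan_source(code, "example.py"):
    print(f"{fname}:{line}: dangerous call to {call}()")
```

Because the output is in the developer's own terms, findings can be triaged like any other build failure.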

A development team with mature security processes will use both in-house and third-party testing. SAST/DAST is used to ensure that all code gets at least a basic security review, and to catch problems early. Penetration testing is performed periodically, to try to detect complex issues that automated testing cannot identify. This may be on a fixed schedule, or may be risk-based, looking at the changes in each release.

Your question was about security audits, but of course there are other aspects to securing an application. Threat modelling ensures security is reflected in the design. Select development tools and frameworks that encourage security, and give developers training to use them securely. This is the same as for waterfall development, but with agile it is more important to embed these skills within the development team than to have a security team come in periodically.

paj28