
As far as I know, there is no hard-and-fast rule for doing a security code review; we each develop our own strategy. I was wondering if we could share the different strategies involved in or used for security code review.

p_upadhyay

5 Answers


As @sonofaaa mentioned, in the book, "The Art of Software Security Assessment", the authors discuss code-auditing strategies in Chapter 4 (end of Part I).

In particular, external flow sensitivity (data flow and control flow) and tracing direction (forward or backward slicing) are discussed, along with many neutral methods of review. Other topics are covered in great detail. It's the best-written material on the subject of secure code review.

I should also mention "Secure Programming with Static Analysis" -- a book by Brian Chess and Jacob West of Fortify Software. They cover the internals and use of security-focused static analysis tools and compare them to other forms/tools in the secure code review world.

If you want to check out modern security-focused static analyzers, I suggest you first get involved with some open-source or free tools, such as CAT.NET for .NET (typically C#, VB.NET, and/or ASP.NET), find-sec-bugs (or the older LAPSE+) for Java Enterprise and JSP, and RIPS Scanner for PHP. You won't commonly find security-focused static analyzers that support dynamic languages, because those languages lack the static type system such tools rely on, but let me know if you are interested in support for Python, Ruby, or another dynamic language (or any other language) and I'll try to point you in the right direction. For starters, try Bandit (an OpenStack project for Python code) and Brakeman Pro for Ruby.
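To make the idea concrete, here is a toy sketch of the general shape of a rule-based static check -- an AST walk that flags a couple of risky Python patterns. This is not how Bandit or any of the tools above actually work internally; real tools add taint tracking and large per-plugin rule sets:

```python
import ast

# Toy AST-based security check: flags calls to eval()/exec() and
# any call made with shell=True. Illustrative only.
RISKY_CALLS = {"eval", "exec"}

def audit(source: str) -> list[str]:
    findings = []
    for node in ast.walk(ast.parse(source)):
        if not isinstance(node, ast.Call):
            continue
        func = node.func
        if isinstance(func, ast.Name) and func.id in RISKY_CALLS:
            findings.append(f"line {node.lineno}: call to {func.id}()")
        for kw in node.keywords:
            if kw.arg == "shell" and isinstance(kw.value, ast.Constant) and kw.value.value is True:
                findings.append(f"line {node.lineno}: shell=True")
    return findings

sample = """
import subprocess
user_input = input()
eval(user_input)
subprocess.run(user_input, shell=True)
"""

for finding in audit(sample):
    print(finding)  # flags line 4 (eval) and line 5 (shell=True)
```

A finding here is only a lead for the human reviewer, which is exactly the "preliminary scan, then manual review" workflow the answers below describe.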

Commercial security-focused static analyzers are meant for highly trained, specialized application-security-oriented developers. Their cost assumes that someone will be running and analyzing these tools daily, as a full-time job, all year round. If you are interested in seeing quick results, check out HPFOD; if you want to integrate these tools into a long-term, at-risk application portfolio for a large installation, check out Cigital ESP. There are also many application security boutiques and consulting shops that run and tune these tools for their clients. Depending on your locale and strategic direction, I would choose to partner with one regardless of anything else I mentioned, as they can be invaluable to the success of an appsec program. Searching LinkedIn for "application security consulting" should work if you don't know where to go next.

atdre
  • Hi atdre, do you have any suggestions for Python static analysis? – paj28 Feb 24 '15 at 12:05
  • @paj28: for which parts of the app stack in Python? In webapps, there's -- https://github.com/sdelements/django-security -- from the people who make this product -- http://sdelements.com/features/ -- not static analysis, but clearly all huge wins for appsec that go beyond the norm – atdre Feb 24 '15 at 16:02

Contrary to your statement, I believe that (security) code review should not be a mostly ad-hoc activity. There are some pretty strong methodologies for doing an efficient code review. For best results, this should be done incrementally and iteratively.

Here is a sample of a high-level outline of such a methodology, with some guiding principles:

  • Understand your system (architecture, design, ...). Use a prepared question list...
  • Decide clear objectives
    • scope
    • constraints
    • goals
    • non-goals!
    • types of security issues
    • time limit
  • Analyze threats (e.g using Threat Modeling) to help focus on high-risk areas
  • Preliminary scan using automated tools
  • Review complex / fragile sections
    • Complex code (e.g. Cyclomatic complexity)
    • Areas with high number of historical bugs
    • Even areas with many “false positives” from the automated scan
  • Identify data validation for all input
    • Account for trust issues
  • Data output in web pages
  • Specifically review all security mechanisms in depth (e.g. authentication, crypto, etc)
  • “Interesting” junctures, e.g.
    • Creating processes
    • Threads and synchronization (especially in static methods and constructors)
    • Resources Access (e.g. data access, file system, etc)
    • Default code
    • Elevated privileges
    • Unauthenticated access
    • Networking
  • Language-specific issues,
    • e.g. for C/C++: buffer overflows (stack/heap), integer overflows, format strings, dynamic LoadLibrary()s, "banned" APIs, etc.
    • or for .NET: InterOp, reflection, dynamic Assembly.Load(), CAS/Full/Partial trust, unsafe code, etc.
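As one concrete instance of the "identify data validation for all input" bullet, here is a minimal Python sketch of whitelist validation at a trust boundary. The function name and pattern are illustrative, not from any particular framework:

```python
import re

# Whitelist validation at a trust boundary: everything arriving from
# outside the boundary (HTTP parameters, file contents, environment)
# is untrusted until it passes an explicit check.
USERNAME_RE = re.compile(r"^[a-z][a-z0-9_]{2,31}$")

def validate_username(raw: str) -> str:
    """Return the username if it matches the whitelist, else raise."""
    if not USERNAME_RE.fullmatch(raw):
        raise ValueError(f"rejected untrusted input: {raw!r}")
    return raw

print(validate_username("alice_01"))  # accepted
try:
    validate_username("alice'; DROP TABLE users;--")
except ValueError as exc:
    print(exc)  # rejected
```

The point during review is to confirm that every external input passes through such a check before reaching the sensitive sink, which is why this bullet pairs with the trust-boundary analysis from threat modeling.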
AviD
  • Don't you agree that a few of the above-mentioned points are part of the design review? Code review complements the design review too, but I was looking for an approach which could be used independently, save my time, and help me find most of the vulnerabilities. Your well-defined points would always help, but by asking this question I wanted to see if there is any methodology/strategy/checklist which can be followed. – p_upadhyay Apr 21 '11 at 08:19
  • @p_upadhyay - I think in order to do it effectively, you should be tied in as early as possible in the process - in design – Rory Alsop Apr 21 '11 at 08:31
  • @p_upadhyay I absolutely agree with @Rory's comment. However, in situations where that is not possible, or you're pulled in only at that late stage - then yes, I would agree with your comment that this has some design review built in. But that is "by design" - if you don't understand the architecture and design of the system, your code review will not be effective, nor as efficient. So yes, you definitely need to base your CR on your DR... – AviD Apr 21 '11 at 09:54
  • Agree with both of you.. – p_upadhyay Apr 21 '11 at 11:08

I usually start with a checklist -- say the OWASP Top 10 or the CERT C coding guidelines (or the section in my own book!). I get the team to evaluate the code against the checklist, and also to do undirected testing and review. "Popular" issues from the open review get added to the checklist, replacing non-issues which were on there originally.

In addition, static analysis tools are used where available.

The benefit of this approach is also its biggest drawback: issues from the checklist are easily spotted, but they are often the only problems found, as creative bug-hunting is not easy.
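The checklist upkeep described above -- promoting recurring open-review findings and retiring non-issues -- can be sketched as simple set bookkeeping (the item names here are made up):

```python
# Evolving a review checklist between iterations: items that never
# produce real findings are retired, and recurring issues found in
# undirected ("open") review are promoted onto the checklist.
checklist = {"injection", "broken auth", "XXE", "insecure deserialization"}

def evolve(checklist, popular_open_findings, non_issues):
    return (checklist - non_issues) | popular_open_findings

checklist = evolve(
    checklist,
    popular_open_findings={"race condition in session handling"},
    non_issues={"XXE"},  # never applicable in this codebase
)
print(sorted(checklist))
```

The mechanics are trivial; the value is in doing this deliberately after each review cycle so the checklist tracks the codebase rather than a generic top-N list.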


TAOSSA Part I covers several approaches and hybrid scenarios, and Brandon Edwards does a great job of going over technology-agnostic review strategies and common pitfalls of large reviews in the videos here: http://pentest.cryptocity.net/code-audits/


Methodologies can differ slightly depending on what you are auditing: one approach for web applications, another for C/C++ applications. It also depends on whether source code is available. Generally, though, the process involves the following stages:

  1. Black-box software testing -- using scanners, fuzzers, or manually;
  2. Source code auditing, when available -- again, using scanners or manually.
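As a sketch of stage 1, a minimal random fuzzer might look like the following. The target function is a stand-in with a planted bug; real fuzzers such as AFL or libFuzzer add input mutation and coverage feedback on top of this basic idea:

```python
import random

# Minimal black-box fuzzer: throw random byte strings at the target
# and record every input that makes it crash.
def target(data: bytes) -> None:
    # Stand-in parser with a planted bug: chokes on a magic byte.
    if b"\xde" in data:
        raise RuntimeError("parser crash")

def fuzz(rounds: int = 10_000, seed: int = 0) -> list[bytes]:
    rng = random.Random(seed)
    crashes = []
    for _ in range(rounds):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 8)))
        try:
            target(data)
        except Exception:
            crashes.append(data)
    return crashes

print(f"{len(fuzz())} crashing inputs found")
```

Each recorded crash is then triaged manually or fed into stage 2, where the corresponding source is audited.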

During this process, static or dynamic code analyzers are used. Which one to use is up to you; the links I gave might help you.

However, very often you have to re-check software that you previously audited, in which case you can make your job easier. For source code you can use WinMerge, which lets you find the differences between the old and new versions. For binaries you can use DarunGrim or BinDiff.
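If you'd rather script the "diff the versions, re-audit only what changed" step than use WinMerge, plain difflib can do it. The two versions here are invented for illustration; note how the diff immediately surfaces a security-relevant change (a password landing in a log):

```python
import difflib

# Focus a re-audit on what changed since the last review:
# diff the old and new source and keep only the added lines,
# since new code is where the new bugs live.
old_version = """\
def login(user, password):
    return check(user, password)
"""
new_version = """\
def login(user, password):
    log(f"login attempt: {user} {password}")
    return check(user, password)
"""

added = [
    line[1:]
    for line in difflib.unified_diff(
        old_version.splitlines(), new_version.splitlines(), lineterm=""
    )
    if line.startswith("+") and not line.startswith("+++")
]
for line in added:
    print(line)
```

The same filtering works at repository scale by diffing tags or release archives file by file.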

Topics that might make sense to read regarding this question can be found here: