
Suppose a person has a good knowledge of overall security risks, knows what the OWASP Top 10 vulnerabilities are, and holds certifications such as CEH, CISSP, OSCP, etc., which are oriented more toward black-box testing. He has also gone through the OWASP Testing Guide, the Code Review Guide, and the related cheat sheets. Will he be able to perform a secure code review without knowing multiple programming languages, let alone having mastery of them?

Philipp
Krishna Pandey
  • He will not be able to perform a thorough secure code review if he does not know the languages in which the code to be reviewed has been written. – Xander Nov 24 '15 at 18:28
  • Please don't write in the third person :P – J Sargent Nov 24 '15 at 20:38
  • Can you catch and understand all the exploits at http://www.underhanded-c.org/ (which are fairly short and explained in detail)? That's one language. – drewbenn Nov 24 '15 at 20:53
  • @drewbenn - Nice Website you mentioned. #NᴏᴠɪᴄᴇIɴDɪsɢᴜɪsᴇ :) – Krishna Pandey Nov 25 '15 at 08:19
  • @drewbenn Another great example is http://escape.alf.nu/ - it's incredibly hard to prevent XSS using sanitization. Most of the tasks require you to know exactly how JavaScript and HTML work and interact. – Luaan Nov 25 '15 at 09:04
  • "To understand somebody else's code you need to be twice as good as when writing it from scratch." – Agent_L Nov 26 '15 at 16:10
  • If the subject of the «secure code review» doesn't cover multiple programming languages, neither does that person need to master many programming languages just for the sake of it. Knowing the language(s) used in the application should be enough. – Ángel Nov 27 '15 at 00:30
  • Even being a good programmer is likely not enough to perform a thorough secure code analysis against code trying intentionally to obfuscate something. One needs to be a great and patient programmer. – simon Nov 27 '15 at 18:08
  • Is the code to be investigated possibly written in Brainf*ck? I wouldn't dare recognize a correct implementation even of factorial in that case – Hagen von Eitzen Nov 29 '15 at 21:24
  • There is the point to consider that much of this work is incredibly tedious, and it's hard for many "SMEs" to keep their attention on the issues. There is ample room in the field for people who, while not Einsteins, are methodical and very detailed in their work, perhaps helping coordinate the work of other "experts". – Hot Licks Nov 30 '15 at 03:32

7 Answers


It depends on what is meant by "secure source code analysis." One can do anything one pleases. The issue, I presume, is that someone else has asked for something called "secure source code analysis," and one wonders whether one is qualified to perform it.

In many cases, such analysis must be done by a Subject Matter Expert (SME). In the final product, an SME will deliver a statement basically saying "I declare this code to be secure," with the understanding that this is a more profound statement than "I looked for a bunch of known patterns, and found no problems."

If you were interested in the authentic translation of a Chinese philosophy, would you trust an individual who knew a great deal about philosophy, and had a bunch of cheat sheets to help decipher it, but did not actually know Chinese?

One great example that comes to mind is a bug that hit a SQL engine. Forgive me for not naming the engine or the version so that you could verify it; I have had trouble finding the reference since. The error, however, was poignant. It was in code that looked like this:

int storeDataInCircularBuffer(Buffer* dest, const char* src, size_t length)
{
    if (dest->putPtr + length < dest->putPtr)
        return ERROR; // prevent buffer overflow caused by overflow
    if (dest->putPtr + length > dest->endPtr) {
        ... // write the data in two parts
        return OK;
    } else {
        ... // write the data in one part
        return OK;
    }
}

This code was intended to be part of a circular buffer. In a circular buffer, when you reach the end of the buffer, you wrap around. Sometimes this forces you to break the incoming message into two parts, which is okay. However, in this SQL program, they were concerned with the case where length could be large enough to cause dest->putPtr + length to overflow, creating an opportunity for a buffer overflow because the next check wouldn't work right. So they put in a test: if (dest->putPtr + length < dest->putPtr). Their logic was that the only way this condition could ever be true was if an overflow had occurred, so the check would catch the overflow.

This created a security hole that actually got exploited, and had to be patched. Why? Well, unbeknownst to the original author, the C++ spec declares that overflowing a pointer (pushing it outside the bounds of the object it points into) is undefined behavior, meaning the compiler can do anything it wants. As it so happened, when the original author tested it, gcc emitted the intended code. However, a few versions later, gcc gained optimizations that leveraged this: it saw that there was no defined behavior in which that if statement could pass its test, and optimized it out!

Thus, for a few versions, people had SQL servers which had an exploit, even though the code had explicit checks to prevent said exploit!
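
For contrast, here is a minimal sketch, not the vendor's actual fix and with an assumed Buffer layout (the startPtr field and the memcpy-based bodies are added only to make the example self-contained), of how the same guard can be written in terms of sizes rather than pointer sums, so there is no overflowing pointer for the optimizer to reason away:

#include <stddef.h>
#include <string.h>

/* Hypothetical circular-buffer layout, for illustration only. */
typedef struct {
    char *startPtr;  /* first byte of storage             */
    char *endPtr;    /* one past the last byte of storage */
    char *putPtr;    /* next write position               */
} Buffer;

enum { OK = 0, ERROR = -1 };

int storeDataInCircularBuffer(Buffer *dest, const char *src, size_t length)
{
    /* Work with sizes instead of pointer sums. Both subtractions stay
       inside the same object, so they are well defined and cannot be
       "optimized away" as impossible. */
    size_t capacity   = (size_t)(dest->endPtr - dest->startPtr);
    size_t spaceToEnd = (size_t)(dest->endPtr - dest->putPtr);

    if (length > capacity)
        return ERROR;  /* the request can never fit */

    if (length > spaceToEnd) {
        /* Write the data in two parts: fill to the end, then wrap.
           (For brevity this ignores the read pointer, as the original
           snippet did.) */
        memcpy(dest->putPtr, src, spaceToEnd);
        memcpy(dest->startPtr, src + spaceToEnd, length - spaceToEnd);
        dest->putPtr = dest->startPtr + (length - spaceToEnd);
    } else {
        /* Write the data in one part. */
        memcpy(dest->putPtr, src, length);
        dest->putPtr += length;
    }
    return OK;
}

Because length is only ever compared against values that already fit in a size_t, there is no "impossible" condition for the compiler to delete; seeing why the two versions are not equivalent is exactly the kind of judgement that requires knowing the language.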

Fundamentally, programming languages are very powerful tools that can bite the developer with ease. Analyzing whether this will occur does require a solid foundation in the language in question.

(Edit: Greg Bacon was great enough to dig up a CERT warning on this: Vulnerability Note VU#162289, "C compilers may silently discard some wraparound checks," and also a related one. Thanks Greg!)

Cort Ammon
  • Comments are not for extended discussion; this conversation has been [moved to chat](http://chat.stackexchange.com/rooms/32179/discussion-on-answer-by-cort-ammon-does-one-need-to-be-a-good-programmer-to-perf). – Rory Alsop Nov 26 '15 at 23:07
  • Yeah. I should note that without knowing the language, an analyst might not even know what the programmer is doing (let alone be able to find all the security problems), or why. Some languages have some very interesting features that are not obvious if you don't know the language well. Hopefully most stuff like that would have comments for the analyst, but I wouldn't count completely on comments to guide you. – Brōtsyorfuzthrāx Nov 27 '15 at 09:20
  • This kind of behaviour from a compiler always makes me wonder: given that the compiler *knows* that there is undefined behaviour, and that 100% of the time undefined behaviour is something you don't want in your code, couldn't the compiler warn when doing such a thing? This could prevent tons of bugs.... – Bakuriu Nov 27 '15 at 11:57
  • @Bakuriu: In C++ such "can't happen" cases _routinely_ come up, without any bug being present, when specializing templates or optimizing inlined function calls with constant parameters -- and in those cases optimizing them away can be absolutely crucial for performance. It would be quite hard for a compiler to distinguish reliably between "programmer wrote something undefined" and "programmer used a valid generic function in a valid way that I can generate better code for than the generic case" and report warnings only in the former case. – hmakholm left over Monica Nov 27 '15 at 14:19

I think you need to be a good programmer to be successful at this, so I'd recommend becoming one. There may be lots of things that your toolkit / scanner misses. I honestly don't recommend relying completely on tools to scan your code for you, as exploits change constantly, and someone may have coded in a way where the scanner can't detect the vulnerabilities.

The ability to step through code and see how it works, and how it shouldn't work, is fundamental to secure software development. Having a developer aware of security issues is exactly what you want when it comes to producing a solid product, and exactly what you need during a code review.

While yes, you can point and click and check for vulnerabilities with your scanners and toolkits, it's not going to be very effective in the grand scheme of things. Do you know how much more effective you'd be if you could look at code yourself and determine whether it's good or bad? Waaaay better.

Don't try to pass a secure code review if you don't know what you're doing, but don't outright give up on the idea if you feel you aren't at a point where you can do a good review. I recommend trying to learn by creating your own mockup secure code reviews, and going through them a few times to ensure everything is okay.

But still, you should definitely learn not only to code, but to code well.

Mark Buffalo

It's doubtful that a security expert would be effective at performing source code analysis without also being a skilled programmer. Many vulnerabilities are the result of technical or syntactical coding practices that are misused in some minor way. A missing semicolon, an equal sign instead of a double-equals, an array boundary that is defined in two places on day one but where only one is updated in a subsequent release, missing brackets, memory leaks: all of these have led to vulnerabilities. An experienced developer may see them, but a novice likely would not.
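
As a concrete, entirely made-up C illustration of the "equal sign instead of a double-equals" case, here is the kind of one-character slip that an experienced developer (or a compiler warning such as gcc's -Wparentheses) catches, but a checklist-driven reviewer can easily read past:

#include <stdbool.h>

/* Hypothetical fragment, not from any real product: role 0 is meant to
   denote an administrator. */
bool may_delete_account(int user_role)
{
    bool is_admin = (user_role == 0);

    if (is_admin = 1) {   /* BUG: '=' assigns 1, so the test is always true */
        return true;      /* every caller is now treated as an admin        */
    }
    return false;
}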

What your security expert should be doing is encouraging the engineers to use automated scanning tools such as static code analyzers, fuzz testers, and dynamic app verifiers. Help the engineering teams to understand input validation, injection attacks, and trust boundaries. Build awareness that security defects need to be prioritized appropriately and addressed quickly. Schedule and conduct pen tests. And most important, get the engineers to do code reviews of each other's work.

Yes, the security expert should be able to read the code, but that does not mean he is qualified to be the arbiter of code security.

John Deters

It depends on your expectations. Security vulnerabilities caused by design problems (e.g. missing CSRF protection, only a rudimentary implementation of a protocol, etc.) can probably be found if the tester has a deep knowledge of security concepts, even if (s)he is only able to follow the code flow without having deeper knowledge of the specific programming language.

But language-specific security problems like buffer overflows, off-by-one errors, handling of Unicode or \0, problems caused by the size of data types, signed vs. unsigned mismatches, etc. will not be found if the tester has no deeper knowledge of the language, of bad practices, and of typical insecurity patterns specific to that language. Take the history of Java vulnerabilities as an example: not even the expert Java developers noticed the holes they punched into the language's sandbox by adding reflection, and only external experts with a deep understanding of the language internals detected the flaws.
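
As one made-up C illustration of the "signed vs. unsigned" class of problem (deliberately not tied to the Java history above), here is a bounds check that reads as safe unless you know how the language converts a signed length for memcpy:

#include <string.h>

#define BUF_SIZE 64

/* Hypothetical fragment: 'len' arrives as a signed int from the network,
   and 'dst' is assumed to be BUF_SIZE bytes. */
void copy_packet(char *dst, const char *payload, int len)
{
    if (len > BUF_SIZE) {
        return;                          /* rejected as "too big" */
    }
    /* A negative len (e.g. -1) passes the check above and is then
       converted to a huge size_t here - a classic buffer overflow. */
    memcpy(dst, payload, (size_t)len);
}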

Steffen Ullrich

Not only does secure code review require knowledge of the high-level language, but also of the compiler options and HOW THE CODE WILL ACTUALLY WORK ON THE CPU! High-level languages are efficient to write code in because they hide a lot of the complexity. But many errors and bugs hide within that complexity. As pointed out in another answer, compilers try to do the right thing, but you really need to understand what is going on, by disassembling the code and developing a deep understanding of how it works.

This understanding is also required with scripting languages like JavaScript, where an interpreter turns the high-level code into CPU instructions and memory allocations. Unfortunately, this makes the review platform dependent. See, for example, https://en.m.wikipedia.org/wiki/Interpreted_language.

Stone True
  • This is the true answer. 15 years ago security experts found bugs, wrote exploits, wrote papers about new techniques, etc. Now they just get "certified", throw around a few terms that they might only understand on a conceptual level (eg. buffer overflow, have you ever written one?), and think they're the same as the pioneer hackers. Two entirely different ballgames. Passing a tool's check isn't the same as being secure. Neither is surviving a fuzzer. – HorseHair Nov 25 '15 at 09:38
  • Not true if you're focusing on many modern languages such as Java and JavaScript. – Neil Smithline Nov 25 '15 at 15:30
  • @NeilSmithline In those cases, it's _even worse_ because how it will run on the CPU now depends on _which CPU it's running on_ (and, in the case of JavaScript, which interpreter it's running in.) – reirab Nov 25 '15 at 15:57
  • @NeilSmithline - I believe you would still need an understanding of how the scripting languages like JavaScript interpret the high level code to CPU instructions and memory allocation to be able to say with certainty that a bit of code is secure. Unfortunately, this review would be platform dependent. – Stone True Nov 25 '15 at 16:00
  • Hmm... It doesn't seem that way to me but obviously others agree with you. – Neil Smithline Nov 25 '15 at 16:19
  • @NeilSmithline - Edited answer to provide link on interpreted / scripting languages operation. – Stone True Nov 25 '15 at 18:23
  • @horsehair and 15 years ago there were figuratively 3 security experts spending a month of human time on one bug, and now there are tens of thousands finding dozens of bugs in minutes. Progress from automated tools built on the previous work that few could do, but many can use, and standing on the shoulders of giants. Tremendously useful and improving the world a lot. Citation needed for your dismissive putdown *and think they're the same as the pioneer hackers*. Who, specifically, thinks that and how do you know? – TessellatingHeckler Nov 25 '15 at 22:08
  • @TessellatingHeckler - There were many more than 3 security experts reviewing code 15 years ago. As far back as 1998 hundreds of very young coders were reviewing code and inventing things like integer overflows (in the context of security exploits), while the corporate security experts were largely relying on (at that time) inadequate tools. I was involved with some of these groups in my youth. Your point about tens of thousands of people finding bugs now is relevant. Modern bug hunters are coders, not just "certified security professionals", and many spend full time reviewing code. – HorseHair Nov 26 '15 at 09:48
  • @TessellatingHeckler - Adding to the above (and straying a bit from relevance), in the "old times" people audited code because they were interested in doing it (sometimes for less than honest reasons.) The famous buffer overflow vulnerability was expounded by aleph one in his famous paper in phrack, though was known to "hackers" before that. The difference between then and today is that then, people were into security because they loved it (by and large.) Now many people look at salaries, get certified, get a security job. Not the same motivation, nor skill level. Exceptions exist. – HorseHair Nov 26 '15 at 10:12

Does one need to be a good programmer to perform secure source code analysis?

No.

Will he be able to perform secure code review without knowledge of multiple programming languages and mastery over them?

No.

There's more to programming than expertise in the details of how various languages work. It's one of the things you need to be a good programmer, and it's also one of the things you need to be able to analyse source code from a security perspective (or any other quality).

So while you don't need to be a good programmer, you do need mastery of the languages involved.

Jon Hanna
  • So as per you, Mastery of the language = ? – Krishna Pandey Nov 25 '15 at 12:01
  • @K.P. Sorry, I don't follow you. – Jon Hanna Nov 25 '15 at 12:05
  • My understanding is, mastery of the language should make someone a good programmer. Isn't it so? – Krishna Pandey Nov 25 '15 at 12:08
  • Hardly. You could know every nuance of a language and be no good at design, problem-solving, algorithm choice, invariant definition, test development, or most of debugging. Mastery of the language is certainly important, but it's not the most vital thing. Indeed perhaps less so for programming than source analysis (a programmer who doesn't know of a particular feature can attack the problem another way, someone analysing a program who doesn't know of a particular feature that was actually used had better learn it to be able to analyse the implications). – Jon Hanna Nov 25 '15 at 12:19
  • Design knowledge is probably needed, too, though, since vulnerabilities can sometimes lie in a poor design rather than a faulty implementation. – reirab Nov 25 '15 at 16:01
  • @reirab I'd say all of the skills of a programmer would be helpful to analysis, but with design being able to pick out a flaw requires a different level of skill than being able to decide on the best design (like critiquing and producing art), but being able to note a quirk in a language behaviour is if anything more vital to the analysis than the writing. – Jon Hanna Nov 25 '15 at 16:04
  • A good programmer must be able to put all the bits together to create a working application. You don't need that for security analysis. Like you don't have to be able to build a car in order to check whether it is secure to drive. You can check that an application is secure without being able yourself to build a secure application. You can be a good judge in a beauty competition while being butt ugly yourself. – gnasher729 Nov 25 '15 at 22:24
  • @gnasher729 And certainly, you can check an application is secure if you had the necessary skills to build a secure application that sucked by many other criteria. – Jon Hanna Nov 26 '15 at 14:48
  • @JonHanna - You won't develop mastery of a language without spending a lot of time programming in it. I'd be surprised to come across someone who was a master in a language, but not good at programming in it (though both are ambiguous terms.) – HorseHair Nov 27 '15 at 07:42
  • @horsehair well, they could increase their mastery of the language by analysing software written in it, perhaps. Certainly for programmers a lot more deep knowledge of a language will come from the sort of analysis we need to do in reviewing code than in writing it. – Jon Hanna Nov 27 '15 at 09:44
  • And total mastery of the language isn't needed if we assume that anything too complicated for the security reviewer is automatically insecure. – gnasher729 Nov 28 '15 at 23:45

In order to properly figure out the danger of side-channel attacks, you need to know the hardware. There are really ugly side-channel attacks, such as running a non-privileged process on a multi-CPU setup in parallel with a privileged one that is doing some encryption or decryption task, and probing which shared cache lines get dirtied in what kind of temporal pattern, or timing the delivery of specific pattern sequences as they get encrypted.

With encryption algorithms under the intense scrutiny of mathematicians and other theoreticians, side-channel attacks are an increasingly important way of poking the game open again. The disadvantage is that they have to be crafted for a particular implementation, code base, and hardware.
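
As a much simpler, purely illustrative C sketch of the idea (a timing leak in a comparison, far less exotic than the cache-line probing described above), compare these two ways of checking a secret value:

#include <stddef.h>

/* Leaky: returns at the first mismatching byte, so the running time
   reveals how many leading bytes of an attacker's guess are correct. */
int insecure_compare(const unsigned char *a, const unsigned char *b, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (a[i] != b[i])
            return 0;
    return 1;
}

/* Constant-time variant: always touches every byte, so the running time
   no longer depends on where the first mismatch occurs. */
int constant_time_compare(const unsigned char *a, const unsigned char *b, size_t n)
{
    unsigned char diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= (unsigned char)(a[i] ^ b[i]);
    return diff == 0;
}

Telling these apart requires reading the code the way it actually executes on the hardware, not just the way the specification describes it.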

user92881