
I am a .NET and PHP developer, and since Java has been in the news recently thanks to the string of zero-days, I decided to brush up on security.

Regarding the Java zero-days, this question has been very helpful: Security of JVM for Server. My understanding is that the vulnerabilities exist in Java applets running in the browser, not in web applications hosted on a server.

If this is correct, then server-side web applications running on platforms like .NET, Spring MVC, or LAMP are pretty secure, aside from developer-introduced attack vectors such as failing to sanitize input (see OWASP).

My question is this: have there been any studies to see whether any of the popular web platforms (I can think of LAMP, Spring, .NET) are inherently more vulnerable than others?

ton.yeung

3 Answers


You are right in your assessment of "Java 0-day" for server code. These attacks are about hostile code breaking out of the applet sandbox, which is the security model used by applets: an applet is code which might be malicious, and thus will run under heavy restrictions (e.g. no access to local files, no loading of native code, no network connections except back to the server which sent the applet in the first place, no full introspection on other packages...). On a server, the code is, by definition, non-hostile, and does not run in a sandbox.
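To make the distinction concrete, here is a minimal sketch (the class name and file path are just illustrative): the same file-reading code runs unrestricted on a server-side JVM, where no SecurityManager is installed, while inside the applet sandbox the installed SecurityManager would reject it with a SecurityException.

    import java.io.FileReader;
    import java.io.IOException;

    public class SandboxDemo {
        public static void main(String[] args) {
            // On a typical server-side JVM no SecurityManager is installed,
            // so file access is limited only by operating-system permissions.
            // Inside the applet sandbox, the same call is intercepted by the
            // SecurityManager and rejected before any file handle is opened.
            if (System.getSecurityManager() == null) {
                System.out.println("No sandbox installed.");
            } else {
                System.out.println("Running under a sandbox policy.");
            }
            try (FileReader r = new FileReader("/etc/hostname")) {
                System.out.println("Read access granted, first char: " + (char) r.read());
            } catch (SecurityException e) {
                System.out.println("Blocked by the sandbox: " + e.getMessage());
            } catch (IOException e) {
                System.out.println("Ordinary I/O error (not a security check): " + e.getMessage());
            }
        }
    }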

Java on a server still runs in the Java Virtual Machine, which is not the same as the sandbox. Roughly speaking, the JVM enforces type rules (no field access or method calls on objects which do not have the designated field or method) and array bounds (no write in adjacent bytes upon buffer overflows), and runs the garbage collector; the sandbox adds a lot of extra checks about which standard methods may be called or not, with an elaborate permissions system. Server-side code is not sandboxed, but the JVM will still contain the damage in case of exploitable holes:

  • In case of a buffer overflow, the offending thread is terminated (well, it receives an exception which it can trap, but in practice termination is the normal end result) and no object in memory has actually been damaged. Buffer overflows are still bugs, but, at least, the consequences are limited (see the sketch after this list).

  • The garbage collector, by definition, prevents any kind of use-after-free.

  • The strict type system will prevent accessing any data byte with an interpretation distinct from what was used to store that byte in the first place.

  • Uninitialized local variables cannot be read (the flow analysis when code is loaded prevents that). Fields are systematically forced to sane default values (0, false, 0.0, null).
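A minimal sketch of that first point (the class name is arbitrary): an out-of-bounds write surfaces as an exception the thread can trap, and adjacent data is left intact, instead of silently corrupting memory the way a C buffer overflow could.

    public class BoundsDemo {
        public static void main(String[] args) {
            byte[] buffer = new byte[16];
            int[] neighbour = { 42 };   // stands in for "adjacent" data
            try {
                // In C, writing one past the end could silently clobber nearby memory;
                // the JVM checks the index and throws instead.
                buffer[16] = 1;
            } catch (ArrayIndexOutOfBoundsException e) {
                System.out.println("Overflow caught: " + e.getMessage());
            }
            System.out.println("Adjacent data untouched: " + neighbour[0]);   // still 42
        }
    }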

All of this, though, equally applies to .NET and PHP and Perl and just about everything except very low-level programming languages.


Trying to establish levels of inherent vulnerability is a risky business. The characteristics of Java that I explained above can be turned into an argument about how Java is "inherently secure". However, the same would apply to most other programming frameworks, including PHP. Ultimately, most vulnerabilities are programming errors: not errors in the framework, but errors made by the developer who used the framework. Counting published vulnerabilities will only tell you about security holes in the framework libraries, not about holes in code developed on top of the framework, and that's where most holes will be.

So the vulnerability count relates to how easy the framework is to use, and how well (or badly) the framework's features help the programmer avoid bugs. This is relative to the developer. A skilled PHP developer who does not know anything about Java will not be good at building a secure Java-based server: he will have to learn some Java and use it right away, the process will be painful, and he will not have much time for tests. This does not imply at all that PHP is inherently more secure than Java; the situation would be reversed with a developer who knows Java but not PHP.

At best, we can point out that "low-level" programming languages like C, Forth, Assembly, C++... are objectively harder to use than "high-level" ones (like C#, Java, PHP, Node.js...). The point where this is most apparent is the handling of character strings, something typical servers do a lot of. In C, character string handling is very manual, with all the allocations and copying, and taking care of buffer lengths. Languages with automatic memory management (e.g. all the ones with a GC) make such jobs much easier, i.e. much harder to get wrong.
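As a trivial sketch of the difference (the names are illustrative): building a string in Java involves no manual allocation or length bookkeeping, whereas the C equivalent would need malloc/snprintf and a correct size computation, where an off-by-one is a classic buffer overflow.

    public class StringDemo {
        public static void main(String[] args) {
            String user = "alice";
            // Concatenation and formatting: the runtime sizes and allocates the
            // result, so there is no buffer length to get wrong.
            String greeting = "Hello, " + user + "!";
            String padded = String.format("%-32s", greeting);
            System.out.println("[" + padded + "]");
        }
    }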

That's about the end of the exercise. For platforms based on high-level languages (be it LAMP, Spring, .NET...), the "most secure" will be the one that the developer knows best. This "knowledge factor" trumps all others.

(You might turn that into: "the most secure Web framework will be the one for which it is easiest to find a competent developer".)

Tom Leek

One can use a vulnerability database like the NVD to get an idea of how many vulnerabilities have been found in a particular web framework. For example, a search for rails returns 64 results while a search for asp.net returns 57 results. One can filter for vulnerabilities found in the last three months for a more current view of things. In this scenario, a search for rails returns 9 results while asp.net returns 0.
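(The counts above are presumably from the NVD web search at the time of writing. If you want to repeat the exercise programmatically, something like the sketch below works, assuming the NVD CVE REST API v2.0 and its keywordSearch parameter; check https://nvd.nist.gov/developers for the current interface.)

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class NvdCount {
        public static void main(String[] args) throws Exception {
            String keyword = args.length > 0 ? args[0] : "rails";
            HttpRequest request = HttpRequest.newBuilder(
                    URI.create("https://services.nvd.nist.gov/rest/json/cves/2.0?keywordSearch=" + keyword))
                    .header("Accept", "application/json")
                    .build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            // Crude extraction of "totalResults" to avoid pulling in a JSON library.
            Matcher m = Pattern.compile("\"totalResults\"\\s*:\\s*(\\d+)").matcher(response.body());
            System.out.println(keyword + ": "
                    + (m.find() ? m.group(1) + " matching CVE entries" : "could not parse response"));
        }
    }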

However, this does not usually give a particularly helpful view of the overall situation. Vulnerabilities are found all the time, and vulnerabilities are fixed all the time. The underlying security of a chosen framework will usually not be the weakest link in your web application, as long as you always use a recent version of said framework.


Have there been any studies to see whether any of the popular web platforms (I can think of LAMP, Spring, .NET) are inherently more vulnerable than others?

Well, such a study would by and large not be accurate. As the popularity and usage of a platform increase, so does the impact surface, and in turn the research that goes into exploiting it. The reverse is also true.

Let me elaborate by giving a bit of an unrelated example.

Let's take OS X and Windows. You will find a bazillion vulnerabilities in Windows and quite a few in OS X. This does not mean that OS X is inherently secure and Windows is a mess. It's due to the fact that the impact surface of a vulnerability in Windows is huge compared to that of one in OS X, so lots of research goes into attacking Windows, less into Linux, and even less into OS X, although in the past couple of years we have seen a surge in OS X vulnerabilities.

Xander
oldnoob