You are right in your assessment of "Java 0-day" for server code. These attacks are about hostile code breaking out of the applet sandbox, which is the security model used by applets: an applet is code which might be malicious, and thus will run under heavy restrictions (e.g. no access to local files, no loading of native code, no network connections except back to the server which sent the applet in the first place, no full introspection on other packages...). On a server, the code is, by definition, non-hostile, and does not run in a sandbox.
Java on a server still runs in the Java Virtual Machine, which is not the same as the sandbox. Roughly speaking, the JVM enforces type rules (no field access or method calls on objects which do not have the designated field or method) and array bounds (no write in adjacent bytes upon buffer overflows), and runs the garbage collector; the sandbox adds a lot of extra checks about which standard methods may be called or not, with an elaborate permissions system. Server-side code is not sandboxed, but the JVM will still contain the damage in case of exploitable holes:
In case of a buffer overflow, the offending thread is terminated (well, it receives an exception which it can trap, but in practice termination is the normal end result) and no object in memory has been actually damaged. Buffer overflows are still bugs, but, at least, consequences are limited.
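A minimal sketch of that behavior (the class name and values are just for illustration): an out-of-bounds array write in Java raises an `ArrayIndexOutOfBoundsException` instead of silently overwriting adjacent memory, and the program can keep running with its data intact.

```java
public class OverflowDemo {
    public static void main(String[] args) {
        byte[] buffer = new byte[4];
        try {
            buffer[10] = 0x41;  // past the end: the JVM checks the index
        } catch (ArrayIndexOutOfBoundsException e) {
            // the offending code gets an exception; nothing was corrupted
            System.out.println("caught out-of-bounds write");
        }
        System.out.println("buffer intact, length = " + buffer.length);
    }
}
```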
The garbage collector, by definition, will prevent any type of access-after-free.
The strict type system will prevent accessing any data byte with an interpretation distinct from what was used to store that byte in the first place.
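For instance (a contrived sketch): a downcast in Java is checked at runtime, so you cannot take the bytes of a `String` and reinterpret them as an `Integer`; the JVM throws a `ClassCastException` instead.

```java
public class TypeDemo {
    public static void main(String[] args) {
        Object o = "some string";    // stored as a String
        try {
            Integer n = (Integer) o; // checked cast: rejected at runtime
            System.out.println(n);
        } catch (ClassCastException e) {
            System.out.println("cast rejected, no type confusion possible");
        }
    }
}
```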
Uninitialized local variables cannot be read (the flow analysis when code is loaded prevents that). Fields are systematically forced to sane default values (`0`, `false`, `0.0`, `null`).
All of this, though, applies equally to .NET, PHP, Perl, and just about everything except very low-level programming languages.
Trying to establish levels of inherent vulnerabilities is a risky business. The characteristics of Java that I explained above can be turned into an argument about how Java is "inherently secure". However, the same would apply to most other programming frameworks, including PHP. Ultimately, most vulnerabilities are not errors in the framework, but programming errors made by the developer who used the framework. Counting published vulnerabilities will tell you only about security holes in the framework libraries, not about holes in code developed for the framework, and that's where most holes will be.
So the vulnerabilities relate to how easy the framework is to use, and how well (or badly) the framework's features help the programmer avoid bugs. This is relative to the developer. A skilled PHP developer who knows nothing about Java will not be good at building a secure Java-based server: he will have to learn some Java and use it right away, it will be a painful process, and he will not have much time left for tests. This does not imply at all that PHP is inherently more secure than Java; the situation would be reversed with a developer who knows Java but not PHP.
At best, we can point out that "low-level" programming languages like C, Forth, Assembly, C++... are objectively harder to use than "high-level" ones (like C#, Java, PHP, Node.js...). The point where this is most apparent is handling of character strings. Typical servers do that a lot. In C, character string handling is very manual, with all the allocations and copying, and taking care of buffer lengths. Languages with automatic memory management (e.g. all the ones with a GC) make such jobs much easier, i.e. much harder to get wrong.
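To make the string-handling point concrete, here is a small sketch in Java (names and values are illustrative). The equivalent C code would have to `malloc` a buffer, `strcpy`, `strcat`, and track lengths by hand, with each step a potential overflow; in Java the runtime sizes everything.

```java
public class StringDemo {
    public static void main(String[] args) {
        String user = "alice";
        // concatenation: the runtime allocates a result of the right size
        String greeting = "Hello, " + user + "!";
        // StringBuilder grows its internal buffer automatically
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 3; i++) {
            sb.append(greeting).append('\n');
        }
        // no buffer length to miscount, no off-by-one to exploit
        System.out.println(sb.length()); // prints 42
    }
}
```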
That's about the end of the exercise. For platforms based on high-level languages (be it LAMP, Spring, .NET...), the "most secure" will be the one that the developer knows best. This "knowledge factor" trumps all others.
(You might turn that into: "the most secure Web framework will be the one for which it is easiest to find a competent developer".)