5

After yet another exploit of, and malware injection into, his website, host Leo Laporte of the TWiT network was recently told by his security expert (and has since asserted himself) that falling victim to such attacks is simply the cost of doing business on the Internet, and that the best one can do about it is to react quickly, remove the malware and, if possible, fix whatever exploit was used. The biggest share of the blame was laid at the feet of PHP, which is "inherently insecure" with its execute-files-in-any-folder architecture.

This assertion simply blew my mind. Is this the best advice a security expert can offer? While it's certainly true that PHP isn't exactly the tightest language out there, and that there are almost inevitably exploits somewhere, especially if you use a lot of drop-in, pre-fab packages, the biggest blame should still be laid on incompetent programmers and server administrators. No?

Is it really an insurmountable task for a security-conscious developer who knows what he's doing to build an exploit-free, moderately complex website in a reasonable amount of time? While I'm sure the systems I have developed aren't 100% bug-free, I take great care with anything that may enable an attacker to modify my server in any way, and so far I have not had to deal with malware injection. Is the state of web development really such that the only solution to code injection is to constantly clean up after the bad guys? Is there any real-world data on this issue?
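
To illustrate the kind of care I mean, here is a minimal sketch (not code from any real system of mine; the page names are made up): rather than letting the web server execute whatever .php file a request happens to point at, route every request through a single front controller that only includes files from a fixed whitelist, so an uploaded or injected script in some writable folder never gets executed.

    <?php
    // index.php - the only script the web server should execute directly.
    // Requests are mapped against a fixed whitelist, so a stray .php file
    // dropped into an upload folder is never include()d.
    $pages = array(
        'home'    => __DIR__ . '/pages/home.php',
        'about'   => __DIR__ . '/pages/about.php',
        'contact' => __DIR__ . '/pages/contact.php',
    );

    $page = isset($_GET['page']) ? $_GET['page'] : 'home';

    if (!array_key_exists($page, $pages)) {
        header('HTTP/1.1 404 Not Found');
        exit('Not found');
    }

    require $pages[$page];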

deceze
  • 715
  • 3
  • 12
  • This is a great question; I think it boils down to opinions, but I'm fairly sure PHP is not the problem here. It's like saying you'll be healthier by drinking Diet Coke. Sincerely, whoever said that shouldn't be in charge: he thinks failure is an OK and acceptable cost in security, and it's not at all! – Camilo Martin Mar 20 '12 at 03:18

2 Answers

3

I'd say that it's not so much that exploits are inevitable but that the security industry for a long time focused more or less exclusively on the idea of preventing exploits and didn't put enough effort into detecting or reacting to problems when they occur.

Unfortunately, what's transpiring is that preventative efforts, in a lot of cases, aren't good enough. That is down to a number of factors: a poor understanding of what "secure" means in a given setup, a lack of economic incentives to invest properly in good development security practices, the prevalence of snake oil in security product sales ("Buy this black box and you'll be 'secure'!"), and so on.

The result is that a lot of companies are now getting compromised, as attackers are moving faster than defenders in many cases, and the defenders have years of security debt built up due to under-investment.

So I'd say that it is sensible advice for companies to realise that it's likely they'll suffer a breach and plan accordingly.

If you want more information on the state of play, I'd recommend the Verizon Data Breach Investigations Reports, the Mandiant M-Trends reports, and the Veracode State of Software Security reports, amongst others.

Rory McCune
  • 60,923
  • 14
  • 136
  • 217
  • @Roy - your answer would be far more useful if you included links to the reports cited. – JonnyBoats Mar 18 '12 at 12:44
  • 2
    I'd surely agree that you should have a contingency plan for the case that you *are* breached; but ultimately breaches are enabled by bad practices/sloppiness/incompetence during development. I guess what I'm asking is: are exploitable errors during development, even with the best possible developers, so inevitable that it's true that cleanup is all you can do? Or would exploits be a lot less common if developers focused more on security? – deceze Mar 18 '12 at 13:02
  • @JonnyBoats Yeah, I know; I was in a bit of a hurry this morning. Feel free to edit and add them, or I will when I get a chance; they're all easily googleable anyway. – Rory McCune Mar 18 '12 at 15:25
  • 1
    @deceze Exploits would definitely be less common if more time and money were spent on security. One of the points I was making is that it's a commercial trade-off (effort on security = increased cost), so a lot of companies don't spend as much time as necessary to get really good security. Also, as with everything in security, it depends on who the attacker is: lower levels of cost/effort can deter casual attackers, but deterring really high-end attackers is very expensive and may not even be practicable, depending on the situation. – Rory McCune Mar 18 '12 at 15:27
3

No, it is not true. Vulnerabilities are not an inevitable fact of life on the web that you just have to sit there passively and accept. By following proper, secure web development practices, one can greatly reduce the likelihood and incidence of vulnerabilities. It is one of those "pay now, or pay later" situations.

Of course, one should also be prepared to react quickly, but one can reduce the need for "fire drills" by using proper methods in advance. It sounds like Laporte's expert implicitly accepts this, when he says part of the problem is use of PHP. If use of PHP were indeed the root cause of most vulnerabilities on the web, then one could simply avoid use of PHP (for example). I don't actually accept his premise that the use of PHP is the root cause of most vulnerabilities on the web, but the broader theme is correct: the practices you use during development influence the likelihood/frequency of security vulnerabilities later.
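
To make that concrete with one example (my own sketch, not anything Laporte's expert prescribed; the connection details are placeholders): parameterized queries bind user input as data rather than splicing it into the SQL string, which removes an entire class of injection bugs regardless of how "tight" the language itself is.

    <?php
    // Parameterized query with PDO: the email value is bound as data,
    // so it can never alter the structure of the SQL statement.
    $pdo = new PDO('mysql:host=localhost;dbname=app', 'dbuser', 'dbpass', array(
        PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
    ));

    $stmt = $pdo->prepare('SELECT id, name FROM users WHERE email = ?');
    $stmt->execute(array($_POST['email']));
    $user = $stmt->fetch(PDO::FETCH_ASSOC);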

For further information on good web development practices, take a look at OWASP.
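
As one small illustration of the sort of thing those resources cover (a sketch of my own, with made-up variable names, not an excerpt from OWASP): escaping user-supplied data on output is the standard defence against the cross-site scripting entries that sit near the top of the OWASP lists.

    <?php
    // Escape untrusted data before echoing it into HTML so that any
    // markup or script characters it contains are rendered inert.
    function e($value)
    {
        return htmlspecialchars($value, ENT_QUOTES, 'UTF-8');
    }

    $comment = isset($_POST['comment']) ? $_POST['comment'] : ''; // untrusted input
    echo '<p>' . e($comment) . '</p>';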

D.W.
  • 98,420
  • 30
  • 267
  • 572
  • The OWASP top 10 is a great starting point for web application security. https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project – Mark S. Mar 18 '12 at 17:29