
Programmers do not receive security training. Is it due to costs? How can we overcome the externality problem?

Mike Smyth
  • While it does not look like a duplicate from the title, [SQL injection is 17 years old. Why is it still around?](http://security.stackexchange.com/questions/128412/sql-injection-is-17-years-old-why-is-it-still-around) still covers this problem. – Steffen Ullrich Feb 16 '17 at 16:58
  • Because writing secure code is HARD. – Stone True Feb 16 '17 at 17:02
  • The superset problem of "Why is doing it right so much harder than just doing it?" is arguably an eternal, universal problem. – gowenfawr Feb 16 '17 at 17:05
  • @StoneTrue Not necessarily. – Arminius Feb 16 '17 at 17:05
  • How is "why doesn't management prioritize security training and work?" primarily opinion-based? There's no opinion there - the reality is that those decisions do get made, and get made for certain (valid or invalid) reasons. – Xiong Chiamiov Feb 16 '17 at 19:42

3 Answers


It's because application security is an economic externality (https://en.wikipedia.org/wiki/Externality): the party who suffers from the lack of application security is not the one who makes decisions about it. It's the same economic problem as pollution.

Let's make up an example. I write server software for SOHO routers. Multiple router manufacturers use my software, and end users buy the routers from those manufacturers. Spending the time and money to implement a secure SDLC means that my prices will go up. Since the router manufacturers are not the ones impacted by security flaws (the end users are, in this case), they have no incentive to pay a higher price. As such, I have no incentive to spend extra money to secure my product.
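Here's a back-of-envelope sketch of that incentive misalignment; every number below is invented purely for illustration:

```python
# Toy model of the externality: the developer pays for security,
# the manufacturer pays for the software, and the end user absorbs
# the breach cost. All figures are made up.

secure_sdlc_cost = 50_000        # extra cost to the developer for a secure SDLC
price_premium_accepted = 0       # manufacturers won't pay more for security they can't see
breach_probability = 0.30        # chance an insecure product is exploited
end_user_breach_cost = 500_000   # damage borne by end users, not by the developer

# The developer's view: spending on security is pure loss.
developer_gain = price_premium_accepted - secure_sdlc_cost
print(f"Developer's payoff for securing the product: {developer_gain}")
# -> -50000: a rational (if short-sighted) developer skips it.

# Society's view: the expected harm avoided dwarfs the cost.
expected_harm_avoided = breach_probability * end_user_breach_cost
print(f"Expected harm avoided by securing it: {expected_harm_avoided}")
# -> 150000.0: securing the product is clearly worth it overall, but the
# party who would have to pay for it never sees that benefit.
```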

Edit: To answer your question about externalities: regulation. I'm not sure whether it needs to come from the industry or from government, but regulation is the only answer.

Dan Landberg

Security has one attribute that separates it from other features and makes it often overlooked: most of the time, you can't tell if it's implemented.

To put it another way, a secure system and an insecure system appear identical to a user until there's a breach.
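As a minimal, hypothetical sketch of that invisibility: the two login checks below are indistinguishable from the outside, but only one survives a database leak.

```python
import hashlib
import hmac
import os

# Version A: the password is stored in plaintext. Login "works".
def check_login_insecure(stored_password: str, attempt: str) -> bool:
    return stored_password == attempt

# Version B: a salted, slow hash is stored. Login also "works".
def check_login_secure(salt: bytes, stored_hash: bytes, attempt: str) -> bool:
    attempt_hash = hashlib.pbkdf2_hmac("sha256", attempt.encode(), salt, 100_000)
    return hmac.compare_digest(stored_hash, attempt_hash)

# Identical observable behavior for the user:
salt = os.urandom(16)
pw_hash = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, 100_000)
print(check_login_insecure("hunter2", "hunter2"))    # True
print(check_login_secure(salt, pw_hash, "hunter2"))  # True
# The difference only becomes visible when the password database leaks.
```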

This leads to project managers inevitably prioritizing other projects over working on security. If there's a lot of work to be done, how do you justify spending a bunch of time working on something that has no effect (until there's an attack)? Besides, it'll probably be ok - the chances of someone trying to attack us are pretty low, right?

This is also the root of security theater; you implement visible procedures to make people feel like they're more secure, even if actual security comes from things they can't see.

How do you solve this? With sufficient discipline, you can force your team to work on security early. Or, more likely to actually succeed, you create a team (even a team of one) that is responsible for working on security, no matter what else management wants everyone's priorities to be.

Xiong Chiamiov

Part of the problem is incompetence. In a lot of cases it's really easy to get an app working, which means a lot of people who shouldn't be allowed anywhere near an internet-facing server can build full-blown apps that work while being insecure. The insecurity is invisible, and those "developers" don't know anything about it, so nobody is aware of the disaster until it's too late.
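For a concrete, hypothetical example of "working while being insecure", take the SQL injection case mentioned in the comments: both lookups below return the right rows in every normal test, and only one survives hostile input.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

def find_user_insecure(name: str):
    # "Works" in every normal test, but user input is pasted into the SQL.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_secure(name: str):
    # Parameterized query: the input can never change the query's structure.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_insecure("alice"))        # [('alice', 0)] -- looks fine
print(find_user_secure("alice"))          # [('alice', 0)] -- identical output
print(find_user_insecure("' OR '1'='1"))  # dumps every row in the table
print(find_user_secure("' OR '1'='1"))    # [] -- the injection fails
```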

André Borie