
I am curious about this. I saw this thread:

Why shouldn't we roll our own?

and this answer:

https://security.stackexchange.com/a/18198/144241

which included this comment, which received the second-highest number of votes and can't just be ignored:

"The biggest problem I notice among beginning coders, is that they don't take security risks seriously. If I mention what kind of weaknesses some system has, the common response is rolling their eyes... The attacks are real and do happen, and if you can't even apply proper security at school then how's your application supposed to fare when it's live for hundreds of customers? :/ The only way to convince them is usually providing examples, and best of all a live demo. So if you have examples... I think it'd be a nice addition!"

This makes me wonder. The context of that discussion was "rolling one's own" cryptographic software or algorithms, and why that's generally a bad idea: without the proper expertise, one may introduce numerous security vulnerabilities (and there seemed to be a strong sense that this applies not just to creating your own ciphers per se, but also to implementing existing, vetted ciphers in your own code), which may make the product considerably less secure than it should be, or advertises itself to be. The concern is, of course, eminently reasonable. Moreover, the amount of expertise required is apparently quite large: according to one story I heard a very long time ago, someone (I think it was Bruce Schneier?) said, approximately, that they would not "trust any crypto written by anyone who had not first 'earned their bones' by spending a lot of time breaking codes".

The problem, however, is that while I understand this with regard to cryptographic software, the comment above suggests the implications are vastly more general: security issues apply to all software, so the developer of any software needs to take security into account, even if the software is not explicitly cryptographic and even when no explicitly cryptographic parts are being written (consider, for example, how often a buffer exploit turns up in general-purpose software like web browsers, and some cracker hits it to deal damage, especially by stealing information and/or money). Intuitively, it seems that any software developer who releases code to the wider public has a basic ethical obligation to ensure that it meets some "reasonable" standard of security, even if it's not specifically 'security' code. If one releases code one knows may be insecure, then one is acting in a manner that, again very reasonably, could be construed as unethical negligence.

And that's where I am concerned. Ultimately, since you cannot get all software "pre-made" (then what would be the point of developing any software?), you will have to roll at least SOME security-relevant code on your own. You have to roll your own, like it or not, and that takes expertise. So the question is: if we follow the dictum not to "roll one's own" insofar as it applies to specifically security-related code such as encryption, BUT we have to roll our own code as part of ALL application development, then how much security expertise does a general application programmer need in order to meet that intuitive ethical bound? The amount seems to be "more than zero" but (hopefully!) "less than that of a computer security expert" of the calibre who develops the actual encryption algorithms (like AES, etc.) that become world standards. But what is that amount, which end of that spectrum is it closer to, and what exactly is needed to learn it?

  • Comments are not for extended discussion; this conversation has been [moved to chat](https://chat.stackexchange.com/rooms/86600/discussion-on-question-by-the-sympathizer-how-much-security-expertise-does-a-gen). – schroeder Dec 04 '18 at 16:56

4 Answers


That's an interesting question. First of all, we as software developers create code (that is, applications) for a specific business purpose. So a good programmer should be able to dive into business domains and transform functional requirements into working code.

But software is complex and there are dozens of technical / non-functional requirements that a developer should be aware of, for example performance, stability, user interface, security and so on.

I think it's not possible to be an "expert" in every technical domain. But - and I think that's the point - if you or your company wants to build "professional" software, there should be at least one expert for every technical domain. Talking about security, there should be at least one security engineer who can do code reviews, internal penetration tests, and so on. When such things are done in cooperation with the actual developer who wrote the code, there is some kind of knowledge transfer between the expert and the "novice". That way, every developer should gain basic knowledge over time.

The challenges for the company are a) to sensitize every developer to these topics so that they don't "roll their eyes" when such things come up, and b) to establish a software development process/lifecycle that includes these topics.

Alex
  • A developer needs to know where his technical bounds are. He also needs to understand that bad security code and bugs in such code are not the same as typical coding bugs: security errors put users at risk. – zaph Dec 03 '18 at 21:15
  • Agreed, and seconded @zaph. This is _exactly_ what I was after with this now-closed question. The question is, then: while this is not necessarily so hard for a _company_, what should an _individual_ developer do, given the simultaneous facts of having bounds _and_ that security is sort of unique among concerns in that it is at once _absolutely universal_ and moreover a _potentially serious liability_? If every individual developer has to essentially hire, at a small personal fortune, a security expert, then it seems that all individual developers are acting unethically. Is this right? – The_Sympathizer Jan 31 '19 at 09:17
  • The question is, how important is the security aspect in your individual software project? Are you building a blogging app for internal use in your company, or are you building an internet-facing webshop or a public banking app? The effort you have to put into security depends on the potential threats. For big customer projects you might want to get a security consultant; for smaller internal projects you normally don't. And as I said, one developer can't be an expert in everything. Software development is such a big field that the individual dev has to specialize in some topics. – Alex Jan 31 '19 at 09:59
  • @Alex : Sure, but security is perhaps one concern that poses a more "across-the-board" risk. And what I'm referring to is something that's neither: it would indeed be used in the wider world, so definitely not "internal", and built for a customer, but it wouldn't be something like banking or purchasing either, with their more specific focus on handling sensitive data. I'm just talking about general software development overall, for software that other people will use, not just oneself, and especially when one lacks the money to hire another person. – The_Sympathizer Jul 20 '19 at 21:24
  • Basically I'm asking about "freelance" coding and whether that is inherently unethical to some extent, because you are "rolling your own security software" to the extent that any part of your program can in theory have a security vulnerability, even if you take care of the "obvious", _directly_ security-related features like cryptography (if needed) by using existing and trusted libraries. It's the _indirect_ stuff I'm thinking about. – The_Sympathizer Jul 20 '19 at 21:26

In my opinion, in theory developers should at least know the basics of information security, so that they can at least avoid the most obvious mistakes and have a knowledge base they can build upon. This in practice means that, for example, a web dev should know what XSS is, and that it can be avoided with output sanitization. They don't need to be experts on XSS and know every trick in the "XSS cheatsheet" by heart. They just need to understand that it is a problem, why it is a problem, and what the solution involves.
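To make "output sanitization" concrete, here is a minimal sketch, assuming Python and its standard `html` module purely for illustration (the answer itself doesn't prescribe a language):

```python
import html

def render_comment(user_input: str) -> str:
    # Escape HTML metacharacters so user input is shown as text rather
    # than interpreted as markup: the basic defense against reflected
    # and stored XSS when inserting input into an HTML page.
    return "<p>" + html.escape(user_input) + "</p>"

print(render_comment('<script>alert(1)</script>'))
# -> <p>&lt;script&gt;alert(1)&lt;/script&gt;</p>
```

The point is not this particular function but the habit: knowing that untrusted input must be encoded for the context it lands in.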

As a result of knowing what XSS is:

  • Devs can at least look up stuff about XSS, read, ask, learn more;
  • They will make mistakes, but they are less likely to build something so outrageously insecure as to be unfixable without a complete rewrite; security bugs will be fixed more easily;
  • They can understand what a security researcher is talking about when an issue is reported; they can decide to learn more, or ask for help;

Without some basic knowledge of information security, the above points would not be possible. Of course, beginners don't know anything at first, and will learn the basics over time, often from other developers or online (participating in communities, joining open-source projects, etc.).

But I'm not sure what "ethics" has to do with this. You might think that building software that is outrageously insecure should be considered unethical, but can you say that ignorance is ethically wrong? I'm not sure. Building very insecure software is definitely unprofessional, though. On the other hand, if you think about it, a lot of software comes with a license saying it's "provided as is", with no warranty whatsoever. Lol!

reed

Let's start with the problem of ethicality and point out that it will always be subject to opinion and can't be absolute by definition (according to most philosophers). However, I think it's a very interesting question and I'll try to shine some light on it.

Assumption - Knowledge and ethics

What I think is an interesting viewpoint is that having expertise doesn't make you an 'ethical' programmer. Take, for example, the Volkswagen emissions scandal. Apparently the programmers were quite skilled at making secure software. So skilled, in fact, that they managed to trick the testing software and ensure that their cars got through certain tests. Most people would find this unethical.

Doing harm on purpose

I know of programmers who have to push code even when they know it contains security bugs. They report it to managers but get the Dilbert-style response: "the customer doesn't pay for that". Here, in my opinion, the manager is at fault: they know they're pushing insecure code, but they just don't care, and the programmer is doing his job. However, it's an interesting debate. What about the Volkswagen emissions scandal? Can you blame the programmers?

The threat model

So what if some kid writes a website for your flower shop and it has a Cross-Site Scripting vulnerability? It's really important to note that the type of application matters. If you're writing control software for a power plant you need a higher level of security expertise than some kid writing a PHP form for his local whatever shop.

The field

A web developer who creates applications with certain requirements, such as having a secure backend, needs to know the basic, well-known attack vectors, such as SQL injection. However, it can also be viewed as a fault of the 'educational system' - which in this case is the information scattered across the internet - when it fails to mention this property as a requirement. It's nice to see that many languages and frameworks now have a 'security' section in their manuals.
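As a hedged illustration of the SQL injection point, here is a sketch using Python's built-in `sqlite3` module (the table and names are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

user_supplied = "alice' OR '1'='1"  # a classic injection attempt

# Vulnerable pattern: string concatenation lets the input rewrite the query.
#   "SELECT email FROM users WHERE name = '" + user_supplied + "'"

# Safe pattern: a parameterized query treats the input strictly as data.
rows = conn.execute(
    "SELECT email FROM users WHERE name = ?", (user_supplied,)
).fetchall()
print(rows)  # [] -- the injection attempt matches no user
```

This is exactly the kind of basic, well-known attack vector a web developer can be expected to recognize.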

Teams and honesty

Like you mentioned, nobody knows all these things. That's why most projects require teams of people to develop software. There is an architect, a tester, a bunch of developers, etc. The designers of the application need to make sure that the security requirements are made clear. If you can use a basic TLS library, that would be great. If more specific knowledge is needed, the team needs to be able to determine whether that knowledge is - or is not - available in the team. Here it would be - in my opinion - ethical of the developer to admit he doesn't know how to securely implement certain things. The project manager can then commit to having some external party or a security expert check it out.
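On the "basic TLS library" point, a small sketch of what reaching for a vetted library looks like, assuming Python's standard `ssl` module (example.com is just a placeholder host):

```python
import socket
import ssl

# create_default_context() ships with vetted defaults: certificate
# verification and hostname checking are enabled, so the team is not
# "rolling its own" trust decisions.
ctx = ssl.create_default_context()

with socket.create_connection(("example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version())  # e.g. 'TLSv1.3'
```

The design choice is the one this answer describes: let the library carry the specialist knowledge, and escalate to an expert when the requirements go beyond it.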

The point

The developer has to admit he doesn't know how to implement something according to some security standard. Here is where I think ethicality vs. knowledge comes in. He needs to know that he doesn't know. He or she simply needs to know something like "rolling your own crypto is a pain in the ass" and "if I were to create such a thing, we would need some security expert to come and double-check it". But then you end up with more questions: how can you know what you don't know? Who says the security expert knows enough for us to have an 'ethical' level of security?

This is not a question with an absolute answer.

Beurtschipper
  • So what about an independent developer making small applications who is _not_ part of a team? E.g. for mobile, like all those "kids" who make various apps. Are they all acting wrongfully? When I say ethical, I also mean specifically "ethical insofar as being able to avoid negligence", not that knowledge somehow prevents you from doing something harmful on purpose. That is, I mean cases where you don't _intend_ to do harm, but by acting in your state of ignorance you may be putting other people at unjustifiable _risks_. I suspect many who are fairly adept may be _aware_ of security as (cont'd) – The_Sympathizer Dec 04 '18 at 03:47
  • (cont'd) an important issue, but nonetheless not know enough about it to actually handle it properly when it comes to even these non-cryptographic elements. – The_Sympathizer Dec 04 '18 at 03:50
  • Or to put it yet another way: knowledge is not _sufficient_ for ethicality, but it may be _necessary_ for it, and the question is thus how to meet that minimum _necessity_ threshold. – The_Sympathizer Dec 04 '18 at 03:51

Zero - they need 0 security knowledge to develop ethically.

Security is not a developer's job - building a widget for the business within a time frame and under budget is. If they have knowledge of security, that is a bonus for any employer, but the expected knowledge in that area is zero; that's not their job. Databases, frameworks, and design patterns are their bread and butter - you cannot know everything.

They do need domain knowledge of what they are building, so if they are building an authentication control, they need to understand authentication or have access to a resource that does. This is much like a developer building on a mobile device or using framework X - they need to understand it to use it correctly.
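"Understanding authentication" at even a basic level means, for example, reaching for a vetted key-derivation function instead of inventing one. A sketch using only Python's standard library (the iteration count is illustrative, not a tuned recommendation):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    # A random per-user salt plus a slow, standardized KDF (PBKDF2)
    # instead of a home-made scheme or a bare fast hash.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```

Knowing that such primitives exist, and when to defer to a specialist beyond them, is the kind of domain knowledge in question.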

Our job is to help them with what they produce - if we can educate them and increase their knowledge from our domain, excellent. No, not every team will have a security resource, but they won't have a network resource or a database resource either.

Ethically they do not need any security knowledge.

McMatty
  • In many small development environments there is no security person, but there are often security aspects. Security does fall on the developer, many developers have no real knowledge of security, and you say that is ethically OK? No. – zaph Dec 04 '18 at 03:09
  • Care to address the Marriott data breach, the lack of security there, and "they need 0 security knowledge"? How did that work out, ethically, for those who had their information exposed? See [Marriott Data Breach](https://thehackernews.com/2018/11/marriott-starwood-data-breach.html). Note: Marriott could face a maximum fine of 17 million pounds. – zaph Dec 04 '18 at 13:50
  • Not only are they out of compliance with GDPR, but they had credit card CVV information in the customer DB, and that is a PCI compliance violation. Interestingly, that apparently was not caught by the PCI auditor; I hope they have really good E&O insurance. But all ethical, right? – zaph Dec 04 '18 at 13:56
  • Does a builder ethically need to know about home security? There are multiple burglaries that occur, after all. The answer is no: home locks and doors are consumer grade unless more is required by request, at which point an expert is used. – McMatty Dec 04 '18 at 18:59
  • Updated: The home builder does need to know about best security practices, e.g. that just a snap lock is not enough and a deadbolt is needed. But understand that a home builder is a licensed professional, the design is done by a licensed professional engineer, and the construction is inspected all along the way - this is not even close to software developers. The real question is: is it ethical for a developer to put the user at risk by not knowing about security, at least at the level of what is required and of when to consult a security professional? – zaph Feb 01 '19 at 00:34