0

Ok. I believe no one has thought of this perspective, so here goes.

I really don't understand why software needs to be constantly patched for security if programmers do a good and complete job in the first place. Computers are programmed in a straightforward way and won't do anything their creators didn't program them to do. Traditional computers aren't intelligent enough to make decisions without inputs. Even if loopholes exist in software, there should be an upper limit on them, given that a piece of software has a limited number of lines of code. Is the number of possible ways to exploit a piece of software infinite? If not, is there a mathematical formula to calculate the approximate number of possible exploits relative to the amount of code?

My case in point is Windows XP and its predecessors. I believe the former has been on the market for more than 20 years and has had its code thoroughly analysed by its manufacturer and external security vendors. Yet we are told that it could still have security bugs and that we should therefore stop using it because the software creator is no longer willing to support it.

Or does patching itself create more security vulnerabilities?

Nederealm
  • 113
  • 2
  • Look at smart locks for doors: even when they pay for security audits on the auth code, someone with a super magnet, a hammer, or an oscilloscope finds a way in that the developer never imagined... – dandavis Jul 27 '17 at 04:14

5 Answers

5

This question has two parts: why we have so many bugs in the first place, and why not all bugs get detected and fixed given enough time.

Why we have software bugs

Writing software takes resources, i.e. time, developer knowledge and money. There is always a shortage of time (to get to market) and of developers, both in number and in experience. And adding more developers does not simply mean that the product gets done faster, because more developers means more communication overhead, which adds to the complexity. There is also a shortage of money, because the product needs to have a positive return on investment.
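
To give a rough sense of why more developers means more communication overhead (this is the classic Brooks's-law style estimate, not a figure the answer itself gives): the number of pairwise communication channels grows quadratically with team size.

$$
\text{channels}(n) = \binom{n}{2} = \frac{n(n-1)}{2},
\qquad
\text{channels}(5) = 10,\quad
\text{channels}(10) = 45,\quad
\text{channels}(20) = 190.
$$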

Thus, to write more secure software in the first place, you should aim for minimal complexity, which you can then manage with as few developers as possible. But unfortunately the complex needs of customers often work against this goal. On top of that, requirements change over time because the environment in which the software is used changes.

This way, even software with a good initial design developed by experienced developers gets more complex over time. And most software was not even developed from a good initial design by experienced developers: much of it started as a prototype which worked well enough initially and then just got extended and extended over time, often by different developers with a limited understanding of the initial design.

And this is just one reason you have bugs. Another is that software gets used in an environment it was never designed for, as when software designed for a closed and protected environment gets connected to the open internet.

Why not all bugs get found and fixed

Security researchers and developers inside and outside of software companies face the same limits: there is only a limited amount of time and a limited number of researchers, while there is lots of software with potential problems. Thus security researchers focus first on the software which promises the largest return on investment, i.e. where the most high-impact bugs can be found in the shortest time using the specific knowledge and experience of the researcher.

This of course leaves lots of bugs undiscovered, because some require specific experience, some do not seem worth looking for, or there was simply no time to look into that specific area of the software. And even if a bug is found, it might not be critical enough to justify the resources to fix it (some are buried so deep in the design that fixing them is too costly), or the software vendor might not exist any more, or the software has been declared end of life so nobody should be using it anyway (even though many do, for various reasons).

Can't we just ignore the bugs not found yet?

Finding and exploiting bugs is similar to extracting natural resources: there are still lots of undiscovered resources out there, and there are known deposits where extraction is too expensive. But new knowledge, new techniques, or simply an increased market value of a specific resource can make it attractive to exploit those deposits or to search for hidden ones. In the case of bugs, this might be a new technique or tool which makes it easy to find a whole new class of bugs. Or a change in use cases makes previously impossible attack vectors viable, such as connecting some software to the internet. Or there might be a valuable target using a specific piece of software, which makes it attractive to look for vulnerabilities in that software and exploit them.

Steffen Ullrich
  • 184,332
  • 29
  • 363
  • 424
2

I would just refer to this well-known quote attributed to Einstein:

Two things are infinite: the universe and human stupidity; and I'm not sure about the universe!

As programmers are mere human beings, they sometimes make mistakes, as the quote above suggests. And as programs are now more and more complex, they contain a huge number of lines of code, so there are bugs in any large code base.

Of course, many of them are found during initial tests (before the code is ever released), and others are found and patched after the code is released. The bad news is that even patches can contain bugs or break some other part of the code, and worse, new features are constantly added, each bringing its own large amount of code and new bugs.

I have spoken of bugs here because vulnerabilities are not much different, and IMHO are just a special case of bug: something that should have been coded better.

TL/DR: small and stable programs can, with some work, be made free of any bug or vulnerability, but huge and ever-evolving software like an OS cannot.

Serge Ballesta
  • 25,636
  • 4
  • 42
  • 84
1

I'm unsure whether you can actually calculate an exact number of possible vulnerabilities or exploits based on the code and its number of lines. But the risk of exploitable and non-exploitable vulnerabilities does increase with the age of the software, and especially once it falls out of the mainstream support cycle. If you think about what a vulnerability actually is and how complex an entire operating system or any software can be, it's not just the source code itself that is vulnerable; a combination of configuration and code can add to those numbers. There can also be vulnerabilities that are not directly in the source code but in the way the software or operating system executes or handles data during processing.

With that said, you can potentially detect a number of recognizable vulnerabilities through static source code analysis, using tools or by manual means. To answer the second part of your question: yes, it is possible for additional patches to open new vulnerabilities. You are modifying and adding code after all, so a failure or a mistake could create new problems. Although not an exact example, a case in point is denial-of-service attacks: an up-to-date Windows at the time could still have been susceptible to the very old LAND DoS attack against the network stack.
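
As a rough illustration of the kind of defect static source analysis is built to recognize (a generic textbook pattern, not a real finding from any particular product), consider copying untrusted input into a fixed-size buffer:

```c
#include <stdio.h>
#include <string.h>

/* Classic pattern most static analyzers will flag:
 * attacker-controlled data copied into a fixed-size
 * stack buffer with no length check. */
void greet(const char *name) {
    char buf[16];
    strcpy(buf, name);      /* flagged: potential buffer overflow */
    printf("Hello, %s\n", buf);
}

int main(int argc, char **argv) {
    if (argc > 1)
        greet(argv[1]);     /* argv[1] may be far longer than 15 characters */
    return 0;
}
```

A tool can spot this pattern mechanically, but it cannot see how configuration, data flow and runtime behaviour of a whole operating system combine, which is why such analysis only ever finds a subset of the real problems.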

With that also in mind, think about the 0-days that get released even with all the updates, with security researchers disclosing vulnerabilities for patching, and with secure coding practices. EternalBlue was one such example.

1

They are not technically limitless, but in practice they might as well be. Well-written and well-tested code might get the number of bugs down to as few as one bug per hundred lines of code. Windows XP has around 45 million lines of code, so that would be 450,000 bugs. Those won't all be security vulnerabilities, but even if only 1% of them were, we would never run out.
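
Spelling out that back-of-the-envelope arithmetic (using only the figures the answer itself assumes):

$$
\frac{45{,}000{,}000\ \text{lines}}{100\ \text{lines per bug}} = 450{,}000\ \text{bugs},
\qquad
450{,}000 \times 1\% = 4{,}500\ \text{potential security vulnerabilities}.
$$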

Mike Scott
  • 10,118
  • 1
  • 27
  • 35
  • So let's assume that there is a way to fix all 1% by security researchers and programmers. Does that mean that an Internet connected machine running Windows XP will be secure at a code level, notwithstanding interaction with external software? – Nederealm Jul 26 '17 at 06:01
  • @Nederealm Yes, by definition, if we fix all the security-related bugs then the software is secure. But there's no way to fix all the security-related bugs. – Mike Scott Jul 26 '17 at 06:03
1

You can get rid of all of the bugs by removing all the complexity of the software as long as the hardware behaves. An example would be some machine code that turns on a light when a button is pressed.
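
A minimal sketch of what such a trivially simple program could look like, written here in C rather than raw machine code, with invented memory-mapped register addresses (a real board's datasheet would define the actual ones):

```c
#include <stdint.h>

/* Hypothetical memory-mapped I/O registers; the addresses and the
 * bit layout are made up purely for illustration. */
#define BUTTON_REG (*(volatile uint32_t *)0x40000000u)
#define LED_REG    (*(volatile uint32_t *)0x40000004u)

int main(void) {
    for (;;) {
        /* If the button bit is set, turn the light on; otherwise off.
         * No parsing, no network, no state: almost nothing to exploit. */
        LED_REG = (BUTTON_REG & 1u) ? 1u : 0u;
    }
}
```

The entire attack surface is one input bit, which is why a program this small can plausibly be free of bugs.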

The problems start when you try to solve more challenging problems, for example adding a user who wants lots of functionality. The amount of code increases, and it soon makes sense to use high-level programming languages, which may have some strange or undefined behavior.
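
For instance, C (one step up from machine code) already has undefined behavior that can surprise careful programmers; the snippet below is a standard textbook case, not something specific to any particular system:

```c
#include <limits.h>
#include <stdio.h>

int will_wrap(int x) {
    /* Undefined behavior: signed integer overflow. An optimizing
     * compiler may assume x + 1 > x always holds and fold this
     * check to 0, even for x == INT_MAX. */
    return x + 1 < x;
}

int main(void) {
    printf("%d\n", will_wrap(INT_MAX));  /* may print 0 or 1 depending on compiler and flags */
    return 0;
}
```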

The more buttons you add, the more complex the code becomes and the more bugs you can expect. Spending more money to avoid or find these bugs may still never find them all, even for very expensive, professional organizations.

daniel
  • 774
  • 3
  • 12