30

TL;DR:

Perhaps I've gone overboard with the detail in my question, but I wanted to be sure it was clear, since the topic seems very broad. So here it is. The word "smartest" is meant fundamentally, not literally.

Is a server infrastructure fundamentally possible which the smartest person can't breach?

Background:

I've read articles about the servers of massive banks (and cheating sites) being compromised. In one article, based on an interview with an internet security company interested in the case, a specialist claimed that there are highly skilled criminal organizations, especially in China and Russia, with vast resources, tools, and some of the "best" hackers in the world at their disposal, and that there is "no system on earth" (connected to the web, of course) which they couldn't compromise with the resources available to them.

(Web-Server) Information Security is Like Chess:

I'm not much of a Chess player, and I'm not much of an information security expert, but I am a programmer who writes server software, and I'm interested in this. Disregarding any features of Chess that might invalidate my scenario (such as a possible first-move advantage), imagine information security as a game of Chess between two of the best Chess players in the world.

Classic: If you and I were to play a game of Chess, whichever of us possesses greater skill, knowledge, and intelligence regarding the game of Chess will win.

Programmed Scenario 1: Or perhaps, if we play the game digitally, the one of us who writes the smartest chess-playing software will win.

Programmed Scenario 2: Or, and here's the key, perhaps it's possible for us to both be so good at both Chess and programming that we both write Chess-playing computer programs so good that neither of our programs can win, and the game ends in a stalemate.

Consider a server infrastructure, for example a banking server, or application server which must communicate with clients on the web, but which must not allow criminal parties to break into its data stores.

  • The security of this server infrastructure could be like Programmed Scenario 1: no matter what, whoever has the better software and knowledge of information security (the people who invent the security strategies, for example) will always have a chance to break through a server infrastructure's defense, no matter how secure it is. No perfect defense is fundamentally possible.

  • Or it could be like Programmed Scenario 2, where it's fundamentally possible to develop a server infrastructure which uses a security strategy that (fundamentally) cannot be bested by a smarter program. A perfect defense is fundamentally possible.

The Question

So which one is it?

AviD
J.Todd
  • 30
    You are looking for a well-defined equilibrium in a game with huge uncertainty. There's no static equilibrium here, just an arms race. An arms race with no rules - anything goes, like @schroeder said, chocolate bars and rubber hoses. Oh, and women (or men). The game is ages old, it's called war. – Deer Hunter Aug 25 '15 at 04:12
  • 2
    @DeerHunter That's acceptable. But without studying this subject very thoroughly, without an extensive knowledge of server security fundamentals, I couldn't personally determine whether or not there are fundamental principles at play which might make a stalemate possible via a perfected security strategy. – J.Todd Aug 25 '15 at 04:14
  • 3
    "Is a server infrastructure fundamentally possible which the smartest man can't breach?" Yes. Unfortunately, this means someone else, who isn't as smart, who isn't as well-versed in how things are supposed to work, is just going to breach it by being dumb and trying stuff that no educated person would consider doing, because That Is Not How It Works. – spoorlezer Aug 25 '15 at 08:11
  • 4
    @JonathanTodd Let me save you a great deal of effort then: The answer is "No." As long as there will be humans who should be able to access the data, there'll be a loophole to exploit for humans who *shouldn't*. – Shadur Aug 25 '15 at 08:23
  • 3
    Yes. There is a way to prevent a server from ever being breached. All you need to do is seal it inside a permanently-sealed box. Then, place the box into a safe and throw the safe down the Mariana Trench. And then maybe fill up said trench with concrete. – Kaz Wolfe Aug 25 '15 at 08:43
  • 55
    In chess - if you play 1000 games, and win 999 and lose 1, you are probably considered a champion. In security - if you play 1000 'games', and win 999 and lose 1, you're screwed. – mti2935 Aug 25 '15 at 09:56
  • 11
    @Shadur The implication of that being you CAN have a perfectly secure system as long as there are no users. :) – JamesRyan Aug 25 '15 at 11:30
  • 1
    "Oh, but you must travel through those woods again and again and you must be lucky to avoid the wolf every time, but the wolf only needs enough luck to find you once." – Captain Man Aug 25 '15 at 13:13
  • 2
    Security is the art of creating layers of control that minimizes loss if/when an aspect of the security is breached. In the case of computer security, creating layers and dividing systems to minimize loss is a tricky thing to do at best. But layering security is the best way to prevent a breach and minimize data loss if and when a breach occurs. – Giacomo1968 Aug 25 '15 at 18:56
  • 1
    It's not like chess because the situation is not symmetric. It's an asymmetric game where one party has a big disadvantage over the other and can only win if the other party makes extremely stupid mistakes. – Count Iblis Aug 25 '15 at 20:41
  • 3
    Provable correctness is fundamentally possible in computer science. Make the protocol or system to be implemented simple enough, and it may actually be practical. But there's the rub -- commercial requirements rarely ever lend themselves to the tradeoffs in development time, cost, and feature complexity necessary for a provable implementation. (Implement `eval`, and you have the halting problem; all bets are off!) – Charles Duffy Aug 25 '15 at 22:26
  • @JamesRyan and doing so may be a realistic possibility in the future. Someone who cares enough about securing a server, say, the US Government, will likely be able to use Artificial Intelligence (which unlike humans could be locked in a room with no access to "chocolate bars") to develop a proven microkernel, a proven mini language for that kernel, and a proven simple server with that language, with a single commander (only one human in the system) who alone would have the capability to place a fault in the system's defense. – J.Todd Aug 27 '15 at 12:38
  • Don't remember who said it or the exact quote, but something along this line: "People claiming that something is completely fool-proof tend to have underestimated the ingenuity of total fools." – Baard Kopperud Aug 28 '15 at 10:19

13 Answers

77

"No perfect defense is fundamentally possible."

In chess, you have 64 squares, 2 people playing, and one set of immutable, commonly known rules.

In server infrastructures, there are an untold number of assets and ways to approach those assets, an unknown number of people playing, and rules that change constantly with players purposely seeking to bend, break, or bypass the rules.

Consider 2 elements that will prove my point: zero days and chocolate bars.

Firstly, zero days change the rules while the game is being played. While one side gains the benefit of this element, the other side is unaware of the advantage and is possibly still unable to counter these attacks, even if they are eventually known. Each zero day is a new rule that is unevenly applied to the game. Even if a "perfected security strategy" can be devised and perfectly applied, zero days can mean that the strategy is built upon unknown weaknesses that might never be known to the defending side.

Secondly, chocolate bars can do more to break the security of an infrastructure than any other element. What I mean is that people can be bribed or enticed to "switch sides" and grant advantage to the opposing side, sometimes for something as small as a chocolate bar (studies show). Phishing, bribes, data leakage, etc. are all part of the human side of the game that technology cannot account for entirely. As long as there is a human with power in the infrastructure, there will always exist that weakness to the system.

What to do?

In history, we see multiple situations where a massive attempt at defence was defeated by something small and unforeseen (e.g. the Great Wall of China's gates opened to a concubine who was a double agent for the Mongols). The goal, as defenders, is not to mount the perfect defence, but rather to design a resilient and transparent infrastructure where attacks can be seen quickly and responded to completely. Not taller walls, but more alert militia. Not unshakable foundations but a replaceable architecture.

schroeder
  • I'd be interested to know what happens if you remove the chocolate bars. I would think it fundamentally possible to design an entire server architecture on one's own, lock the server in a box and swallow the key. Then we only have zero days at play. – J.Todd Aug 25 '15 at 04:18
  • 4
    Yes - I enhanced the zero day section. Remember that even with the lock/box/swallow plan, there are a number of *other* human elements to a strategy that could cause problems. Power, network connection, maintenance, patching, etc. all come into play at some point in the strategy. I have yet to hear of or devise a plan that can account for all the human elements at play. – schroeder Aug 25 '15 at 04:23
  • 8
    I worked with one young networking security grad who was very interested in the possible attacks on our systems. As I spoke of the various attacks that we had already experienced, he dismissed them all as "social-based" attacks, and thus, ultimately uninteresting. As we were standing in the midst of the data center with multiple racks, A/C units, power supplies, cables and blinking lights, I turned to him and exclaimed: "This is ALL social! Systems exist for people to use!" I know it sounded trite, but it hammered home for me why there ought not to be a delineation of the technical and social. – schroeder Aug 25 '15 at 04:31
  • All the same, if you don't mind that is, consider a scenario where I personally generate the power, do the maintenance, and apply the patches myself. That's a realistically testable circumstance after all, I could sit in an empty room, run a generator with my server locked in the box, and apply my own updates. The chocolate bars can be eliminated in a small enough (just me) operation if one cares to try hard enough. – J.Todd Aug 25 '15 at 04:35
  • 1
    Even so, removing the human element of the defence, the strategist is not working with complete knowledge or even complete control of his own systems. Infrastructure is built on an interconnected matrix of other people's work and dependent systems. A breakdown in any one of those elements (like a zero day) defeats even the most perfect defence. So, what about building everything from scratch? OS, firmware, protocols, encryption, etc., etc.? – schroeder Aug 25 '15 at 04:36
  • 1
    Oh you mean one's own computational abstraction layers. Such as the kernel and the OS. Yes, assume that the server-side party cares so much about security (one bank is spending a quarter of a billion US dollars this year) that they develop their entire kernel and OS from scratch. In fact, assume the owner of the company, the one interested in the security, programs the kernel, the OS, and the server from scratch, and designs it without a flaw with no help. We're being fundamental here, and such a thing is honestly possible, if one devoted years to it. – J.Todd Aug 25 '15 at 05:09
  • 2
    @JonathanTodd It's not possible within a human lifetime for an OS with any significant amount of functionality (by today's standards.) If thousands of experts combined with a number of man-years of testing that well exceeds any human lifespan can't design and implement a flawless OS, the probability that you or I can is quite small. – reirab Aug 25 '15 at 05:54
  • 2
    @reirab I don't mean to extend this comment thread further, but we're talking about an OS that *only* has to run server software. Basically 1% of the features of the kind of OS you're talking about. The kernel is by far the most work here, but top programmers are capable of developing a kernel alone given significant time. After all, we have kernels to use as reference; we just need to redesign one to eliminate flaws which can potentially be abused to compromise the system. Again, my question is about fundamentals, not feasibility... – J.Todd Aug 25 '15 at 06:08
  • 4
    And in chess your opponent has to wait for his turn till you make one move but here your opponent can make a million moves before you even get an alert that something is wrong with your server :) – Hanky Panky Aug 25 '15 at 06:29
  • 1
    @jonathantodd In fact, your original comment makes the 0days *worse* because now you have no way of getting to the computer to fix them after they're discovered. – Shadur Aug 25 '15 at 11:33
  • Carrying the chess analogy further, we can argue that one can subvert the fundamental premise of the stronger player winning by letting the weaker player bring a gun and shoot the grandmaster before the game begins. – Michael Aug 25 '15 at 18:12
  • NB: the Great Wall of China was not designed to keep invaders from getting in, but to delay robbers loaded with valuables enough so they could be caught before they got out. I'm not sure if there exists an analogy in information security. – gerrit Aug 26 '15 at 08:12
  • Turns out this answer was technically wrong, since my question asks about fundamental capability, not feasibility. – J.Todd Aug 26 '15 at 23:19
30

Security can be proven, but you have to understand what is proved

https://sel4.systems/FAQ/proof.pml

Our proof statement in high-level natural language is the following:

The binary code of the seL4 microkernel correctly implements the behaviour described in its abstract specification and nothing more. Furthermore, the specification and the seL4 binary satisfy the classic security properties called integrity and confidentiality.

Integrity means that data cannot be changed without permission, and confidentiality means that data cannot be read without permission.

Our proof even goes one step further and shows that data cannot be inferred without permission – up to a certain degree. So-called information side channels (also called covert channels) are known to exist. The proof only covers those information inference channels that are present in the formal model: the confidentiality proof covers all in-kernel storage channels, but excludes timing channels which must be dealt with empirically.
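To make those two properties concrete, here is a toy sketch of my own (purely illustrative; seL4's actual model is a formally verified capability system, not anything like this): a reference monitor that consults a permission table before every access.

```python
# Toy reference monitor: "integrity" means no write without permission,
# "confidentiality" means no read without permission.
# Illustrative only -- nothing here resembles seL4's real proof.
perms = {
    ("alice", "ledger"): {"read", "write"},
    ("bob", "ledger"): {"read"},
}
store = {"ledger": "balance: 100"}

def read(subject, obj):
    """Return an object's contents, enforcing confidentiality."""
    if "read" not in perms.get((subject, obj), set()):
        raise PermissionError("confidentiality violation")
    return store[obj]

def write(subject, obj, value):
    """Update an object's contents, enforcing integrity."""
    if "write" not in perms.get((subject, obj), set()):
        raise PermissionError("integrity violation")
    store[obj] = value
```

The point of the seL4 proof is that the kernel binary is shown to enforce this kind of policy in *all* cases, not just in the cases a tester happened to try.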

So, why doesn't everyone just use seL4? (Currently the place where you are most likely to encounter it is on the TrustZone processor of some Apple devices.)

The answer is that the proof just covers the kernel, not any of the user-space software that you might want to run. There's no proven-secure webserver, for example, let alone language implementations for the applications that you might want to run on it. And you'd also have to prove your web application secure. Developing these things will require a very large investment, which no large company is interested in making.

High-security systems are usually attacked at the key and login management point

It doesn't matter how secure the system is if your admin leaves his password on pastebin by mistake. Just the other day we saw a TSA employee post a picture of the (physical) TSA master keys to luggage locks on Twitter, so those are now all compromised. Weak passwords, guessable passwords, insecurely stored passwords, bad hardware security tokens, copied fingerprints: all of these are possible attack vectors.
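To put rough numbers on why weak credentials are such an attractive attack point, here is a back-of-the-envelope sketch; the guessing rate is an assumed, illustrative figure, not a measurement of any real cracking rig.

```python
# Worst-case time to exhaust a keyspace at an assumed offline
# cracking rate of 10^10 guesses per second (illustrative only).
GUESSES_PER_SECOND = 1e10
SECONDS_PER_YEAR = 3600 * 24 * 365

def worst_case_years(keyspace):
    """Years needed to try every key in the keyspace."""
    return keyspace / GUESSES_PER_SECOND / SECONDS_PER_YEAR

weak = 26 ** 8     # password: 8 random lowercase letters
strong = 2 ** 128  # random 128-bit key

print(f"8 lowercase letters: {worst_case_years(weak):.1e} years")
print(f"128-bit random key:  {worst_case_years(strong):.1e} years")
```

And note that the strong key's mathematically overwhelming margin still falls instantly to a password posted on pastebin: the attacker simply skips the keyspace entirely.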

pjc50
  • 5
    Oh wow, see, this answer honestly answers my question more precisely than any other. Others all talk about feasibility, but my question was literally *is it fundamentally possible to secure a web server so that no matter the level of intelligence, my opponent can't compromise my system?* - And according to this we could assume a proven server and language were developed which do this. While removing the "chocolate bars" is more difficult, that could be done by having a single man control the maintenance alone with the help of AI. – J.Todd Aug 26 '15 at 22:41
  • 1
    For example the USA President of 2115 (100 years from now, assuming AI has been polished by then) can instruct genius AI software to construct a proven microkernel, language, and hence web server, and lock the AI computer and server in a room, so that he can access codes or something insanely secret remotely without the secret being carried around, assuming he might need them at a moment's notice. Really far-fetched, extreme example, but I'm just demonstrating a model where a proven web server might be both applicable and possible with no human weakness in the mix. – J.Todd Aug 26 '15 at 22:45
  • 4
    Excellent answer. This is the only one that *actually* answers the question as intended. – Thane Brimhall Aug 26 '15 at 22:49
  • 2
    Thanks - I'd seen the question for a few days and nobody had mentioned that some forms of software behaviour are provable. People get carried away with interpreting Turing. But "AI" is very much a magic handwavy word, and unlikely to come with any proofs about its behaviour. And maybe the president sets the code to the same as his luggage: http://arstechnica.com/tech-policy/2013/12/launch-code-for-us-nukes-was-00000000-for-20-years/ – pjc50 Aug 27 '15 at 08:30
  • @pjc50 I read that article, yeah, the Generals circumvented security protocols, like everyone else seems to; that's why I suggest the use of AIs for the future possibility of managing a system which has security needs of huge importance, such as those missile launch codes. AI might seem like a magic wand, but the technology is really more of a "when" than an "if", and AIs wouldn't need to be proofed, because they can be kept secure by eliminating all outside access (including internet) other than via a personal visit from the one system administrator responsible. – J.Todd Aug 27 '15 at 12:20
  • Among my reasons for skepticism are the idea that a genius-level AI will be totally happy imprisoned in an isolated box forever with only a nuclear launch system for company... – pjc50 Aug 27 '15 at 12:57
  • Do you know if I can have a desktop operating system with sel4? Just for kicks and giggles! – PyRulez Aug 27 '15 at 13:59
  • @PyRulez it does look like you can boot it on a desktop PC, although writing all the user-space utilities which you would want is listed on the "future projects" page. – pjc50 Aug 27 '15 at 14:05
  • @pjc50 that is a very agreeable assessment. I foresee this being an issue for companies who want to take advantage of Artificial Intelligence. I think a truly sentient one will even be covered under a sort of "human rights" law via a universal version going beyond humans... Even so, a solution I've considered is to evolve an AI in the lab which gets its happiness from helping a specific person or group. Free will is a matter of perspective, after all. Playing God has a few perks, I suspect. – J.Todd Aug 27 '15 at 16:18
  • @JonathanTodd: "Fundamentally possible" as long as you ignore the fact that real systems as used in the real world will always have a human component; and also, that server configuration and software will also always be in flux. I doubt that anyone using servers for real-world applications will be able to afford to have all the updates always mathematically proven to be unbreakable. So, aside from a thought experiment that ignores the real world, and/or relies on the word "AI" to wave away any problems, the answer is "no". – Teemu Leisti Aug 28 '15 at 11:07
  • There's also a pretty big list of assumptions required for this proof, like that the hardware is correct as well, that all the assembly in it is correct, and that the boot loader is correct, and it even includes one assumption with the statement "We know this not to be the case." So it's not really claiming that the confidentiality proof is actually complete, just that it holds for certain paths. Coupled with lack of control of the supply chain, it's probably not really possible (see http://www.darpa.mil/program/vetting-commodity-it-software-and-firmware) – Eric Renouf Aug 28 '15 at 16:57
  • @TeemuLeisti Artificial Intelligence is no fairy tale idea anymore from a sci-fi book. We already have progress in this field, and it's no longer a matter of "if" but "when". And *when* AI becomes a polished technology we will have a new capability to maintain a proven system without human error - combined with the idea the selected answer taught me, our capability to prove a secure system, this removes human error, the biggest security flaw, and makes a perfectly secure web server *possible*. Not cheap, not easy, hardly feasible but fundamentally possible. And that was my question. – J.Todd Aug 28 '15 at 23:03
  • 1
    @JonathanTodd Assertions without any proof. Your question becomes, essentially, "is it possible to build an Artificial Intelligence that can deflect all attacks on a server's integrity?" The answer to that is completely up in the air. – Teemu Leisti Aug 29 '15 at 02:13
  • Doesn't "empirically" directly imply less than absolute perfection? – Dennis Jaheruddin Aug 29 '15 at 06:49
  • @TeemuLeisti no that's silly. I'm not assuming AI that have super powers, only the ability to handle maintenance (updating proven modules of code) so that humans which can be extorted are not part of the system. If the system is proven, why would attacks need to be deflected? – J.Todd Aug 31 '15 at 20:54
  • @ColorQuestor OK, maybe I misunderstood. However, I'm still left with the nagging feeling that this manner of achieving a "breach-proof" server infrastructure is more of a thought experiment than something that might apply in real life, which is always complicated and messy. Anyway, I'm bowing out of the discussion. – Teemu Leisti Aug 31 '15 at 23:58
  • @TeemuLeisti While it's impractical for most, it absolutely is not just a thought experiment. For example, military systems often use the semi-formally verified INTEGRITY-127B microkernel, and Microsoft's IIS has a formally verified HTTP implementation (HTTP.sys). EAL7-evaluated systems are another example (INTEGRITY is EAL6). – guest Nov 18 '17 at 23:05
  • In the end, all formal verification of a complex system like a server is the development of a formal and complete attack tree, and the verification of the now greatly reduced amount of code which is at risk. – guest Nov 18 '17 at 23:06
17

Nobody has found any particular reason to believe they have found such a system.

You mention Chess, which is a nice game on an 8x8 grid. Consider that a modern server is slightly more complicated than that. Let's instead play on a 65536x65536 board, to make it more realistic. Also, in Chess, the more you play, the fewer positions are possible. A more realistic system is like Go: the more you play, the more entwined the position can get. I'll note that the game of Go makes our work on Chess computers seem puny.

Conway, around 1970 or so, tried to break apart the game of Go. He found that often the board seemed to divide up into subgames, each of which played out in its space to add up to the final winner. He found the way they worked together was very complicated. As a result, he found surreal numbers, a number scheme which is literally more vast than the real numbers we use in physics. No joke, it is actually easier to predict the weather globally than it is to win at Go by divide and conquer. Could there be a "win" case here? Perhaps. Good luck finding it. The only way to know for sure is to account for the entire 65536x65536 board, all at once.
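To give a feel for the scale gap, here is a quick count. The chess figure is a commonly cited upper-bound estimate, and the Go figure just counts raw board colourings while ignoring legality, so treat both as illustrative rather than rigorous.

```python
# Rough scale comparison between chess and Go (illustrative, not rigorous).
chess_positions = 10 ** 47      # commonly cited upper-bound estimate
go_colourings = 3 ** (19 * 19)  # each point empty/black/white, legality ignored

# Compare orders of magnitude (number of digits minus one).
print(len(str(chess_positions)) - 1)  # chess: ~10^47
print(len(str(go_colourings)) - 1)    # Go: ~10^172 -- and a server's state space dwarfs both
```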

Kevin Mitnick in one of his books commented along the lines of:

"The only truly secure computer is one that is disconnected from the internet, powered down, unplugged, stored in a concrete bunker, under armed guard. And even then, I'd check on it every once in a while."

A.L
Cort Ammon
8

The comparison with chess is interesting because it shows what protecting a system is not. Compared to chess, the play between good and bad guys in IT security has no fixed rules, you don't know who your opponents are, and you can be attacked outside of the chess board. And if the opponent loses a piece, they can just get a new one, while you can't.

  • You have limited resources (time, money, knowledge) to protect your systems. The attackers have limited resources too but if you are an interesting target there will be enough hackers interested which in total might have more resources than you.
  • With these limited resources you have to secure everything. This means closing every possible (and probably unknown to you) way/exploit an attacker might use to get in. The attacker only has to find one way in and use it.
  • Apart from that, you have a conflict between usability and security. Just have a look at access control with passwords, ways to reset forgotten passwords, etc. These are by design not 100% secure because they are a trade-off between security and usability. You could move all your users to more secure methods like two-factor authentication, client certificates, smart cards, etc., but these might then be too inconvenient for the users and you would lose customers. And they are not 100% secure either, only more secure than passwords.
  • You also have a conflict between security and performance. The harder you analyze all the incoming data to detect attacks the slower it will be. While you can throw more hardware at the problem it will not scale linearly so you have to find a balance between speed and depth of analysis.
  • And you have to deal with software which is insecure. It might be closed source, so you cannot inspect and fix it, but even with open source you don't have the time and experience to find all accidentally or deliberately hidden bugs or back doors. Even if you had all the money and the best experts money could buy, you have limited time to do the evaluation, and analysis does not scale linearly with the number of experts (i.e. it does not help to get 1000 experts if you have to analyse 1000 lines of code, because these 1000 lines are not independent of each other).
  • And finally there are the people who protect your infrastructure and have access to it. They are humans, so they can be attacked with social engineering, bribed, blackmailed...

In summary: while in theory you might have unlimited resources (time, money, knowledge, unlimited fast hardware) to protect a system, and only have customers who are such experts too and prefer secure to convenient access methods - in reality you don't. There will always be a way in, so you should be prepared for it. Don't believe that you will ever achieve a 100% secure infrastructure; instead, create an infrastructure which is not only robust against attacks from outside but where you can detect a compromise and recover from it as fast as possible. Harm might be done, but it should be limited.

Steffen Ullrich
8

Such a system probably exists, but we probably won't find it

We have many algorithms for use in security. Some of them probably are correct. Specifically, some of them probably are exponentially difficult to break. Indeed, we know some that are outright impossible to break (the one-time pad). The problem is implementation.

Security isn't about who's smarter. Security is about the defender's carefulness versus the attacker's intelligence and creativity. A defender does not have to be brilliant if they can follow an algorithm extremely carefully. Perfect defense versus perfect offense results in a win for the defender in security land.

The problem is that servers are often complex machines. You have the OS and myriads of programs, and different protocols and programming languages and... ahh. It is nearly impossible to have perfect security in such an environment.

On the other hand, if a system is simple enough, a human probably can make it perfect. For example, I have a message, $M$, encoded as a number from 1 to 6. I will now roll a die, which will be the key $K$, and add $M$ and $K$ modulo $6$ to get the ciphertext $C$.

The ciphertext is $5$. What was the message?

This example was so simple that I could plausibly consider all the possibilities. A server, on the other hand, is pretty complex.
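That exhaustive check can even be written down. This short sketch simply enumerates the dice scheme above and counts which plaintexts are consistent with the observed ciphertext:

```python
from collections import Counter

# Dice one-time pad from above: M, K in 1..6, C = (M + K) mod 6.
observed_c = 5
candidates = Counter(
    m for m in range(1, 7) for k in range(1, 7) if (m + k) % 6 == observed_c
)
print(candidates)
# Every one of the six messages is consistent with exactly one key,
# so observing C = 5 reveals nothing about M.
```

This is perfect secrecy in miniature: the ciphertext distribution is identical for every possible message.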

What advice can we draw from this? Keep it simple, stupid (the K.I.S.S. principle). Even though it is probably outside our human capacity to make a perfect server, the simpler the server and our algorithms, the better. Document your code, make it understandable, make it simple. Every line of code should have a reason. Use a simple operating system (note: don't confuse simplicity with ease of use; think Arch Linux, not iOS). Keep only the bare minimum of programs. Choose a programming language with a small definition, and without weird rules and such (I'm looking at you, JavaScript). Although this won't make it perfect, it will go a long way toward making you more secure.

PyRulez
  • 1
    Your answer comes second closest to being the correct answer to my question, among the many answers here which all fail to answer the question of *fundamental possibility* (which, according to the new accepted answer, exists). Everyone immediately thought "Too many variables, not doable, not anything like Chess", but fundamentally, no one considered simplifying the system to achieve perfect security, which is apparently called "proving" the system, and this has been done with microkernels, I now know thanks to the new selected answer. – J.Todd Aug 27 '15 at 12:34
  • A OTP may be impossible to crack in the real world, but theoretically I could stumble upon the pad used by accident. My only point is that what's applicable and true in the real world doesn't hold for this kind of theoretical question. – Chris Murray Aug 27 '15 at 13:50
  • @ChrisMurray I already moved the dice. – PyRulez Aug 27 '15 at 13:55
  • @JonathanTodd Simplifying isn't called "proving" the system. Simplifying is an almost necessary prerequisite to proving a system. A simplified system by itself isn't automatically proven, though. Proving can be quite difficult. – PyRulez Aug 27 '15 at 13:56
  • @PyRulez, I didn't mean literally stumble upon it. I meant, what if I just guessed it, first time? Whatever the odds, it's possible. It's a non-zero chance, and therefore not "perfect security". – Chris Murray Aug 27 '15 at 14:02
  • @ChrisMurray The OTP doesn't say you couldn't guess it. It means you wouldn't know if its right. Indeed, you could guess all six numbers if you like, but what does that get ya? (I assure you, there is no hash of the dice value.) – PyRulez Aug 27 '15 at 14:06
  • @PyRulez, Assuming all six (somehow) decrypt to legible sentences, it tells me the message plus 5 irrelevant messages. Perhaps this is enough to act, perhaps not. The only thing I can prove, is that I'm in possession of the correct decrypted message. Surely in a "perfect" system, I shouldn't have the decrypted message at all? – Chris Murray Aug 27 '15 at 14:17
  • @ChrisMurray The message was a random dice roll. – PyRulez Aug 27 '15 at 14:20
  • 1
    @PyRulez You misunderstood me. I didn't intend to imply that proving a system was simplifying it, only that by simplifying a system to only the exact aspects of the functionality needed, secure data serving, for example, one can potentially, with great difficulty and genius, develop a securely proven server. Fundamentally speaking, it is possible, according to the information provided in the accepted answer. Feasible? Perhaps not. Maintainable? Perhaps not now, because until we polish the development of AI, we will require human maintainers of the system. But fundamentally possible. – J.Todd Aug 27 '15 at 16:25
6

Suppose security is like chess

Unlike most people here, I actually know quite a bit about chess and just enough about security to make this a useful answer.

If you consider chess you will find that:

The number of possibilities is so large that no practical strategy covers them all explicitly

Therefore, as we also see in practice, the strongest participant is most likely to win. But even so, there is always the chance that a strong human/computer player will lose to a (much) weaker one.

So to conclude:

Unless you know the right move in every possible situation, no perfect defence is possible

Dennis Jaheruddin
  • 1,715
  • 11
  • 17
  • 1
    Chess with 10^80 pieces ... :) – Hagen von Eitzen Aug 25 '15 at 10:19
  • @HagenvonEitzen As you may notice, my reasoning only becomes stronger as the complexity of the situation increases! – Dennis Jaheruddin Aug 25 '15 at 10:23
  • 3
    Chess is a finite full information game. We know an equilibrium exists, even if we don't know how to compute it. – Deer Hunter Aug 25 '15 at 12:02
  • 2
    @DeerHunter That is correct, so if we consider information security to be infinite and non-full-information, my conclusion basically becomes 'it's impossible' instead of 'it's practically impossible'. – Dennis Jaheruddin Aug 25 '15 at 12:40
  • Chess is almost always a completely useless analogy. It is a perfect information game. No real game has all actors with perfect information! – Aron Aug 26 '15 at 00:48
  • @Aron I understand the limitations of the analogy in general, but don't see it being a problem for this specific answer. It is not specifically mentioned, but even if the attacker does not have perfect information (can be simulated by him making completely random moves) there is still no perfect defence. – Dennis Jaheruddin Aug 26 '15 at 12:03
  • In chess, each player makes a move, and the opponent instantly knows what move they made, and they are able to counter it. When the opponent is able to make a winning move, the game ends. In InfoSec, the hacker can make many many moves before the defender knows. Furthermore, the defender might never know that they are playing or indeed they lost nor how they lost. – Aron Aug 26 '15 at 12:13
  • 1
    @Aron If I understand correctly, you mean that information security defenders have some additional disadvantages. My main conclusion is that there is no perfect defence if information security is like chess. Therefore, additional problems for the defender would only strengthen my conclusion. – Dennis Jaheruddin Aug 26 '15 at 12:31
  • I find it laughable that people consider the number of possibilities in chess even vaguely sane. All you need to know about the state space of chess: the number of possible positions is thought to be in the region of 10^43, the game tree complexity is estimated at 10^123, if the known universe were 100% hydrogen that would be around 10^80 atoms. Go is even more complex. I'd say InfoSec is easier to solve ;) – Kaithar Aug 27 '15 at 12:32
4

A perfect defense is fundamentally possible.

I am actually mediocre at chess, but it is trivial for me to stalemate the greatest players in the world, and I can even stalemate the fastest, best chess-playing computers.

I just sit on my hands and wait for the time to run out, and the chess grandmaster cannot claim a victory over me.

Similarly, the impenetrable server never responds to client requests and cannot be defeated by even the most clever hacker.

Despite its perfect security, it is totally useless.

emory
  • 1,560
  • 11
  • 14
  • Note that the question specifically states that the server must communicate with clients on the web. A situation where you don't allow anyone to move may not be very relevant here.---Sidenote: a situation where nobody moves is not a stalemate, just an ongoing game. – Dennis Jaheruddin Aug 27 '15 at 13:59
  • @DennisJaheruddin I was wrong about the timeout rules - https://en.wikipedia.org/wiki/Draw_(chess)#Draws_in_timed_games. I thought you had to checkmate to win (or be checkmated to lose). But it turns out that if you run out of time and there is at least one series of legal moves that would result in your checkmate, then you lose and your opponent wins. – emory Aug 28 '15 at 09:55
2

Information security is fundamentally unlike chess. Chess is a poor model to apply to Information Security, though the differences between the two can be enlightening.

Chess is a game of perfect information. Both parties know exactly where all the pieces are at all times. In information security much of the information is hidden, and one party can gain an advantage by having more information than the other. "Smartest" has nothing to do with this.

Chess is a game where all the rules are known and set. In information security the existence of rules is questionable at best. The "Rules" are better thought of as an environment and thus a moving target.

Chess is a zero sum game. In information security everyone can lose, everyone can win, nobody can win, and winning and losing can mean nothing.

Chess is a game with two players. Information security has multiple actors with different motivations (see "not a zero sum game").

Chess has a clearly defined win and loss. Wins are complete, and losses are complete. Information security is far more muddy and not at all black and white. A system can be partially compromised, and losses can be mitigated.

Steve Sether
  • 21,480
  • 8
  • 50
  • 76
  • It addresses the analogy, but does not appear to answer the actual question. – Dennis Jaheruddin Aug 29 '15 at 06:45
  • @DennisJaheruddin Some questions aren't good questions and answering them involves addressing the question itself instead of directly answering it. For example, George W Bush. Great President or The Greatest President. That's all I've got, choose one. – Steve Sether Aug 31 '15 at 14:34
2

Yes. A null server infrastructure is fundamentally impossible to breach.

No server = nothing to breach = fundamentally impossible to breach.

Anything else is fundamentally possible to breach.

LawrenceC
  • 224
  • 1
  • 5
  • Can't find the link now, but some famous expert said that perfect security can be achieved with a computer by: "not turning it on, not using it, not storing anything on it." I guess that about sums it up, eh? –  Jan 20 '16 at 00:49
1

Obviously, there's no perfect technical solution to prevent human error, or prevent an attacker from bribing your sysadmin. But if we look at the purely deterministic technical side of this, then the answer is (trivially) yes.

You can look at a network-connected system as a function. You have some function that you want to compute, where the inputs are a system state and bits coming in over the wire and the outputs are a new system state and bits sent over the wire. If the function is computable, then there is a system implementation that will do exactly that.

The problem is that deciding whether a given system perfectly computes a function is impossible in general, and impossible to determine with certainty in practice when the system's behavior is complex. So a person whose system is truly impenetrable (again, from this limited technical perspective) could never be sure of it. Conversely, someone who is completely certain of the security of a very complex system almost certainly holds that belief irrationally.
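A minimal sketch of this functional view, with purely illustrative names: the server is a pure transition function from (state, input bits) to (new state, output bits). A toy function like the one below can be verified exhaustively; the answer's point is that real servers compute functions far too complex for that kind of certainty.

```python
from typing import Dict, Tuple

State = Dict[str, int]
Bits = bytes

def step(state: State, incoming: Bits) -> Tuple[State, Bits]:
    """One transition of the idealized server: (state, input) -> (state', output).

    This toy version answers exactly one query and ignores everything else,
    so its complete behavior can be checked by inspection -- unlike a real
    server, whose transition function is too complex to verify exhaustively.
    """
    if incoming == b"PING":
        return state, b"PONG"
    # Any other input changes nothing and leaks nothing.
    return state, b""

state: State = {}
state, reply = step(state, b"PING")
assert reply == b"PONG"
state, reply = step(state, b"'; DROP TABLE users; --")
assert reply == b""  # hostile input is simply not in the function's domain of effects
```

The design point is that "impenetrable" just means the implemented function has no input that produces an unintended output; the hard part is proving your implementation equals that function.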

Thom Smith
  • 201
  • 1
  • 4
1

The true question behind this is rather: how many resources are you willing to spend to break the defense? And what is the minimum level of security at which a system can be considered perfectly secure? No access whatsoever? Accessing any random data? Or is only accessing data that can be used for profit considered a breach?

To use your example of chess: sometime in the future, we will build a computer that holds all positions of all possible games of chess. Pitted against itself, it would probably result in 100% draws/stalemates. With a "limited" set of possible moves, this is a valid assumption of a perfect defense.

Reality has too many options for bypassing rules, and too many possible situations. Maybe you don't want to steal the data; maybe you just want all backups to cease to exist. So you bomb the server facility - and are happy with the non-functional debris.

In real life, solutions like intrusion detection systems, cold storage (storage systems that aren't live on the web), aggressive account & password management processes, defensive application design, and usage-behaviour analysis can raise the cost of hacking those systems to such a level that only very deep pockets could even think of attacking them.

And you would need "James Bond"-like abilities and well-trained tactical skillsets to pull it off. For a very large part of the hacking community, this would be considered a "near" perfect defense.

The most-hacked sites in the news don't even know who is on their "digital lawns". Many hackers spend weeks or months in their systems, undetected. The sites simply didn't think to spend millions of dollars on minimum defense, because it's not required by law and the lawsuits are potentially cheaper than starting the game of 'hacking defense' chess.

Michael K.
  • 11
  • 1
  • Someone mentioned in an answer, the fact that proven systems exist. Micro kernels can be developed which do have a perfect defense. With more effort, a proven language could be developed for that micro kernel, and thus a proven server with the task of say, allowing the President but no one else to remotely access nuclear launch codes at any given moment without carrying them around. Furthermore, an operation that expensive, in the future, could remove the human error element by using Artificial Intelligence with no outside access to develop and maintain the system. – J.Todd Aug 27 '15 at 12:30
  • @Michael K. actually... Even with really big computers (galaxy sized!) and really dense memory (how about one bit per atom? Okay, TEN bits!), you still couldn't enumerate all the possible games of chess. See https://en.wikipedia.org/wiki/Shannon_number. – daveloyall Aug 27 '15 at 21:29
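The scale argument in the last comment can be checked with back-of-the-envelope arithmetic, using the order-of-magnitude estimates quoted earlier in this thread (roughly 10^43 legal positions, 10^123 game-tree complexity, 10^80 atoms in the observable universe):

```python
positions = 10**43   # rough estimate of legal chess positions (from the comments)
game_tree = 10**123  # rough Shannon-style game-tree complexity estimate
atoms = 10**80       # order-of-magnitude atom count of the observable universe

# Storing one position per atom is (barely) conceivable by raw atom count...
assert positions < atoms

# ...but enumerating whole games overshoots the atom count
# by roughly 43 orders of magnitude:
assert game_tree // atoms == 10**43
```

So a table of all positions is at least physically representable in principle, while a table of all games is not, which is what makes the "computer that holds all possible games" premise above unworkable.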
1

It proves much easier to design a system that keeps the smartest people out than one that keeps out the nameless, shameless, creative, and persistent.

Smart people proceed in an identifiably patterned manner, along predictable avenues of effort and exploration, and rely on a miserably predictable and similar set of assumptions. A genius seems to always assume that, when using addition, 2 and 2 is always 4, and never 22, or 2 & 2.

The greatest threat to an empirical, rational system -- in human terms -- proves to be an adversary who possesses less than a genius-level IQ, tends to be a maverick who dances to the beat of their own drum rather than following the social norms of the herd, and is scarcely ever identified as the smart one in social groups. Well adjusted, comfortable in their own skin, and accepting of their status as not the pack leader in any area, this person neither engages in nor is swayed by the frivolous personal competition that might encircle them. A self-driven self-starter, inspired by an inner reality, this individual couples a certain spark of spontaneity and genuine, non-rational creativity with an unwavering persistence and a tempered, disciplined will.

They don't need your social validation, so a whole host of social engineering exploits prove useless. They often go unnoticed in crowds, as they neither care for nor compete for anyone's attention save those who they themselves identify as interesting or worthy of their effort and attentiveness.

I have personally witnessed, a precious few times in life, a person like the one I describe defy, violate, and invalidate the design and implementation specs of the smartest of geniuses.

Again, I'd rather have to design a system against the smartest of people than against even one person like that which I have described.

1

If there is a legitimate way in, there is an illegitimate way in.

The only server that is fundamentally impossible to breach is one that is fundamentally impossible to access. In network security, this is known as "air-gapping": a server or subnet is physically disconnected from any other network, including the outside world. Combined with physical security of the components of this air-gapped network, which prevents unauthorized persons from being able to reach out and physically touch any piece of computing hardware connected to it, computers on this network are unassailable by a hacker.

... Kind of. Again, if there is a legitimate way in, there is an illegitimate way in. Physical security is ultimately a human endeavor and therefore ultimately fallible. Social engineering can be used to bypass face-to-face security protocols, either by tricking a "gatekeeper" into letting an unauthorized person in, or by tricking an authorized person into doing something they shouldn't on behalf of an unauthorized person. The better the humans involved are trained to follow the physical security protocols, the less likely any of this is, but there is always a non-zero chance of bypassing physical security up to and including brute force (something like the Zero Dark Thirty raid, while not subtle in the slightest, could theoretically be perpetrated against any corporate headquarters on the planet; it's just a matter of having the right people with the right equipment to do the job).

KeithS
  • 6,678
  • 1
  • 22
  • 38