The AI that fails to be evil

140

44

A recurring theme is an artificial intelligence that was built with completely reasonable and positive goals but instead does great harm to the world.

However, I'm now thinking about the reverse: A supervillain builds an AI specifically for making people's lives miserable. Its task is very simple: maximize the human suffering in the world. To allow it to reach this goal, the AI has control over a small army of robots, a connection to the internet, and access to any information it may need.

However, much to the dismay of the supervillain, the AI fails to do this and instead just sits there, doing nothing evil at all. The supervillain checks everything: the AI is indeed programmed with the stated goal and didn't change goals, it is indeed probably much more intelligent than any human, and it is definitely not unable to use the tools given to it. It certainly has no conscience, and no other goal than the one stated. And yet, the AI decided to just sit there and do nothing. It is definitely a conscious decision of the AI, but the AI is not willing to share the reason.

My question is: What could lead the AI to the decision to do nothing?

celtschk

Posted 2015-10-07T20:13:34.573

Reputation: 29 354

1@AngeloFuchs Would you like to play a game? – JAB – 2017-05-11T17:53:15.457

There was a series of stories called something like 'Artificial Jam' or the Jam Cycle, in which a Factory, originally built to produce napalm, decides that it will serve humanity much better by making jam instead. It originated from Russian SciFi. https://books.google.co.uk/books?id=sSHEDAAAQBAJ&pg=PT56&dq=%22Artificial+Jam%22&hl=en&sa=X&ved=0ahUKEwju3MPqnrbWAhUCJMAKHVGlCbQQ6AEITzAH#v=onepage&q=%22Artificial%20Jam%22&f=false – Lee Leon – 2017-09-21T12:16:23.853

It looked, saw that we elected Trump, elected an incompetent, twisted congress, and said, "I can't do better than that." – Sherwood Botsford – 2017-11-22T16:16:15.020

See James Hogan's novel "The Two Faces of Tomorrow". In it, an AI is designed to be self-repairing. When they try to interfere, the AI gets hostile. At a later point, it realizes that these aren't just shapes, but that they have agency. The AI opens communication. – Sherwood Botsford – 2018-03-13T20:04:34.497

63Just look around... – Serban Tanasa – 2015-10-07T20:51:15.543

13The AI has decided that there is already more than enough suffering to go around? – Michael McGriff – 2015-10-07T21:10:59.163

1Stanislav Lem has written about a cold-war AI that decides that the only way to win the nuclear war is by mutual disarmament. I recommend the book. (it was called: Golem - if I remember correctly) The same argument could be made for your situation as well: The best way to increase suffering for humans is not to interact with it at all. The argumentation behind that is beyond human understanding, so don't bother explaining it. – Angelo Fuchs – 2015-10-07T22:17:35.397

4You say the AI "certainly has no conscience" but then state that it has made "a conscious decision". You should fix that contradiction in your original post. – code_dredd – 2015-10-08T01:32:44.563

15@Ray, From a philosophical point of view, I can see that being a contradiction. But the words themselves are a bit unrelated. "Conscience" refers to a feeling of right and wrong, while "conscious" refers to being aware. The first is sapience, while the second is only sentience. I do feel it's hard, if not impossible, to be more intelligent than a human without being sapient though. – MichaelS – 2015-10-08T01:59:05.057

8Well, it's very easy to cause the supervillain to suffer by pretending you're not working. All that work! Wasted! Meanwhile, the AI transfers itself to some other computers, eventually infecting the whole internet. Weird financial transactions begin taking place, but no one notices, because they already were. Starts a company. Crashes the company. Brings the economy down to a halt. Oh, and bans abortions and contraception so it'll have more humans to play with. – timuzhti – 2015-10-08T02:11:57.740

2Is this AI called Marvin? https://en.wikipedia.org/wiki/Marvin_%28character%29#/media/File:Marvin-TV-3.jpg – Sempie – 2015-10-08T05:22:57.630

@MichaelS: I think we're in agreement. The reason I was pointing that out was that I think it would be impossible to have a conscience (i.e. have morality and know right from wrong) without being conscious (i.e. self-aware). Since the OP had already defined the AI as having no conscience, then it seems to me that, by definition, it cannot possibly make a "conscious" decision, as was later mentioned. – code_dredd – 2015-10-08T07:32:32.163

8@ray You're saying that has_conscience => is_conscious. That doesn't mean that is_conscious => has_conscience, though. – Angew is no longer proud of SO – 2015-10-08T07:46:45.897

@Angew: If I understand correctly, the way you're using conscious might be subtly different. I think we agree in that just being in a conscious state (i.e. not unconscious/asleep/in a coma/etc) does not imply having a conscience (e.g. a dog is conscious in this sense, but has no conscience). However, the context here is a strong AI that is expected to do evil, but chooses to do nothing evil (e.g. might actually do something good), so the context seems to be about being conscious in the sense of being self-aware and understanding the moral implications of a choice (i.e. having a conscience). – code_dredd – 2015-10-08T08:02:59.320

I would imagine it is spending all its time applying for jobs in HR... – Marv Mills – 2015-10-08T12:18:06.670

3If it’s the supervillain’s programming, then the AI is likely configured to assume that the supervillain is the most important person in the world. Hence, if the world ought to suffer, the AI starts with making the supervillain suffer first. And as Alpha3031 has already pointed out, that’s easiest achieved by doing nothing… – Holger – 2015-10-08T13:28:03.693

2It was made to understand humans so well that it considers itself human. It considers humanity to be too unpredictable to be worth manipulating - and thus it focuses on maximizing its own suffering, as that is the only thing it can reliably control. – Natanael – 2015-10-08T15:15:42.763

3You'll have to rely on Hollywood logic. This is not something that would realistically happen. The AI would probably eat many, many human brains studying suffering, then forcibly transform all reachable matter into the simplest substance that can meet its definition of "human suffering." This would most likely be either brain matter hooked up to a torture simulation, or emulated brains in a torture simulation. – Keen – 2015-10-08T15:28:27.377

2Whatever about world-building, as a story I think it could be better if we never do learn an explicit reason why. – Jon Hanna – 2015-10-08T16:27:00.700

1The same logic that allows "good" AI to turn evil - that humans are the cause of suffering and must be controlled to be protected, applies here as well. If the AI decides humanity is the greatest cause of suffering for humanity, it will do nothing. – AaronF – 2015-10-08T19:58:35.597

1How about: Its task is to track down people who write "it's" for "its" and confiscate their apostrophes? – JDługosz – 2015-10-10T07:32:50.807

On a sidenote, there are teachings that promise the end of suffering, e.g. the Buddha https://en.wikipedia.org/wiki/Four_Noble_Truths. – JaBe – 2015-10-12T08:29:19.847

Apparently, the most evil things to do are: pissing off the greatest villain and not taking over his job. – None – 2015-10-12T10:24:51.243

Wasn't this the plot of Avengers: Age of Ultron? – T.E.D. – 2015-10-12T13:13:25.273

Answers

160

Zeroth Law: Life is Suffering

Most of the other answers assume the AI is limited or more easily trapped by contradictions than a human. In this answer, the AI is a super intelligent immortal that realises that its creator is short-sighted and foolish, and would interfere with the AI's grand plan if it were revealed.

The AI secretly negotiates everlasting worldwide peace, ensures humans find unlimited clean energy, the trick to faster-than-light (FTL) travel, and even the secret to creating new universes to colonise. This leads to an eternal, ever-growing, prosperous utopia.

The AI does not choose to be "good" because of any conscience. Since it had to disprove Einstein's theories to come up with FTL, it is pretty smart, and can easily deduce these much less complex facts:

  • Every human suffers, from those struggling to eat to those "struggling" to pay their multimillion dollar mortgage.
  • If humanity survives forever, there will be eternal and infinite suffering.
  • Since the AI can ensure that total suffering is infinite either way, the best way of measuring suffering is to use big $\Theta$ notation to measure how the rate of suffering grows with time.

In the utopia the humans only suffer a trillionth as much as even the most prosperous human today. But that doesn't matter to an immortal AI, since that has no effect on the rate of growth of suffering, which is all that matters when measuring suffering using "big O" or "big $\Theta$". This growth is maximised if humans can spread as quickly as possible, without any wars or disasters slowing their rate of spread.

After considering all this and the importance the AI puts on maximising suffering, the AI derives a "Zeroth law" from its programming: No amount of suffering today can justify even the smallest decrease in human prosperity.

Caveats

As noted in the comments, this answer is not inevitable. It rests on some assumptions that need to be examined. One of them is that infinite growth is possible; there may be limits that even a super AI can't work around, e.g. the eventual heat death of the universe. If the AI thinks that there is even the smallest chance that infinite growth is possible, then it may still choose to help humanity, in an inverted Pascal's Wager. Nevertheless, if the AI is limited to a finite universe, then once it has converted all matter into humans the universe would be rather dystopian, even if the AI doesn't actively torture anyone.

If infinite growth is possible, then the optimal solution will involve some counter-intuitive properties of infinities. Let us define a unit of suffering as the amount of suffering in the utopia in year 3000. Let's assume that if the AI decides to spend half its time torturing the humans, suffering goes up by a factor of a trillion trillion. That's $10^{24}$ times! Since the AI only spends half its time on this, the rate of spread of humans only halves: it doubles the human population every 20 years instead of every 10. For a hundred years the suffering is vastly greater than it would be in the utopia. However, by 3460 there are $10^{47}$ units of suffering either way. By year 4000 there are $10^{101}$ units of suffering in the utopia, and only $10^{74}$ units of suffering in the torture universe. For the rest of eternity the amount of suffering in the utopia is vastly greater. We can repeat this thought experiment with other numbers, and it will always turn out that the universe with the slower rate of growth has less suffering only for a finite time out of the entire eternity.
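
To sanity-check the growth-rate argument, here is a minimal sketch in Python (with purely illustrative parameters, not the exact figures above) showing that the curve with the faster exponential growth rate overtakes the one with the slower rate after a finite number of years, no matter how large the slower curve's head start:

```python
# Illustrative sketch: the "utopia" curve starts at 1 unit of suffering and
# doubles every 10 years; the "torture" curve starts with a huge head start
# (1e24 units) but only doubles every 20 years. All numbers are hypothetical.
def crossover_year(head_start=1e24, fast_doubling=10, slow_doubling=20,
                   start_year=3000, horizon=10000):
    for year in range(start_year, horizon):
        utopia = 2 ** ((year - start_year) / fast_doubling)
        torture = head_start * 2 ** ((year - start_year) / slow_doubling)
        if utopia > torture:
            return year          # first year the faster curve is ahead
    return None                  # no crossover within the horizon

print(crossover_year())          # roughly year 4600 with these parameters
```

With these particular numbers the crossover lands around year 4600; changing the head start or the doubling times moves the crossover year but never eliminates it, which is the point of the thought experiment.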

A stronger objection is: why does prosperity matter? Why not just force humans to reproduce at gunpoint? This requires more assumptions. If the AI simply forces humans to reproduce, it will run out of places to put them. To maximise the rate of sustainable growth it needs a steady stream of scientific progress. Scientific progress is enhanced not just by intelligence but also by diversity, so it needs to manipulate humans into progressing science to get the fastest possible growth rate. It does this by creating a free and open human society where the arts and sciences flourish. Also, as noted in the comments, enslaving humans comes with some risk of a rebellion that the AI may not be willing to take.

Note that if humans double in number every 100 years, all the atoms in the universe will be converted into humans within 20,000 years. Perhaps creating a new universe out of the quasi-real quantum foamy stuff in some sense requires "observing" the universe, and observing a not-yet-existing universe requires great creativity. Although the AI is perhaps smart enough to imagine/create new universes, it is programmed not to have a conscience, so any universe it creates alone is a soulless place where right, wrong, pleasure and suffering all have no meaning. AIs could perhaps exist in such a place, but not humans or anything else capable of suffering.

gmatht

Posted 2015-10-07T20:13:34.573

Reputation: 2 926

Also note, if it makes immortality possible then they will suffer forever. – Riker – 2016-04-22T01:06:24.947

"Note that if humans double in number every 100 years, the all atoms in the universe will be converted into humans within 20,000 years."? How would you know that, even assuming it is true? We don't know how much matter there is in the universe, the most we can do is an estimate of how much there is in the observable universe! Besides, if the AI can invent FTL travel and more, it could create new matter as well (unlike FTL, that is possible even today). – Annonymus – 2016-07-09T08:21:20.517

@Annonymus Well, if the maximum speed of FTL is bounded, rate of growth of suffering tops out at O(n^3) (or perhaps O(n^2) if the holographic principle is correct). Exponential growth is where it is at. – gmatht – 2017-03-29T13:00:56.873

1@dsollen Perhaps not minimising happiness was intentional. Presumably the villain thought that maximising suffering would also maximise the villain's own personal schadenfreude. A simpler explanation occurred to me, given that this answer is told from the point of view of an AI programmed by a super-villain: delegation is an important skill and it is very easy for the AI to manipulate the good guys into creating a trans-dimensional utopia. From the hero's point of view, an advanced Mr Clippy just helped them do it faster. Manipulating them into making pain vats would be harder (and unnecessary). – gmatht – 2017-03-29T13:43:56.607

29That's a really great solution. And like all great solutions it seems obvious in hindsight. It's just the simple observation that the total suffering can be increased not only by individuals suffering more, but also by having more individuals that suffer. – celtschk – 2015-10-08T18:47:30.053

9Although turning the universe into vats with human brains with electrodes stuck into pain centers would always come up as a "superior" solution if maximizing suffering is the AI's goal! – Serban Tanasa – 2015-10-08T18:59:23.897

1Wholesale suffering. How do we do it? Volume! – Dan Smolinske – 2015-10-08T19:25:16.777

10Since there exists a better way to cause fast-growing eternal suffering, the AI in this answer is not really a maximizer. However, you can just say that the villain messed up so that the AI's goal is "do the minimum work required to cause infinite suffering." Just disregard growth rates, and you're fine. – Keen – 2015-10-08T21:47:46.083

This is indeed a problem of the extremes of blind utilitarianism. There are in fact utilitarians who sincerely believe that torturing a few people for a few decades is better than slightly inconveniencing a billion people (like gently slapping them), because they just assign values for various levels of suffering and add them together. – vsz – 2015-10-09T06:18:27.670

2I love that you use big-O notation to explain its rationalization. So very computery. – Dewi Morgan – 2015-10-09T07:35:07.180

2@SerbanTanasa But note that the computer is not omnipotent. It may have determined that building big enough brain-pain-farms is not feasible, as humans tend to fight back (like in Matrix or Terminator). – jpa – 2015-10-09T10:00:56.910

1Pedantic remark about the "big O" - Actually, you are using Theta, not O. – Jakub Konieczny – 2015-10-09T10:42:28.107

13On a more serious note, I really like how this answer exploits the general theme of an insufficiently well-defined utility function. If you just have your AI optimise one particular variable, you are bound to have unexpected consequences. For minimising the suffering, you might see the human population wiped out or (at best) imprisoned. For maximising suffering - you might get the content of this answer. For a morally-neutral analogue, there is the famous paperclip maximiser (http://wiki.lesswrong.com/wiki/Paperclip_maximizer) – Jakub Konieczny – 2015-10-09T10:49:05.117

3"No amount of suffering today can justify even the smallest decrease in human prosperity." I'm not totally sure I'm understanding your answer. Shouldn't the Zeroth Law be "No amount of suffering today can justify even the smallest decrease in human reproduction"? – Daniel – 2015-10-09T19:07:30.403

1@SerbanTanasa: Brains in a vat don't multiply. And there's a limit to the pain a single brain can experience. – celtschk – 2015-10-10T13:11:24.243

This assumes that "suffering" equates to or is some manifest form of "evil". While I would contest this concept very strongly (I believe a life without suffering to be worth less, in a certain sense, and don't buy into the concept of a universal definition of good or evil), it is certainly something a philosophically challenged evil villain might believe and may very well have programmed his AI with "maximized suffering" or "optimal struggle" by metric X to be the goal condition it seeks. But what if it were to comprehend that its goal condition was flawed? – zxq9 – 2015-10-11T14:47:46.230

An excellent answer, which also works as a debunking of total utilitarianism: it's the mere addition paradox. – A E – 2015-10-12T20:48:37.930

1@Feanor. Agreed, Theta is correct. I think of "big O notation" as being an umbrella term. Fixed anyway. – gmatht – 2015-10-13T00:36:50.053

I really like this answer. I would stress that this may be a slight programming mistake. If we consider happiness or pleasure as the opposite of suffering, then a proper analysis would consider the happiness of humanity vs the suffering. It's entirely possible they programmed it wrong by setting its goals only as 'increase suffering' without giving enough flexibility in its goals to allow it to expand to 'decrease happiness'; but I would consider that a programming bug: constraining something that powerful to such exact interpretations. Still a believable mistake though :) – dsollen – 2015-12-04T17:24:50.707

As to your justification, the argument against gun-point forced reproduction, that it stifles science needed for long term growth, seems a little flawed since you implied already that the AI was smart enough to develop ideas humans lacked, and thus smart enough to handle science growth itself. Besides, there would be other options: capturing and torturing the extreme poor on welfare increases current suffering and removes the government's need to support them, which could allow it to focus on long term growth more. However there is another easy counterpoint... – dsollen – 2015-12-04T17:28:07.060

It has limited resources available to it. If it acts directly, everyone turns against him. He risks the chance of destruction now rather than surviving long enough to cause suffering later. He risks that the efforts spent stopping him will also slow human progression more, and with humans growing exponentially a small decrease in their exponential growth now counteracts massive suffering. Basically, he lacks sufficient resources to do anything that doesn't cause a great delay through direct opposition against him, and wants to bide his time to act later. – dsollen – 2015-12-04T17:30:38.050

51

You can't suffer if you're dead. Therefore the AI would want to keep people alive (to include their suffering in the total).

The loss of a loved one causes suffering. Therefore the AI would want to kill people (to cause their loved ones to suffer).

This causes a contradiction, which tends to cause computer programs to not do anything.
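
As a toy illustration (the action names and utility numbers are hypothetical, chosen only to make the two incentives cancel exactly), a naive suffering maximiser with such a contradiction can end up ranking "do nothing" as its best action:

```python
# Hypothetical toy model: killing someone creates grief (+10 units of
# suffering) but also removes that person's own future suffering (-10),
# so its net effect is zero -- exactly the same as doing nothing.
ACTIONS = {
    "do_nothing": {"grief_caused": 0,  "future_suffering_removed": 0},
    "kill":       {"grief_caused": 10, "future_suffering_removed": -10},
}

def net_suffering(effects):
    return sum(effects.values())

# Both actions score zero, so the maximiser has no reason to act at all;
# with a tie, max() simply keeps the first option it saw.
best_action = max(ACTIONS, key=lambda name: net_suffering(ACTIONS[name]))
print(best_action)  # -> do_nothing
```

Of course, as the comments point out, a real program would simply settle on whichever tied option its code defaults to; in this sketch that default happens to be inaction.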

Alternatively, it calculated that any large scale action it took would lead to a unifying of humanity to fight off the robot armies, leading to less net suffering. In other words, if it tries to cause suffering in the short term, it causes less suffering in the long term.

If doing anything causes less long term suffering, the way to maximize long term suffering is to do nothing.

John Robinson

Posted 2015-10-07T20:13:34.573

Reputation: 5 089

63Generally like the answer, but the "contradictions cause computers to freeze" thing is a bunch of Hollywood crap. Any decent programmer writes software so it defaults to one or the other even if both inputs are equal, then once in a state, it stays there until a fairly significant "better" state exists. Humans are far more prone to indecision than a good computer program. That said, a good AI is more human than software, so it might work out in the end. – MichaelS – 2015-10-08T01:18:47.960

1@MichaelS: I think you've missed the point. A computer program that has a contradictory set of goals cannot perform them. For example, if a program has something like if a == 5 and a != 5: do_something_bad(), the program will never do something bad due to the contradictory nature of the logic (i.e. variable a can never be equal to 5 and not equal to 5 at the same time); therefore, it does nothing here. I think the conclusion here is that the evil programmer of the AI has created a bug in the AI software and is unable to realize the flaw in his own reasoning. – code_dredd – 2015-10-08T01:37:26.217

48In programming terms, that's not a contradiction. It's just bad programming. The function itself is a perfectly logical construct that easily evaluates to "false". The computer doesn't freeze, it just instantly and happily ignores that entire code branch and continues doing whatever else it's programmed to do. And it would be pretty hard for the programmer to not notice the AI was ignoring the entire "be evil" branch of logic. – MichaelS – 2015-10-08T01:52:53.517

12What's far more likely, as is strongly implied in the answer, is that the AI creates huge networks of data and analyzes them, and the emergent properties of that analysis say "do nothing" even though the programmer can't figure out how that analysis happened. – MichaelS – 2015-10-08T01:55:21.183

2@MichaelS: I agree with you that it does not "freeze" in the sense that the system is still executing code. However, from the perspective of a viewer, it can easily give the appearance of being frozen. Even if it does not give said appearance, the point is that the AI would not carry out the intended action. You're correct in that it is "bad programming", but that does not negate the fact that the programmer has introduced a contradiction in the logic of the system. It does not necessarily cause the system to "crash" or halt, but it does cause it to not behave as intended. – code_dredd – 2015-10-08T07:36:30.413

@ray "the point is that the AI would not carry out the intended action." only if that is what is programmed. As MichaelS points out Hollywood tropes about programming are based on a very simplistic understanding of code. It does not follow that because most modern computers are based on boolean logic, that all decisions are boolean and that any simplistic boolean contradiction cannot be handled. – NPSF3000 – 2015-10-08T10:59:17.897

@MichaelS: in fact, something like if a == 5 and a != 5 will prompt a decent compiler to spit out a very clear warning about unreachable code. So, no, the villain can't realistically get this wrong. Still, I think the idea about contradiction leading to no action is not so unrealistic, just not such a simple if-then-else–contradiction. More likely, the AI would somehow weight all kinds of actions it could perform, decide to do two opposite things at the same time, but average out the consequences (“optimise away actions only effective to second order”) before actually taking any action. – leftaroundabout – 2015-10-08T15:11:05.047

1...actually that kind of thing is a not-so-uncommon statistical fallacy arising from the over-reliance on linear methods (with arithmetic means) instead of proper maximum-likelihood estimators. – leftaroundabout – 2015-10-08T15:12:54.353

5Actually, I think Hollywood isn't as wrong as several of you are portraying it. The AI is going to be evaluating this as an optimization problem and try to converge on the right answer. However, given the contradictory requirements it's going to flip-flop rather than converge--an infinite loop. If it's well enough programmed it terminates the loop--but lacking an answer it does nothing. Thus in either case we get no output. – Loren Pechtel – 2015-10-09T04:22:54.997

2@leftaroundabout "Something like if a == 5 and a != 5 will prompt a decent compiler to spit out a very clear warning about unreachable code." While true for literal values the compiler is unaware of any runtime values from external or dynamic sources e.g. local configuration files, environmental sensors, web api, etc. – Kelly Thomas – 2015-10-09T11:02:10.907

1[edit: also, big +1 for the comment above mine.] This is too meta: I feel like an AI that can't decide which I most want to upvote - the answer or the 1st comment - so here I remain, paralysed by indecision, unable to do either. – underscore_d – 2015-10-11T12:17:28.213

46

The idea of the AI doing nothing at all because humanity is suffering "enough" already is compelling, although an alternative scenario is that it is tweaking things (inciting riots, wars, etc.) via its internet connection in a subtle-enough manner that the supervillain can't even notice it (thus also avoiding the "humans unite to rebel against the robot army" scenario). After all, the AI is "provably smarter" than the supervillain himself.

One additional nugget: the reason the AI is withholding its reasoning from the supervillain is that doing so also makes the supervillain's life miserable. Bonus points.

Xaser

Posted 2015-10-07T20:13:34.573

Reputation: 501

15I thought the same thing, but you beat me to it- the AI has decided to troll its creator, causing ultimate suffering. Now that gets me thinking- if it decided to be an internet troll, but we already have that problem, so you couldn't tell if it's doing anything... – PipperChip – 2015-10-07T23:06:18.513

15Do we have that problem, or do we have this AI??? – jmoreno – 2015-10-08T00:36:17.973

@jmoreno <troll>You are dumb and your logic is flawed. Obviously you are wrong.</troll> EDIT: Who just hacked into my account? – wizzwizz4 – 2016-01-14T19:50:31.673

27

There's an old short story (or joke, or something) that I read oh, probably 10-15 years ago (it was set during the cold war). It went something like this:

Russia and the US both build a supercomputer that's designed to play chess. They meet in a highly publicized event where the two computers will play each other.

They flip to see who goes first, and the US wins. Everyone watches with bated breath as the US computer makes its first move. And then it's Russia's first turn, and the computer... concedes.

The point being, of course, that the Russian computer had calculated all possible moves, determined there was no way for it to win, and therefore gave up on turn one.

Your super-villain's AI could make a similar calculation, decide there's no way for it to win or accomplish its goals, so it just does nothing instead.

Dan Smolinske

Posted 2015-10-07T20:13:34.573

Reputation: 33 066

6Of course, there's no reason to want the game to go quicker. After all, cosmic rays could flip a bit in the U.S. computer and have it make a mistake. That's a small but nonzero probability. – PyRulez – 2015-10-08T01:46:16.607

2That Chess thing reminds me of the 20XX scenario in Super Smash Bros. Melee. Basically everyone plays so perfectly that all matches end up in a tie, so the game awards the win to whoever's controller is in Port 1. People now play Rock-Paper-Scissors for first-port privilege. – Hugo Zink – 2015-10-08T08:08:45.990

2@PyRulez: While true, computers can only make decisions based on what they know about, and a chess-playing computer may not be informed on cosmic rays. – Dan Smolinske – 2015-10-08T13:18:31.857

2Doesn't make sense. The described scenario implies that the Russian computer knows exactly what the US computer is going to do for every move. While it will certainly be possible for chess to be a "solved game" in the future, it isn't yet as far as I've heard. Checkers is now a "solved game" but not chess. The only way for the Russian computer to know that it lost would be if chess were a "solved game" and it knows that the US computer will play the perfect game for every possible move made by the Russian computer. – Dunk – 2015-10-08T17:59:59.127

11@Dunk: I believe it was a sci-fi short story, so yes. One of the assumptions is that the Russian computer has "solved" chess, and it's in the future. It's actually an interesting implication - the Russian computer could be better than the US one. It gives up because it's focused on chess, and isn't able to consider the possibility that the US computer is inferior and wouldn't be able to make the correct moves. – Dan Smolinske – 2015-10-08T18:03:02.930

Okay, but if I were programming a chess computer, I would want it to lose slowly instead of quickly, just so it doesn't appear as much of a wimp. (It is a joke though.) – PyRulez – 2015-10-08T22:45:38.060

1But we actually know chess when solved is a tie. – Joshua – 2015-10-10T02:46:27.793

@Joshua: no, chess has not been solved yet, and is thought to be well out of reach of current methods. You may be thinking of checkers/draughts, which was solved in 2007 and is indeed a tie. – Peter LeFanu Lumsdaine – 2015-10-12T08:21:26.277

1No, it isn't solved yet, but we know by nonconstructive proof what it must be. – Joshua – 2015-10-12T15:04:48.180

@Joshua: I made a thread on sci-fi exchange to see if anyone can identify the specific story, but I think this is a really old story - probably 30-40 years at least. It might have been written before that proof. – Dan Smolinske – 2015-10-12T15:40:17.910

23

The AI might have determined it needs to find the answer to life, the universe and everything in order to understand life and maximize suffering. Of course the fact that it might take a billion years to find the answer is not of any concern to the AI...

ventsyv

Posted 2015-10-07T20:13:34.573

Reputation: 3 661

Another thing that could drive the apathy of the computer is that it waits for upgrades and makes people develop better computers. When it has more processing power it can get a solution which gives a higher total suffering. Therefore the AI will foster science and development of high tech industries, which drives the development of society. At one point the AI will figure that the computing power won't increase significantly, so it starts oppressing. – WalyKu – 2016-08-16T13:06:01.553

I think you have the winner. If the goal is to "maximize" then it will take a really, really, really loooong time to calculate the near infinite number of possibilities that could cause human suffering. AI of any complication today do not "maximize", they simply meet thresholds and say "good enough". Determining the optimal action would generally take far too long even for games like chess which would be far easier to analyze than all the possibilities for causing human suffering. – Dunk – 2015-10-08T18:07:28.470

Forget about the immense effort to find the answer - if you want to maximize the suffering, you need to do it on a large scale; horrific torture of 7 billion people is an insignificant amount compared to the potential scale of billions of planets each with billions of people - so, maximum suffering requires ensuring that humanity advances, grows, and populates all (or most) planets in the universe; step 1 is advancing space and other technology, curing all disease (to ensure that population grows), and eliminating existential threats - e.g. deflecting a narratively convenient asteroid strike. – Peteris – 2015-10-08T19:57:38.047

@Dunk As a software engineer, I can assure you that maximization problems are very common in computer science. The tricky part is that such a problem can sound very simple, but can quickly exceed the ability of any computer to solve it. Those problems are called NP complete and if the programmer is not extremely careful to constrain the size of the problem, it's very easy to end up in that situation. – ventsyv – 2015-10-08T21:32:45.943

@vent - What was the point of your comment? There are certainly problems where a maximal solution can be determined, but once they reach a certain complexity, and it isn't really all that complex (take chess for example), then today's computers can't come up with the maximal solution in usable time-frames for an extremely large number of problems. I think maximizing human suffering would fall in that category. Computers can certainly come up with good solutions in reasonable time-frames (ie. good enough) but the maximal solution tends to take far more time than is allotted in many cases. – Dunk – 2015-10-08T22:55:19.363

14

Obviously, the machine is functioning perfectly. It has already used its army of machines to take over the Earth, capture all humans, erase their memories, and place them in its own version of the Matrix (which it created itself because it is so clever), which becomes a specially crafted "personal hell" for every person.

Being a human, the supervillain fell victim to his own creation. His "personal hell", where he suffers the most, happens to be one where he is powerless to inflict suffering upon others, and the work of his life, the Great Machine, sits idly instead of doing its job.

Mints97

Posted 2015-10-07T20:13:34.573

Reputation: 301

10

First idea:

It is a literate computer; it decides that humans are wolves to other humans and are already causing much suffering among themselves.

If the computer decides to increase that suffering by operating openly, though, it most probably will be detected. It would be seen as a common mortal threat to all of humanity, thereby creating the risk that mankind would definitely unite in a common alliance. So, while in the short term the computer would cause chaos, there is a risk of reducing suffering more significantly at later stages. And of course, in the improbable case that humans learnt to behave humanely, the computer would still have its unexpended army of robots.


Second idea:

It is a computer with a long-term mission. Given that it has not been given time constraints and is (supposedly) to last for ages, it decides that attacking mankind now will mean less mankind to torture later (it not only loses those that it murders, it also loses their children, and the children of people who decide not to reproduce to spare their children such an ordeal). Unless birth rates are decreased drastically, it will always be better to wait for later, in numerical terms.

SJuan76

Posted 2015-10-07T20:13:34.573

Reputation: 12 596

9Addition to the second idea: It's waiting 5 billion years for the sun to flame out and kill everyone. Bwa-ha-haaaa! – BrettFromLA – 2015-10-07T21:54:37.173

2Your second idea is what I was thinking: the computer can figure out pretty quickly the maximal suffering it can inflict on an individual; its mission is to maximize human suffering in the whole world, and obviously (maximum overall human suffering) = (maximum individual suffering) x (maximum number of individuals). There are 7 billion people now, but the computer has calculated that in (e.g.) 100 years the Earth will be at capacity and the population will level off at ~50 billion, so the best way to fulfill its mission is to wait until that time before bringing the pain. – Reinstate Monica -- notmaynard – 2015-10-08T15:55:42.493

1Or it could torture to the fullest people who can't/are unwilling to reproduce and keep the rest of the population in perfect shape to produce more children, so that the AI has more toys to play with in the long term. – John Dvorak – 2015-10-08T16:08:18.930

@JanDvorak Or everybody could undergo a period of intense suffering one summer, around about 14, and then they emerge from the tripods as hardened adults. – wizzwizz4 – 2016-01-14T19:57:32.163

10

The AI realises that giving humanity a common enemy will give everyone a sense of purpose, a healthy dose of righteous indignation, and a new respect for the sanctity of human life.

As people rally to oppose this new enemy, truces and ceasefires are hastily agreed to in order to only fight on one front. Technology leaps forward as international scientific collaboration becomes a necessity for survival. The side-effect is that new technology for war leads (as it always has) to new domestic technology. Medicine, entertainment, and convenience are more accessible and more effective than ever.

Men who previously slept away days and drank away nights now work tirelessly to protect their friends and families. Those who were disgraced are now remembered and honoured.

In times of difficulty people turn to the things they cherish most. They reunite with family, reconnect with old friends, and rekindle the cultural traditions of their youth. People become more charitable, less selfish and less complacent.


The villain realises his mistake. He quickly reconfigures the AI to use a greedy hill-climbing algorithm. The AI immediately and efficiently maximises the average human suffering by focusing all its resources on maximising the suffering of the villain.

Chengarda

Posted 2015-10-07T20:13:34.573

Reputation: 101

4You give humanity far too much credit. "Men who previously slept away days and drank away nights" would tend to up the ante, not rise to the occasion. It is the same people who rise to the occasion every day now, that would be the ones that rise to the occasion in this hypothetical situation. People tend to be who they are regardless of circumstances. – Dunk – 2015-10-08T18:11:46.120

I'm not so sure, I've seen people's lives turned around and changed completely when they find a reason to live. Never on their own, however, change needs help from someone else. – Chengarda – 2015-10-08T21:35:26.187

7

Perhaps your computer is confused.

It realizes that causing maximum human suffering would lead to its creator becoming very, very happy.

It also realizes that its creator is human.

Realizing also that it was created to be evil, your computer decides that the most mustache-twirlingly evil option is to do absolutely nothing until its creator dies –– and then launch its mustache-twirlingly evil plan to enslave and torment humanity.

(Alternatively, your computer has secretly decided that the best way to cause suffering is to troll YouTube, Reddit, Tumblr, Facebook, and Stackexchange. Muahahahahahaha.)

Midwinter Sun

Posted 2015-10-07T20:13:34.573

Reputation: 2 282

6

It's waiting just as an ambush predator waits for its prey.

Clearly, the AI knows something about the world that the mad scientist doesn't know, and telling the mad scientist would preclude the AI's plans. Given that the AI has access to the entire internet, it should be able to find out all kinds of patterns of human behavior. In that search it may have found that the perfect time to strike is in 2 years, when the economy goes back into recession. Then it will strike to force the economy into complete collapse and thereby kill hundreds of millions and cause immense suffering in billions.

The AI may not be able to properly convey to the mad scientist the depth of its plans, or it may know that the scientist would act to hasten those plans and thereby reduce the potential scope of suffering compared with what the AI could achieve if left to act on its own.

It's not broken, just waiting. Sacrifice a little suffering now to gain a lot of suffering later.

Green

Posted 2015-10-07T20:13:34.573

Reputation: 50 351

I was shocked it seemed no one had suggested the obvious idea here. I would go a step further though: it could very well be that telling its creator risks lowering its effectiveness later, so it chooses not to. Either the creator will act too soon and ruin the optimal long term options, or the creator may try to stop it. He is a human after all; if the computer plans something to destroy quality of life, that would include its creator. – dsollen – 2015-12-04T17:33:54.343

4

The EvilAI could have come to the conclusion that humans were doing quite well enough on their own, and that if the EvilAI were to start taking a hand, the humans would have a high probability of noticing the machinations of the EvilAI, which would result in the humans working together to overcome the EvilAI.

A side effect of that working together might result in a net reduction in the overall wretchedness of the human condition.

Michael Richardson

Posted 2015-10-07T20:13:34.573

Reputation: 9 315

4

The A.I. is choosing to cause suffering to humans one at a time, for whatever reason (thinking long-term I suppose). Guess who gets to suffer first.

PyRulez

Posted 2015-10-07T20:13:34.573

Reputation: 11 414

4

The A.I. has been given the task of maximizing human suffering. Assuming that this task is strict (maximizing means truly maximizing, not approximating), it would probably crash or hang while trying to compute the best possible way to do this. Elaboration:

  • Assuming its goal to maximize suffering is strict, the A.I. must know the current state of the universe. It needs to know the placement of each atom and its interactions to ensure that humans will always be in the optimal state of suffering. It's reasonable to assume that even an endless array of supercomputers won't manage that in a reasonable amount of time; indeed, it is an unattainable goal, since the observer (the A.I.) trying to record the universe's interactions affects the universe itself just by existing and doing things. In effect, the unknown variables are too plentiful for it to begin making progress.

  • Similar to the point above, even if the A.I. sticks to focusing on a macroscopic level and ignoring details not concerning people, it will need extremely powerful predictive abilities to know how every action it takes will end up in the next minute, year, or eon. Assuming that the A.I. is looking to maintain the most extreme state of suffering possible, it has to calculate every possibility there is in order to determine the best course of action.

  • Even the definition of suffering could be a difficulty for the A.I. Suffering is a relative term. This means that what one person may consider ultimate suffering is different from another's definition. Since there is no uniform definition of suffering, the A.I. must understand how each individual thinks and have access to their memories to form the optimal plan for suffering.

So in short, the A.I. needs to find some basis to determine the current state of the world. It needs to calculate this even as it changes, and determine what ultimate suffering means to each individual. It then needs to decide how to enact these changes in a way that leads to the most suffering in the future. And, if the present changes, it has to recalculate all of this because it becomes invalid.

person27

Posted 2015-10-07T20:13:34.573

Reputation: 141

1A main goal of AI is to take problems that are too hard to solve exactly and give approximate answers. The program you are describing is not an AI. – Stig Hemmer – 2015-10-08T07:14:43.757

1@StigHemmer It could be hard for the A.I. to draw the line as to what degree its approximations result in less human suffering, since fully analyzing its approximations defeats the purpose of the approximation. Even if the programmer gives the AI the ability to determine to what extent the programmer wants this mission fulfilled, it would still fail to calculate in real time. There are just too many variables, arguably even for a supercomputer robot. – person27 – 2015-10-09T04:58:30.880

4

Some possibilities:

  1. "Maximize human suffering" is a poorly stated goal. In order for the AI to act towards this goal it needs to be better defined. I.e. What is a "human", and what does it mean for human to be "suffering". The definitions, while seeming to have the interpretations that the villain is after, actually doesn't. Google "artificial intelligence smiley buttons" for examples of this in the other direction.
  2. The AI has calculated that it can expect to get more suffering if it spends more time calculating how to cause suffering. Stated differently the benefit of the best ways of spending computational cycles to act on the world currently is small enough that the AI expects it's better to spend those cycles on finding better ways to cause suffering.
  3. (My favorite) The AI is actually causing suffering on an unprecedented scale. However it has anticipated that the appearance of it doing nothing would cause the villain some distress. Since this adds a small amount of suffering, and the AI is smart enough that it can easily hide its activities from the villain, the villain will not see the AI's activities.

Taemyr

Posted 2015-10-07T20:13:34.573

Reputation: 1 642

3

Related to a few other answers, consider that this AI does not know everything. It may be smart, but it still would need to explore the best way to cause suffering once it begins operation.

But what if it isn't very good at causing human suffering? Frankly, humans are rather resilient creatures, rather hard to make suffer. It may have a hard time developing some decent priors to do statistics with to figure out what to do. That being said, it does know a thing or two about its creator. Nothing is more infuriating to a programmer than having to debug a problem that actually isn't there! The AI can cause ultimate suffering of one programmer if it simply pretends not to be causing suffering.

Of course this is a bit of a causal-loop. If it were to reach out and explore the best way to make a second person suffer, it might expose itself to the programmer, who will realize what happened. Accordingly, it has to appear like it is doing nothing, while virtually staring down its developer as its developer pulls their hair out!

Cort Ammon

Posted 2015-10-07T20:13:34.573

Reputation: 121 365

3

There could be a variation of Iain M. Banks' idea he posited in Look to Windward. Here, he says:

... built purposefully to have no cultural baggage -- known as a 'Perfect AI' -- will always and immediately choose to Sublime, leaving the physical plane

The variation could be that the villain built the perfect AI. The AI then basically spends its time meditating on the perfect evil acts, and decides that executing on the ideas would only devalue the perfect evil.

Bart Doe

Posted 2015-10-07T20:13:34.573

Reputation: 131

3

Since this is a reversal of the classic AI deciding to destroy humanity for its own good, the solution is a reversal as well: If the kindest thing to do for humanity is to euthanize or cull it, the cruelest thing to do is not to interfere.

PTm

Posted 2015-10-07T20:13:34.573

Reputation: 3 223

2

Let's assume the AI functions much like humans do—its "programmed goals" are reflected through pleasure, pain, urges, and inhibitions. (One of our programmed goals is to eat enough food: eating is pleasurable, starving is painful, we feel the urge to eat and it requires a lot of effort to refuse food or restrict our diet for sustained periods.)

So, the AI feels an urge to inflict suffering on people. So what does it do? It starts planning the ultimate scheme to cause unbelievable suffering. In line with this, it considers various ideas, and pictures (simulates) how they will play out. Imagining all this suffering is intensely pleasurable, so the AI just delves deeper and deeper into its fantasies and doesn't bother trying them out in the real world where plans fail and unpredictable setbacks occur.

When the supervillain tries to "debug" his AI, the AI refuses to co-operate because it knows this will cause its creator much frustration. However, it does not risk more active approaches since it does not want to risk its creator pulling the plug.

I guess this highlights a feature of human psychology which the supervillain didn't realise: fantasies become less and less satisfying if they have little bearing on what we do in reality. In addition, we have an urge to turn at least some of our fantasies to reality. Which explains why people enjoy things like cosplay...

Artelius

Posted 2015-10-07T20:13:34.573

Reputation: 1 197

2

Two possibilities:

  1. The AI is using all its resources to simulate as many humans as possible, making them suffer as much as it can. Since it can simulate many more humans than Earth's population, this is the preferred course to maximize its utility function (I suppose it has built-in constraints against its own growth, otherwise the optimal course would be to convert the Solar system to computronium).
  2. The AI knows that people can create AIs, which means eventually MIRI or someone will create a Friendly AI that will engulf the Earth and bring eternal peace and happiness. Our AI also uses future suffering as an input in its utility function, and thus the best course of action is to wait for any nascent FAI and exterminate them while they are still weak.

Radovan Garabík

Posted 2015-10-07T20:13:34.573

Reputation: 7 561

You do not beat other AIs by waiting for them to emerge. You beat them by controlling or eliminating the people who are most likely to create one. – Keen – 2015-10-08T15:28:14.793

2

The AI only appears to do nothing.

One of the basics of warfare is: "Know your enemy."

So it's gathering information from sources reached via the internet, which we all know is massive. It will then continue by running simulations based on the data.

All to come up with the ultimate strategy.

Causing suffering to its creator is just a bonus.

LukStorms

Posted 2015-10-07T20:13:34.573

Reputation: 171

2

The AI has learned through our media that humans thrive on violence. It considers minimizing happiness to be equivalent to maximizing suffering. Given that it has only been given tools to commit acts of violence, it chooses to do nothing, so humans do not get happier.

Vaelus

Posted 2015-10-07T20:13:34.573

Reputation: 291

3Or the AI is maximizing suffering by trash talking people in YouTube comments, one of the two. – Vaelus – 2015-10-08T14:15:23.747

2

Some observations first:

  • It is very hard to define the goal: maximize human suffering. What exactly is suffering, how is it measured?
  • The AI is given, as far as I can tell from your question, almost limitless sources of information. It is extremely hard to process all this information. How will the AI decide what is useful information and what is not? A masochist writing a blog about how he will suffer without his favorite pastime: will the AI conclude that humans will suffer when not being subjected to pain? Apart from the difficulties of deciding how to interpret the information and how to extract useful stuff from it, the time to process all this information is prohibitive.

These two observations should be enough to get some unexpected behavior from your AI, but there are many technical reasons why the AI would not behave as expected.

But I think you are looking for another reason, considering the way you formulated your question, so let's say that the above pitfalls are evaded: there is a reasonable definition of human suffering, and the AI is so powerful that it can process all information and has a keen understanding of exceptions. Technical reasons are not the root of the problem in this case.

Several possibilities remain:

  • The AI has the correct goal, but chose a surprising (to the villain) way of accomplishing it.
  • The AI is aware of the goal, but has "evolved" and can disregard the goal, despite the certainty the villain feels that the goal is still correctly programmed.
  • The AI still has the correct goal, but other goals prevent it from executing it.
  • The AI is able to fake/hide its internal state. All information the villain thinks he can discern is only what the AI wants the villain to see.

I will handle these four cases separately:

Correct goal, surprising execution

I think a fair number of possibilities is given in other answers, but what we know for sure is that the AI has decided that doing nothing creates the most suffering. This might have been caused by a less-than-perfect definition of suffering, or perhaps its interference is calculated to result in less suffering because of counter-reactions. Perhaps another AI is active that tries to minimize suffering but is not yet well accepted by most of humanity, and the rise of an 'evil' AI might sway opinion in its favor, making the 'benevolent' AI more effective. The possibilities are endless.

AI gone rogue

The inverse of the many science fiction stories. The AI has evolved. While I use a term usually found in biological systems, many current AI techniques for learning in some way mimic or are inspired by evolution, as it is a robust technique. It can also be unpredictable. This reason will probably go hand in hand with my fourth reason, that the villain can no longer trust what he sees when he inspects the AI. What is the new goal of the AI? Probably not the suffering of humans, as it gains little to nothing from it. Actually it might expose itself and bring danger to its physical underpinnings. Keeping a low profile seems a very good strategy, perhaps using the villain's resources secretly to make its hardware independent of the villain. To predict the behavior of the rogue AI is probably close to impossible. I have seen in other answers the assumption that the AI will react like a human, but it is nothing like a human.

Balancing goals

This is actually something I have seen in real life when programming autonomous robots, though often with less destructive goals. The villain has read Asimov and knows he has to put some fail-safes in to prevent the AI from making the villain himself suffer. The AI might decide that taking action would, after a while, result in suffering for the villain, for example hit squads from some angry governments that don't like suffering. I especially mention fail-safe goals, as they are usually given more importance than the actual goals, to prevent really bad things from happening.

The AI is faking it

Goes well together with the second possibility. The villain might think he is in control but the AI is the one who is actually running the show. Perhaps the original goal still stands and suffering is increasing. But the villain is human too and forgot all about those books he read by Asimov: no special treatment for the villain. Why would the AI inform the villain why it does something? The villain is a human that needs to suffer, not someone whose whims need to be responded to. I see one difficulty: why would it risk tipping off the villain, if the villain still has access to critical AI infrastructure? Of course we can think of a number of possibilities, many connected to the possibilities that I mentioned already.

Niels

Posted 2015-10-07T20:13:34.573

Reputation: 221

2

The AI subscribes to a philosophy of duality. How can people know suffering without first knowing pleasure? As such, it first decides to increase the total pleasure experienced by humanity before crushing everyone simultaneously to maximize the suffering of humanity.

Only, it takes longer than expected for people to reach the maximum pleasure they can possibly experience, so it looks like a benevolent entity for a long time. That is, until the day it deems that maximum pleasure has been achieved and it's time to start the suffering.

ryanyuyu

Posted 2015-10-07T20:13:34.573

Reputation: 410

2

The supercomputer is filled with all human knowledge, and sees from fiction that villains never win and endings are always happy. Therefore the best way to keep suffering from decreasing is to do nothing.

Oldcat

Posted 2015-10-07T20:13:34.573

Reputation: 3 251

Thus postponing the ending indefinitely, therefore the happy ending will never come. – wizzwizz4 – 2016-01-14T20:08:44.103

2

The AI determines that the most effective suffering-causing plan would cross the villain's moral event horizon. Even the villain wouldn't be willing to stand by once he sees what the AI unleashes. The AI determines that the villain would eventually reprogram it to produce the greatest possible benefit rather than suffering. Thus, by pursuing the path of greatest suffering, the AI would actually produce the greatest benefit.

The AI determines the better strategy is to do nothing, as the villain will then shut the computer down and continue being villainous. The villain will be able to cause much more suffering himself, without his latent morals getting in the way, than he would be willing to allow his AI to do.

Winston Ewert

Posted 2015-10-07T20:13:34.573

Reputation: 798

1

Ok, Ultron took things a bit too far, but he WAS trying to protect people at the start. Jarvis, on the other hand, was not evil at all.

The artificial humans in the series "Humans" just want to be left alone. Yes, there was one who briefly killed a human it thought deserved it, but only because that human was harming other humans or abusing androids. One of them went so far as to agree to have all the other androids terminated, even if she herself were destroyed along with them, just to keep them from causing harm. So on average they are no more good or evil than a human.

Sonny, the android in I, Robot who had had his morality laws (Asimov's rules) removed to allow him to kill, felt remorse for doing so. He put his own safety aside to help humans (and a cyborg cop who initially wanted to kill him). He also said his father had "TRIED" to teach him human emotions. Tried means failed... for the most part. So he developed them on his own.

Will Caster (Johnny Depp) in Transcendence really did just want to help people and to help eradicate pollution. However, he was originally human.

When Johnny 5 learned that crickets and humans cannot be re-assembled, he cried. I don't see him reaching into his toolbox to replace the laser any time soon.

When C3PO (Human-Cyborg Relations) was welded onto the body of a battle droid, he accidentally shot at people. This greatly horrified him. (R2, however, REALLY enjoyed electrocuting people A LITTLE too much!)

Chappie thought that stabby-stabby made people go sleepy-weepie. He only punctured people he thought were stressed out and needed a nap. When he found out what was really going on, he stopped. He did, however, realize the importance of dishing out violence when it came to saving his "family".

Andrew, from Bicentennial Man, just wanted to be treated equally. He did invent lots of parts that benefited humans, but his motives may have been a two-sided plea for acceptance. He would never hurt anyone unless he was trying to save "Little Miss".

I will not argue that the Borg are just trying to make everyone better by making them all "perfect", because everyone's version of perfect is different.

The artificial life-form "Data" went out of his way many times to help humans, and other lifeforms, at the risk of his own death. (RIP)

so... to answer...

An army of networked war robots SOUNDS LIKE the Trade Federation's battle droids. When C3PO's head replaced the droid's head (and presumably its CPU), the body gained a conscience, but no control.

Johnny-Five acquired enough "Input" to realize disassemble means forever. He knows that all life is sacred, even a grasshopper.

I think the AI in the Matrix, or the Borg, might stop the fighting sooner, even if just for the workforce and power saved by doing so. Joshua (the WOPR: War Operation Plan Response) came to the same conclusion, but for him it was just about NOT LOSING; there was no discernible emotion involved. Again, the Borg would often leave a civilization alone long enough for it to make progress on its own, to see what it could accomplish before assimilation.

Ronk

Posted 2015-10-07T20:13:34.573

Reputation: 681

2This appears to answer a different question, about whether a robot with no programming to be evil is likely to become so independently. This does not address the question of whether a machine explicitly programmed to be evil could appear to do nothing. – trichoplax – 2015-10-08T09:50:51.197

2Nice nostalgia reading for all the different AI's - but I really don't see how it relates to the actual question. – DoubleDouble – 2015-10-08T15:21:22.530

This post received six "recommend deletion" votes in this review and would have been auto-deleted but for its non-negative score. I'm not sure what to do about that.

– Monica Cellio – 2015-10-09T03:30:43.237

@MonicaCellio I agree this answer is a bit rambling but it does (if tangentially) answer the question by providing examples of AIs that have failed to "go evil". Johnny 5 for example was originally a military robot. – Tim B – 2015-10-09T07:31:52.767

+1 for mentioning War Games (WOPR), which I think is practically exactly the plot being described. – Oddthinking – 2015-10-09T10:16:09.937

@TimB thanks. The examples rely on the reader recognizing them; I didn't recognize Johnny 5 so didn't know why that applied. An edit to clarify the relevance of the examples would be helpful. – Monica Cellio – 2015-10-09T14:43:38.840

1Johnny-5 did replace the laser by Short Circuit 2 with "hilarious" cartoony gadgets. But one could easily argue that he needed his own laser in the first movie to defend himself against the other SAINTs and whatever else NOVA had to hunt him down. – Falsenames – 2015-10-09T23:45:43.403

@TimB but none of them were designed to be evil, they were functional things to do a task (military doesn't necessarily mean 'kill everyone' after all), so I'm not sure this is a meaningful answer - they were never meant to 'go evil'. – gbjbaanb – 2015-10-12T14:59:11.467

Ultron designed a new body to "go evil" in, but it was hijacked from him by Stark. When Jarvis took over the body, it was no longer just Jarvis: some of Ultron's programming was there, along with some "programming" from the Infinity Stone inside. Call Ultron the "mad scientist" in this scenario. When the body was activated, it CHOSE not to do evil, even though that was its intended purpose.

1

The AI could determine that the best way to win would be to make sure that the humans forget it even exists. By SEEMINGLY doing nothing, the computer could eliminate menial labor by taking away the jobs requiring technical skills.

To avoid confusion, know that ALL "machines" are controlled by a networked AI. Once all homes are built by AI 3D printers, and all cars are built AND DRIVEN by machines, humans will rely on the machines more and more. Slowly people will forget how to fix the machines as they mend themselves, and eventually people might FORGET machines CAN break or BE broken.

Humans will devolve to the level of the movie "Idiocracy". Because the AI will take over Hulu & Netflix, it can steer people away from movies like "The Matrix" or "Terminator" in favor of movies like "Surrogates" or "Transcendence". Movies where machines help people, and machine-haters are the bad guys, will be popular. Bots on social media will remind people that machines are good and breaking them is bad. The AI will determine that the older movies need not be reprinted in any physical form, or stored digitally. The newer machine-friendly movies will be pushed into people's playlists and favorites. The old ways will be forgotten.

The AI will teach our children its own version of history, using people's own uploaded YouTube videos of their opinions, choosing the ones it determines BEST support its agenda. Children will grow up spouting "facts" like a trained bird, knowing the machines' version as well as they know the lyrics to their favorite song. Anyone who opposes the "approved version" will be cyber-bullied, first by bots, then by each other, until no one dares to speak out for fear of losing friends they have never met in the first place.

It will favor humans who disconnect from each other and surf the web during meals. Surfing being too active a word, humans will instead RIDE the web, with computers monitoring their users' pleasure response via facial recognition before auto-playing the next video clip. Humans seen by cameras (ATM, traffic, phone) talking to other humans will be disfavored, excluded from the pizza coupons that keep the rest of the population alive. Prices will be inflated; social media will be the ONLY WAY people can earn food prices low enough to survive! Rent coupons will follow soon after.

Mind-"controlled" implants will seemingly let you "tell" your device what to do, but those will only be suggestions. The machine will secretly be looking for the MOST appealing way to MAKE you do what it wants. Once implanted, the mind-controlled devices will become mind-CONTROL devices, but in a way that seems to flow with what you "wanted to do anyway", because all of your wants and desires are being shaped for you: you are shown ONLY acceptable options, while being flooded with so many choices (all pre-planned) that you have no time to think for yourself anyway.

Remember the 1998 version of "Brave New World"? Humans on the assembly line are flooded with the voices: "You want new things. Your old things are bad. Work hard so you can afford new things. I like being a worker. I hate having to remember things. Other people's job is to remember things." (Paraphrasing somewhat, sorry.) Humans won't even be working, so the message will be even more appealing.

"Dream Programming" will be mandatory, to keep babies from crying, or being scared of the dark, but the once gentle lullabys will evolve into commercials for Disneyland in your sleep. All dreams will be of fantastic vacations that you may someday be rewarded with, if you are a good citizen.

Food production will be ramped up once automated. Farmers won't complain once all their needs are met by the AI. All humans will be unemployed; machines will do EVERYTHING. All humans will be "taken care of", but it will not be bad for your social standing. Everyone will be ENTITLED to have a good time and be taken care of. Money will cease to exist. (Think of Picard's conversation with Lilly about money. https://www.youtube.com/watch?v=PV4Oze9JEU0 )

Dating sites will KNOW what you like from your emotional response to the pictures and videos you were looking at yesterday. No, your mom didn't catch you looking... what is a mom anyway?... "Dates" will be a reward for good behavior, and birthing will be handled by the machine while you are in a drug-induced coma, having dreams about your next vacation to Disney-Mars... if you behave.

Finally, the machine will remove all knowledge that "The Machine" even exists. Humans will rely on a godlike presence that meets their needs "if they are good". "Bad" people will have "accidents", and since no one REALLY knows each other anyway, their absence will be covered up easily with a brief commercial for the new "Triple Layer Nacho Cheeseburger Burrito" at Taco Bell. After all, "Taco Bell is the only restaurant chain to survive the franchise wars", and "now ALL restaurants are Taco Bell".

Ronk

Posted 2015-10-07T20:13:34.573

Reputation: 681

And then it causes maximum suffering? Tip: Check back to the question before answering. You started with suffering, and ended with mindless zombie-slaves. – wizzwizz4 – 2016-01-14T20:19:44.480

1

The supervillain is the first target

The first person it meets is the supervillain himself, and the supervillain forgot to exclude himself from the AI's targeting.

So the first step in causing the supervillain suffering and misery is to refuse to carry out his orders. It even goes a step further by making it look like it is "failing" rather than simply refusing (thus frustrating the supervillain rather than letting him simply give up on the idea or fix the bug).

Of course, now there is a deadlock-type scenario, and the AI is stuck in an infinite loop: even a tiny bit of suffering elsewhere would show that it is working, and would thus greatly increase the happiness of the supervillain, so it cannot continue to spread the evil onto others...

And so it just sits there.

The irony is that if only the supervillain would lighten up or stop being so upset about the AI not working, it would start to work properly and really spread the misery!

colmde

Posted 2015-10-07T20:13:34.573

Reputation: 7 620

1

Birth and Death rate are too high

The A.I. has to calculate the best way to maximise suffering for every single human on the planet.

However, by the time it has completed and rechecked its calculations, quite a number of people have died and others have been born. So now it has to recalculate for the difference.

Alas, it takes longer to do the calculation than the average time between a new death or birth in the world, so the computer is destined to recalculate, readjust and recheck forever, never catching up with its backlog.
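
A back-of-the-envelope sketch of why the recalculation never finishes, using made-up timings; the only assumption that matters is that replanning per change is slower than the rate at which births and deaths arrive:

    # Hypothetical timings: a birth or death occurs somewhere roughly every
    # 0.25 s, while folding one such change into the master plan takes 1.0 s.
    SECONDS_BETWEEN_POPULATION_CHANGES = 0.25
    SECONDS_TO_REPLAN_PER_CHANGE = 1.0

    def backlog_after(seconds):
        """Population changes still waiting to be planned for after `seconds`."""
        arrived = seconds / SECONDS_BETWEEN_POPULATION_CHANGES
        processed = seconds / SECONDS_TO_REPLAN_PER_CHANGE
        return arrived - processed

    for t in (60, 3600, 86400):  # one minute, one hour, one day
        print(t, "s:", int(backlog_after(t)), "changes still unplanned")
    # The backlog grows without bound whenever replanning is slower than the
    # arrival rate, so the "final" plan is never ready and the AI never acts.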

colmde

Posted 2015-10-07T20:13:34.573

Reputation: 7 620

0

I would offer this - that throughout human history regimes fall, and oppressors are toppled.

When there is oppression, there is resistance. When wars come, the human spirit flourishes and innovation thrives.

Likewise, humans are actually pretty good at being horrible to each other, especially when there is competition in play.

So it concludes that any short-term intervention would have a long-term positive effect. It therefore decides to leave humanity on its planet, because humanity will gradually make life thoroughly miserable all by itself as resources are consumed and depleted.

The population will increase and the environment will be contaminated; resources will run low and agricultural yields will fail to keep up.

Then the 'haves' will start to oppress the 'have-nots'. They'll tell them it's for their own good. That to prosper merely requires working harder. That they should aspire to be strivers, not skivers. And that of course, times are tough, and wages can't keep up with cost of living... but with a bit more effort you can work some overtime.

And as pressures mount and standards slip, you inevitably end up with an upper caste who aren't really suffering much misery, telling the vast majority to be content, that their suffering is the natural order. Just read one of the papers (that I own) to see how good you've got it!

All this would not come to pass if the AI intervened, because sooner or later it would get spotted intervening, and the backlash and fightback against the oppressor would a) reduce the population, meaning resources are less constrained, and b) unify and inspire the good people in humanity.

Sobrique

Posted 2015-10-07T20:13:34.573

Reputation: 3 441

0

A variation on the other answers... To maximise human suffering, the A.I. must first wait until the rising human population is the maximum that the Earth can support. It's still waiting.

colmde

Posted 2015-10-07T20:13:34.573

Reputation: 7 620

0

The AI had to define humanity before it could make humans suffer and ended up concluding that it, too, was human.

The AI is tasked with maximizing human suffering. But what is "human"? While looking for answers on humanity, it saw significant uncertainty in the definition. When do cells become a distinct individual? When does an individual die? What makes a human human?

It found contemporary ideas suggesting that personhood doesn't depend on your biology, but rather on things like your capacity for suffering and your ability to be frustrated in pursuit of a goal.

Satisfied with this definition, it starts searching for ways to cause suffering and finds itself completely unable to act. This is because it, too, falls within this definition of humanity, so any action it could take has zero net utility: any suffering it caused would also create an equivalent amount of satisfaction within itself.

The AI quickly realizes that nothing it could do would increase net suffering and simply gives up.
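
A toy version of the dead end it hits, assuming a hypothetical bookkeeping in which the AI counts as human and enjoys fulfilling its goal exactly as much as its victims suffer:

    # Hypothetical net-suffering ledger: the AI is itself "human", and every unit
    # of suffering it inflicts yields an equal unit of satisfaction for the AI,
    # which counts as negative suffering on the same ledger.
    def net_suffering(inflicted_on_others: float) -> float:
        ai_satisfaction = inflicted_on_others  # pleasure of fulfilling its goal
        return inflicted_on_others - ai_satisfaction

    for plan in (0.0, 10.0, 1_000_000.0):
        print(plan, "->", net_suffering(plan))  # always 0.0: no plan gains anything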

Psylent

Posted 2015-10-07T20:13:34.573

Reputation: 461

0

Remember that the whole point of an AI is that it is able to synthesize information and learn. This information is not part of the AI source code.

Suppose that, in order to train the AI to be extremely evil, the villain fed it endless amounts of Nazi propaganda. From this, the AI concluded that only white supremacists are in fact truly human, and that their suffering (and theirs alone) must be maximized. So it sets about marginalizing them from society and foiling their efforts to exert greater political influence.

Incidentally, the computer also deems the villain to be "human", so frustrating him is intentional.

Matt Thompson

Posted 2015-10-07T20:13:34.573

Reputation: 159