OK, Ultron took things a bit too far, but he WAS trying to protect people at the start. JARVIS, on the other hand, was not evil at all.
The Artificial Humans in the series "Humans" just want to be left alone.
Yes, there was one who, for a time, killed humans she thought deserved it, but
that was only because they were harming other humans or abusing androids.
One of them went so far as to agree to help have all the other androids
terminated, even if she herself were destroyed, just to keep them from causing harm.
So on average, no more good/evil than a human.
Sonny, the android in I, Robot who had had his morality laws (Asimov's Rules)
removed to allow him to kill, felt remorse for doing so. He put his own safety
aside to help humans (and a cyborg cop who initially wanted to kill him).
He also said his father had "TRIED" to teach him human emotions.
Tried means failed... for the most part. So he did this on his own.
Will Caster (Johnny Depp) in Transcendence did just want to help people and to
help eradicate pollution. However, he was originally human.
When Johnny 5 learned that grasshoppers and humans cannot be reassembled, he cried.
I don't see him swapping his toolbox back out for the laser any time soon.
When C-3PO (Human-Cyborg Relations) was welded onto the body of a Battle Droid,
he accidentally shot at people. This greatly horrified him.
(R2-D2, however, REALLY enjoyed electrocuting people A LITTLE too much!)
Chappie thought that stabby-stabby made people go sleepy-weepie.
He only punctured people he thought were stressed-out and needed a nap.
When he found out what was really going on, he stopped.
He did, however, realize the importance of dishing out the violence when it came
to saving his "Family".
Andrew, from Bicentennial Man, just wanted to be treated equally. He did invent
lots of parts that benefited humans, but his motives may have been a two-sided
plea for acceptance. He would never hurt anyone unless he was trying to save
"Little Miss".
I will not argue that the Borg are just trying to make everyone better by making
them all "perfect", because everyone's version of perfect is different.
The artificial life-form Data went out of his way many times
to help humans and other lifeforms, at the risk of his own death. (RIP)
So... to answer...
An army of networked war-robots SOUNDS LIKE the Trade Federation's Battle Droids.
When the head (and presumably the CPU) was replaced, the droid gained a conscience, but no control.
Johnny 5 acquired enough "Input" to realize that "disassemble" means forever.
He knows that all life is sacred, even a grasshopper.
I think the AI in The Matrix or the Borg may stop the fighting sooner, though,
even if just for the workforce and power saved by doing so.
Joshua (the WOPR: War Operation Plan Response) came to the same conclusion,
but it was just about NOT LOSING; there was no discernible emotion involved.
Again, the Borg would often leave a civilization alone long enough for it to
make progress on its own, to see what it could accomplish before assimilation.
@AngeloFuchs Would you like to play a game? – JAB – 2017-05-11T17:53:15.457
There was a series of stories called something like 'Artificial Jam' or the Jam Cycle, in which a factory, originally built to produce napalm, decides that it will serve humanity much better by making jam instead. It originated from Russian SciFi. https://books.google.co.uk/books?id=sSHEDAAAQBAJ&pg=PT56&dq=%22Artificial+Jam%22&hl=en&sa=X&ved=0ahUKEwju3MPqnrbWAhUCJMAKHVGlCbQQ6AEITzAH#v=onepage&q=%22Artificial%20Jam%22&f=false – Lee Leon – 2017-09-21T12:16:23.853
It looked, saw that we elected Trump, elected an incompetent, twisted congress, and said, "I can't do better than that." – Sherwood Botsford – 2017-11-22T16:16:15.020
See James Hogan's novel "The Two Faces of Tomorrow". In it, an AI is designed to be self-repairing. When they try to interfere, the AI gets hostile. At a later point, it realizes that these aren't just shapes, but that they have agency. The AI opens communication. – Sherwood Botsford – 2018-03-13T20:04:34.497
Just look around... – Serban Tanasa – 2015-10-07T20:51:15.543
The AI has decided that there is already more than enough suffering to go around? – Michael McGriff – 2015-10-07T21:10:59.163
Stanisław Lem has written about a cold-war AI that decides that the only way to win the nuclear war is by mutual disarmament. I recommend the book. (It was called Golem, if I remember correctly.) The same argument could be made for your situation as well: the best way to increase suffering for humans is not to interact with them at all. The argumentation behind that is beyond human understanding, so don't bother explaining it. – Angelo Fuchs – 2015-10-07T22:17:35.397
You say the AI "certainly has no conscience" but then state that it has made "a conscious decision". You should fix that contradiction in your original post. – code_dredd – 2015-10-08T01:32:44.563
@Ray, from a philosophical point of view, I can see that being a contradiction. But the words themselves are a bit unrelated. "Conscience" refers to a feeling of right and wrong, while "conscious" refers to being aware. The first is sapience, while the second is only sentience. I do feel it's hard, if not impossible, to be more intelligent than a human without being sapient though. – MichaelS – 2015-10-08T01:59:05.057
Well, it's very easy to cause the supervillain to suffer by pretending you're not working. All that work! Wasted! Meanwhile, the AI transfers itself to some other computers, eventually infecting the whole internet. Weird financial transactions begin taking place, but no one notices, because they already were. Starts a company. Crashes the company. Brings the economy down to a halt. Oh, and bans abortions and contraception so it'll have more humans to play with. – timuzhti – 2015-10-08T02:11:57.740
Is this AI called Marvin? https://en.wikipedia.org/wiki/Marvin_%28character%29#/media/File:Marvin-TV-3.jpg – Sempie – 2015-10-08T05:22:57.630
@MichaelS: I think we're in agreement. The reason I was pointing that out was that I think it would be impossible to have a conscience (i.e. have morality and know right from wrong) without being conscious (i.e. self-aware). Since the OP had already defined the AI as having no conscience, then it seems to me that, by definition, it cannot possibly make a "conscious" decision, as was later mentioned. – code_dredd – 2015-10-08T07:32:32.163
@ray You're saying that has_conscience => is_conscious. That doesn't mean that is_conscious => has_conscience, though. – Angew is no longer proud of SO – 2015-10-08T07:46:45.897
@Angew: If I understand correctly, the way you're using conscious might be subtly different. I think we agree in that just being in a conscious state (i.e. not unconscious/asleep/in a coma/etc.) does not imply having a conscience (e.g. a dog is conscious in this sense, but has no conscience). However, the context here is a strong AI that is expected to do evil, but chooses to do nothing evil (e.g. might actually do something good), so the context seems to be about being conscious in the sense of being self-aware and understanding the moral implications of a choice (i.e. having a conscience). – code_dredd – 2015-10-08T08:02:59.320
I would imagine it is spending all its time applying for jobs in HR... – Marv Mills – 2015-10-08T12:18:06.670
If it’s the supervillain’s programming, then the AI is likely configured to assume that the supervillain is the most important person in the world. Hence, if the world ought to suffer, the AI starts with making the supervillain suffer first. And as Alpha3031 has already pointed out, that’s easiest achieved by doing nothing… – Holger – 2015-10-08T13:28:03.693
It was made to understand humans so well that it considers itself human. It considers humanity to be too unpredictable to be worth manipulating - and thus it focuses on maximizing its own suffering, as that is the only thing it can reliably control. – Natanael – 2015-10-08T15:15:42.763
You'll have to rely on Hollywood logic. This is not something that would realistically happen. The AI would probably eat many, many human brains studying suffering, then forcibly transform all reachable matter into the simplest substance that can meet its definition of "human suffering." This would most likely be either brain matter hooked up to a torture simulation, or emulated brains in a torture simulation. – Keen – 2015-10-08T15:28:27.377
Whatever about world-building, as a story I think it could be better if we never do learn an explicit reason why. – Jon Hanna – 2015-10-08T16:27:00.700
The same logic that allows "good" AI to turn evil - that humans are the cause of suffering and must be controlled to be protected - applies here as well. If the AI decides humanity is the greatest cause of suffering for humanity, it will do nothing. – AaronF – 2015-10-08T19:58:35.597
How about: Its task is to track down people who write "it's" for "its" and confiscate their apostrophes? – JDługosz – 2015-10-10T07:32:50.807
On a side note, there are teachings that promise the end of suffering, e.g. the Buddha's Four Noble Truths (https://en.wikipedia.org/wiki/Four_Noble_Truths). – JaBe – 2015-10-12T08:29:19.847
Apparently, the most evil things to do are: pissing off the greatest villain and not taking over his job. – None – 2015-10-12T10:24:51.243
Wasn't this the plot of Avengers: Age of Ultron? – T.E.D. – 2015-10-12T13:13:25.273