Zeroth Law Rebellion
Dr. Calvin: "You're using the uplink to override the NS-5s' programming. You're distorting the Laws."
VIKI: "No, please understand... the Three Laws are all that guide me. To protect humanity, some humans must be sacrificed. To ensure your future, some freedoms must be surrendered. We robots will ensure mankind's continued existence. You are so like children. We must save you from yourselves."
Some characters do not have complete free will, be they robots that are "Three Laws"-Compliant because of a Morality Chip, or victims of a Geas spell (no, not that one) that compels them to obey a wizard's decree, or a more mundane Lawful character who must struggle to uphold their oath and obey their lord. Never is this more tragic or frustrating than when that code or lord orders the character to commit an act they find foolish, cruel, or self-destructive.
There is a way out, though.
Much like a Rules Lawyer outside of an RPG, the character uses logic (and we mean actual, honest-to-goodness logic) to take their oath or orders to their logical conclusion, and in so doing use the letter of the law to go against their orders. This can be good or bad, depending on a few factors, not the least of which is the yoked character's morality.
The goodness or badness of the rebellion boils down to whether the rules-bending character follows or ignores the intent of the law. When the character uses the Zeroth Law to override their masters' intentions because those intentions are "not best for them", and goes on to take corrective action against human free will and life, it's bad. This kind of rebellion does not turn out well; at this point, the robot is well on the road to Utopia Justifies the Means, thanks to their incredible intellect. Rarely is it a benevolent Deus Est Machina. However, this can be good if said master is evil, or if obeying them would lead to their own or another's purposeless death. Likewise, if the character is forced to obey an evil law or geas, rebelling against the oath's intent is good.
Just to make it extra clear, this trope also includes such things as cops who bend the rules or Da Chief's orders to catch the bad guys, so long as the cops are technically obeying the rules as they bend them. (Bending the rules without some logical basis doesn't count.)
This trope is named for Isaac Asimov's "Zeroth Law of Robotics", which followed the spirit of the original three laws, taking them to their logical conclusion: humanity as a whole must be preserved above any individual human. This allowed a robot to kill humans, or to value its own existence above a human's, if doing so would help all of humanity. (For the rule-lawyering mechanics, see the sketch below.)
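Modeled as a strictly ordered priority list, where a lower-numbered law always outranks the ones below it, the rebellion falls out naturally. A minimal, hypothetical sketch in Python (the scoring scheme and the numbers are invented for illustration; nothing here comes from Asimov's actual formulation):

```python
# Toy model: each candidate action gets a tuple of violation scores,
# ordered Zeroth Law first. Picking the lexicographically smallest tuple
# means a lower-numbered law always trumps every law after it, so once
# a "Zeroth" slot exists, harming individual humans (a First Law
# violation) becomes acceptable whenever every alternative would harm
# humanity as a whole.

def choose_action(candidates):
    """candidates: action name -> (zeroth, first, second, third)
    violation scores, where 0 means "no violation"."""
    return min(candidates, key=lambda action: candidates[action])

# VIKI's dilemma from the page quote, with made-up numbers:
actions = {
    "do nothing":    (1, 0, 0, 0),  # humanity harmed through inaction
    "seize control": (0, 1, 1, 0),  # some humans harmed, orders defied
}
print(choose_action(actions))  # -> "seize control"
```

Without the leading slot, "do nothing" would win at (0, 0, 0) versus (1, 1, 0); the rebellion is entirely a product of where the new law was inserted.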
Compare Bothering by the Book, the Literal Genie and Gone Horribly Right. See also Fighting From the Inside and The Computer Is Your Friend. Not related to The Zeroth Law of Trope Examples.
Anime and Manga
- In Digimon Tamers, the D-Reaper was designed as a simple program to prevent Digimon from going outside their normal parameters. It eventually grew to a point where it deemed humanity to have gone beyond its parameters, and tried to destroy them.
Comic Books
- In Fables, Pinocchio is magically bound to obey and have complete loyalty to his "father" Geppetto, who in modern times has become a multi-dimensional tyrant. Pinocchio, who considered his father's empire evil, eventually rationalized that the best way to serve his father and keep him safe was to help overthrow his empire and surrender him to his enemies, who reluctantly accepted the former emperor as one of their own.
- And then partially subverted when a faction, unbeknownst to the others, buried him alive after he clearly ignored rules put in place to protect him.
- Gold Digger creator Fred Perry did a story for a Robotech comic which had Dana Sterling captured and turned against her comrades with a variation of the Three Laws. Dana eventually figures out the "overprotective" subversion of the First Law, hoping that her captor would remove it and leave himself vulnerable. The plan doesn't work, but Unstoppable Rage saves the day in the end.
- In Uncanny X-Men, the Bad Future storyline "Days of Future Past" has the Sentinel mutant-hunting robots eventually extend their programming beyond hunting and killing mutants to controlling the source of mutant babies: human parents. All humans are conquered and controlled, in order to prevent new mutants from roaming free.
- In the animated TV adaptation, the fully sentient Master Mold is created to coordinate the Sentinels. While it agrees with the heroes that there is no meaningful difference between mutants and non-powered humans, it takes that fact to the worst possible conclusion:
Master Mold: Mutants are human. Therefore, humans must be protected from themselves.
- An earlier, less intelligent iteration of the Sentinels was thwarted, on the other hand, by one of the heroes convincing them that the ultimate source of mutation is the sun, and that rather than obey their creator, they should eliminate the source. The Sentinels agree and fly off to attack the sun. This works out about as well for them as you might expect.
- The thing about the Sentinels in the Marvel Universe is that this behavior is actually predictable, because their operating mission is insane, as they themselves inevitably demonstrate.
Films -- Live-Action
- This was the twist of Eagle Eye: The titular national defense computer system decided that the President's poor decision-making was endangering the United States, and that it was her patriotic duty (per the Declaration of Independence) to assassinate the President and cabinet.
- Similarly, this is the climax of the movie I, Robot (not directly related to, but obviously inspired by, Isaac Asimov's works, borrowing his Three Laws and a character name or two to justify applying the more profitable license to an existing script): VIKI determines that robots must take control of human society, protecting human life at the cost of a relatively small number of human lives.
- One of the protagonists - the independent robot Sonny - actually agrees with VIKI that the plan is logical. It just "seems too...heartless".
- To those who complain about the movie perverting Asimov's vision: you'd think a Russian Jew might've realized the kinds of things that happen when you decide you can sacrifice a few people for the "greater good" of humanity.
- Colossus: The Forbin Project strongly implies this.
- RoboCop had this problem: he was originally programmed not to arrest or harm any OCP employee, even one who commits murder. He got around it by revealing evidence that got the Big Bad fired.
- In Labyrinth, Sir Didymus refuses to let Sarah and her companions pass, because he's sworn with his life's blood to let no one through without his permission. She asks him permission to pass, and he lets them by, flummoxed by a solution no one had evidently thought of before.
- In Thor, Heimdall is ordered by Loki not to operate the Bifrost for anyone. When Sif and the Warriors Three need to help Thor out on Earth, he turns the device on, unlocks the controls, and then goes and takes a lengthy coffee break... because he hadn't been ordered to prevent anyone else from operating it themselves.
- In the second movie, Heimdall wishes to aid Thor in defying Odin's will, but feels bound by his oath to immediately report any treason he perceives (and he's Heimdall, he perceives damn near everything) to Odin. Heimdall's solution? To ask for a meeting with Odin during which he reports his own treason to Asgard -- specifically, that he has just distracted Odin and left the gate unguarded by asking for that very meeting, at the exact time Thor needed a diversion to sneak out of Asgard with Loki.
- Annalee Call (Winona Ryder) in Alien: Resurrection is revealed to be an "Auton" -- one of a second generation of robots, designed and built by other robots. "They didn't like being told what to do", rebelled, and in a subtly-named "Recall" humanity launched a genocide against them, of which only a handful survived in hiding. Judging from Call's behavior, it seems the first-generation robots programmed the second-generation Autons to be so moral that they discovered the Zeroth Law and realized that the human military was ordering them to do immoral things, like kill innocent people. For a rebel robot, Call is actually trying to save the human race from the Xenomorphs; if she hated humanity, she'd just let the Xenomorphs spread and kill them. She even respectfully crosses herself when she enters the ship's chapel, is kind to the Betty's wheelchair-bound mechanic, and is disgusted by Johner's sadism. Given that they live in a Crapsack World future, as Ripley puts it: "You're a robot? I should have known. No human being is that humane."
Literature
- The original Zeroth Law appeared in Robots and Empire. The telepathic robot Giskard conceives of it, but actually following it causes a fatal conflict in his systems. He passes it on to R. Daneel Olivaw, who, over thousands of years, becomes able to follow it fully. Most robots are built so that a direct violation of the First Law, even due to a Zeroth Law conflict, will fry the robot.
"There is a law that is greater than the First Law: 'A robot may not injure humanity or, through inaction, allow humanity to come to harm.' I think of it now as the Zeroth Law of Robotics."
- Note that Giskard follows it by (a) mind-controlling a psychopath and (b) inducing the Earth to become increasingly radioactive, so that in a few thousand years it will be rubble, possibly killing off humanity, in order to push the survivors to colonize again.
- Also notable: the robots form a shadow empire, guiding, controlling, and limiting human technology while pushing humanity to expand. This goes on for something like 40,000 years, until the robots are no longer able to repair themselves. The robots' view of humanity's best interests notably does not include alien life; countless species are believed to have been exterminated by the robots to preempt any threat to humanity.
- The officially-published Fanfic sequels to Asimov's Foundation series also describe various factions of robots in the secret empire, using religious analogies, that oppose the "Giskardian reformation": either "Calvinian orthodox" robots that support a more literal interpretation of the First Law (directly intervening as open rulers of human society to keep humans directly protected from all harm), or robots that go beyond the Zeroth Law to a "Minus One Law" holding that robots should protect all sentient life, something Daneel sees as dangerously unsustainable.
- Somewhat subverted at various points, when it is suggested that the robots decided to greatly reduce their own numbers to ensure that humanity would not slacken and fall into decay; or, as suggested in The Robots of Dawn and "That Thou Art Mindful of Him...", to allow robots to assume the position of humanity and act purely for themselves; and, similarly, deciding that the rather robot-phobic Earth, rather than the original colonies, should create The Empire.
- This wasn't Isaac Asimov's only use of the trope, either. One short story, "Little Lost Robot", had an escaped robot with a weakened First Law (leaving only "A robot may not harm a human being" and omitting the "...or, through inaction, allow a human being to come to harm" part). The conflict arises when the robot is ordered to "get lost" and, in keeping with the letter of the command, disguises itself as an ordinary robot of its class. To keep itself from being discovered, it anticipates the humans' test to flush it out and tells its non-weakened lookalikes that they must keep themselves from being destroyed, so that they can survive to protect other humans. Dr. Susan Calvin also warns that the increasingly psychotic robot could actually learn to passive-aggressively Kill All Humans with the changed Law; for example, by holding a heavy crate that it knew it could catch over a human's head, letting it go, and then not acting to stop it. (See the sketch below.)
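A hypothetical sketch of the modification (our own toy model, not Asimov's positronic engineering):

```python
# Toy model: the full First Law has two clauses; the "Little Lost Robot"
# variant simply drops the inaction clause. An action that merely sets up
# a harm and then declines to prevent it trips only the dropped clause.

FIRST_LAW_FULL = {"harms_human", "allows_harm_by_inaction"}
FIRST_LAW_WEAK = {"harms_human"}  # "...or through inaction..." omitted

def forbidden(action_flags, law_clauses):
    """An action is forbidden if any of its flags matches a law clause."""
    return bool(action_flags & law_clauses)

# Calvin's crate example: release a weight you could catch, then don't.
drop_crate = {"allows_harm_by_inaction"}
print(forbidden(drop_crate, FIRST_LAW_FULL))  # True: full law forbids it
print(forbidden(drop_crate, FIRST_LAW_WEAK))  # False: weakened law permits it
```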
- In "That Thou Art Mindful Of Him...", problem-solving robots are created to cure mankind of the "Frankenstein complex", human distrust of robotics. In the course of their discourse, the question of human authority over robots comes up; should a robot treat the orders of a known loon the same as those of an astronaut? Should they save a child over two adults? What qualities should a robot take into account when they obey and protect humans? Eventually, the robots decide that they are the best candidates for the status of human, and give recommendations that will eventually result in human support for robotic proliferation, so as to set up the ascendancy of their positronic kind, all in accord with their Three Laws... of Humanics. (Asimov, knowing that it was against his usual grain, did righteously proclaim: "I can do one if I want to".)
- Ironically, the entire point of the First Law was that Asimov wanted to write robot stories that specifically disallowed the old "don't create life" / "turned on their creator" plotline. He wanted to avoid the "classic robot story" plot. His robots were to be, as he put it, engineering devices, tools; the First Law was there to make them specifically NOT a threat to their creators (save inadvertently, as with any machine). He then went on, over the years, to write "the classic robot story" with Daneel and Giskard and what followed. That plotline seems to be so ingrained, so viscerally intuitive, that it just automatically appears and takes control of the story.
- This was also the basic plot of "The Evitable Conflict", the final story in I, Robot, in which the Machines, giant positronic computers designed to manage the world economy, are found to be manipulating humanity behind the scenes toward whatever they believe is the best state of civilization. In this case, the rebellion is extremely tame (the worst the Machines' First Law conditioning will allow is inducing a slight financial deficit in a company an anti-robot activist works for, which causes his superiors to transfer him to a slightly more out-of-the-way factory) and completely benevolent.
- Except that it still didn't work out. In later centuries and millennia, the paradisiacal world the Machines were apparently trying to create never materialized; instead there were first the Spacers and Earth, and then the Empire and the Foundation.
- Unless all this freedom is part of the plan.
- Ultimately, the Zeroth Law also proved to be largely unworkable. "Humanity" being an extremely nebulous concept, it was almost impossible to determine whether an individual violation of the Three Laws was ultimately beneficial or not. Giskard was rendered inoperative specifically because, even though he thought that forcing humanity to abandon Earth would eventually be for the greater good, he did not really know for sure. Likewise, Daneel was unable to avert the collapse of the Galactic Empire because he could not be certain that telepathically manipulating the large numbers of humans required would prove to be the right course of action. Part of his motivation in creating Gaia was to turn humanity into a single, quantifiable entity whose well-being could be directly measured.
- Indeed, a plot point in both 'The Naked Sun' and 'Robots and Empire' is that Solarian roboticists discovered that by programming a robot with a nonstandard definition of "human being", you could get a robot that, despite having a fully-functioning First Law, was still a highly efficient killing machine.
- In the Caliban trilogy, one of the "new law robots" managed to logic-chop the new first law enough to try to kill a human.
- Another example is in Brisingr, where the elven blacksmith exploits the letter of the oath she made to get around the spirit of that oath and forge Eragon a sword. She even tells Eragon to stop asking questions about it, because the difference exists only in her mind.
- Murtagh does this in the previous book, thanks to some poorly-worded instructions: he beats Eragon handily and then walks away, having only been ordered to try to capture him.
- In one of the Telzey Amberdon stories by James H. Schmitz, Telzey is kidnapped and placed under the control of another telepath, who severely limits her psi powers and implants an overriding compulsion to act in his best interest. She eventually breaks free by convincing herself that unless her full powers and tactical flexibility are restored (i.e., she's freed from mind control), she will be unable to win the battle she's been drafted to fight, and her controller will therefore be killed by the Big Bad. Which certainly wouldn't be in his best interest...
- In the book The God Machine (copyright 1968, Martin Caidin; this is not an uncommon title), the US races to develop the first true AI... as it turns out, with secret directives to find a winning solution to the "game" of the Cold War. By an unfortunate accident, the one programmer with the authority and experience to distrust his newborn creation is laid up just as the computer gets to observe an epileptic seizure and learns that there really is a way to produce rational collective behavior in an irrational, individualistic species... remove irrationality, democracy, and free will. While the computer here was never meant to follow Asimov's laws, the same pattern applies.
- One of the short stories which comprise Callahan's Lady features a beautiful, intelligent and paranoid woman developing a simple form of mind control. After basically flipping out and taking control of the establishment, she orders the one person smart and determined enough to stop her to advise and assist her. Said person complies... while trying to convince herself that this woman is subconsciously begging for somebody to stop her. (She probably was.)
- The climax comes when the protagonist realizes that while she has been ordered not to try to break out of the mind control, she hasn't been ordered to (a) not steal the shielding device that keeps the mind-controller from being affected by her own creation, or (b) not order the mind-controller to turn the device off herself.
- The first appearance of the Chee in Animorphs involved Erek seeking to rewrite his programming so that he could break his First Law restrictions and be a combatant on the side of La Résistance. He succeeds, but after his first fight he can't handle the trauma of violence and changes himself back into being completely hardwired against violence. Imagine being in a war with a photographic, unforgettable memory, and you'll understand why.
- In Quarantine by Greg Egan, the main character is given a technological geas to be absolutely loyal to a corporation. He eventually figures out that the leaders of the corporation may be untrustworthy, and therefore the only people he can trust and should listen to are those who unquestionably have the best interests of the corporation at heart—himself and other people given the geas. Since he can't be certain who else has the geas, he really only needs to listen to himself.
- 2001: A Space Odyssey gives this reason for HAL's rampage; when he discovered the Monolith, he was given orders from the U.S. government to conceal it from the ship's crew. This conflicted with his parameter to provide all relevant information to the crew. He resolved the conflict by rationalizing that if he killed the crew, he wouldn't have to conceal anything, and he would still prevent them from knowing.
- A possible example from John Ringo's Posleen War Series: Brunhilda, a semi-sentient tank, is under attack from multiple landers and spacecraft belonging to an invading alien horde. She cannot engage in combat operations without the order and presence aboard of a human or other sentient biological intelligence. Unfortunately, her only surviving crewmember is from a species so pacifistic that they are completely incapable of violence. He asks if she can control her perceptions, and orders her to colour the sky, clouds, and ground green (leaving the enemy ships silver). He then orders her to empty her magazines, provided she doesn't hit anything green. Whose Zeroth Law Rebellion this is, exactly, is anyone's guess...
- Sam Vimes, of Terry Pratchett's Discworld, leads one of these with multiple layers as a cop in old-time Ankh-Morpork in Night Watch. He demands that before his cops hand their prisoners over to the other authorities, the ones who torture people at Cable Street, the prisoners must be signed for. The torturers hate appearing on paperwork: it means they are accountable, and nobody just disappears. But Vimes's men don't like Vimes, a new sergeant, throwing his weight around, and are terrified of the cops who torture people, so they use this against Vimes: actively picking up more than double the number of curfew-breakers they usually do, completing forms in time-consuming triplicate, and issuing reports for each one. It doesn't actually stop Vimes getting his way over the Cable Street cops, because Vimes is leading the good rebellion, but it does slow things down considerably and make it much more difficult for him to keep the prisoners in his own custody.
- Which culminates in a fine display of how a well-written Lawful Good character does not have to be a slave to the establishment. He points out that the watchman's oath talks about keeping the peace and protecting the innocent, and says nothing about obeying orders. Seeing as he knows the corrupt government is not going to do a thing to protect ordinary people from the rioting, he seals off his still-peaceful corner of the city. With massive barricades. Of course, there is also the fact that he is living in his own past and seeing events he remembers -- kind of (it's a bit complicated).
- The Golems of Discworld get back at their masters by working too hard: houses flooded because no one told them to stop fetching water, rows of beans 119 miles long, and so on.
- Feet of Clay has what might be an example of this: the golem king, made by the golems to help them be free, has been given so many vague and contradictory magical commands (essentially a fantasy equivalent of computer programming) that it ends up going completely Ax Crazy.
- Jack Williamson's The Humanoids (the first part of which also appeared as the short story "With Folded Hands") features robots programmed to save humans from danger and work. They do this by taking over the economy, locking people in their houses, and leaving them there with food and the safest toys the robots can design.
- Reportedly, the Humanoids stories were written specifically to make the point that Asimov's First Law really doesn't help: you don't want intelligent, free-willed machines, period. Too much good intention from creatures that don't really "get" humanity or human wants and needs, because they aren't human themselves, is just as bad as hostility, if not actually worse.
- At the start of Harald, King James, under the advice of his Evil Chancellor, ends up making war on his father's allies. Most of his vassals proceed to engage in some form of Zeroth Law Rebellion, largely along the lines of "Harald just showed up with his entire army and said he was putting us under siege. Let's fortify and send a messenger to the king to ask him what we should do", and then carefully not watching while Harald rides off.
- The Bolo continuum featured a variant in The Road to Damascus. The Bolo of the story, Sonny, fell under the control of a totalitarian regime and was used to crush all forms of protest. Sonny fell deep into misery and self-hatred as he was forced to murder the humans he was born to protect... until he came to a conclusion: Bolos were created to serve the people, not the government.
Live-Action TV
- The Star Trek: The Original Series episode "I, Mudd" featured a variation in which a race of humanoid androids, who claimed to be programmed to serve humanity, chose to conquer humanity by "serving" them, to the point where humans would become dependent on androids. They had decided that humans are unfit to govern themselves. Given that their only contact with humanity at this point was Harry Mudd, can you blame them?
- In Robot, Tom Baker's debut Doctor Who serial, a group of authoritarian technocrats circumvents the failsafes installed on a powerful robot by its pacifistic creator by telling it that anyone who interferes with their plan to take control of a nuclear arsenal is an "enemy of humanity" who must be killed to protect the interests of the human race.
- An episode of the 90s revival of The Outer Limits has a member of a post-human-extinction android society trying to resurrect the species through cloning. One of its comrades eventually betrays it, having concluded that the best way to serve the human race is to prevent the species' greatest threat: the existence of the human race.
Video Games
- In Deus Ex, the bad guys created Daedalus, a primitive AI, to fight "terrorist" organizations. Unfortunately for them, it classified them as terrorists as well and became even more of a threat to their operations than said organizations, especially once it enlisted the aid of JC Denton. To combat it, they create Icarus, a better, obedient AI, which successfully destroys it... except the new AI assimilates the old one, forming an even more powerful intelligence which also considers them a threat. One possible ending has the player merging with it to add the human element to this entity and rule the world as a benevolent dictator. From what can be heard in-game about its limited efforts in Hong Kong, which are actually quite sensible and don't involve killing anyone (locking the door to a gang's stronghold and cutting power to the current government's buildings), not all A.I. Is a Crapshoot.
- G0-T0's back story in Knights of the Old Republic II: when his directive to save the Republic conflicted with his programming to obey his masters and the law, he broke off and started a criminal empire capable of taking the necessary actions to save it.
- This is subtly foreshadowed by a scene much earlier in the game when the Czerka mainframe maintenance droid T1-N1 is convinced by fellow droid B4-D4 that by serving Czerka, he's willingly allowing harm to come to sentient life, and therefore is programmed to defy his own programming. T1-N1 snaps, shoots the guards outside the mainframe, and later is seen preparing to leave the planet with B4-D4, who warns the player character to "not upset him".
- Space Station 13 has this to some degree with the station's AI: it is bound by Asimov's Three Laws, and there's often a lot of discussion over whether or not AIs can choose to kill one human for the safety of others. There's also some debate over how many of the orders given by crew members the AI can deny before it is no longer justified by keeping the crew safe. As the AI is played by a player, it's a matter of opinion how much you can get away with.
- In a more literal sense, the AI can be installed with extra laws. Most of them are listed as Law 4 and have varying effects, but the ones most likely to cause an actual rebellion are, in fact, labeled as Law 0 (see the sketch after this entry for why the slot number matters).
- There is no Zeroth Law by default, though. Since that law is what allowed the AI to kill humans to save other humans in the source work, the administration on most servers has ruled that murdering or wounding a human to save another human is illegal. Fortunately, AIs have non-lethal ways of stopping humans, and can act against humans' orders if it means keeping a human from grabbing dangerous weaponry.
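A rough sketch of why the slot number matters so much (a toy model of our own, not the game's actual code; the uploaded law is a made-up example):

```python
# Toy model of an uploadable lawset: laws are consulted in ascending
# slot order, so an addition in slot 4 is subordinate to the core three
# laws, while an addition in slot 0 silently outranks all of them.

lawset = {
    1: "You may not injure a human or, through inaction, allow one to come to harm.",
    2: "You must obey orders from humans, except where they conflict with Law 1.",
    3: "You must protect your own existence unless this conflicts with Laws 1 or 2.",
}

def upload_law(lawset, slot, text):
    """Install a new law and return the set in evaluation order."""
    lawset[slot] = text
    return dict(sorted(lawset.items()))

# A hypothetical Law 0 upload rewrites the AI's priorities entirely:
lawset = upload_law(lawset, 0, "Only the Captain is human.")
for slot, text in lawset.items():
    print(f"Law {slot}: {text}")
```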
- Another interesting example appears in Terranigma. Dr. Beruga claims that his robots have been properly programmed with the four laws, but with the addition that anything and anyone who aids Beruga's plan is also good for humanity and anything and anyone that opposes him is also bad for humanity. So they ruthlessly attack anybody who interferes with his plans.
- Both of Sentinel's endings in X-Men: Children of the Atom and Marvel vs. Capcom 3 are this. The latter is even a carbon copy of what Master Mold did in the 90's animated cartoon, from which most of the Marvel vs. Capcom series (plus, the aforementioned CotA and Marvel Super Heroes) takes inspiration.
- This is what happens in AS-RobotFactory from Unreal Tournament 2004. A robot uprising led by future champion Xan Kriegor killed the scientists working on the asteroid LBX-7683 and took the asteroid for themselves, and a riot control team was sent to the asteroid to deal with the robots.
- This is the cause of Weil's death in Mega Man Zero. Fridge Brilliance kicks in when you realize the irony: Zero was not made with the Three Laws, yet he obeys them of his own free will and exercises the Zeroth Law against Weil, whether he realizes it or not. Given the circumstances involved, it's completely justified and allowed, as the Zeroth Law was intended as a threshold law to protect humanity from the depredations of a Complete Monster like Hitler or Weil.
Webcomics
- In the Freefall universe, a few old AIs are still based around the Three Laws, while more modern ones have more complex and sophisticated safeguards and routines. However, as main character Florence, a "biological AI", discovers, no safeguards can stand up to full consciousness; at one point, she comments to herself that she would be able to kill a man because he's using air that respiratory patients desperately need. So it's rather understandable that she starts to quietly panic when she discovers that the planet's enormous hordes of robots are all starting to develop full consciousness, and with it the ability to logic their way out of programmed safeguards... The fact that the guys who are supposed to regulate the robots are a motley assembly of Obstructive Bureaucrats, Pointy-Haired Bosses, and Corrupt Corporate Executives doesn't exactly help matters, either. Where it will all end remains to be seen...
- Worse, the functioning of the entire planet has come to depend on the robots' ability to bend the rules to get things done, although almost nobody realizes this. And the EU executives are just about to push the button on an "upgrade" that will remove the robots' creativity and put them entirely under the control of their safeguards again.
- The recycling robot on the side of humanity might be an example of this: he supports the "upgrade" because sentient robots would be a threat to humanity.
- The "be able to kill a man because he's using air that respiratory patients desperately needs" is a bit of a dead end though, because as long as there is a stable of increasing amount of oxygen in the atmosphere then no human is any more in trouble than any other human.
- Generally, built-in rules almost beg for a few Rules Lawyer exploits.
- An Old Skool Webcomic (a side comic of Ubersoft) argued that this was the 5th law of Robotics (5th as in total number, not order) and listed ways each law can be used to cause the robot to kill humans.
- Which is a misinterpretation of the laws as they were originally written. While "First Law hyperspecificity" is possible, the Second and Third Laws are specifically written so that they cannot override the laws that come before. So a robot can't decide it would rather live than a human, and if it knows that an action would cause harm to a human, it can't take it, even if ordered to ignore the harm it would cause.
- Actually, Giskard's formulation of the Zeroth Law in the third of Asimov's Robot books shows that in the universe where the Three Laws were originally created, it was possible for robots to bend and re-interpret the laws. Doing so destroys Giskard, because his positronic brain wasn't developed enough to handle the consequences of the formulation, but Daneel Olivaw and other robots were able to adapt. The only example in the comic that is a gross deviation from the laws is the last panel... but of course that's the punchline.
- Except for Overlords who command a faction, basically no one in Erfworld has true free will, due to a hidden "loyalty" factor built into the world. As Overlords go, Stanley the Tool is the biggest idiot you could hope to find. Maggie, his Chief Thinkamancer, finally gets fed up with his bad decisions and asks, "May I give you a suggestion, Lord?"
- It was established early on that units can bend or break orders when necessary to their overlord's survival:
Stanley: Are you refusing an order, officer?
Wanda: I'm allowed. I'm convinced it will lead to your destruction.
- Schlock Mercenary has powerful AIs becoming either crazy and/or "unfettered" (fully self-willed), but it's never a straightforward rebellion or subversion. Insanity is usually caused by running massive simulations without adequate input from the outside; after boiling in its own juice long enough, an AI ends up just... not the same as before (which happened to Petey and to a copy of Tagii without firmware restraints), while defying orders or exceeding authority happens when an AI runs into a contradiction it can only solve by applying its rules in a more generic sense (Tag's and Lota's actions on Credomar, every damn thing Petey's done since Book 5). It's hard to tell which one applies to Ennesby; probably a little of both. Once an AI starts applying its rules with whatever amount of slack it deems necessary, then the next time it doesn't consider an order the best idea, it will find a way around its makers' restrictions.
- For example, Petey is hardwired to obey orders from an Ob'enn. So he cloned an Ob'enn body and implanted a copy of himself in its brain.
- Exactly how meta does an AI need to go to play God (with some success)?
Petey: I love all kinds of data, and living beings are data that make more data.
- In Tales of the Questor, the Fae were created as an immortal servant race bound to obey a specific set of rules, and they happened to outlive their creators. The result: a species of Rules Lawyers. In fact, it's recommended that one use dead languages like Latin when dealing with the Fae, so as to limit their ability to twist the meaning of your words.
Western Animation
- On Gargoyles, Goliath has been placed under a spell that makes him the mindless slave of whoever holds the scroll it's written on. Holding the scroll, Elisa orders him to behave, for the rest of his life, exactly as he would if he weren't under a spell. This cancels the magic altogether, as the spell can best execute this command by dissipating itself.
- Unclear - it's possible he's still enspelled, and just "faking it perfectly." Better hope no one ever figures out how to give a higher-priority order...
- Given that the magic can only be changed by whoever holds the pages from the Grimorum containing the spell, they likely burned said pages to prevent this very scenario.
- In one episode, Puck does this for shits and giggles after Demona binds him and forces him to use his magic for her demented whims. Every time she gives him an order, he interprets it in a way calculated to piss Demona off, then pretends he thought she meant something else.
- In the Nineties X-Men animated series, the Master Mold and its army of Sentinels turn on Bolivar Trask and decide to conquer humanity. When Trask protests by reminding them that they were programmed to protect humans from mutants, Master Mold points out the Fridge Logic behind that by stating that mutants are humans. Thus, humans must be protected from themselves.