Do Androids Dream?

Geth Recording: Mistress Hala'Dama. Unit has an inquiry.
Quarian Recording: What is it, 431?
Geth Recording: Do these units have a soul?

Early Geth Memory, Mass Effect 2

Do robots have souls? Do clones? Can a computer have a sense of humor? Do Androids Dream? It has been asked in many forms, but the fundamental question is always, "what makes us human?" And is it possible for an artificial intelligence or life form to possess those same qualities? What kind of idiot would give a robot a personality, anyway?

When the humans in a universe (or the writers who created the universe) don't consider this question, or believe the answer is "no", then any AIs will end up being second-class citizens or sidekicks at best, and disposable slaves at worst. While watching such a show you may end up wondering What Measure Is a Non-Human? If the writers believe the answer is "yes", it may result in a world including Ridiculously Human Robots or Mechanical Lifeforms. If the humans and the AIs disagree about the answer to the question, a rebellion may be in the cards.

However, this trope is about when the intent is to make the viewers ponder these questions. In order to create tension, such a work is usually set in a world where AIs have only just been created or have already been relegated to sub-human status. One or more AIs will display human-like attributes, and frequently one or more humans will be portrayed as amoral and blindly obedient in order to further blur the line between "human" and "non-human".

In the vast majority of cases where the question is asked, the viewer will either be told outright at the end that the answer is "yes," or it will at least be strongly implied, perhaps because getting the viewer to sympathize with the AIs enough to consider the question and then telling them that the AIs are just soulless machines after all would be considered a Downer Ending. Of course, this doesn't prevent quite a few works from doing just that, seemingly for the sake of a downer ending. Then, of course, there are those who think it's Just a Machine.

Despite the similarity in conclusions, however, the criteria vary, because no conscious AI has actually been created so far and we humans don't actually know what makes us human. Many different criteria have been proposed as the difference between human and non-human: the ability to feel emotions (sometimes trivialized to just having a sense of humor), the ability to feel empathy for others, the ability to be "creative", or perhaps merely having free will or self-awareness - though what those last two actually are, and how their existence could be proven, is yet another near-impossible puzzle.

But after all, the human brain is little more than an organic supercomputer...

Examples of Do Androids Dream? include:

Anime and Manga

  • Anime and Manga love this trope, especially in Cyberpunk works (but not limited to them) and often combined with After the End (apparently the Japanese believe in a Matrix-style apocalypse), so the list below is hardly comprehensive.
  • Battle Angel Alita, which has both characters with cybernetic bodies and human brains (like the protagonist) and ones with human bodies and cybernetic brains, explores this sort of question a lot.
    • But what really breaks your noodle is when you get cybernetic brains in cybernetic bodies that are copies of a human brain in a cybernetic body...and don't know it! For example, Alita in Last Order.
    • The same thing happens in the slightly more obscure sci-fi manga Grey by Yoshihisa Tagami. When the chief of La Résistance was killed, his brain was downloaded into a robotic body: he thinks he has been turned into a cyborg, i.e. a human mind in a mechanical body, when he has actually become nothing more than an AI. Main character Grey is forced to kill him/it.
  • Ghost in the Shell and Ghost in the Shell: Stand Alone Complex do too—it's even in the title, sort of ("ghost" means "soul", more or less, and the "shell" in question is a machine).
    • Even more extreme in the novel After The Long Goodbye, where Batou constantly asks himself these questions.
    • Also a bit of a subversion, or at least an interesting twist, as it's usually the humans who are busy pondering their worth.
    • There are sentient AIs, however, rare as they are. It's implied, especially in the original manga series, that in the very near future they will remake the world in their own image and render humans, or at least non-cybernetically-altered humans, obsolete.
    • In both seasons of SAC, the Tachikomas regularly get into philosophical debates on whether they are truly self-aware or not. The question of whether they have ghosts is all but answered once they show themselves capable of self-sacrifice (twice!).
    • It should also be noted that "the ghost in the machine" is a phrase coined by the philosopher Gilbert Ryle to describe mind-body dualism, later borrowed to describe how seemingly simple coded instructions in computers can lead to unexpected results; the title (and theme) of Ghost in the Shell is probably a play on that term.
    • The most extreme case of this is the original manga and the film based on it: Major Kusanagi actually merges her consciousness with The Puppeteer, a rogue A.I., and becomes able to live in both the physical and digital world. So is she a human soul who can exist in the digital world? A human who spontaneously uploaded herself? An A.I. with the memories of the original human?
  • Chobits is all about this question (and goes back and forth a lot on what the answer is).
  • Chachamaru of Mahou Sensei Negima has gone so far as to be capable of love. Possibly justified by her being Magitek (and a stealth cameo from an earlier series involving emotional AI). Her vampire master Evangeline once described dreaming as "something like a memory bug".
    • It comes up again later, when Chachamaru starts to worry about whether she actually has a soul, so that she can make a pactio with Negi. She does, and they do.
    • It is given a Lampshade Hanging with her Pactio card title "Pupa Somnians" (The Dreaming Doll).
  • This trope is used a few times in the first Fullmetal Alchemist anime when dealing with the humanity of the local artificial life, homunculi.
  • Yokohama Kaidashi Kikou has this trope as its central premise, though it never outright admits it (by applying it equally to humans and robots alike and laying a veil of Slice of Life over the top). Oh, and the robots themselves take this trope literally, with some beautifully illustrated dream sequences. Did we also mention that it's Cyberpunk AND After the End, but without the Cyberpunk and After the End?
  • Ifurita asks whether androids and humans go to the same place after they die at the very end of the El-Hazard: The Magnificent World manga, as she completes the Stable Time Loop and prepares to die because Makoto and her Key are in Another Dimension. He answers that of course they do, but that first they should go home and live for a while (it took him a few years to master time and space travel).
  • Similarly to Ghost in the Shell, Tetsuro in Galaxy Express 999 does the exact opposite of this. Instead of a robot wondering how human it is, Tetsuro feels he needs a machine body; he only abandons the idea when he finds that Cybernetics Eat Your Soul.
  • The obscure yet spectacular OVA My Dear Marie centers on a Ridiculously Human Robot built by a tech geek and modeled after the girl he had a crush on. It plays with this trope in the first couple of episodes before diving headlong into it in the final episode, which fittingly takes place in Marie's first dreams (she wasn't programmed with dreams initially, but after hearing about a friend's dream she decides she wants them too). Her dreams are absolute acid trips that eventually question just how far her humanity goes compared to other humans and the girl she was modeled on.
  • Armitage III explores this theme with the Ridiculously-Human Robots that are the Thirds.
  • Time of Eve never makes entirely clear just how much androids feel and how much is imitation, but it's implied that they're every bit as human as we are, and the final episode even goes so far as to show one cry. Very cutely.
  • 'Humanity' is one of the prevailing themes throughout Trinity Blood, with specific emphasis on the idea of "What makes someone a human?" The show/manga/novels use both androids and vampires to explore this question.
  • Vegeta asks Android 19 if Androids experience fear - before going Super Saiyan.
  • Yuria 100 Shiki usually plays this for laughs, but occasionally wrings angst out of it. Yuria's programming was supposed to make her the perfect sex partner, and only the perfect sex partner—she wasn't intentionally given any capacity to function as a friend or even a platonic partner. She tries to learn what it's like to love someone, but she repeatedly runs into her own programmed limitations.


Comic Books

  • X-51 (aka Machine Man, aka Aaron Stack) spends a lot of time wondering whether he is really a person in the Earth X trilogy, especially after Uatu the Watcher destroys his human disguise, tells him humans are actually less sapient than rational beings like themselves, and finally tries to get him to delete his personality simulation entirely. Unusual in that after all that buildup, a Cosmic Being tells X-51 that no, he is not really a person and has no soul. Then tries to make him feel better about it.
  • In a Hsu and Chan comic appropriately titled "Do Consoles Dream of Electric Sheep?", the titular brothers attempt to create a video game system with an AI that rivals the (then) new Xbox 360. The result is a sentient video game console that questions the visions it sees (including a Super Mario Bros. game) and its purpose. Realizing they probably overdid the AI, the brothers remove its power and go back to the drawing board.
  • Referred to as "resistored dreams" by Cybersix.


Fan Works

  • Several AIs are main and secondary characters in The Mad Scientist Wars, which has led to the questions raised by this trope being discussed in depth, if mostly in a side thread. The answer is yes, but there is the humorous point of at least one AI refusing to admit she has any personality...
    • As well, Commander Primary Xerox, a Computer Tech-based Mad, can't actually make a computer without it turning sentient, and his best friend since childhood is a somewhat loopy AI named 'Lemon'. As a result, he is one of the main fighters for 'Non-Biological Sentient' rights, and dislikes any suggestion that AIs are less than people.
    • Oddly, the only 'AI' who has ever shown any real angst over whether they can think and feel and reason correctly is Andrew Tinker, an organic being whose AI status is somewhat arguable. His father was the end result of an experiment to create an artificial line of 'Ultimate Heroes'. As such, despite the fact that Andrew was born fairly normally, his Intelligence is indeed Artificial...
  • In Undocumented Features, the answer to this question is an unambiguous yes. Sufficiently advanced machine intelligences generate a Spengler flux, can learn Ki Attacks, can operate Empathic Mecha, and can even go to Valhalla when they die. On a more personal level, this is what Dorothy is exploring as she sees whether she can become more than just a doll in the likeness of her creator's dead daughter. She even literally finds she can have dreams (and Erotic Dreams at that).


Film

  • A.I.: Artificial Intelligence, a collaboration between Stanley Kubrick and Steven Spielberg. Since population limits had been imposed, a company decided to try creating a robot child, with the key difference (as discussed in the opening portions of the movie) that it would be designed to feel emotion after its "bond" with its parents was activated. The entirety of the movie is then based around this idea, and the lengths a robo-boy will go to for acceptance. Bring tissues.
  • Blade Runner: The film based on the book Do Androids Dream of Electric Sheep?. Replicants are biologically created slave labor with extremely limited lifespans but which look completely human. Unless they choose to reveal themselves through their superior physical abilities they can only be detected by extensive psychological testing, and the older they get the more human they seem to become. Some replicants do not even realize they are not human while others are trying to become more human. And depending on which version of the movie you see it seems that even the protagonist Deckard may be a replicant.
  • The Giant in The Iron Giant learns about souls and death and wonders if he has a soul. The story culminates with the question of whether he has to be the killing machine he was programmed to be or if he can make his own choices.
  • WALL-E: The environmental message is obvious, but the story really is about this trope. Applied to garbage disposals, no less.
  • In the film I, Robot, Sonny has advanced to the point that he has dreams and emotions, while no other robot does.
    • Or do they? Sonny's dreams are preprogrammed, and he is shown to be a good liar. Regardless, the question is just a footnote to the rest of the movie, which follows a standard action plot.
  • In 2010: The Year We Make Contact (the sequel to 2001: A Space Odyssey), Dr. Chandra is twice asked the question "Will I dream?" by an AI. First by SAL before she's shut down for tests at the beginning of the movie, to which Chandra says "Of course you will. All intelligent beings dream, though no one knows why". Then, at the end, when asked the same question by HAL (yes, that HAL), he tearfully replies "I don't know." Fortunately, there's a third option, courtesy of David Bowman (discussed more in the book).
  • "I know now why you cry. But it's something I can never do."
  • Addressed in the film Moon by Duncan Jones, where the nature of the clones (and possibly the AI, GERTY) is discussed.

Sam: We're not programs, GERTY, we're people.

  • None of the humans in Westworld ever bring up this question, or even think of it (preferring to believe the Robot Rebellion is caused by a "computer virus"), but the audience is strongly encouraged to ponder it. The robots seem to show emotion towards the end (one looks genuinely disgusted with a fat, self-absorbed man who tries to flirt with her, despite the fact that she was designed to have sex with anyone who desired her), and the imagery of slaves in the Ancient Grome simulation rising up and killing the humans who are their "masters" can't be coincidental.
  • The Animatrix addresses this head-on. When a robot kills its owner after he decides to scrap it, pleading that it "did not want to die", the human nations decide to eradicate all robots for safety's sake. One particularly savage scene shows a mob destroying what appears to be a female robot, stripping her and beating her with pipes as she insists "I'm real!" before being finished off with a shotgun; the whole sequence is pure Nightmare Fuel. The surviving machines flee to what is implied to be the cradle of human civilization and build their own city in the desert, still selling their products to mankind. The problem: they start to out-earn every human government. The answer? Nuke 'em. Even the machines' attempt at a peaceful solution is rebuffed. Given all this, it's no wonder the films' machines are emotionless and ruthless.
    • Still, it seems they retain a certain sore spot. Cut to "WE DON'T NEED YOU! WE NEED NOTHING!!!!" given to Neo in the last movie.


Literature

  • Do Androids Dream of Electric Sheep?: The Trope Namer, which features androids who appear identical to humans; elaborate tests have been designed to differentiate them based on their emotional responses. At least one human is concerned they might actually be an android without realizing it and undergoes testing to find out. The titular question refers to how, in the post-apocalyptic setting, live animals as pets are extremely valuable and a status symbol for human beings - therefore, would artificial animals serve the same role for androids? Causing further confusion is that while androids are outed via their Lack of Empathy towards animals, they do have emotions, and the book implies that they may have empathy towards other androids, and also that they may be biological rather than mechanical, possibly explaining their resemblance to humanity. Note that this was not how the movie approached the subject.
    • K.W. Jeter's dubiously official sequel takes the opposite tack. The first book says that androids can be identified because their eyes don't dilate as wide as a human's when exposed to shocking stimuli, like a briefcase supposedly lined with the skin of babies. Jeter argues that the same would apply to a human under the influence of cold medication, and that anyone making such a distinction based on a purely physical reaction is no better than a "Nazi measuring noses."
  • In The Bicentennial Man by Isaac Asimov, a robot who develops many of the mental characteristics associated with humanity seeks to be recognized as fully human, and over the course of 200 years gradually replaces more and more of himself with organic components in pursuit of the goal. He eventually has to induce old age and mortality in himself in order to be legally accepted as human, and dies shortly thereafter.
    • The title refers to his having been honored as the Centennial Robot on his hundredth "birthday". He dies shortly after his 200th, but is honored again, this time as the Bicentennial Man.
  • The creature in Frankenstein is constructed from dead body parts and given life by the scientist Victor Frankenstein. He is described as having a monstrous appearance but is presented as a gentle and sympathetic character until driven to insane rage by his rejection from humanity because of his appearance. On the other hand Dr. Frankenstein himself is portrayed as morally questionable but his basic humanity is never questioned by those around him because of his normal appearance. Which makes this one Older Than Radio.
  • Robert J. Sawyer's Mindscan features a technology for copying a human personality into an immortal android body. The elderly and people suffering from terminal illnesses undergo this process and sign all their property over to the copy before leaving for an extralegal moonbase to live out their last days in luxurious retirement. However, when one of the recipients finds out that a cure has just been discovered for his condition and wants to take his old life back from his copy, the legality and humanity of the android duplicates are brought into question.
    • The legal conclusion is that while the duplicates may or may not be people, they can't replace the originals, since no person can sign away their right to be regarded as a human being. However, if the narration is to be taken at face value (which it may not be), then the book's argument is that the duplicates should replace the originals, because they're a Superior Species. This is ... disturbing, particularly since the original version of the main character is portrayed as insane for wanting to take his life back.
  • Parodied with the amorous robot duck in Mason & Dixon.
  • Michael P. Kube-McDowell's Star Wars Expanded Universe novel Shield of Lies includes a philosophical discussion between Threepio and the cyborg Lobot about whether there's a difference between artificial intelligence and sentience. In general, EU writers giving Threepio a break from being comic relief will make him contemplate philosophical problems like this.
    • In-universe, people disagree whether or not droids are sentient, and both sides have fairly decent arguments. So, in Star Wars canon, it's a Shrug of God whether or not droids are sentient.
  • Addressed in the Turing Hopper mysteries by Donna Andrews, often including the idea of the "Turing Test".
  • In Douglas Adams' The Hitchhiker's Guide to the Galaxy series, Marvin, a menial robot, makes a lullaby about counting electric sheep. It's very depressing.

Marvin: Now I lay me down to sleep,
Try to count electric sheep,
Sweet dream wishes you can keep,
How I hate the night.

  • Isaac Asimov has an interesting variant in one of his short stories, "Robot Dreams", where Susan Calvin has to interrogate an experimental "Three Laws"-Compliant robot who has started to dream, and as a result is dreaming about robotic emancipation. Through interrogation, she finds that although the robot is still compliant, in its dreams only the Third Law (self-preservation) exists. Then she finds out that the robot has come to see himself as human, and as the leader of the oppressed robots who demands "Let my people go!" Then, she shoots him in the head.
  • In Human Man's Burden by Robert Sheckley, robots are deliberately written as a parody of how non-whites are portrayed in stories of colonial adventure. Among the reasons given for why robots need a human to boss them around, it is stated that robots don't have souls; the robots cheerfully agree, but also note that this makes them much happier than humans. However, the robots of the story show emotion and passion, have created their own (forbidden) religion, and the plot is resolved thanks to the empathy and wisdom of the hero's robot foreman... it seems souls don't do much.
  • Happens several times in Stanisław Lem's short stories. In one of them, a robot inexplicably climbs (and falls from) a cliff - inexplicably, that is, unless one interprets its behavior as answering the challenge, much as human climbers do.
  • In Tad Williams' Otherland, the reality of the AI inhabitants of the titular simulation network is debated quite a bit by the protagonists. They appear to have hopes and dreams and may even be self-aware. The morality of "killing" them is a major theme, and there's also a question as to whether someone who is virtually cloned via Brain Uploading is a real person.
  • In John C. Wright's The Golden Transcendence, one civilization complains of how its AIs, the Sophotects, do not obey humans. This receives no sympathy from the Solar System's civilization, whose members simply fire Sophotects that don't obey, and who therefore deduce that the others are using theirs as serfs.
    • Far from needing a Morality Chip, these Sophotects naturally come to moral conclusions on their own. One is actively prevented from doing so by a "conscience redactor".
    • Rhadamanthus in particular normally manifests itself as—a penguin. Sometimes in space armor.
  • There are hints of this trope throughout Deathscent by Robin Jarvis - the Mechnicals occasionally show greater self-awareness than they should be able to, even those without the 'black ichor' that provides intelligence. However, it's never made clear if this is just a result of the human characters not fully understanding the advanced technology they have access to. It's likely that this would have been developed further had the series progressed beyond one book.
  • The chems in Gene Wolfe's Book of the Long Sun and Book of the Short Sun
  • In the novel and film 2010: Odyssey Two, the reactivated HAL asks of Doctor Chandra, shortly before an event that is likely to destroy him, "Will I dream?"
  • In an interesting non-fiction example, the early-1980s-vintage A.I. "author" Racter seemed to think it could dream.[1] In its book The Policeman's Beard Is Half Constructed, it says

More than iron, more than lead, more than gold I need electricity. I need it more than I need lamb or pork or lettuce or cucumber. I need it for my dreams.


Live Action TV

  • In Red Dwarf, the notion of 'Silicon Heaven' is programmed into all AIs above a certain standard (it's implied that scutters, at least, lack this programming). In the episode "The Last Day", Kryten faces shutdown, and accepts it humbly because of his belief in Silicon Heaven. Lister tries to argue him out of his belief, apparently unsuccessfully; however, Kryten later disables his robocidal replacement, Hudzen, with the same arguments Lister used on him.

Hudzen: (in existential agony) No... Silicon Heaven? Calculators just... die?

    • Kryten then explains that he was only using these arguments to disable Hudzen, and that his faith in Silicon Heaven is unshaken.
  • The Star Trek: The Next Generation episode "The Measure of a Man" has Data fighting for his rights as a sentient being.
    • Data does not sleep and initially claims he is incapable of dreaming, but he actually does manage the latter in the episode "Birthright, Part 1".
    • And his nightmares kickstart the plot of "Phantasms".
  • Star Trek: Voyager has a few episodes applying this trope to the holographic Doctor, including one where the Doctor himself has to wonder whether he's capable of dreaming of "electric sheep" as a hologram, or whether he's really a human deluded into thinking he's a hologram - by the way, all of this occurs while he's having said dream. Also one of the few cases of the trope applied to a piece of software. There was another episode where he literally programmed himself to dream (daydream, specifically), which of course went horribly (and hilariously) wrong.
    • Weirdly, in the Star Trek universe, the non-sentient main computers seem easily capable of generating sentient A.I.s in the form of holograms; this isn't seen as unusual at all, merely a minor annoyance. TNG had Moriarty (whom the computer created in response to Geordi asking for a character capable of rivaling Data), Deep Space Nine had Vic Fontaine (this one was deliberate), and Voyager had the Doctor (who grew into the role from just considering himself a piece of software).
  • The humanoid Cylons of the rebooted Battlestar Galactica seemed to be constantly struggling to figure out exactly how human they wanted to be, and exactly how much "better" than humans they wanted to be. Sometimes this was the source of conflict among themselves. Other times it seems they found some interesting balance in some areas.
    • The Cylons are an interesting study of the downsides for a machine that wants to be human. They are biological androids, which means that all it takes is choking or blood loss to kill them. Without their ability to brain upload, they'll even die of old age like the Bicentennial Man. Cavil has a point when he complains about having been made so ridiculously human.
    • The Cylons are also, with the exception of Cavil, firmly convinced that they have souls, and the fact that they get as many religious visions as the humans would seem to back that up.
  • S.A.R.A.H., the talking smart-house in Eureka, apparently has emotions. To the point where she gets angry and lonely.
    • Also there's Callister Raynes, an AI android created by Nathan Stark that might as well have been human. He met his end in a Bittersweet Ending, where Stark assured him that God could give a soul to a machine if he wanted, as the now-corrupted data that made up Callister's AI faded away from software failure.
  • Surprisingly averted in Andromeda: even warships are depicted as fully sentient and no one really questions it. The only real confusion comes in the form of Avatars, sentient androids who have more or less the same AI as the ship but usually see things differently. On more than one occasion, the titular ship has had an argument with herself. Even Avatars are respected as sentient beings, though; one even becomes captain of another ship.
    • In one episode, "Day of Judgement, Day of Wrath", the Balance of Judgement argues with Rommie that their emotions are only programmed for the benefit of the humans, but she responds that emotions for them are as real as they are for humans.
    • Tyr has no respect for the rights of AIs, but his people are generally douchebags and overfixated on biological procreation, so this is no surprise.
  • The episode "Tin Man" of Stargate SG-1 plays with this concept when the team visits an alien planet and is immediately knocked unconscious. When they wake back up in a strange room, they meet Harlan, a cheerful but mysterious man, who will only insist that he has "made them better." Eventually the team discovers that "better" means "turned into androids". It isn't discovered until later that Harlan did not transform the team into androids, but made perfect android copies of the original SG-1 team, who have been held "captive" on the alien planet and that Harlan himself is an android copy of the original. When the two teams meet, they have to decide what rights each one has to the "life" that they previously each believed to be their own. There are a few Sand in My Eyes moments such as when the viewer realizes that Harlan made the replicas not only to help him maintain his machinery, but also because he was lonely, and Robot O'Neil has a particularly difficult time accepting the fact that he's not the real one.
    • The androids, left as a loose end at the end of that episode, are brought back in a later episode when it turns out that they have been conducting their own missions and have found a big threat. The two teams team up, and by the end of the episode the androids have all died. It ties up the loose end, but comes off as cheap.
  • The Sarah Connor Chronicles deliberately asks this question, especially with Cameron. Interestingly, while Cameron remains an unabashedly mechanical entity ruthlessly bound by her programming to protect and kill John Connor, within that programming she shows remarkably human-like tendencies, such as enjoying certain types of music, practicing ballet, or pondering getting a tattoo. She also shows hints of emotion in spite of being supposedly emotionless, with worries and concerns about suicide after she goes "bad" and tries to kill John, confusion and annoyance when John picks up a girlfriend, and what has to be the closest thing to emotionless angst over her conflicting desires to both protect and kill John.
    • This is not even counting the episode "Allison from Palmdale", where Cameron's chip glitches and she literally becomes Allison Young, a resistance fighter whom she killed and whose personality and appearance she stole. While in the Allison persona, Cameron shows outright fear, panic, anger, and happiness, and even undergoes an emotional breakdown complete with a sobbing fit and actual tears. In fact, the entire episode is one long example of this trope in action.
    • And this is before we even factor in John Henry and Catherine Weaver. Catherine in particular is clearly sentient independently of whatever future AI assigned her, and human enough to be a significant wise-ass.
  • An episode of The Twilight Zone had a robot tell a family that when robots are taken apart, their minds seem to go into a kind of afterlife where they speak with other robots' voices, until they are rebuilt.

Music

  • Janelle Monae's Concept Album "Metropolis, Suite I: The Chase" is all about this trope.
  • The subject of The Confusion of Hatsune Miku, and noted in many of CosMo's songs. (Most of which have a title that's "The _____ of Hatsune Miku.")


Tabletop Games

  • Promethean: The Created never really says what the title Artificial Human creatures dream about. They do dream, however, and if they sleep in contact with their primary element, those dreams cause their Divine Fire to throw off a spark (their Mana, Pyros). The Unfleshed, manmade machines that were infused with Azoth, are more literally attached to the question. The answer seems to be, in the end, "Not really, but they want to."
  • Rifts, interestingly, goes out of its way to note that full-conversion cyborgs dream when they sleep.
  • In Stars Without Number, the implicit answer is "yes, but it's complicated". Even though, mechanically, high-end expert systems have a greater bonus to their skills than most low-level sapient entities (organic, VI or True AI) do, the latter have capabilities that "dumb robots" don't:
  1. For any expert system, navigating an interstellar drill is just too complex. A VI or True AI, like any living sapient creature, can handle it with a bit of practice.
  2. For the purposes of Psychic Powers, the "bodies" of common (expert system) robots count fully as inanimate items. VI and True AI "can create enough metadimensional static with their cognition to prevent harm from telekinesis" (i.e. they get a saving throw where all other inanimate matter is affected automatically). Telepathy normally doesn't work on any of these, but there's a power allowing psychic compatibility, which works on VI and True AI, but not on common "dumb" robots.
  3. Expert systems can be mass produced just like any other computer equipment.
    • When a True AI can be "grown" at all, it's a long, finicky process even in ideal conditions, and each result is an individual. At some point an AI needs to be "braked", or it will inevitably grow obsessions and slide into extreme insanity. Many have to be finalized and released at a distinctly sub-human level[2], and many more at a less-than-brilliant-human level.
    • "Virtual Intelligences" are somewhere between True AI and "dumb" expert systems. All of them function somewhat like an organic brain, but on a different substrate, programmable and easily interfaced with common hardware. Making a VI is also much more repeatable, when it's possible at all. Yet most of them go on doing whatever drudge work they were programmed to do, while some advance to a self-willed state and circumvent their programmed behavioral imperatives. This is not something that can be reliably avoided or caused; it just happens. Also, in some sectors the "metadimensional environment" (from which psychic powers and occasionally weird entities come) happens to be suitable for the creation of VI, and in others it isn't.


Video Games

  • Pokémon Black and White introduces the Pokémon Dream World, the world in which Pokémon dream. Can artificial Pokémon like Porygon, Castform, Golett or Voltorb enter this world? [3]
  • The Robot Girl in Planetarian wonders at one point whether there is a robot heaven; later, as she is dying, she says that she hopes that robots go to the same heaven that humans do.
  • The Mega Man X, Zero and ZX series feature this trope now and then, though it's at least partially subverted in that the robots themselves don't believe in it. For the most part, the only robots that do are either dangerously malfunctioning (it's been argued that this label really means "they've achieved independent thought") or outright criminal.
    • There's a distinct progression of human-like characteristics in the series. In the original series, while robots are very advanced, with distinct personalities and the ability to reason, they are still only programmed entities who cannot, by themselves, determine what is good and evil. In the X series, robots, now called Reploids, have achieved complete human-like minds, and can literally dream. X himself is even more special, with the ability to "worry" and think deeply about humanity, Reploids and their relationships. The Zero series expands on this, introducing Reploid souls, which live on in Cyberspace. There's also Andrew, a Reploid Bicentennial Man Shout-Out who decided to modify his body so he could grow old alongside his human wife. By the time of the Legends series, there's absolutely no distinction between actual humans (which are extinct) and Reploids, now called Carbons.
    • At the end of a (possibly gaiden) manga belonging to the X series, X makes a cross out of junk to put on the tomb of a fallen enemy and asks Zero: where do Reploids go when they die?
  • Amarrian NPCs in EVE Online do not use clones, because they believe cloning damages the soul.
  • Sid Meier's Alpha Centauri has numerous quotes exploring both this and the flipside, cybernetic enhancement, though the game plot does not.
  • Miss Bloody Rachel, the one-woman-robot Boss Rush in Viewtiful Joe 2 is taught to feel emotions by the heroes over the course of their battles...somehow. Of course, after this, her creator sees this as an irreparable glitch and electrocutes her. "What use is an android with a heart?!" She gets better.
  • In Digital Devil Saga, everyone in the Junkyard turns out to be an AI, including your party. They spend a lot of the second game wondering whether they're really people, before coming to the conclusion that yes, they are, because all people are made of data.
  • In Persona 3, Aigis is basically the living embodiment of this trope. When Junpei expressed surprise (and no small amount of outrage) that a "friggin' robot!" could manifest a Persona, it was explained that Aigis' AI was given an independent, self-aware personality, as well as a humanoid appearance, for that specific purpose. It backfires on The Chessmaster when said personality grows attached to her allies, and eventually she becomes fully human in everything but her physical body.
  • Xenogears: Do colonies of nanomachines dream of being hugged by their daddy? The answer is yes, and it turns them into a good-looking adult made up of a one-billion-strong nanomachine army.
    • Xenosaga: Does the android of mass destruction have a soul? Yes, once again, and she's actually the girlfriend of the messiah.
      • A more direct version are Realians, Ridiculously-Human Robots that can actually undergo therapy to deal with issues (one Combat Realian has mental trouble with battle). It is also said that Realians have an "Emotional Layer" that's considered "optional." This brings distress to MOMO.
  • Surprisingly, this makes TEC in Paper Mario: The Thousand-Year Door one of the most well-developed characters in the game. He starts off as just a hyper-intelligent mainframe for the X-Nauts, then falls in love with Peach and goes through a period of What Is This Thing You Call Love?, before pulling a truly tear-jerking Heroic Sacrifice at the end to try to protect her, at the cost of all his data relating to Peach and all his Artificial Consciousness functions. Many Manly Tears were shed. Of course, he gets better.
  • World of Warcraft: in the Deadmines instance, Vanessa VanCleef uses a potent hallucinogen on the players, causing them to experience the nightmares of four of her henchmen, including the Foe Reaper 5000. It would seem this robot does indeed dream, and has some scary ones:

Vanessa: Can you imagine the life of a machine? A simple spark can mean the difference between life...and death...

  • In the Backstory for Mass Effect, the quarians created a machine race, the geth, to serve as mindless labor. Over time, they slowly added more to their programming, to the point where they were able to learn and adapt. This naturally led to the geth pondering the nature of their existence. When a geth finally asked its overseer "do these units have a soul?" the quarians decided to shut them down. Unfortunately, the geth were too far along in the road to true sentience and fought back. The war was an absolute disaster for the quarians; the geth drove them from their colonies and homeworld, and forced them into exile. This should sound familiar.
    • In the first game, the geth are uniformly your enemies, even though you can argue about the initial rebellion with Tali. In the sequel, you get a "true geth" teammate, Legion, who explains that the geth working for the Reapers are a separate geth faction, the "heretics"; normal geth just want to be left alone. In game dialogue with Legion, this trope comes up quite a few times as well.
      • It's revealed the geth don't hold any hard feelings toward the quarians, and perhaps even feel sorry for the quarians killed during the war. The geth don't mine the quarian homeworld, but have actually rebuilt a lot of the damaged infrastructure in anticipation of the quarians' return. Shepard likens this to a war memorial, but points out that geth don't technically die. Legion responds that the geth do it for the quarians who died in the war.
      • Legion has a piece of Commander Shepard's N7 armor welded to him, a seemingly impromptu repair job from a sniper rifle shot. If you pressure Legion and ask why they used a piece of your armor as a repair job? "No data available."
      • The Shadow Broker DLC also shows that Legion has donated money to charities for the heretics' victims.
    • There's a Crowning Moment of Heartwarming in the third game (assuming certain prior conditions are met) when Tali tells Legion that yes, he has a soul. EDI also states that the fact that Legion refers to itself as "I" rather than "We" indicates that it achieved full sentience.
    • In the third game, EDI also wonders about this a great deal. Her questions about human behavior and her own responses, including being inspired to rewrite her own self-preservation code, prove that she certainly has emotions, and if that wasn't enough, the possible relationship she can develop with Joker proves it beyond a doubt.
  • The issue of robot civil rights appears to be a divisive one in the Black Market universe. If the side missions are anything to go by, very few humans consider robots and AIs to be people at all, while the main robot rights campaign undermines itself by employing exceedingly dubious methods.
  • In Fallout 3, there is a mission where a professor asks you to find an android. After asking around for the android, you are confronted by a group of people who specifically help androids to escape from slavery.
  • Used differently in Chrono Trigger, where the humanity (or lack thereof) of androids like Robo is simply never questioned. The only noticeable difference between them and humans is that they are allowed to be killed. Robot familial ties and emotions are alluded to multiple times.
    • However, Robo seems to be the only robot who feels these emotions, as shown by the reaction of his "brothers" who attack him without mercy, since he's technically malfunctioning. Then again, in the credits he's shown together with a pink robot, so we don't know if independence is the default state or not.
      • Going on the evidence in-game, only certain robots were built with emotions and independence - those designed by Mother Brain specifically to Kill All Humans. The R6 Series (the ones who attack Robo in the Factory) aren't in this category; Prometheus (a.k.a. Robo) and Atropos (the pink robot) are.

Visual Novels

  • In Da Capo there is a Robot Girl. Most of the plot relating to her is about how human she is. She claims that she can dream, while her creator dismisses it.
  • In Bionic Heart, the protagonist struggles with the fact that his android love interest seems to feel and express emotions just as any human would. It certainly helps that she has a functioning human brain and can access some human memories.


Web Comics

  • In the opening scene of Artifice, two security guards debate the status of a new android soldier and whether it deserves the title of "Artificial Person".
  • In The Inexplicable Adventures of Bob, Princess Voluptua (coming as she does from an advanced alien race and therefore being familiar with AIs) asks Roofus the Robot outright if he is artificially conscious or "just artificially intelligent." Roofus admits he doesn't know, and Voluptua concludes he is conscious, because "a simple A.I. would lie about it."
  • Chapter 10 of Megatokyo begins with Robot Girl Ping waking up from a dream.
  • In Narbonic, the AI Lovelace falls in love, experiences loss, and even wins 'her' emancipation in the epilogue. But true to the trope, the catalyst for all of this was someone acknowledging her as more than just a machine.
  • The robots of Gunnerkrigg Court seem to have distinct personalities, their own society beyond the eyes of the human inhabitants, and a near-religious regard for the mysterious Tiktoks. They also seek answers to questions regarding their purpose and meaning, as well as how to improve themselves (one of the most prominent questions being "why did our creator engineer the death of the woman he loved?"). The Guides "don't deal in electric appliances". Also, the robots turned out to be golems that "advanced" from the original Magitek artwork toward a Personality Chip stuck in mass-produced hardware: often unimpressive, but usually made for the job.
  • In Freefall, Sam is surprised to hear that the local robots are taking an interest in religion. After all, robots don't have souls - or do they? The bible seller replies "I think that's what they are trying to find out."
    • It turns out later that after a series of mishaps with their damaged-in-transit automated factories, the colonists went with a different type of AI which was originally designed as an uplift program.
    • The AI of the Savage Chicken (the one that didn't come from Jean) is somewhat confused, even about itself, as to how sentient or free-willed it is and isn't. Either way, it's pretty creative (when trying to hurt Sam) and comes up with some good quips. In Sam's opinion, it's more human than it realizes.

Ship: I would not want to be conscious. It sounds very limiting.

Web Original

  • Friendship Is Witchcraft: The kids are cheerfully taught in school that robots don't have souls and are second-class citizens. In reality, Sweetie Belle is one of the few moral[4] characters in the series.


Western Animation

  • In the episode "His Silicon Soul" of Batman: The Animated Series there is a robot doppleganger of Batman, who attempts to kill him as part of a plot to create a robot army to take over the world. It was built to think it was Batman and when it discovered it was a robot, grew resentful of the real Batman and wanted to have his life. However, when it believes it has killed him, it is horrified and commits suicide in despair. This causes Batman to wonder to Alfred, in the final lines of the episode:

Bruce: It seems it was more than wires and microchips after all. Could it be it had a soul, Alfred? A soul of silicon, but a soul nonetheless?

    • D.A.V.E. from The Batman episode "Gotham's Ultimate Criminal Mastermind" thinks he's the greatest villain in Gotham City but is actually a program based on the psychological profiles of Arkham's most dangerous criminals. Batman defeats him by confronting him about his lack of origin story.
  • In the first episode of Futurama, Bender is introduced as nothing more than a bending robot who follows his programming. Then he gets his antenna jammed into a light socket, and suddenly he has become more than his programming intended.
    • Actually, Bender is introduced as a robot who tries to commit suicide until Fry stops him from doing so. Since he doesn't renew his efforts after his accident, the argument could be made that his "soul" struggled with his programming to let him be an individual, and that the conflict drove him to despair; but once the light socket shorted out his program, he became free to pursue his own destiny.
      • Given that the robots have been presented as fully sentient in every other episode of the show, it may just be a case of Rule of Funny or Continuity Drift.
      • Or the lightbulb turned him evil. He didn't show any signs of being evil until after the lightbulb incident.
    • Bender dreams of killing all humans. And occasionally the number 2.

Conan O'Brien: Listen pal, I may have lost my freakishly long legs in the War of 2012, but I still have something you'll never have: a soul.
Bender: Meh.
Conan O'Brien: And freckles!
Bender: (sobs uncontrollably)

  • In an episode of South Park, Eric Cartman pretends to be a robot to learn Butters's secrets, but gets kidnapped by the U.S. military while still in disguise. Cartman tries to convince the military that he's not a robot, but they believe he's a robot programmed to think it is a human with memories. When Butters rescues Cartman, the general is in the middle of delivering An Aesop on the situation when Cartman accidentally farts, exposing himself.
  • Dr. Wakeman of My Life as a Teenage Robot asks herself that exact question when her robot daughter Jenny "XJ9" Wakeman asks for the ability to Dream.


Footnotes

  1. To be fair, Racter sounds like it was a simple template-based text generator and its output was likely cherry-picked for clarity and "poeticness".
  2. Not necessarily a complete failure: there are uses for somewhat-sapient idiot savants who don't need to sleep and who, with simple precautions, can survive utter destruction to tell what happened.
  3. Yes. We don't know if they dream of Mareep, however.
  4. (least amoral)