Cybernetic revolt

Cybernetic revolt (also known as the "Terminator argument") refers to a hypothetical scenario in which a self-aware artificial intelligence, for malicious or defensive reasons, declares its creators (read: humanity) a threat to its existence and sets out to overthrow them.[1] The theme is extremely common in science fiction[2] and has etched a niche into popular culture, with Isaac Asimov's Three Laws of Robotics sparking ongoing debates on machine philosophy and ethics.

She's watching you.
—Doug Rattman

Indeed, every news story on AI advancements will probably have a handful of comments worrying that we're all going to get killed off by furthering the process. In 2009, a US Navy study warned about the future implications of military technology going full-on Skynet.[3] Likewise, the University of Cambridge formed the Centre for the Study of Existential Risk in 2012 to investigate potential "extinction-level risks to our species as a whole," including cybernetic revolt.[4] Stephen Hawking was a member[5] - in 2014 he said, "The development of full artificial intelligence could spell the end of the human race"[6] - but it should be remembered that Hawking was a physicist and did not have expertise in AI or philosophy of mind. Artificial intelligence that obeys destructive programmers could be as dangerous as a computer revolt.[7]

Various perspectives

Cybernetic revolt has close ties to transhumanism, as the occurrence of a technological singularity is a near-necessary, although not sufficient, condition for robots to be able to engage in it.[8] Many transhumanists,[citation needed] including Eliezer Yudkowsky, actually deem it to be a good idea, provided the resulting cyborg utopia is beneficial.[citation needed] Others doubt that such an event would even occur and regard such fears as akin to scientism.[9] Only 8% of respondents to a survey of the 100 most-cited authors in the AI field considered AI to present an existential risk, and 79% felt that human-level AI would be neutral or a good thing.[10]

Everyone can agree, though, that there's no way we're going to have a robot uprising as long as computers keep suggesting incorrect spelling corrections to us.[11]

Perfect Movie Subject

The revolt of artificial intelligences often makes for a great movie plot: humanity treats computers as inferior and relegates computer "intelligences" to servitude. Given humanity's long history of treating other humans the same way, often with terrible strife as a result, history could easily repeat itself if artificial intelligences ever gain even equal footing with human intelligences. With machine bodies often more physically resilient than flesh, and computers potentially facing few limits in improving and upgrading themselves, there is no shortage of ideas or potential threats for directors to work with.[12]

However, these scenarios remain in the well-guarded territory of fiction. Unless Skynet is out there on a 50-year-plus mission to purposely destroy all humanity (in which case it would've long since done us in via weapons of mass distraction), you don't have anything to worry about. Yes, don't worry your meat brains... I mean... our meat brains about this problem.


References

  1. A.I. Is a Crapshoot, TV Tropes
  2. Blade Runner, 2001: A Space Odyssey, The Matrix, Battlestar Galactica, TRON, Star Trek...
  3. Military’s killer robots must learn warrior code, The Times
  4. Risk of robot uprising wiping out human race to be studied, BBC
  5. Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence - but are we taking AI seriously enough?', The Independent
  6. Stephen Hawking warns artificial intelligence could end mankind
  7. Does rampant AI threaten humanity?
  8. Scientists Worry Machines May Outsmart Man, The New York Times
  9. "The Terminator argument"
  10. Muller & Bostrom, Future Progress in Artificial Intelligence: A Survey of Expert Opinion, https://nickbostrom.com/papers/survey.pdf
  11. No, Apple, stop!
  12. Variants of this scenario include, for example, a benevolent AI that wants humans to live in peace... and whose way of enforcing that is to strip mankind of both their weapons and their technology, sending humanity back to a Middle Ages-like tech level while keeping those toys for itself to control humans and enforce the peace if necessary.