AI effect

The AI effect occurs when onlookers discount the behavior of an artificial intelligence program by arguing that it is not real intelligence.[1]

Author Pamela McCorduck writes: "It's part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, 'that's not thinking'."[2] AI researcher Rodney Brooks complains: "Every time we figure out a piece of it, it stops being magical; we say, 'Oh, that's just a computation.'"[3]

"The AI effect" tries to redefine AI to mean: AI is anything that has not been done yet

A view taken by some people trying to promulgate the AI effect is: As soon as AI successfully solves a problem, the problem is no longer a part of AI.

McCorduck calls it an "odd paradox" that "practical AI successes, computational programs that actually achieved intelligent behavior, were soon assimilated into whatever application domain they were found to be useful in, and became silent partners alongside other problem-solving approaches, which left AI researchers to deal only with the 'failures', the tough nuts that couldn't yet be cracked."[4]

When IBM's chess-playing computer Deep Blue succeeded in defeating Garry Kasparov in 1997, people complained that it had only used "brute force methods" and that it wasn't real intelligence.[5] Fred Reed writes:

"A problem that proponents of AI regularly face is this: When we know how a machine does something 'intelligent,' it ceases to be regarded as intelligent. If I beat the world's chess champion, I'd be regarded as highly bright."[6]

Douglas Hofstadter expresses the AI effect concisely by quoting Larry Tesler's Theorem:

"AI is whatever hasn't been done yet."[7]

When problems have not yet been formalised, they can still be characterised by a model of computation that includes human computation. The computational burden of a problem is split between a computer and a human: one part is solved by the computer and the other by the human. This formalisation is referred to as a human-assisted Turing machine.[8]
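As a loose illustration of this split (a sketch only; the function names and the console-based "oracle" below are invented for this example, not taken from Shahaf and Amir's formalism), a program can solve the subproblems it knows how to formalise and delegate the rest to a person:

    # Minimal sketch of human-assisted computation: the machine handles
    # the formalised subproblems and defers the rest to a human "oracle".
    # All names here are illustrative, not from the cited paper.

    def machine_step(numbers):
        """Formalised subproblem: the machine sums a list of numbers."""
        return sum(numbers)

    def human_step(question):
        """Unformalised subproblem: delegated to a person via the console."""
        return input(question + " ")

    def solve():
        # The computational burden is split: arithmetic goes to the machine,
        # a perceptual judgement goes to the human oracle.
        total = machine_step([2, 3, 5])
        label = human_step("Does the attached image show a cat? (yes/no)")
        return total, label

    if __name__ == "__main__":
        print(solve())

The overall computation only produces an answer because the human supplies the step the machine cannot yet perform, mirroring how a human-assisted Turing machine charges part of the work to a human oracle.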

AI applications become mainstream

Software and algorithms developed by AI researchers are now integrated into many applications throughout the world, without really being called AI.

Michael Swaine reports "AI advances are not trumpeted as artificial intelligence so much these days, but are often seen as advances in some other field". "AI has become more important as it has become less conspicuous", Patrick Winston says. "These days, it is hard to find a big system that does not work, in part, because of ideas developed or matured in the AI world."[9]

According to Stottler Henke, "The great practical benefits of AI applications and even the existence of AI in many software products go largely unnoticed by many despite the already widespread use of AI techniques in software. This is the AI effect. Many marketing people don't use the term 'artificial intelligence' even when their company's products rely on some AI techniques. Why not?"[10]

Marvin Minsky writes "This paradox resulted from the fact that whenever an AI research project made a useful new discovery, that product usually quickly spun off to form a new scientific or commercial specialty with its own distinctive name. These changes in name led outsiders to ask, Why do we see so little progress in the central field of artificial intelligence?"[11]

Nick Bostrom observes that "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labelled AI anymore."[12]

Legacy of the AI winter

Many AI researchers find that they can procure more funding and sell more software if they avoid the tarnished name of "artificial intelligence" and instead pretend their work has nothing to do with intelligence at all. This was especially true in the early 1990s, during the second "AI winter".

Patty Tascarella writes, "Some believe the word 'robotics' actually carries a stigma that hurts a company's chances at funding."[13]

Saving a place for humanity at the top of the chain of being

Michael Kearns suggests that "people subconsciously are trying to preserve for themselves some special role in the universe".[14] By discounting artificial intelligence, people can continue to feel unique and special. Kearns argues that the change in perception known as the AI effect can be traced to the mystery being removed from the system: being able to trace the cause of events implies that it is a form of automation rather than intelligence.

A related effect has been noted in the history of animal cognition and in consciousness studies: every time a capacity formerly thought to be uniquely human is discovered in animals (e.g. the ability to make tools, or to pass the mirror test), the overall importance of that capacity is deprecated.

Herbert A. Simon, when asked about the lack of AI's press coverage at the time, said, "What made AI different was that the very idea of it arouses a real fear and hostility in some human breasts. So you are getting very strong emotional reactions. But that's okay. We'll live with that."[15]

Notes

  1. Haenlein, Michael; Kaplan, Andreas (2019). "A Brief History of Artificial Intelligence: On the Past, Present, and Future of Artificial Intelligence". California Management Review. 61 (4): 5–14. doi:10.1177/0008125619864925.
  2. McCorduck 2004, p. 204.
  3. Kahn, Jennifer (March 2002). "It's Alive". Wired. 10 (3). Retrieved 24 Aug 2008.
  4. McCorduck 2004, p. 423.
  5. McCorduck 2004, p. 433.
  6. Reed, Fred (April 14, 2006). "Promise of AI not so bright". The Washington Times.
  7. As quoted by Hofstadter (1980, p. 601). Larry Tesler actually feels he was misquoted: see his note in the "Adages" section of his website.
  8. Shahaf, Dafna; Amir, Eyal (2007). "Towards a Theory of AI Completeness". Commonsense 2007, 8th International Symposium on Logical Formalizations of Commonsense Reasoning.
  9. Swaine, Michael (September 5, 2007). "AI - It's OK Again! Is AI on the rise again?". Dr. Dobb's Journal.
  10. Stottler Henke. "AI Glossary".
  11. Minsky, Marvin. "The Age of Intelligent Machines: Thoughts About Artificial Intelligence". Archived from the original on 2009-06-28.
  12. Quoted in "AI set to exceed human brain power". CNN.com. July 26, 2006.
  13. Tascarella, Patty (August 11, 2006). "Robotics firms find fundraising struggle, with venture capital shy". Pittsburgh Business Times.
  14. Flam, Faye (January 15, 2004). "A new robot makes a leap in brainpower". Philadelphia Inquirer. Available from Philly.com.
  15. Hann, Reuben L. (1998). "A Conversation with Herbert Simon". Gateway. IX (2): 12–13. (Gateway is published by the Crew System Ergonomics Information Analysis Center, Wright-Patterson AFB.)

References

Hofstadter, Douglas R. (1980). Gödel, Escher, Bach: An Eternal Golden Braid. New York: Vintage Books.
McCorduck, Pamela (2004). Machines Who Think (2nd ed.). Natick, MA: A. K. Peters.