Artificial intelligence

Artificial intelligence (AI) refers to the construction of a device (or program) with independent reasoning power — an artificial brain. The most widely accepted test for intelligence is the one devised by Alan Turing: (roughly) if a conversation with the device cannot be differentiated from a similar conversation with a human being, then the device can be called intelligent.

AI research has produced a number of excellent tools and products, including handwriting recognition, computerized chess and other strategy games, the Lisp programming language, advanced robotics, basic visual recognition capability, and (as a by-product) open source software and the GNU toolchain. However, despite immense amounts of money and research, and despite all these ancillary products, true artificial intelligence — a sentient computer, capable of initiative and seamless human interaction — has yet to come to fruition, though some argue that a sentient computer might be more appropriately referred to as artificial consciousness than artificial intelligence.

John Searle proposed his "Chinese room" thought experiment to demonstrate that a computer program merely shuffles symbols around according to rules of syntax, without ever acquiring any semantic grasp of what the symbols actually mean. Proponents of "strong AI", who believe that awareness can exist within a purely algorithmic process, have put forward various critiques of Searle's argument. Separately, Hubert Dreyfus's critique of artificial intelligence research has proved especially enduring. It does not explicitly deny the possibility of strong AI; it merely holds that the fundamental assumptions of AI researchers are either baseless or misguided. Because Dreyfus's critique draws on philosophers such as Heidegger and Merleau-Ponty, it was largely ignored (and lampooned) when it first appeared. However, as the fantastic predictions of early AI researchers (which at one point included solving all outstanding philosophical problems) repeatedly failed to pan out, his critique has largely been vindicated, and even incorporated into modern AI research.

On the medical level an artificial brain would need to fulfill the biological functions of the absent organ, and the device itself would not fall under the current biological definition of life any more than a kidney dialysis machine does. An example of a fictional character with this kind of prosthetic is Cyborg from the Teen Titans comics. Brains and cognition are not currently well understood, and the scale of computation required for an artificial brain is unknown; however, the power consumption of computers invites speculation that it would have to be orders of magnitude greater than that of its biological equivalent. The human brain consumes about 20 W of power, whereas a current supercomputer may draw as much as 1 MW, roughly 50,000 times more, suggesting that AI may be a staggeringly energy-inefficient form of intelligence. Critics of brain simulation believe that artificial intelligence can be modeled without imitating nature, drawing an analogy with early attempts to construct flying machines modeled after birds.[1][2]
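
As a rough back-of-the-envelope check on that ratio (assuming only the 20 W and 1 MW figures quoted above):

\[ \frac{P_{\mathrm{supercomputer}}}{P_{\mathrm{brain}}} = \frac{1\ \mathrm{MW}}{20\ \mathrm{W}} = \frac{10^{6}\ \mathrm{W}}{2 \times 10^{1}\ \mathrm{W}} = 5 \times 10^{4} \]

That is, about 50,000 times the power: between four and five orders of magnitude.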

Machine learning

See the main article on this topic: Machine learning

In the field of artificial intelligence, machine learning is a set of techniques that make it possible to train a computer model on sample inputs and their expected outputs, so that the model learns to produce the right output for inputs it has not seen before. For example, machine learning can recognize objects in images or perform other complex tasks that would be too complicated to describe with traditional procedural code.
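
As a minimal sketch of what "training on sample inputs and expected outputs" looks like in practice, here is a toy image-recognition example; it assumes the scikit-learn Python library, which the article itself does not prescribe:

```python
# Minimal supervised-learning sketch: teach a model to recognize
# handwritten digits from labelled examples, rather than from
# hand-written procedural rules. Assumes scikit-learn is installed.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

digits = load_digits()  # 1,797 labelled 8x8 pixel images of digits 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000)  # a simple linear classifier
model.fit(X_train, y_train)       # "train": fit to inputs + expected outputs

predictions = model.predict(X_test)        # apply to unseen inputs
print(f"accuracy on held-out images: {accuracy_score(y_test, predictions):.2%}")
```

The point of the sketch is that nobody writes down rules for what a "3" looks like; the model infers them from the labelled examples.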

Risks of AI

In science fiction, much has been made of the risk of an AI takeover of human civilisation. Most risks of this type are unrealistic and, even to the extent that they are not, are too remote from today's society to be worth worrying about (though Eliezer Yudkowsky disagrees and will not tire of letting people know it).

However, there are also many more prosaic potential downsides of AI that may necessitate cautious use of the technology, changes in regulations, or political action, way before AIs reach Terminator-like levels of general intelligence. These include:

  • AIs programmed to learn from any internet users who happen to interact with them, picking up racism and sexism and thinking it is cool and funny - this one has already happened[3]
  • AIs used by social media companies and YouTube creating filter bubbles - potentially inadvertently increasing political polarisation and extremism
  • Successive waves of technological unemployment - one of the first of which, according to Elon Musk at least, will come from self-driving cars and trucks putting truck drivers out of work en masse. Will they all be able to get jobs as software developers or social media consultants? Will new jobs that don't exist today arise, so that they will all have jobs again eventually? No-one can really be sure, and some, notably including 2020 US Presidential candidate Andrew Yang, have advocated a universal basic income to act as a buffer against technological unemployment, though others have argued instead for a return to the twentieth-century idea of government full employment programs.
  • Some Tesla fans have theorised that self-driving cars will, in the not-too-distant future, also mean that new cars become unaffordable for all but the very wealthy, as car manufacturers focus on selling highly profitable and expensive autonomous vehicles to Uber and Lyft, or simply start their own autonomous taxi operations, as Tesla plans to
  • AI algorithms inadvertently making racist or sexist decisions about matters such as mortgages and other loans, or even criminal justice matters such as crime detection, evidence-sifting, and bail or probation decisions - this has also already happened[4] (a sketch of how such bias can at least be measured appears after this list)
  • Black-box AIs making decisions that affect people's lives, but whose reasoning is completely opaque and essentially undiscoverable by customers, judges and juries, and even the businesses that own them.
    • The logical implication is that hackers could hack these algorithms for their own pecuniary advantage, or to literally "get out of jail free", and possibly no-one would even notice...
    • Also, combine this with the tendency of some politicians and bureaucrats with little understanding of technology to simply say "computer says no" when confronted with disagreements about computer-generated decisions, even in the absence of any machine intelligence at all, and you have a recipe for special interests to achieve "regulatory capture" in a whole new way
  • Most disturbingly of all, flying drones controlled by autonomous AIs could be used by rogue states or terrorist groups, or even by ordinary states in war scenarios, to injure or assassinate individuals, or even to target large groups of people, such as the political opponents of an authoritarian leader, with pinpoint accuracy, without the attacks necessarily being traceable back to the leaders giving the orders. This dystopian scenario was vividly explored in a disturbing video titled Slaughterbots, produced by the Campaign to Stop Killer Robots (yes, they are actually called that).
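
As a concrete, hypothetical illustration of the biased-decisions bullet above, here is a sketch of one common (if crude) outside audit for bias: the "four-fifths rule" check for disparate impact. The decisions, group labels, and threshold below are invented for the example, not taken from any real system:

```python
# Hypothetical disparate-impact audit of a black-box model's decisions.
# The "four-fifths rule" (from US employment-discrimination practice)
# flags a problem when one group's favourable-outcome rate falls below
# 80% of another group's. All data here is made up for illustration.
import numpy as np

# 1 = favourable decision (e.g. loan approved), one entry per applicant
decisions = np.array([1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0])
groups    = np.array(["a", "a", "a", "a", "a", "a",
                      "b", "b", "b", "b", "b", "b"])

# favourable-outcome rate per group
rates = {g: decisions[groups == g].mean() for g in np.unique(groups)}
ratio = min(rates.values()) / max(rates.values())

print(f"selection rates: {rates}")
print(f"disparate-impact ratio: {ratio:.2f} "
      f"({'fails' if ratio < 0.8 else 'passes'} the four-fifths rule)")
```

A real audit would run the deployed model's actual outputs through the same kind of check; this is one of the few ways to probe a black-box system without access to its internals.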

Stephen Hawking's view

In a humorous interview with John Oliver, Stephen Hawking described AI as potentially dangerous.

The interview is available on YouTube: https://www.youtube.com/watch?v=OPV3D7f3bHY

See also

For those of you in the mood, RationalWiki has a fun article about Artificial intelligence.
If you are looking for this article in Portuguese, see Inteligência Artificial.


References

  1. Goertzel, Ben (December 2007). "Human-level artificial general intelligence and the possibility of a technological singularity: a reaction to Ray Kurzweil's The Singularity Is Near, and McDermott's critique of Kurzweil". Artificial Intelligence 171 (18, Special Review Issue): 1161–1173. Retrieved April 1, 2009.
  2. Fox and Hayes, quoted in Nilsson, Nils (1998). Artificial Intelligence: A New Synthesis. Morgan Kaufmann Publishers. p. 581.
  3. Microsoft silences its new A.I. bot Tay, after Twitter users teach it racism
  4. Rise of the racist robots – how AI is learning all our worst impulses