Chinese room

The Chinese room is a thought experiment designed by John Searle in his 1980 article "Minds, Brains, and Programs", largely as a response to Alan Turing's Turing test and functionalist approaches to the mind. It was designed to prove that computer programs will never be able to create minds, by showing (for a certain value of "showing") that a computer could behave as if it were intelligent while operating in a purely mechanical way, lacking the comprehension that we intuitively believe to be part of intelligence. The experiment has become well known and influential in various scientific fields, especially cognitive science.[1]


The experiment

Searle describes the thought-experiment as follows:

Suppose that I'm locked in a room and given a large batch of Chinese writing...[but] to me, Chinese writing is just so many meaningless squiggles. Now suppose further that after this first batch of Chinese writing I am given a second batch of Chinese script together with a set of rules for correlating the second batch with the first batch. The rules are in English, and I understand these rules as well as any other native speaker of English. They enable me to correlate one set of formal symbols with another set of formal symbols, and all that 'formal' means here is that I can identify the symbols entirely by their shapes. Now suppose also that I am given a third batch of Chinese symbols together with some instructions, again in English, that enable me to correlate elements of this third batch with the first two batches, and these rules instruct me how to give back certain Chinese symbols...from the point of view of somebody outside the room in which I am locked -- my answers to the questions are absolutely indistinguishable from those of native Chinese speakers. Nobody just looking at my answers can tell that I don't speak a word of Chinese.

The conclusion is that, just as the man in the thought experiment produces coherent answers without comprehending Chinese, a machine cannot possibly become "conscious": it is merely following a program and does not understand anything itself.

The setup turns the Turing Test on its head. A human, equipped with enough time and the right algorithms, imitates a computer that imitates a human being. There's nothing remarkable or special about using Chinese in this thought experiment; it is merely something sufficiently different from English, or any related language with a Roman alphabet, to provide a good visual image to work with.[note 1] It could be many things: French, German, a made-up language, or something else entirely, such as processing scientific data loaded with jargon. That is, we can ask whether an individual, loaded with the right sort of tools, can imitate a knowledgeable expert without absorbing the information themselves. Some people suggest that this is, in fact, what school sets out to achieve.[2]

The experiment was briefly demonstrated on an episode of the BBC's Horizon programme, "The Hunt for AI". Here, mathematician Marcus du Sautoy sat in a room and compared Chinese script put through a letterbox with a book of common phrases, successfully returning answers to questions. In this demonstrative version, the algorithm is far simpler than the one imagined by Searle. In reality, mimicking a language would require far more information and a far larger instruction book, as well as requiring the person-powered algorithm to retain some memory, meaning it might take years for a human to process even the simplest idea. In the realm of a thought experiment, however, such a limitation is of little concern.
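For illustration, the phrase-book version demonstrated on Horizon amounts to a simple lookup table. The sketch below is a toy, not Searle's formulation or the programme's actual material: the phrases, replies and fallback line are all invented here.

```python
# A toy "phrase-book" Chinese room: the operator matches the shapes of the
# incoming symbols against a rule book and copies out the prescribed reply,
# understanding neither question nor answer.
# (Illustrative only; the phrases and rules are made up for this sketch.)

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",       # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",        # "Do you speak Chinese?" -> "Of course."
    "今天天气怎么样？": "天气很好。",    # "How's the weather today?" -> "It's lovely."
}

def operator_reply(symbols):
    """Return whatever the rule book prescribes; only shapes are compared."""
    return RULE_BOOK.get(symbols, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."

if __name__ == "__main__":
    for question in ("你好吗？", "今天天气怎么样？"):
        print(question, "->", operator_reply(question))
```

As the article notes, genuinely mimicking a language would need memory and a vastly larger rule set; a dictionary lookup like this only handles stock phrases.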

In contrast to the Turing Test

The aim of the Chinese Room thought experiment is to say that machines will never truly comprehend what they are saying, even if they do pass the Turing Test, because they lack any concept of semantics. The Turing Test determines whether a machine is "conscious" by effectively asking it "are you conscious?" and seeing if it replies "yes". This was originally called "the imitation game", as we are asking whether the machine can imitate a human well enough to fool another human. Meanwhile, the internet has been asking humans if they can fool others into thinking that they're non-intelligent chatbots since the early 90s.

The Turing Test comes from a concept within computer programming that code should pass certain tests in order to be considered functional. The tests are laid out in advance, and the code is considered to "work" if it passes all specified tests without error; this is independent of what the code actually does behind the scenes, hidden from the user. An analogy is made with artificial flight, which was successfully achieved when people stopped trying to make a replica of a bird and simply looked at building a machine that would pass the required test: namely, fly. In the case of the Turing Test, the same principle is applied to sentience and intelligence. As an outward display of intelligence and the ability to communicate is pretty much the only evidence we can get from other humans to prove that they are conscious ("Hey, you can totally trust that I'm not a figment of your imagination or a computer program!"), the Turing Test simply holds machines to the same standard.
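The software-testing analogy can be made concrete with a small sketch: a behavioural test suite only inspects outputs for given inputs, so any implementation that produces the right outputs passes, whatever its internals look like. The two sort functions and their tests below are invented purely for illustration.

```python
import unittest

# Two implementations of "sort": one hand-rolled, one delegating to Python's
# built-in. The test suite only inspects outputs, so both pass; the internals
# are irrelevant to the verdict, just as the Turing Test only inspects
# conversational behaviour. (Names and tests are illustrative.)

def sort_by_insertion(items):
    out = []
    for x in items:
        i = 0
        while i < len(out) and out[i] <= x:
            i += 1
        out.insert(i, x)
    return out

def sort_by_builtin(items):
    return sorted(items)

class BehaviouralTests(unittest.TestCase):
    def check(self, fn):
        self.assertEqual(fn([3, 1, 2]), [1, 2, 3])
        self.assertEqual(fn([]), [])
        self.assertEqual(fn([5, 5, 1]), [1, 5, 5])

    def test_insertion(self):
        self.check(sort_by_insertion)

    def test_builtin(self):
        self.check(sort_by_builtin)

if __name__ == "__main__":
    unittest.main()
```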

Searle's Chinese Room reverses the test-focused nature of the Turing Test, suggesting that "true" intelligence and self-awareness are deeper and not based on an observable (and apparently superficial) result. Instead, it asks what must be going on inside the machine's brain to see if it really does think, which is analogous to going back to the days of trying to build a bird rather than a machine that would actually fly. The problem here is that such a thing is difficult: all we have to prove a human is conscious is their ability to imitate a human, as cutting open a brain and having a poke around doesn't, in itself, prove anything. Turing designed the imitation game precisely to avoid this sort of difficulty, drawing inspiration from the test-focused concepts within computer science.

Problems

There are a few problems with this interpretation of the result, however, as well as the form of the thought-experiment.

Consciousness and emergence

A computer that passes the Turing Test is no more alive when it is switched off than the code is alive when it's printed onto paper and left stored in a room, much in the same way that a heap of neurons on their own isn't alive: most people agree that a corpse that hasn't yet rotted to dust isn't actually alive. It takes all of them working together to make consciousness. When a hypothetical Turing-Test-compliant computer is switched on and its program executed, it produces a result indistinguishable from human consciousness. Because of this, asking whether the man in the room understands Chinese while following an algorithm misses the point of how consciousness operates and where it actually stems from. In the isolated room, it's not the man that needs to comprehend or understand Chinese, but the code and the instructions. The man is simply a tool for executing the program, much in the same way that blood supply and electrical conduction by potassium and sodium ions are tools for executing the functions of the human brain. It is the algorithm itself, when combined with the operations of the man, that understands Chinese.

Searle responded to these criticisms of the thought experiment by suggesting an extension where the man completely internalizes the necessary algorithms, remembering and executing them within his own head. In this case, no external apparatus is doing the understanding; everything happens within the human brain. Yet the man in the thought experiment still doesn't understand Chinese.[3] However, critics are quick to point out that regardless of where the algorithm happens, whether with pens and paper, on a computer, or in someone's head, it is still the algorithm combined with an ability to execute it that does the work and the understanding. Thus, Searle's refutation doesn't actually answer this criticism at all; indeed, it's probably indicative that Searle didn't really understand the criticism in the first place. The Chinese Room ignores any emergent properties of consciousness: that it exists as part of the order and execution of the algorithm within a computer, or within the vast network of neurons in a human brain.

It's the algorithm, stupid

The experiment really proves little overall. The man is going through the same algorithms executed by a computer that can "understand" Chinese. He may be doing them manually (or even in his own head), but he is still performing them in the same manner. The pens, paper and filing cabinets needed to do this can be thought of as a form of "help". We can give him more help with a calculator, then a robot assistant to go through the filing cabinets and flip the pages in the instruction book. We can then keep adding more "help" and automation to the process, gradually, until the man is essentially typing into a computer to get the result, and at no point will we cross a line where it suddenly becomes the computer doing the work rather than the man. The man in the room is now talking to a Turing-Test-passing computer! A similar gradual process applies to Searle's internalization variant, although this would raise the issue of a separate Turing-Test-compliant computer inside the man's head and whether it would cause a bit of a schizophrenic episode (although this is a thought experiment, so such a thing would likely be impossible in reality and not cause a problem).
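The gradual-automation argument can be illustrated by one rule-following procedure run two ways: stepped through "by hand" or executed in a single automated call. The instruction list and function names below are made up for this sketch; the point is only that the output is fixed by the algorithm, not by whoever or whatever turns its crank.

```python
from functools import reduce

# One algorithm, two executors. The instructions are just data; whether a
# "man" follows them one at a time with scrap paper or a loop runs them all
# at once, the result is identical.
# (Instructions and function names are invented for this sketch.)

INSTRUCTIONS = [
    ("append", "你好"),    # step 1: write down a greeting symbol
    ("append", "，"),      # step 2: write down a comma
    ("append", "世界"),    # step 3: write down "world"
]

def apply_instruction(tape, instruction):
    op, arg = instruction
    if op == "append":
        return tape + arg
    return tape

def man_in_room(instructions):
    """Executes each instruction 'by hand', keeping notes on scrap paper."""
    scrap_paper = ""
    for step in instructions:
        scrap_paper = apply_instruction(scrap_paper, step)
    return scrap_paper

def automated_room(instructions):
    """The same procedure, run in one 'automated' reduction."""
    return reduce(apply_instruction, instructions, "")

if __name__ == "__main__":
    assert man_in_room(INSTRUCTIONS) == automated_room(INSTRUCTIONS)
    print(man_in_room(INSTRUCTIONS))  # -> 你好，世界
```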

But basically, this pretty much brings us back to whether the man, the agent that simply executes the emergent programme, needs to understand Chinese at all. For the thought experiment to conclude that computers cannot be conscious, this needs to be demonstrated. The man himself has no more need to understand the Chinese than the atoms in his body need to understand English when he describes the odd day he's been having, pushing strange symbols around in a locked room, to his friends down the pub.

Circular assumptions

Finally, the entire concept begs the question that human consciousness is special and different. In order to accept the conclusion that a machine cannot comprehend the way humans do, you have to assume that human thinking is different to start with; otherwise the experiment fails immediately, since you must assume that there is something different about the man and that he would somehow come to understand Chinese through the process. Compare a brain running on connected neurons with a computer running on silicon that completely and perfectly simulates a brain. The difference between the former and the latter is irrelevant, and so the Turing Test is valid, unless one assumes some non-materialist dualism first; hence the circularity. That understanding, some semantic "magic", can happen only in organic brains is nothing but bio-chauvinism that one has no right to assert.

We experience, and each of us can be sure of that for ourselves, as Descartes noted when he said "I think, therefore I am". But we have no proof that this exists in others, so we are reliant on projecting our consciousness onto people we can interact with. That this heuristic for assessing consciousness should only apply to human beings and cannot under any circumstances apply to computers is just special pleading.

Response vs Initiative

There of course remains the fact that a man inside the Chinese Room, conversing with a fluent speaker of Chinese, would only ever be able to respond. He could receive input and create a response with full accuracy, and could provide an output in expectation of a particular input, yet could not, for example, ask a series of questions seeking particular information. He would be unable to ask where he is, as he would not know the translation. Likewise, he would not know how to ask for food, or drink, or to be released. It is in this way that a distinct point is made via the experiment: a consciousness has to be able to learn, and to sustain itself. The man could arrange questions until food was provided, and then through testing associate that question with food; he would then have learned a question in Chinese which would produce an answer of food, and would know something about that part of the Chinese language. This contradicts the experiment's conditions, as he would in fact begin to understand Chinese; thus a system must have the capacity to learn, rather than just to respond.
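The learning point can be sketched as a trivial trial-and-error loop: an agent that can associate a string of symbols with an outcome (here, food arriving) has crossed from pure response into rudimentary learning. The candidate phrases and the environment's hidden rule below are invented for illustration.

```python
import random

# A toy trial-and-error learner. The agent emits candidate symbol strings it
# does not understand; the environment rewards exactly one of them with food.
# Once rewarded, the agent remembers the association and can reuse it, which
# is already more than the purely reactive room can do.
# (Phrases and the reward rule are invented for this sketch.)

CANDIDATE_PHRASES = ["红色", "我饿了", "再见", "开门"]  # meanings unknown to the agent
FOOD_PHRASE = "我饿了"                                  # environment's hidden rule ("I'm hungry")

def environment(phrase):
    """Return True if food is delivered in response to the phrase."""
    return phrase == FOOD_PHRASE

def learn_food_phrase(max_trials=100):
    """Try phrases at random; remember the first one that produces food."""
    for _ in range(max_trials):
        guess = random.choice(CANDIDATE_PHRASES)
        if environment(guess):
            return guess  # the association is now learned
    return None

if __name__ == "__main__":
    print("Learned food-producing phrase:", learn_food_phrase())
```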


See also

If you are looking for this article in Portuguese, see Quarto chinês.


Notes

  1. Assuming a western cultural bias, of course. Feel free to mentally substitute "Elvish" if this bothers you.

References
