chinese room

Over years of reading literature on topics ranging from philosophy of mind to artificial intelligence, I have encountered the Chinese room example in most of it. It provides very good intuition on the problem of thinking about thinking. My latest encounter with the Chinese room problem came in Steven Pinker’s “How the Mind Works”, the book I’m reading at the moment.

The Chinese room is a thought experiment presented by the philosopher John Searle to challenge the claim that it is possible for a computer running a program to have a “mind” and “consciousness” in the same sense that people do, simply by virtue of running the right program. The experiment is intended to help refute a philosophical position that Searle named “strong AI”:

The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.

To contest this view, Searle writes in his first description of the argument: “Suppose that I’m locked in a room and that I know no Chinese, either written or spoken.” He further supposes that he has a set of rules in English that “enable me to correlate one set of formal symbols with another set of formal symbols”, that is, the Chinese characters. These rules allow him to respond, in written Chinese, to questions, also written in Chinese, in such a way that the posers of the questions — who do understand Chinese — are convinced that Searle can actually understand the Chinese conversation too, even though he cannot. Similarly, he argues that if there is a computer program that allows a computer to carry on an intelligent conversation in a written language, the computer executing the program would not understand the conversation either.
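To make the mechanics concrete, here is a minimal sketch of such a rule-following system in Python. Everything in it is a hypothetical illustration rather than anything Searle specified: the table entries, the respond function, and the fallback reply are all made up. The point is only that the system maps symbols to symbols, with nothing anywhere representing what the symbols mean.

    # A toy Chinese room: the rule book is a plain mapping from input
    # symbols to output symbols. The entries are hypothetical placeholders.
    RULES = {
        "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
        "今天天气怎么样？": "今天天气很好。",  # "How's the weather?" -> "It's lovely."
    }

    def respond(question: str) -> str:
        # Follow the rules mechanically: look the symbols up, copy the
        # symbols back out. No step here involves understanding Chinese.
        return RULES.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."

    print(respond("你好吗？"))  # an outside observer sees a fluent reply

From the outside the exchange looks like conversation; on the inside it is one dictionary lookup.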

The experiment is the centerpiece of Searle’s Chinese room argument which holds that a program cannot give a computer a “mind”, “understanding” or “consciousness”, regardless of how intelligently it may make it behave.

In Pinker’s book, as in many others I’ve read, the Chinese room problem hangs in the air, dividing people into believers, who say that the room is thinking and is conscious, and non-believers, who reject that conclusion completely. The ultimate question has ties to “the little man in our head”, the so-called homunculus, and asks: “who does the thinking?”

Somehow, I always thought the answer was apparent, though I never noticed that it had gone unspoken until just recently. The thinking process is reduced to the man reading rules in English, which by no means can account for complex thinking and at best amounts to simple memory recall, so the right question should have been: “Who wrote those rules?” Since we’re interested in answering the thinking question, we should examine all of the processes involved, even those that are not presented in the description but are assumed by definition.

The Chinese room has the following parts that interact with each other: the room, the man, the rules, and the window (or slot) through which the man communicates with the outside world.

Clearly the room that holds the rules and the man does no thinking at all. It may provide convenient arrangement and storage of the rules; it may speed up the lookup and may even enable the thinking, but nothing more. To an external observer the room does appear to think, and even to possess the source of intelligence, which we’ll discuss at the end.

The man, though the most intelligent agent in the whole story, does no thinking either. He may do some thinking of his own (he is a man, after all), but none of it in relation to the Chinese room problem. As I mentioned above, he enables rule execution, or memory recall if he has memorized the rules, but nothing else.

The rules don’t do much thinking either, as they are fairly static objects, or memories in the man’s head. The man has no way of engaging with the rules in terms of appreciation or understanding, because their symbols are written in Chinese, a language unfamiliar to him. He can only execute the rules without understanding why. Hence, the rules don’t do the thinking either.

At this point the problem usually stalls, ignoring one last part of the Chinese room: someone who speaks Chinese and spent many years writing the rules. I won’t go into the argument of whether it is even possible to write such rules, given the exponential complexity of language, but in this hypothetical situation someone clearly did the thinking and wrote them. Playing out every possible input scenario, that someone had to think it through, consider many different contexts, and finally come up with a finite, albeit very large, set of rules. That hypothetical person who wrote the rules is the one who did all the thinking in advance, and is thus the answer to the question: “who is thinking?”
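In terms of the earlier sketch, this relocates the intelligence from the code that reads the table to whoever produced it. Here is a hypothetical extension of that sketch, with a made-up Speaker class standing in for the rule-writer; the names and entries are illustrative only, not part of Searle’s setup.

    # Hypothetical sketch: the deliberation happens once, up front,
    # when a competent Chinese speaker compiles the rule book.
    class Speaker:
        # Stands in for the person who actually understands Chinese.
        def anticipated_questions(self):
            return ["你好吗？", "今天天气怎么样？"]

        def decide_reply(self, question):
            # Real thinking would happen here: weighing meaning, context,
            # and tone. The toy version just hard-codes the outcome.
            replies = {"你好吗？": "我很好，谢谢。",
                       "今天天气怎么样？": "今天天气很好。"}
            return replies[question]

    def compile_rules(speaker):
        # All of the intelligence is spent in this loop, in advance.
        return {q: speaker.decide_reply(q) for q in speaker.anticipated_questions()}

    def room(rules, question):
        # The room itself only replays decisions that were already made.
        return rules.get(question, "对不起，我不明白。")

    print(room(compile_rules(Speaker()), "你好吗？"))

By the time the room is running, the table is frozen; every apparent flash of intelligence was paid for earlier, at compilation time.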

It does, of course, create more questions than it answers. Does it mean that the thinking machines we’ll create will have no thinking powers of their own, since we did all the thinking for them? Does it mean that there must be a creator for every thinker? … and so on. I can probably come up with a ton of other nonsensical questions to exploit my conclusion, but it won’t do any good. One thing we must remember is that this answer is the solution to this particular, purely hypothetical problem, which describes a nonsensical apparatus that has no real purpose or utility and is static by definition. That staticness excludes any possibility of learning and experience, and leaves the Chinese room as nothing more than a good brain teaser.