Maths & the Chinese Room Problem

From: Chen Yixiong, Eric (cyixiong@yahoo.com)
Date: Thu Aug 23 2001 - 00:18:55 MDT


About a year ago, I engaged in a short debate with a lecturer about whether students understand the homework they do. This lecturer insisted that since students can solve the homework problems, they must understand the material. Since he marks my assignments, the case was closed when I could not figure out a counter-argument within three seconds.

A fortnight ago, I started reading an excellent and highly recommendable book called "e: The Story of a Number", which explains the mathematical history and concepts related to the number e, ranging from calculus and logarithmic spirals in nature to the feud between those great mathematical geniuses, Newton and Leibniz.

When I read about differentiation, integration and limit theory, a lot of questions that had eluded me in the past suddenly became clear. What the hell do differentiation and integration actually mean, and what do they do? And what does that number with the little arrow pointing to zero mean? I realised the beauty and power of what previously seemed meaningless to me.
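For example, the "arrow pointing to zero" is the limit in the definition of the derivative, which in standard notation (my phrasing, not the book's exact wording) reads:

f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}

That is, the slope of the line through the curve at x and at x+h, as the gap h shrinks towards zero.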

Yet I managed to get an A for my previous mathematics module, which had a rather high dose of calculus, simply by doing my homework without asking anyone (with exceptions when I could not solve a problem after more than six hours). Something seems wrong, because I don't understand something and yet I seem to. I made extensive use of my own self-invented heuristics, about which a certain lecturer (who taught a math-related subject) once told me I should stick to the "textbook" in case I made mistakes.

Suddenly, I realised that this has a connection to a famous problem in Artificial Intelligence: the Chinese Room problem.

<quote>
Searle's Chinese room argument tries to show that strong AI is false. But how can anyone show it to be false if we don't know what the human mind's program is? How can one know it a priori - before any empirical tests have been given? This is the ingenious part of Searle's argument. The idea is to construct a machine which would be a zombie (i.e. not mental) under any program. If such a machine existed, strong AI would be false, since no program would ever make it mental.
But how does one construct such a machine? And worse than that, how would we actually know whether it has thoughts or not? This is the second problem, which Searle solves by having us implement the machine ourselves. If we implement the program, we will know whether it is mental or not. Therefore the Chinese room argument has a thought-experiment part. This is presented next.

Suppose you are in a closed room which has two slots. Through slot 1 somebody gives you Chinese characters which you don't recognize as words, i.e. you don't know what these small characters mean. You also have a huge rulebook which you use to construct new Chinese characters from those that were given to you, and finally you push these new characters out through slot 2. In short:

1. Chinese characters come in, 2. you use the rulebook to construct more Chinese characters, and 3. you put those new characters out.

In its essence, this is just like a computer program: it takes an input, computes something, and finally spits out an output. Suppose further that the rulebook is such that people outside the room can converse with you in Chinese. For example, they send you the question 'how are you?' and you, following the rulebook, give a meaningful answer. So far, the computer program simulates a human being who understands Chinese.

One can even ask the room 'do you understand Chinese?' and it can answer 'yes, of course', despite the fact that you, inside the room, do not understand a word of what is going on. You are just following rules, not understanding Chinese.

The crucial part is this: given any rulebook (= program), you would never understand the meanings of the characters you manipulate. Searle has constructed a machine which cannot ever be mental. Changing the program only means changing the rulebook, and you can clearly see that this does not increase your understanding. Remember that strong artificial intelligence states that, given the right program, any machine running it would be mental. Well, says Searle, this Chinese room would not understand anything... so there must be something wrong with strong AI.

</quote>
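In program form, the room is nothing more than table lookup. Here is a minimal sketch in Python of how I picture it; the "rulebook" entries are toy romanised strings I made up for illustration, not a real conversation system:

# A toy Chinese Room: the operator matches incoming symbols against a
# rulebook and emits whatever the rulebook dictates, with no idea what
# any of the symbols mean.
RULEBOOK = {
    "ni hao ma?": "wo hen hao.",               # "how are you?" -> "I am fine."
    "ni dong zhongwen ma?": "dong, dangran.",  # "do you understand Chinese?" -> "yes, of course."
}

def chinese_room(characters_in):
    # Pure symbol manipulation: look up the rule, return its output.
    # The fallback reply means "please say that again."
    return RULEBOOK.get(characters_in, "qing zai shuo yi bian.")

print(chinese_room("ni dong zhongwen ma?"))  # answers "yes, of course" without understanding

The program answers correctly, yet nothing in it "knows" Chinese; swapping in a bigger rulebook changes the answers, not the understanding.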

The paradox makes sense now. The students don't understand the true problem and the solutions behind it, but they know the rules for solving the problem and how to apply them. Therefore, they operate like the Chinese Room operator who only seems to understand Chinese. Ah ha, so those Singapore students who aced the world maths competitions may actually lose to a computer. Oops, I also notice that our tests and exams actually measure this instead of true knowledge, even quite a number of those so-called open-book exams (since the student still does not know what the hell the maths means).
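Doing calculus homework can work exactly the same way. Here is a sketch of such a "student" (the representation of polynomials as (coefficient, power) pairs is my own toy choice) that differentiates by blindly applying the pattern d/dx(a*x^n) = a*n*x^(n-1), with no notion of limits or slopes at all:

# A polynomial is a list of (coefficient, power) terms,
# e.g. 3x^2 + 5x + 7 is [(3, 2), (5, 1), (7, 0)].
def differentiate(poly):
    # Apply the rule d/dx(a*x^n) = a*n*x^(n-1) to every term;
    # constant terms (n = 0) simply vanish.
    return [(a * n, n - 1) for (a, n) in poly if n != 0]

print(differentiate([(3, 2), (5, 1), (7, 0)]))  # [(6, 1), (5, 0)], i.e. 6x + 5

It gets full marks on every polynomial, and it understands nothing, which is precisely the point.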

Back to the point: it goes to show how little students today understand of what they learn, and even the teachers themselves. Once I asked a math teacher why we need to learn logarithms; she said she did not know, but since the exams require it, I had better learn it. Sigh... I wonder what the hell happened to real learning among students, and to the concept of real education.

For the actual argument, refer to:

JOHN R. SEARLE'S CHINESE ROOM
http://www.helsinki.fi/hum/kognitiotiede/searle.html
