Path: utzoo!attcan!uunet!cs.utexas.edu!mailrus!ames!ncar!unmvax!ariel!wayback.unm.edu!bill
From: bill@wayback.unm.edu (william horne)
Newsgroups: comp.ai
Subject: Re: What's the Chinese room problem?
Message-ID: <567@ariel.unm.edu>
Date: 22 Sep 89 16:52:33 GMT
References: <235@cerc.wvu.wvnet.edu.edu>
Sender: news@ariel.unm.edu
Reply-To: bill@wayback.unm.edu (william horne)
Organization: University of New Mexico, Albuquerque
Lines: 30

In article <235@cerc.wvu.wvnet.edu.edu> siping@cerc.wvu.wvnet.edu (Siping Liu) writes:
>I remember there was a discussion on the net about a
>"Chinese room" problem half a year ago. I never know
>the exact problem definition. Can someone tell me
>what it is?

I think you are referring to the problem Searle poses in his paper
"Minds, Brains, and Programs", Behavioral and Brain Sciences (1980).

He poses the following: Suppose there is a black box which accepts questions
written in Chinese as input and produces sensible Chinese answers as output,
well enough to convince outside observers. Does this imply that the black box
"understands" Chinese?

He claims not, by the following argument: Suppose inside the box is a man who
understands no Chinese. When presented with a string of Chinese symbols, he
simply matches the symbols and their arrangement against a set of rules which
dictate how to manipulate the input and what to send back out. However, he is
at no time aware of the meaning of the symbols he is manipulating, and thus he
does not "understand" Chinese.

This example is relevant to AI because it questions the validity of the
Turing Test as a test of "understanding", as well as questioning the
legitimacy of rule-based systems as models of intelligence.

Is this really any different from what we do in our heads anyhow? What is so
bad about a complex system of rules being applied? Maybe the understanding is
in the rules, not in the man manipulating them. In this sense Searle is
imposing a homunculus on the system. Maybe there are just rules, no man.
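
To make the "rules without understanding" picture concrete, here is a minimal
sketch in Python (my own illustration, not anything from Searle's paper or the
post above) of a purely syntactic rulebook: a lookup table maps input strings
to output strings, and the matcher never consults any meaning. The phrases and
rules are invented for the example.

    # Hypothetical rulebook: input pattern -> canned response.
    # The matcher treats these as opaque symbols; they could be Chinese
    # characters just as well and nothing in the code would change.
    RULES = {
        "ni hao": "ni hao",
        "ni hao ma": "wo hen hao, xie xie",
    }

    def chinese_room(message):
        """Return whatever the rulebook dictates; no meaning is consulted."""
        key = message.strip().lower()
        # Fall back to a stock "please say that again" if no rule matches.
        return RULES.get(key, "qing zai shuo yi bian")

    if __name__ == "__main__":
        print(chinese_room("ni hao ma"))   # -> "wo hen hao, xie xie"

The point of the sketch is only that the mapping lives entirely in the rule
table; whether the "understanding" is in those rules, or nowhere at all, is
exactly the question at issue.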