Path: utzoo!utgpu!news-server.csri.toronto.edu!mailrus!cs.utexas.edu!samsung!uunet!mcsun!ukc!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai
Subject: Re: Hayes vs. Searle
Message-ID: <2754@skye.ed.ac.uk>
Date: 12 Jun 90 15:54:52 GMT
References: <3204@se-sd.SanDiego.NCR.COM>
Reply-To: jeff@aiai.UUCP (Jeff Dalton)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 47

In article <3204@se-sd.SanDiego.NCR.COM> jim@se-sd.SanDiego.NCR.COM (Jim Ruehlin) writes:
>
>According to Searle in the Chinese Room paper, the difference is that
>human brain tissue has some "magical" (my word) quality that provides for
>intelligence/understanding/causative powers, while mere silicon doesn't.
>He states that just what this quality is and how it works is a matter for
>empirical study. Neat way to sidestep the issue, no?
>
>It seems to me that this is the real point of his paper - brain mass is
>different from silicon mass in some fundamental way. There's some
>molecular/atomic/?? quality or structure that makes brain mass causative
>and silicon not. He may not have intended this, but that's what it comes
>down to, and it seems patently silly. There was no evidence for this when
>he wrote his paper, and there still isn't.

In a sense, you have it backwards. Searle thinks he has shown that
computers do not understand (merely by instantiating a computer
program), and he takes it as given that people do understand. If
both were so, it would follow that there must be some difference
between computers (at least insofar as they are merely instantiating
programs) and people.

If we accept his argument, there is "some evidence", namely that
people do understand. _Something_ has to account for it.

Note that Searle doesn't say that running the program in a person
would result in understanding. Indeed, in his answer to the systems
reply, he says it wouldn't. On the other hand, he would allow that
something made of silicon, etc., could understand -- but not merely by
running the right program.

So it's something about people beyond merely running a program that
results in understanding. That is, those who suppose that all the
aspects of people needed for understanding can be captured in a
program that we could then run on any machine with the right formal
properties are wrong.

Searle is, moreover, a materialist. Understanding is produced by the
physical brain, by its causal powers if you will. So he figures that
something with equivalent causal powers would also produce
understanding. However, Searle doesn't know enough to say what the
relevant properties of the brain actually are. He thinks empirical
investigation is the way to find out.

-- Jeff