Path: utzoo!utgpu!news-server.csri.toronto.edu!cs.utexas.edu!swrinde!zaphod.mps.ohio-state.edu!usc!elroy.jpl.nasa.gov!jarthur!uunet!mcsun!ukc!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai
Subject: Hayes vs. Searle
Message-ID: <2629@skye.ed.ac.uk>
Date: 1 Jun 90 17:46:08 GMT
References: <16875@phoenix.Princeton.EDU>
Reply-To: jeff@aiai.UUCP (Jeff Dalton)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 133

In article <16875@phoenix.Princeton.EDU> harnad@phoenix.Princeton.EDU (S. R. Harnad) writes:
>(2) SEARLE'S CHINESE ROOM
>
>    Pat Hayes
>
>The basic flaw in Searle's argument is a widely accepted
>misunderstanding about the nature of computers and computation: the
>idea that a computer is a mechanical slave that obeys orders. This
>popular metaphor suggests a major division between physical, causal
>hardware which acts, and formal symbolic software, which gets read.
>This distinction runs through much computing terminology, but one of
>the main conceptual insights of computer science is that it is of
>little real scientific importance. Computers running programs just
>aren't like the Chinese room.
>
>Software is a series of patterns which, when placed in the proper
>places inside the machine, cause it to become a causally different
>device. Computer hardware is by itself an incomplete specification
>of a machine, which is completed - i.e. caused to quickly reshape
>its electronic functionality - by having electrical patterns moved
>within it. The hardware and the patterns together become a mechanism
>which behaves in the way specified by the program.
>
>This is not at all like the relationship between a reader obeying
>some instructions or following some rules. Unless, that is, he has
>somehow absorbed these instructions so completely that they have
>become part of him, become one of his skills. The man in Searle's
>room who has done this to his program now understands Chinese.
The AI community must be pretty annoyed with Searle by now. He writes
papers, gives talks, inspires newspaper articles. In the UK (at
least), he even hosted his own philosophical chat show. And
throughout it all he refuses to accept that his simple little
argument just doesn't show what he thinks it does. It would be nice,
therefore, to have a straightforward refutation of the Chinese Room,
preferably one with some intuitive appeal, and even better, I
suppose, if it could be shown that Searle was in the grip of a
fundamental misunderstanding of computation.

But how plausible is the argument outlined in this abstract? I know
it's not fair to assume these three paragraphs are all there is to
it. However, I think there's enough for us to draw at least some
tentative conclusions.

To begin, I'd prefer to describe the conceptual insight in a
different way. What happened was the discovery of a certain class of
universal, programmable machines. Rather than wire up a number of
different hardware devices, it's possible to make one that, by
executing a program, can emulate all the others. It's not
unreasonable to say the program becomes part of the machine. After
all, we could always produce another machine that embodied the
program in hardware; and we accept that such a step is equivalent
(modulo execution speed and maybe a few other things) to loading a
program into a general purpose machine.

However, we can follow the hardware / software equivalence both
ways. We don't have to think of a computer + software only as a new
machine; we can also think of it as a computer + software. Indeed,
there are a number of configurations that are formally equivalent,
including one where the program is stored as text in a book and read
by a camera, with some mechanical device for turning the pages and
making notes on scraps of paper.
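The universality point can be made concrete with a toy sketch (the
machine, its tiny instruction set, and all the names below are my own
illustration, not anything from the article or from Hayes): the same
"universal" interpreter becomes a causally different device depending
on which program it is given, and either configuration matches a
machine with the program wired into its hardware.

```python
# A toy "universal machine": one piece of general-purpose hardware
# that emulates different dedicated devices by running a program.
# (Illustrative sketch only; the instruction set is invented here.)

def universal_machine(program, inp):
    """Run a list of (opcode, argument) pairs against one register."""
    acc = inp
    for op, arg in program:
        if op == "add":
            acc += arg
        elif op == "mul":
            acc *= arg
        else:
            raise ValueError(f"unknown opcode: {op}")
    return acc

# Two "programs": loading either one makes the same machine behave
# like a different dedicated device.
doubler = [("mul", 2)]
add_ten = [("add", 10)]

# The same doubler embodied directly in "hardware", for comparison.
def hardwired_doubler(inp):
    return inp * 2

# Machine + program is behaviourally equivalent to the dedicated device.
assert universal_machine(doubler, 21) == hardwired_doubler(21) == 42
assert universal_machine(add_ten, 5) == 15
```

Nothing in the equivalence depends on how the program is stored: the
same instruction list could be printed in a book and read off by a
camera, which is what makes the book-and-camera configuration in the
text formally interchangeable with the loaded program.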
Now it doesn't seem so different from the Chinese Room after all;
and, given that the configurations are equivalent, Searle can pick
whichever one is best for making his point, provided, of course, that
he does not rely on arguments that apply only to that configuration
and not to the others as well. Indeed, there may always be a
suspicion that Searle is getting too much mileage out of the presence
of a person in the room. On the other hand, it's hard to see why
replacing the person with a simpler, more "mechanical", device would
suddenly cause "understanding" to occur if it wasn't there before.

This brings us to the suggestion that if the person in the room
somehow absorbed the instructions so completely that they became part
of him, he would then understand Chinese. Whether or not this follows
from a correct understanding of computers and computation, it has to
be considered.

One point to bear in mind is that we don't have a very complete or
precise notion of what the instructions being followed by the person
in the Room are like. If the instructions are largely unspecified,
then the changes involved in absorbing them completely are largely
unspecified too. There are certainly some changes that would result
in the person in the Room understanding Chinese, and perhaps they
amount to absorbing some program. However, given our limited
knowledge of how understanding works, and our rather vague notion of
what the instructions to be absorbed might be, we're not yet in a
position to go beyond this "perhaps" to the claim that absorbing a
program, much less the program used in the Chinese Room, would
definitely result in understanding.

Indeed, suppose someone does acquire the skill represented by the
Chinese Room. That is, when presented with written questions in
Chinese they can produce reasonable written responses, also in
Chinese.
If this behavior counts as understanding in itself, or is sufficient
evidence for understanding, then we didn't need any of these
arguments, because the Chinese Room already had this behavior and
hence already understood Chinese. That is, we're back to a version of
the "system reply".

Nor is it sufficient to say that this behavior counts as
understanding when it occurs in a person, because that wouldn't tell
us what we need to know about computers. At best, it might let us
block the step where Searle goes from the person not understanding to
the Room not understanding either. However, an argument that makes
people such a special case seems more likely to undermine the case
for understanding in computers than to support it.

Worse, it's far from clear that such a skill would show that a person
understood Chinese (unless we were inclined to count the behavior as
sufficient in any case, in which case we don't need these arguments).
If we ask the person in English what is going on in Chinese, he
wouldn't know (unless we suppose more than that he has acquired the
skill of writing replies to written questions). This is hardly what
we'd expect from a person who understood both English and Chinese.

In the end, we're only slightly closer to defeating Searle, if that,
than we were before.

-- 
Jeff