Path: utzoo!utgpu!news-server.csri.toronto.edu!mailrus!cs.utexas.edu!swrinde!zaphod.mps.ohio-state.edu!usc!venera.isi.edu!smoliar
From: smoliar@vaxa.isi.edu (Stephen Smoliar)
Newsgroups: comp.ai
Subject: Re: Hayes vs. Searle
Summary: what Turing had in mind
Message-ID: <13871@venera.isi.edu>
Date: 12 Jun 90 00:45:38 GMT
References: <16875@phoenix.Princeton.EDU> <2629@skye.ed.ac.uk> <2687@skye.ed.ac.uk> <586@dlogics.COM>
Sender: news@venera.isi.edu
Reply-To: smoliar@vaxa.isi.edu (Stephen Smoliar)
Organization: USC-Information Sciences Institute
Lines: 89

In article dg1v+@andrew.cmu.edu (David Greene) writes:
>Excerpts from netnews.comp.ai: 7-Jun-90 Re: Hayes vs. Searle David
>Angulo@dlogics.COM (1092)
>
>> No, a program couldn't be printed (if by program you mean a list of
>> questions and their answers) because such a book or program is always
>> incomplete.  To prove this, all you have to do is ask in English all
>> of the possible addition problems.  This is infinite, so the book
>> cannot list all of the questions, nor can it list all of the answers.
>
>This raises a question that has not been clear in the discussion; that
>is, it seems to confuse intelligence with omniscience.  It seems
>perfectly reasonable to allow that the entity (book, human, room) does
>not know a particular line of inquiry.  The distinction (at least for
>the Turing test) has always been that the pattern of response is
>indistinguishable from that of an "intelligent being" (usually human).
>Constantly saying "I don't know" to all questions won't get you too
>far, but it is appropriate at certain times.
>
Turing was well aware of this point.  Perhaps not enough readers have
actually read Turing's paper.  Take a good look at the sample dialog he
proposes:

    Q: Please write me a sonnet on the subject of the Forth Bridge.

    A: Count me out on this one.  I never could write poetry.

    Q: Add 34957 to 70764.

    A: (Pause about 30 seconds and then give as answer) 105621. (sic)

    Q: Do you play chess?

    A: Yes.

    Q: I have K at my K1, and no other pieces.  You have only K at K6
       and R at R1.  It is your move.  What do you play?

    A: (After a pause of 15 seconds) R-R8 mate.

It should be clear from this example that Turing was more interested in
the behavior which went into the conversation than in the content of the
conversation itself.  (Note that the correct sum is 105721; the
thirty-second pause and the slightly wrong answer are exactly the sort
of humanlike behavior the dialog is meant to exhibit.)

I found myself thinking about Searle again over the weekend, provoked
primarily by his silly letter to THE NEW YORK REVIEW.  I think John
Maynard Smith presented an excellent reply, but it occurred to me that
Searle may be very seriously confused in how he wants to talk about
symbols.  This thought was further cultivated while I was reading
Wittgenstein's "Blue Book."  Let me try to elaborate my recent thoughts.

Wittgenstein is discussing the concept of solidity.  Here is the
relevant passage:

    We have been told by popular scientists that the floor on which we
    stand is not solid, as it appears to common sense, as it has been
    discovered that the wood consists of particles filling space so
    thinly that it can almost be called empty.  This is liable to
    perplex us, for in a way of course we know that the floor is solid,
    or that, if it isn't solid, this may be due to the wood being
    rotten but not to its being composed of electrons.  To say, on this
    latter ground, that the floor is not solid is to misuse language.
    For even if the particles were as big as grains of sand, and as
    close together as these are in a sandheap, the floor would not be
    solid if it were composed of them in the sense in which a sandheap
    is composed of grains.
    Our perplexity was based on a misunderstanding; the picture of the
    thinly filled space had been wrongly APPLIED.  For this picture of
    the structure of matter was meant to explain the very phenomenon of
    solidity.

Leaving the issue of understanding aside for a moment, I think Searle is
having a similar problem of misunderstanding with regard to
computational behavior.  The bottom line of Church's thesis is that
symbol manipulation serves to EXPLAIN computational behavior, just as a
theory based on the nature of atoms and molecules serves to explain
solidity.  Thus, just as Wittgenstein has warned us against letting the
specifics of the atomic model interfere with our understanding of
solidity, so we should be careful about letting the specifics of symbol
manipulation be confused with the behavior which they model.  In a
previous article I accused Searle of being rather naive about what
computers actually do in practice; now I am inclined to believe he is
just as naive about the general theory of computational behavior.

=========================================================================
USPS:      Stephen Smoliar
           USC Information Sciences Institute
           4676 Admiralty Way  Suite 1001
           Marina del Rey, California  90292-6695

Internet:  smoliar@vaxa.isi.edu

"So, philosophers of science have been fascinated with the fact that
elephants and mice would fall at the same rate if dropped from the Tower
of Pisa, but not much interested in how elephants and mice got to be
such different sizes in the first place."
                                                R. C. Lewontin