Path: utzoo!attcan!uunet!munnari.oz.au!uhccux!ames!think!sdd.hp.com!uakari.primate.wisc.edu!zaphod.mps.ohio-state.edu!unix.cis.pitt.edu!ml
From: ml@unix.cis.pitt.edu (Michael Lewis)
Newsgroups: comp.ai
Subject: Re: Hayes vs. Searle
Message-ID: <24653@unix.cis.pitt.edu>
Date: 2 Jun 90 03:41:05 GMT
References: <16875@phoenix.Princeton.EDU> <2629@skye.ed.ac.uk>
Reply-To: ml@unix.cis.pitt.edu (Michael Lewis)
Organization: Univ. of Pittsburgh, Comp & Info Services
Lines: 78

In my view Searle's argument is correct but attempts to be philosophically "safe" by leaving the word "understand" undefined (this has been said here often enough before). I hold a variant of Harnad's symbol grounding position and believe that uninterpreted symbols/non-understanding are not restricted to gedanken experiments but are quite common in our experience. In fact I would claim that it is quite feasible to endow computers with human non-understanding. The only question is whether we choose to make "understanding" a prerequisite of intelligence. I lean in that direction, but would not be bothered by the claim that an idiot savant machine was intelligent (provided, of course, the definition of intelligence included idiocy).

Consider this example. Laplace transforms are meaningless to me: although I can use the symbol and its tables, it remains magical. Yet Martin, the EE in the next office, assures me that it makes perfect sense. He claims that it is its discrete z version which he "cannot see", which is shrouded in mystery. In either case we can rotely manipulate our magical symbols and provide their linguistic descriptions on cue, but to us these symbols remain opaque.

This passage is a replay of the Chinese room, illustrating how we habitually distinguish between understanding "things" and recognizing and manipulating symbols. The language associating "making sense" of Laplace transforms with "seeing" them was lifted directly from our conversation. In this context "understanding" refers to possessing an imaginal (more on this later) model of the transform's behavior, not merely producing its linguistic description. We would both describe its effect as "translating equations from the time domain to the frequency domain", yet Martin claims he "understands" it and I claim I don't. Let's define this usage of the word, understand, as understanding in the strong sense.

I would be willing to say that I "understand about" Laplace transforms but not that I "understand" the transform itself. This "understanding about" things is the weak sense of the term. If we employ this distinction in usage, it is not difficult to find similar examples. Consider an electric circuit. To say that I understand it implies that I possess a model of how it operates, perhaps similar to the fluid or mechanical models studied by Gentner and Gentner. To say I understand about electrical circuits implies only that I am familiar with Ohm's law and similar symbolic descriptions of circuit behavior. I may not have the foggiest idea of "how/why the circuit behaves as it does" even though I could find voltages at test points and compute for you its every capacitance.

Searle's man, and even his room, could be said to "understand about" Chinese symbols, but neither will ever "understand" them. This is an ecological realist position, maintaining that the "meaning" of symbols arises through their association with experience, not vice versa.
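To make the distinction concrete, here is a toy sketch of a program that "understands about" Laplace transforms in exactly my sense: it produces the right symbol on cue from a table, with no model of the transform's behavior behind it. (The table entries, names, and error handling are my own illustrative choices, not anything standard.)

# Rote symbol manipulation: time-domain expression in,
# frequency-domain expression out, by pure table lookup.
LAPLACE_TABLE = {
    "1":         "1/s",            # unit step
    "t":         "1/s^2",          # ramp
    "exp(-a*t)": "1/(s + a)",      # decaying exponential
    "sin(w*t)":  "w/(s^2 + w^2)",
    "cos(w*t)":  "s/(s^2 + w^2)",
}

def laplace(expr):
    """Return the transform of a known time-domain expression.

    The function "understands about" Laplace transforms: it
    produces the correct symbol on demand, yet holds no model
    of what the transform does to a signal.
    """
    try:
        return LAPLACE_TABLE[expr]
    except KeyError:
        raise ValueError("no table entry for " + repr(expr))

print(laplace("sin(w*t)"))   # -> w/(s^2 + w^2)

No one would credit this function with "seeing" the transform; its competence is exhausted by its table, and so, on the strong usage, is mine.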
The observation that meaning arises from experience is hardly profound and could only raise eyebrows in a discussion such as this, or among cognitive psychologists who have confused their programs with their subjects. (There, did I get the Searle tone right?)

Actually I enjoy this perpetual discussion. The argument above in no way rules out the possibility of AI; it simply suggests that, if "understanding" in Searle's sense is to be the criterion, then for machines to manifest intelligence their symbols must be grounded in an environment (a simulation would be fine). The notion of a "disembodied" intelligence or an intelligent symbol system is ruled out not by any lack of cleverness in programming but because the poor programs never get to interact with anything "understandable". Yes, the robot counterexample to the "combination" reply is still convincing, but no one ever said (except to funding agencies) that creating artificial intelligence was going to be easy.
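As a gesture at what "grounded in an environment (a simulation would be fine)" might look like, here is an equally toy sketch in which the symbol "hot" gets its extension from episodes of interaction with a trivial simulated world rather than from definitions in terms of other symbols. The world, the threshold rule, and every name in it are invented for illustration.

import random

def simulate_episode():
    """One interaction with the world: a temperature and a pain signal."""
    temp = random.uniform(0.0, 100.0)
    pain = temp > 60.0        # the world, not the program, fixes this fact
    return temp, pain

class GroundedSymbol:
    """Ties a label to experienced episodes, then extends it to new cases."""
    def __init__(self, label):
        self.label = label
        self.painful_temps = []   # temperatures experienced as painful

    def experience(self, temp, pain):
        if pain:
            self.painful_temps.append(temp)

    def applies_to(self, temp):
        # Crude grounding: "hot" covers anything at least as warm
        # as the mildest episode that hurt.
        return bool(self.painful_temps) and temp >= min(self.painful_temps)

hot = GroundedSymbol("hot")
for _ in range(1000):
    hot.experience(*simulate_episode())

print(hot.applies_to(80.0))   # True, learned from interaction
print(hot.applies_to(20.0))   # False

A real grounding story would of course be vastly harder than this, which is rather the point of the last sentence above.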