Path: utzoo!utgpu!news-server.csri.toronto.edu!mailrus!umich!samsung!usc!jarthur!uci-ics!ucla-cs!oahu.cs.ucla.edu!martin
From: martin@oahu.cs.ucla.edu (david l. martin)
Newsgroups: comp.ai
Subject: Speaking of 'intelligence' (was Hayes vs. Searle)
Message-ID: <36230@shemp.CS.UCLA.EDU>
Date: 14 Jun 90 05:05:09 GMT
References: <36194@shemp.CS.UCLA.EDU> <2411@bruce.cs.monash.OZ.AU>
Sender: news@CS.UCLA.EDU
Organization: UCLA Computer Science Department
Lines: 134

In article <2411@bruce.cs.monash.OZ.AU> frank@bruce.cs.monash.OZ.AU (Frank Breen) writes:
>Perhaps the problem is that no-one can agree exactly what understanding
>is. How can we argue about whether or not something has a quality that
>we can't even define. If Searle is just saying we don't really know
>what understanding is why not just say so. From the bits of this
>discussion that I've read it seems that Searle hasn't really worked
>out exactly what understanding is and is searching for an answer.

Yes, I agree; that's a big part of the problem. Usually one of the major results of this sort of discussion is that everyone becomes aware that we need to be more sensitive to the ways that we use language, as in our use of the terms 'intelligence' and 'understanding'. However, I don't think the answer is just to throw up our hands and refuse to talk about such terms anymore, as some have suggested. These terms have been adopted and have evolved because of their usefulness to us, and furthermore we hold important intuitions about them (like Searle's intuition that a conventional computer running a conventional program doesn't understand anything). We need to try to get clearer about terms like these, and in the process of doing so, they evolve further. The trouble is, we usually don't get too far until significant new empirical discoveries are made.
Steve Smoliar suggested a comparison of the use (or rather mis-use) of computational theory to explain cognitive behavior with the use of atomic particle theory to explain our concept of hardness, by way of pointing up the kinds of confusion we can get into about our use of concepts in explanation. Here's a comparison that I like even better, which I think can give some really good perspective on the discussion.

Consider our use of the term 'life'. Once upon a time, I suspect, 'life' was just a vague, ill-defined notion, like 'intelligence' is now. 'Life' was something mysterious that certain entities just had for a while, and then it went out of them. 'Life' was no doubt part of a constellation of other related concepts, like 'movement', 'growth', and 'reproduction', but it was never quite clear just how they were all related. It may have been felt by most that life was actually the cause of most of these other phenomena. (In the case of 'intelligence', some of the related concepts are 'consciousness', 'understanding', and 'symbol-crunching'.)

Now consider our present-day use of 'life', in the context of a relatively mature science of biology. It seems to me that we've learned some pretty interesting things about _how it works_ - like stuff about the chemistry of metabolism, and about the genetic basis of reproduction - which have caused us to forget that we were ever concerned about the mysterious nature of 'life' itself. We recognize that life itself is not really a well-defined concept that we can give necessary and sufficient conditions for. Rather, it's just a loose sort of an umbrella term that brings together a number of related phenomena that we've observed. I mean, I don't think that anyone would want to say that metabolism, or DNA, or evolution, or whatever, is quite the _essence_ of life. We're not really concerned anymore to say what is the essence of life.
We've sort of bypassed that concern by finding out some really detailed explanations of how particular processes take place, which we recognize are related to our concept of life. On the other hand, that doesn't mean that 'life' has become a useless or meaningless term. We still use it and have a pretty good idea of what we're talking about. For instance, based on my experience with the language, I would say that it's incorrect to say that a mechanical robot made of steel and plastic and silicon is alive. Others may feel differently, and our use of the term may well evolve. Maybe a robot which included some steel, some plastic, and some cellular organic components as well would be something we'd want to call alive, but the boundary line will never be sharply delineated.

So, what's the moral with respect to the Chinese room debate? I think there are a number of them, including the following three:

(1) Our use of terms like 'intelligence' and 'understanding' is mainly grounded in our experience with other human beings (just as concepts like 'life' and 'reproduction' are grounded in our experience with real living things in the world). Furthermore, our concept of intelligence is tied up with even more poorly defined notions like those of consciousness and deliberate reflection, the way that we experience them and observe them in ourselves and other humans. When we have found out more about how some of these things actually take place in our brains, we'll forget that we were ever concerned about what might be the precise conditions under which we use these terms. We will then have some powerful explanations of some particular processes which occur in us, which are related to our use of these terms, and our real scientific concern will be about those processes.
General terms like 'intelligence' and 'understanding', as some have pointed out, will never have a precise or mathematical definition, but they will still be useful, and it will be a matter of usefulness as to how we apply the original terms to artificial devices. However, we're not likely ever to want to say that a conventional computer running a conventional program is exhibiting intelligence, because it's going to turn out to be just too different from what we find is really going on in our brains.

(2) It's not going to be enough to say that "we should just define something that understands as something that appears to understand". My suspicion is that we're going to find out some things about some processes going on in the brain that relate well to our traditional concept of understanding, and at the same time to other concepts that we associate with it, like consciousness in particular, and those processes will then help to shape our use of the term 'understanding', similarly to the way that our knowledge of metabolic chemistry, DNA, etc. has contributed to our use of the term 'life'.

(3) On the subject of whether or not symbol-crunching is sufficient for understanding, which I guess is an accurate way of characterizing Searle's concern, maybe it's sort of like asking whether DNA (or some such sort of genetic coding mechanism) is sufficient for life. The people who have complained about the question and the way we use these terms are right - there's no real answer! In the first place, as stated above, we don't think of 'life' (or 'understanding') as being defined by any precise listing of its characteristics. Sure, it's related in important ways to various other concepts, like genetic coding, but none of them is defined once and for all as the essence of the term.
In the second place, even if we all agreed that our concept of genetic coding was essentially related to our concept of life, still, that relationship is not one which would ever establish that genetic coding is _sufficient_ for life. Rather, it's just one of several very important aspects of what goes on in living processes. And I think it will turn out to be the same sort of story with respect to the relationship between symbol-crunching and understanding (or intelligence). Symbol-crunching will be found empirically to be one very important aspect of the processes which we collectively refer to under the heading of 'intelligence', but certainly not all of it.

If Searle is just saying that we're going to find out that there's a lot more involved in what we commonly refer to as intelligence than symbol-crunching, then I think he's right, but in a way the original question isn't a fair one. That is, it isn't fair to ask whether symbol-crunching could be precisely sufficient to account for our concept of intelligence, because it's not really inherent in the way we use this kind of term that one closely related term is held to be precisely sufficient to account for the other.

Dave Martin
U.C.L.A.