Path: utzoo!utgpu!news-server.csri.toronto.edu!bonnie.concordia.ca!uunet!cis.ohio-state.edu!pacific.mps.ohio-state.edu!linac!att!ucbvax!NUSVM.BITNET!ISSSSM
From: ISSSSM@NUSVM.BITNET (Stephen Smoliar)
Newsgroups: comp.ai
Subject: RE: LOGIC AND RELATED STUFF
Message-ID: <9106190527.AA17403@lilac.berkeley.edu>
Date: 19 Jun 91 05:28:06 GMT
Sender: daemon@ucbvax.BERKELEY.EDU
Lines: 78
X-Unparsable-Date: Wed, 19 Jun 91 08:52:08 SST

In article <20018@csli.Stanford.EDU> levesque@csli.stanford.edu (Hector
Levesque) writes:

> I think it is a simple mistake (of logic!) to conclude that
>because we can never be *certain* about what we mean when we say something,
>or what we are agreeing about, or what is true, that somehow the truth of
>the matter is thereby open to negotiation or interpretation, or that we can
>decide to act in a way that does not take it into account.  If I tell you
>"there's a truck coming towards you from behind", I may have no way of
>knowing for sure that my statement is correct, and you may have no way of
>being sure either of what I'm getting at or (assuming you've figured it
>out) of whether or not what I am saying is true.  But it's a mistake (and a
>dangerous one) to conclude from this lack of certainty that the truck issue
>is somehow thereby reduced in importance, or that what ultimately matters
>is your goals and desires, or our linguistic conventions, or even that one
>opinion on the issue is as good as another.  None of these follow from
>admitting that we may never know for sure one way or another if there is a
>truck.  A skeptic may choose to focus on what I said, question what I mean
>by a "truck" (a toy truck?), or just observe the loaded context dependency
>and unavoidable subjectivism in how I perceive and report things.  But if
>he or she after all this doesn't get it, and does not come to appreciate
>very clearly the relevant issue, all is for nought, and the world will do
>the rest.
>You don't have to *know* what I said, and you don't have to
>*know* if what I said is true, but for your own safety and comfort, you'd
>better be able to figure out what it would be like for it to be true.
>

I think the REAL mistake in this argument is the attempt to pile too much
on the shoulders of logic.  When you are standing out there in the world,
the issue is not a matter of truth, certainty, or even "what it would be
like for it to be true."  The issue is far simpler:  What do you do when
someone says "there's a truck coming towards you from behind?"  At the risk
of attaching too much importance to Skinner (who has no more claim to
having all the answers than the logicians do) the answer to this question,
in its simplest terms, is that you BEHAVE.  In a situation as urgent as
this one, anything you are likely to call reasoning will not take place
until AFTER you have behaved and you are reflecting on what just happened
(perhaps while choking on the exhaust fumes).  Thus, I think Hector's
example is a good illustration of the danger of confusing the EXPLANATORY
value of logic with any PREDICTIVE value--a point which I recently raised
in comp.ai.philosophy.

>This, I assume, is what logic is for, at least for AI purposes.  Focussing
>on Truth in some abstract, all-or-nothing, eternal, godlike sense, is a bit
>of a red herring.  What matters I think in AI is being able to explore the
>consequences of things being one way and not another, even while admitting
>that much of our view of the world is not going to be right (even by our
>own terms), and that there is no way to achieve certainty about almost all
>of it.  We need to be able to ask ourselves "according to what I now
>believe, what would things be like if P?"
>The fact that we first use
>natural language typically to express a P, and that this language is
>infinitely rich and open to endless interpretation and uneliminable context
>dependency and bla-bla-bla should really not fool us into thinking that
>there is no issue to settle regarding the way the world is.  To fall for
>this, as far as I can see, is to undervalue the difference between being
>right or wrong about the truck, for example, and to guarantee for oneself a
>hermeneutically rich but very short life.
>

I think it is certainly true that we do "reason" (in that same sense of the
word which I was arguing about above) about hypothetical situations.
Indeed, our ability to do so is one of the reasons why Skinner does not
have all the answers.  Nevertheless, there is no reason to believe that any
machinery which we engage to ponder hypotheticals (which we tend to be free
to do only when any other demands of the situation are relatively low) is
the SAME machinery which exercises control over our behavior in the
here-and-now.  Such uniformity would be architecturally elegant, but
elegance cannot hold a candle to more fundamental issues of survival such
as those Chris Malcolm recently posed on comp.ai.philosophy.

===============================================================================
Stephen W. Smoliar
Institute of Systems Science
National University of Singapore
Heng Mui Keng Terrace, Kent Ridge
SINGAPORE 0511

BITNET: ISSSSM@NUSVM

"He was of Lord Essex's opinion, 'rather to go an hundred miles to speak
with one wise man, than five miles to see a fair town.'"--Boswell on Johnson