Path: utzoo!utgpu!news-server.csri.toronto.edu!cs.utexas.edu!swrinde!zaphod.mps.ohio-state.edu!think.com!snorkelwacker.mit.edu!stanford.edu!csli!csli.stanford.edu
From: levesque@csli.stanford.edu (Hector Levesque)
Newsgroups: comp.ai
Subject: logic and related stuff
Keywords: logic, truth, Minsky
Message-ID: <20018@csli.Stanford.EDU>
Date: 18 Jun 91 19:48:34 GMT
Sender: levesque@csli.Stanford.EDU
Organization: CSLI, Stanford University
Lines: 67

I've never posted to comp.ai before (and may regret it!), so please forgive violations of protocol. But these attacks on logic and truth, though maybe familiar in AI from about 15 years ago, deserve some comment.

First, a minor complaint about Minsky's post about logic. I agree completely that universal generalization could end up playing a very minor role in our cognitive life. But I disagree completely that the utility of logic is somehow thereby compromised. One very popular logical theory, sometimes called "quantification theory", is indeed concerned with expressing in logical terms the properties of "for all" and "for some." But existing logical theories already go well beyond this. The theory of generalized quantifiers, for example, examines properties of quantifiers like "many", "most", "almost all" and the like that stand to play a much more important role in expressing what we believe (a concrete illustration appears below). Then there are the statistical/probabilistic accounts (a la Bacchus/Halpern), the nonmonotonic accounts, etc. To say that logic lives or dies with generalization and instantiation is like saying it lives or dies with exclusive-or.

Another point: I think it is a simple mistake (of logic!) to conclude that, because we can never be *certain* about what we mean when we say something, or what we are agreeing about, or what is true, the truth of the matter is somehow thereby open to negotiation or interpretation, or that we can decide to act in a way that does not take it into account. If I tell you "there's a truck coming towards you from behind", I may have no way of knowing for sure that my statement is correct, and you may have no way of being sure either of what I'm getting at or (assuming you've figured it out) of whether or not what I am saying is true. But it's a mistake (and a dangerous one) to conclude from this lack of certainty that the truck issue is somehow thereby reduced in importance, or that what ultimately matters is your goals and desires, or our linguistic conventions, or even that one opinion on the issue is as good as another. None of these follow from admitting that we may never know for sure, one way or the other, whether there is a truck.

A skeptic may choose to focus on what I said, question what I mean by a "truck" (a toy truck?), or just observe the loaded context dependency and unavoidable subjectivism in how I perceive and report things. But if, after all this, he or she doesn't get it, and does not come to appreciate very clearly the relevant issue, all is for nought, and the world will do the rest. You don't have to *know* what I said, and you don't have to *know* if what I said is true, but for your own safety and comfort, you'd better be able to figure out what it would be like for it to be true. This, I assume, is what logic is for, at least for AI purposes. Focussing on Truth in some abstract, all-or-nothing, eternal, godlike sense is a bit of a red herring.
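To make the generalized-quantifier point concrete (this is just a sketch of the standard treatment, not anything Minsky committed himself to): in the style of Barwise and Cooper, over a finite domain, one common truth condition for "most" is, in TeX notation,

    \mathrm{most}(A,B) \mbox{ is true iff } |A \cap B| > |A \setminus B|

that is, the As that are Bs outnumber the As that are not. It is a well-known result that "most" in this sense cannot be expressed with "for all" and "for some" alone, which is exactly why such quantifiers earn their keep. In the same notation, the hypothetical question taken up below ("what would things be like if P?") can be read as an entailment question: given beliefs KB, does KB \cup \{P\} \models Q?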
What matters in AI, I think, is being able to explore the consequences of things being one way and not another, even while admitting that much of our view of the world is not going to be right (even by our own terms), and that there is no way to achieve certainty about almost any of it. We need to be able to ask ourselves "according to what I now believe, what would things be like if P?" The fact that we typically first use natural language to express a P, and that this language is infinitely rich and open to endless interpretation and ineliminable context dependency and bla-bla-bla, should really not fool us into thinking that there is no issue to settle regarding the way the world is. To fall for this, as far as I can see, is to undervalue the difference between being right and being wrong about the truck, for example, and to guarantee for oneself a hermeneutically rich but very short life.

The fact is, I don't think anyone takes this position very seriously except when assuming a philosophical stance. Show me a philosopher who doesn't fall into realism of the most naive sort when confronted with memos that say "I'm sorry, but your salary will be reduced by 50%." Under the right circumstances (having nothing to do with mathematics or formal artificial domains!), relativism is put on hold, and the ordinary objective truth about what might appear to be hopelessly vague imponderables suddenly becomes very crisp, very precise, and very relevant to action.

Hector Levesque