Path: utzoo!utgpu!jarvis.csri.toronto.edu!rutgers!cmcl2!yale!cs.yale.edu!blenko-tom
From: blenko-tom@CS.YALE.EDU (Tom Blenko)
Newsgroups: comp.ai
Subject: Re: What is a Symbol System?
Keywords: symbol manipulation, syntax, formality, semantics
Message-ID: <6932@cs.yale.edu>
Date: 29 Nov 89 04:06:58 GMT
References: <11640@phoenix.Princeton.EDU> <6170@cs.yale.edu> <11655@phoenix.Princeton.EDU> <6921@cs.yale.edu>
Sender: news@cs.yale.edu
Reply-To: blenko-tom@CS.YALE.EDU (Tom Blenko)
Organization: Yale University Computer Science Dept, New Haven CT 06520-2158
Lines: 52

In article <6921@cs.yale.edu> mcdermott-drew@CS.YALE.EDU (Drew McDermott) writes:
|In article <11655@phoenix.Princeton.EDU> harnad@phoenix.Princeton.EDU (S. R. Harnad) writes:
|>
|>mcdermott-drew@CS.YALE.EDU (Drew McDermott) of
|>Yale University Computer Science Dept asked:
|>
|>> Why is it necessary that a symbol system have a semantics in order to
|>> be a symbol system? I mean, you can define it any way you like, but
|>> then most AI programs wouldn't be symbol systems in your sense.
|>>
|>I'd rather not define it any way I like. I'd rather pin people down on
|>a definition that won't keep slipping away, reducing all disagreements
|>about what symbol systems can and can't do to mere matters of
|>interpretation.
|> ...
|
|Which "people" need to be pinned down? Fodor, I guess, who has a strong
|hypothesis about a Representational Theory of Meaning.
|
|But suppose someone believes "It's all algorithms," and not much more?
|He's willing to believe that intelligence involves an FFT here, some
|inverse dynamics there, a few mental models, maybe some neural nets,
|perhaps a theorem prover or two,.... His view is not completely vacuous
|(Searle thinks it's even false). It might be a trifle eclectic for some
|philosophers, but so what?
I don't share Drew's disenchantment with semantic models, but I think
there is a more direct argument among his remarks: specifically, that
it isn't a particularly strong claim to say that an object of
discussion has "a semantics". In fact, if we can agree on what the
object of discussion is, I can almost immediately give you a semantic
model -- or lots of semantic models, some of which will be good for
particular purposes and some of which will not. And it doesn't make
any difference whether we are talking about axioms of FOPC, neural
networks, or wetware.

Richard Feynman had an entertaining anecdote in his biography about a
fellow with an abacus who challenged him to a "computing" contest. He
quickly discovered that the fellow could compute simple arithmetic
expressions as fast as Feynman could write them down. So he chose some
problems whose underlying numerical structure he understood, but
which, it turned out, the other fellow, who simply knew a rote set of
procedures for evaluating expressions, didn't.

Who had a semantic model in this instance? Both did, but different
models that were suited to different purposes.

I suspect that Harnad had a particular sort of semantics in mind, but
he is going to have to work a lot harder to come up with his strawman
(I don't believe it exists).

	Tom