Path: utzoo!utgpu!news-server.csri.toronto.edu!mailrus!cs.utexas.edu!uunet!mcsun!ukc!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai
Subject: Re: Hayes vs. Searle
Message-ID: <2755@skye.ed.ac.uk>
Date: 12 Jun 90 16:21:33 GMT
References: <3204@se-sd.SanDiego.NCR.COM> <1990Jun9.154316.29020@ux1.cso.uiuc.edu>
Reply-To: jeff@aiai.UUCP (Jeff Dalton)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 41

In article <1990Jun9.154316.29020@ux1.cso.uiuc.edu> page@ferrari.ece.uiuc.edu.UUCP (Ward Page) writes:
>
>There is an interesting thought experiment in Moravecs 'Mind Children' that
>talks about this. The argument goes this way: If an artificial neuron were
>developed that exactly mimics (functionally) a brain cell and you replaced
>one neuron in the brain with this artificial neuron, would you still be
>capable of thought? If the answer is yes, how many neurons could you replace
>before you are incapable of thought? At the heart of this thought experiment
>is the ability to exactly mimic a neuron. Searle would have to reject this
>to refute the argument (assuming the artificial neuron is made of different
>stuff than the real neuron).

But Searle doesn't have to refute this argument. The Chinese Room
argument leads Searle to conclude that there must be some difference
between computers, at least insofar as they are merely executing the
right program, and people, in order to account for the presence of
understanding in one but not in the other. He does not say that this
difference is just the materials they are made of. (For a longer, and
possibly clearer, version of this point, see my previous message.)

All the stuff about the causal powers of the brain making a difference
follows from the CR argument -- the argument in no way depends on it.
Nor does the argument imply that entities with "artificial" neurons
could not understand, just that the artificial neurons would have to be
equivalent to real neurons in the necessary ways.

It's important to note that you are not talking about capturing the
relevant aspects of the brain in a program -- which is what Searle is
attacking; you are talking about duplicating the physical functionality.
Since Searle thinks it's the physical properties that matter (he's a
materialist, so the famed "causal powers" are physical ones), he isn't
going to be refuted if duplicating those properties in different
materials still results in understanding.

If, on the other hand, you could show that all of the properties
necessary to understanding in brains could be duplicated by artificial
brains *and* that the necessary properties of artificial brains could
be captured by a program, you might have Searle in trouble.

-- 
Jeff