Xref: utzoo comp.ai:8282 sci.bio:4207 sci.psychology:3930 alt.cyberpunk:5438
Path: utzoo!utgpu!cs.utexas.edu!usc!snorkelwacker.mit.edu!bloom-beacon!eru!hagbard!sunic!news.funet.fi!funic!polaris.utu.fi!polaris.utu.fi!magi
From: magi@polaris.utu.fi (Marko Gronroos)
Newsgroups: comp.ai,sci.bio,sci.psychology,alt.cyberpunk
Subject: Re: The Bandwidth of the Brain
Message-ID:
Date: 25 Dec 90 17:55:46 GMT
References: <37034@cup.portal.com> <1990Dec22.213121.12226@dsd.es.com> <1990Dec24.202254.2832@ddsw1.MCS.COM>
Sender: news@polaris.utu.fi
Organization: University of Turku
Lines: 46
In-Reply-To: zane@ddsw1.MCS.COM's message of 24 Dec 90 20:22:54 GMT

In several articles, people have been discussing whether we can ever understand and simulate our thinking with a computer. Here is another BORING and STUPID article on the subject.

I don't think we can simulate OUR thinking with digital, synchronized computers. It doesn't matter whether they are parallel; parallelism can be simulated completely on a sequential computer, only more slowly (a small C sketch at the end of this post makes this concrete). The iterative processing just brings up too many problems. One problem, for instance, is signal feedback between layers: it creates a 'resonance' that leaves both layers completely unaware of each other's current activation level (I can't confirm this problem, and time-discrete ANNs may avoid it, but anyway..). This is just one example; there are dozens of them.

It might be possible to simulate *some* kind of thinking with normal computers, though. But since even one million 'neurons' with one billion 'synapses' would be quite a lot to simulate (see the rough arithmetic at the end of this post), the 'artificial mind' wouldn't be too intelligent..

Why do you say that we can't understand our thinking? It's quite true that a pocket calculator can't understand its "thinking", but then, a pocket calculator doesn't THINK, it doesn't LEARN, it doesn't draw INTELLIGENT CONCLUSIONS. I don't think we can posit a "law of not understanding oneself" when we have only one example of beings that truly can't understand ANY of their functional principles (computers). We already understand some of our own functional principles, so is there a law that at some point halts our progress in studying ourselves? Where is the limit? Is it high enough to allow us to create other beings that think (on other, simpler principles)?

The thinking computer doesn't have to simulate our brains. Recently I've been reading about crystalline light computers (sounds like science fiction, doesn't it? :-) ) that deal with 'holographic' thoughts generated by interference between thought-pattern signals. The synaptic weights would be chemical changes in the crystal structure that attenuate the light (as in automatically darkening sunglasses). Although I'm not sure about the correctness of this interference theory (aka the rubber duck theory), what I'm trying to say is that the principle of thinking in our future neurocomputers may be totally different from ours.

The last word: Is it useful for us to say that we can't create thinking machines? That kind of law is an ANSWER, and only religions give ANSWERS. If we believe that we can't make progress in our research, then there really can't be any progress. That's the main reason why I don't like most religions.. :-/
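
PS. To make the sequential-simulation claim concrete, here is a minimal C sketch (a toy of my own invention; the two-unit network, the weights and the hard-threshold rule are made up for illustration, not taken from anyone's model). All the new activations are computed from the old buffer before any are written back, so a sequential machine reproduces a truly parallel synchronous update exactly, just more slowly. The same toy also shows the feedback effect I mentioned: each of the two mutually connected units only ever sees the other's PREVIOUS activation, so the pair oscillates forever instead of settling.

/* One synchronous update step of a tiny network, simulated
   sequentially with double buffering.  Because new activations are
   computed only from the old buffer, the result is identical to what
   a truly parallel machine would produce -- just slower. */
#include <stdio.h>

#define N 2                     /* two units wired to each other */

static void step(double w[N][N], double old_a[N], double new_a[N])
{
    int i, j;
    double sum;
    for (i = 0; i < N; i++) {   /* "all units at once", one by one */
        sum = 0.0;
        for (j = 0; j < N; j++)
            sum += w[i][j] * old_a[j];
        new_a[i] = (sum > 0.0) ? 1.0 : 0.0;   /* hard threshold */
    }
}

int main(void)
{
    double w[N][N] = { {0.0, 1.0},    /* unit 0 listens to unit 1 */
                       {1.0, 0.0} };  /* unit 1 listens to unit 0 */
    double a[N] = {1.0, 0.0};         /* initial activations */
    double b[N];
    int t, i;

    for (t = 0; t < 6; t++) {
        step(w, a, b);
        for (i = 0; i < N; i++)       /* copy new buffer over old */
            a[i] = b[i];
        printf("t=%d: %.0f %.0f\n", t + 1, a[0], a[1]);
    }
    return 0;   /* prints 0 1 / 1 0 / 0 1 ... -- a two-cycle 'resonance' */
}

Note that updating the array in place (dropping the second buffer) gives different dynamics; the double buffering is exactly what makes the sequential run equivalent to the parallel one.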
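
PPS. Some back-of-the-envelope arithmetic for the scale remark (my own rough figures, so take them loosely): one full update pass over 10^9 synapses needs at least one multiply-add per synapse, i.e. about 10^9 operations. A machine sustaining 10^6 such operations per second, respectable for today's hardware, would then need on the order of 10^9 / 10^6 = 1000 seconds, a quarter of an hour, for a single pass over the network, while biological neurons are believed to fire tens to hundreds of times per second.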