Xref: utzoo comp.ai:8288 sci.bio:4209 sci.psychology:3937 alt.cyberpunk:5442
Path: utzoo!utgpu!news-server.csri.toronto.edu!rutgers!hsdndev!wuarchive!julius.cs.uiuc.edu!apple!portal!cup.portal.com!mmm
From: mmm@cup.portal.com (Mark Robert Thorson)
Newsgroups: comp.ai,sci.bio,sci.psychology,alt.cyberpunk
Subject: Re: The Bandwidth of the Brain
Message-ID: <37273@cup.portal.com>
Date: 27 Dec 90 02:03:11 GMT
References: <37034@cup.portal.com>
Organization: The Portal System (TM)
Lines: 69

Jim Lai says:

> Not so. If it is a pipeline of "agents", you only know the maximum
> speed of any individual filter along the pipeline. (A chain is only as
> strong as its weakest link.) Each agent can encode data minimally for
> compactness in theory, but that doesn't mean that the brain is
> therefore optimal and actually only uses 50 bits per second bandwidth.
> There's nothing wrong with a model that assumes sending gigabits per
> second down the pipeline. In the agent paradigm, each agent can feed
> output to many agents. A tree ten levels deep can still be quite wide,
> and be particularly nasty when cycles are added.
>
> My gripe with experiments that claim to measure the bandwidth of the
> brain is the validity of their measurement criteria, which may have
> already assumed a fair amount of preprocessing to have taken place.

This posting and a later one from Fred Sena hit on the weakest point of
the argument I presented in my original posting.

If we are to talk intelligently about the bandwidth of the brain, we
need to recognize the distinction between bits and bauds. Bits are
information; bauds are a lower-level phenomenon which may (but does not
necessarily) carry bits.

For example, let's say I developed a program to enhance the reliability
of serial communications by using triple redundancy. This program takes
an input message such as "hi, mark" and converts it to
"hhhiii,,, mmmaaarrrkkk" for sending over the communication channel. On
the receiving end, another program performs the reverse process, using
the redundancy to correct any errors that occurred during the
transmission of a single character.

Now what would you say the "bandwidth" of this transmission mechanism
is? It certainly requires triple the bandwidth over the communication
channel, but is triple the amount of information being sent? I think
not: the bauds have been tripled, but the bits have stayed the same.
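In rough Python, the triplication scheme might look something like this
(a minimal sketch; the names and details are mine, purely for
illustration):

    def encode(message):
        """Triple each character: 'hi' -> 'hhhiii'."""
        return "".join(ch * 3 for ch in message)

    def decode(received):
        """Collapse each group of three characters by majority vote,
        so any single garbled copy of a character is corrected."""
        out = []
        for i in range(0, len(received), 3):
            group = received[i:i + 3]
            # keep whichever character occurs most often in the group
            out.append(max(set(group), key=group.count))
        return "".join(out)

    sent = encode("hi, mark")            # 24 characters on the wire
    garbled = sent[:4] + "X" + sent[5:]  # one character damaged in transit
    print(decode(garbled))               # prints "hi, mark" -- 8 characters

The channel carries three times the bauds, but the decoded message
carries exactly the bits that went in.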
Likewise in the brain we see enormous neural structures used to perform
low bandwidth functions like reading and listening. Do these structures
have some incredibly high internal bandwidth not evidenced in either the
input or the output? Again -- for the same reason -- I say no. The
amount of _information_ has stayed constant, even if it temporarily
fanned out into some highly decoded (i.e. redundant) representation.

I will admit there is bandwidth which is not visible at either the input
or output ends. For example, a single move by a player playing chess is
an input which results in a single responding move by the opponent. Each
move is an event with very low information content, but a great deal of
internal processing takes place during the ten minutes the master
chessplayer spends deciding on his next move. But this bandwidth is not
so exceptionally high when compared to other intense human activities
like reading, writing, or speaking. The chessplayer examines each
possibility, one at a time, at a very human rate.

It would not slow down the chessplayer very much to tell you what he is
thinking as he thinks it. There is no mega-bandwidth simultaneous
perception of the entire chess position resulting in a responding move
in a single clock cycle. Instead, he thinks, "I can move here. No. Here.
No. Here. Hmm, that's interesting -- then he moves there and I go here
and he goes either there or there."

It's also true that there is some unconscious pre-processing which takes
place without thinking. For example, the chessplayer excludes moves
involving a trapped piece (such as a rook boxed into a corner) from his
consideration unless a reasonable scenario involves removing the
obstacles to moving that piece. Likewise, while reading you take in the
words but ignore the specks of wood pulp embedded in the paper, unless
those specks make a letter difficult to recognize.

Should this pre-processing be counted in the bandwidth? Once again, for
the same reason as before, I say no. To say otherwise is like saying
that when I read text at 1200 baud I'm actually reading faster than 1200
baud because I'm managing to ignore the dust on the face of my CRT. It
is like saying my 9600 baud modem is faster than 9600 baud because it
manages to ignore random clicks and pops on the phone line.
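To put a rough number on the chess example (my own back-of-the-envelope
figures, not a measurement): a middlegame position offers something like
30 to 40 legal moves, so naming one conveys only about five bits, and at
one move per ten minutes that is less than a hundredth of a bit per
second crossing the board in either direction.

    import math

    legal_moves = 35                         # rough middlegame average (assumed)
    bits_per_move = math.log2(legal_moves)   # about 5.1 bits to name one move
    seconds_per_move = 10 * 60               # the master's ten minutes of thought
    print(bits_per_move / seconds_per_move)  # about 0.009 bits per second

Whatever happens inside during those ten minutes, the input and output
channels are carrying almost nothing.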