Path: utzoo!utgpu!news-server.csri.toronto.edu!cs.utexas.edu!sun-barr!rutgers!aramis.rutgers.edu!athos.rutgers.edu!nanotech
From: dmocsny@minerva.che.uc.edu (Daniel Mocsny)
Newsgroups: sci.nanotech
Subject: Some problems of super-intelligence
Message-ID:
Date: 6 Dec 90 07:01:22 GMT
Sender: nanotech@athos.rutgers.edu
Organization: University of Cincinnati, Cin'ti., OH
Lines: 106
Approved: nanotech@aramis.rutgers.edu

>panix!alexis@cmcl2.nyu.edu (Alexis Rosen) writes:
>>2) More importantly, I'm guilty myself (in the above paragraphs) of the same
>>thing I accused Daniel of: overly limited vision. Like the gray-goo problem,
>>though, I don't see how we can even approach this subject intelligently. When
>>you're a million times smarter than you are today, what will be important to
>>you? Will creativity still be a mystery? Will key "human" things, basic

I fully expect that we will one day be able to augment human intelligence massively. (The augmentation that has occurred to date has all been peripheral, not direct. The two are equivalent only in a very limited way, as I discussed in another article. Giving your brain a better environment in which to think is not going to make you 1,000,000 times more intelligent by many useful measures.) However, we must temper our expectations with the admission that we lack a few rather important tidbits:

1. We haven't the foggiest notion of how our brains do what they do right now. Sure, we have some vague, hand-waving speculations, but nothing that could be regarded as a basis for engineering. We can't even fix broken brains, nor explain what makes some brains work better than others.

2. Much less do we have any idea of how to enable our brains to do 1,000,000 times more than they do now.

Face it: human beings have only slightly more control over how intelligent they happen to be than rocks and trees do. (That is a profoundly frightening thought.) We are going to take considerable time merely to catch up to the engineering that our genes do mindlessly on our behalf. And once we do, who knows what we will discover? We *think* we can build smarter brains than any now existing, but how do we *know* that? What if a theoretical limit exists to the maximum amount of intelligence that can exist in one coherent entity before the subparts become so intelligent that they create their own independent agendas and rebel?

This might happen, for example, as a natural consequence of lightspeed limitations. If you had a lump of material in which every last quark was processing data, the communication latency between components in that material would at best be proportional to the distance separating them. For maximum efficiency, then, every component would have to spend most of its time "talking" to its nearest neighbors.

Communication binds an incoherent mass of components into a "self". This is true for all complex systems, from cells to bodies to societies. We consider our bodies to be "ourselves", rather than the entire Universe, because our intra-body communication bandwidth is so much higher than our extra-body communication bandwidth.

Thus, in the super-nano-quark-computer, local assemblies of processors would tend to evolve along paths independent of other, more distant assemblies. If these assemblies were smart enough to do useful work, they would also be smart enough to develop a sense of "self" apart from the rest of the computer. That would motivate them to seek their own welfare at the expense of the remaining system.
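To put rough numbers on the latency point, here is a small back-of-the-envelope sketch. The 1 cm lump, the 1 THz component clock, and the 1 micron neighbor spacing are arbitrary assumptions chosen only to show the scaling, not figures anyone has claimed:

    # Rough illustration of the lightspeed argument above.  The lump size,
    # component clock rate, and neighbor spacing are assumed values.
    C = 3.0e8           # speed of light, m/s
    CLOCK_HZ = 1.0e12   # assumed switching rate of one component (1 THz)
    LUMP_SIZE = 1.0e-2  # assumed diameter of the computing lump (1 cm)
    NEIGHBOR = 1.0e-6   # assumed distance to a nearest neighbor (1 micron)

    def cycles_waiting(distance_m):
        """Clock cycles idled while a one-way signal crosses distance_m."""
        return (distance_m / C) * CLOCK_HZ

    print("waiting on a nearest neighbor: %10.4f cycles" % cycles_waiting(NEIGHBOR))
    print("waiting across the whole lump: %10.1f cycles" % cycles_waiting(LUMP_SIZE))
    # Latency grows linearly with distance, so consulting the far side of
    # the lump costs roughly 10,000 times as many idle cycles as consulting
    # the component next door; hence the pressure to "talk" locally.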
The super-brain, in other words, would develop an internal structure resembling an ordinary, competitive ecosystem. Having 1,000,000 times more intelligence inside one's head might not make a person 1,000,000 times more "intelligent". It might make one as doddering and ineffective as any corporation or government with 1,000,000 employees. Sure, a large organization can accomplish more, in many important cases, than one individual can. But the large organization is manifestly NOT 1,000,000 times "smarter" in every way. In some instances the individual is clearly superior, not being bound by the need to expend vast energies on mediating internal conflicts. No organization can focus its entire intellectual capacity on one problem. An upper limit may exist, in fact, on how much intelligence can be focused on one thing at one time, due to the ecological notions I waved around above.

>>things like material and emotional desires, still have meaning? The point
>>is, achieving "real" nanotech means that you've pretty much won the game
>>of life, as we know it.

I don't think life is going to roll over and play dead quite as easily as you imagine. Besides, even a 1,000,000-fold increase in intelligence isn't going to amount to very much. Read your Garey and Johnson on computational complexity. Most useful, real-world problems are NP-complete or NP-hard, or even NP-atrocious :-). Exponential complexity reduces exponential increases in capacity to merely arithmetic gains in benefit.

And then there's chaos, you know. Even if you could simulate everything, you would still have surprises, due to uncertainty in your initial (and ongoing) measurements. (A rough numerical sketch of both points follows below.)

--
Dan Mocsny                                  Snail:
Internet: dmocsny@minerva.che.uc.edu        Dept. of Chemical Engng. M.L. 171
          dmocsny@uceng.uc.edu              University of Cincinnati
513/751-6824 (home) 513/556-2007 (lab)      Cincinnati, Ohio 45221-0171

[Actually, the million mark in increased intelligence probably is the level we can expect to reach *without* some fundamental increase in knowledge about intelligence, simply by simulating the existing structure but making it faster. Combine the raw speed with a built-in library, and the resulting entity could apply to any problem in 5 minutes the effect of 10 years of in-depth study and research at current human scale.
--JoSH]
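Here is the rough numerical sketch promised above, covering the exponential-complexity point and the chaos point. The pure 2**n running time and the logistic map are illustrative stand-ins chosen only to show the scaling; neither is a claim about any particular problem:

    # Sketch of the two points above, under assumed stand-ins:
    # (1) a problem whose running time grows as 2**n, and
    # (2) the logistic map (r = 4) as a generic chaotic system.
    import math

    # (1) Exponential complexity: a 1,000,000-fold increase in capacity
    # only lets the solvable instance size n grow by about log2(1,000,000).
    SPEEDUP = 1.0e6
    print("2**n running time: a 10^6-fold speedup lets n grow by only ~%.1f"
          % math.log(SPEEDUP, 2))

    # (2) Chaos: run two simulations whose initial measurements differ by
    # one part in a billion; within a few dozen steps the discrepancy has
    # swollen by many orders of magnitude.
    x, y = 0.4, 0.4 + 1.0e-9
    for step in range(1, 51):
        x, y = 4.0 * x * (1.0 - x), 4.0 * y * (1.0 - y)
        if step % 10 == 0:
            print("step %2d: gap between the two trajectories = %.1e"
                  % (step, abs(x - y)))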