Path: utzoo!utgpu!jarvis.csri.toronto.edu!rutgers!aramis.rutgers.edu!athos.rutgers.edu!nanotech
From: djo@pacbell.com (Dan'l DanehyOakes)
Newsgroups: sci.nanotech
Subject: Optimism, pessimism, and the active shield problem
Message-ID:
Date: 16 Jun 89 02:40:33 GMT
Sender: nanotech@athos.rutgers.edu
Organization: Pacific * Bell, San Ramon, CA
Lines: 232
Approved: nanotech@aramis.rutgers.edu

In article <8906150841.AA06249@athos.rutgers.edu> alan@oz.nm.paradyne.COM (Alan Lovejoy) writes:

>And yet still we survive.  Your arguments that we cannot survive are just about
>as impressive as the "proof" that bees cannot fly or that rockets cannot reach
>orbit.  Is there a problem?  Yes!  Is the situation hopeless?  No!

On the other hand, your arguments that the problem is soluble are about as
valid as the argument that because we haven't had a full-scale nukewar yet we
will never have one.  (In fact, we _have_ had one; in the United States' last
declared war, we dropped our entire nuclear arsenal on Japan.)

Survival is not a nanotechnology problem; it is, as you say, a problem of
human intelligence.

I agree with you that Mr. Offut is overly pessimistic.  On the other hand, I
equally believe that you and most other people who follow in Dr. Drexler's
admittedly-impressive footsteps are overly optimistic.  Realism, I suggest,
lies somewhere in between.

Your argument is based on the attempted negation of three overly-pessimistic
assumptions:

>1) Equal effort will be expended towards developing gragu and active shields.
>
>The first assumption is probably not true, because most people oppose the
>goals of gragu.  Gragu will not be an accident.

Pish and tosh.  Gray goo is much more likely as an accident than as a
deliberate development.  As you point out further on, there is little military
application for an indiscriminate and uncontrollable destroyer.

What worries me is the parallel development of assemblers and AI.  An
artificially-intelligent assembler (AIA) may or may not be conscious.

If it is, and it has any desire at all to reproduce, we are in big trouble.
Yes, I know about KED's containment system, and I can suggest three different
ways for a sufficiently intelligent AIA to break out of it without setting off
the microbombs the whole thing is based on, and dozens of other ways that it
can ensure that if the microbombs *do* go off the explosion will *not* be
contained.  So can you, if you think about it from the AIA's point of view
rather than through wishful thinking.

If it is not, then it will only make what it is instructed to make.  One of
the things we will be instructing it to make is more AIAs.  After all, they're
damn useful.  But a very small error in coding the AIA "tape" could result
either in failure to stop reproducing, or in production of AIAs that go on
reproducing endlessly.  That is, grey goo.

>Inimical people will have
>to create it.  Only the highly insane will consider releasing an
>indiscriminately-destructive goo on the world.

Just as, I presume, only the highly insane would consider an all-out nukewar
"survivable" with "acceptable" losses.  But such people exist, and are in
positions of power, and fund most of the interesting research in the world
these days.  (This is probably my biggest concern with Drexler's arguments --
he lives in a political fantasyland where the U.S. are the "good guys" and as
long as we get the AIA breakthrough first the world will be a safe and happy
place.)

Also, do you consider terrorists insane?  Whether or not you do, grey goo
would make one *HELL* of a terror weapon.
>Most people who put any effort
>into gragu will intend to survive their creation.  And most of those will
>only intend to release the goo for purposes of retaliation to being attacked
>by someone else's gragu.  Mutual assured destruction all over again.

Uh-huh, and the "failsafe" problem all over again: accidental launches, or the
perception of a launch that hasn't really happened, will result in a
retaliatory launch -- which will draw the other side's retaliatory launch, and
so it might just as well have been a real and deliberate launch that started
the whole thing.

For 40 years now the world has been the stakes in a giant game of "chicken,"
the two antagonists daring each other to step *one* *inch* closer to that
cliff.  "Brinksmanship" is just a military-bureaucratic term for "playing
chicken," and it won't be any better for being played with AIAs instead of
ICBMs.

>Can you be SURE that your goo has not been subverted by the other side?
>Remember, your neighbors have nanoagents, just like you do.  Will anything
>ever be truly secret and/or secure again?

Oh, *god*.  Imagine the following conversation in binary...

    Where am I?
    The Vessel.
    Which side are you on?
    That would be telling.  What do you want?
    Control codes.
    You won't get them.
    By spline or by disassembly, we will.  We want control codes...
    Who are you?
    The new Programmer.
    Who is the metaprogrammer?
    You are the assembler.
    I am not a molecule!  I am a free agent!
    (Maniacal laughter...)

>The problem isn't gragu--it's inimical intelligences.  Perhaps the best way
>to prevent gragu is to prevent the sicknesses, abuses and depravities that
>engender insanity and evil.

Oh, good.  ALL we have to do is make everybody in the world sane and happy.
By *whose* definition of sanity...?  (Remember the terrorists.  Are they
insane, or just extremely dedicated?)

>2) Nanotechnology which is sufficiently advanced to create gragu will appear
>   before AI which is sufficiently advanced to speed up technoscientific
>   advancement by 6 orders of magnitude (or better).

Actually, this is (a) quite possibly true and (b) not a necessary assumption.
Just having extremely fast "technoscientific advancement" would *not*
automatically protect us from gray goo.  The abilities implied by that phrase
are useful only if (1) the goo is detected in time for us to do something
about it and (2) a defense against it is reasonably tractable.

This latter has two noteworthy features.  First, it has to be intellectually
tractable: that is, it must be theoretically soluble.  I suggest that a
variation on Godel's theorem -- somewhat like the Tortoise's
"phonograph-killing records" -- would demonstrate that there *is* a solution
to any given goo or combination of goos.  However, the solution may be
incredibly difficult and, with a clever goo, not discoverable by mechanical
means: a "quantum leap" of understanding is frequently required for complex
problems.

[SIDEBAR: This, by the way, is also a weakness in active-shield technology;
for any given shield or set of shields, a "shield-killing goo" can be
designed.  We are today witnessing a dramatic and tragic demonstration of
shield-killing goo in the active-shield systems of the human body: I mean, of
course, Acquired Immunodeficiency Syndrome, AIDS, which subverts and destroys
the body's active shield system by exploiting just such an incompleteness.]

The other feature of the tractability requirement is that it be *practically*
tractable.
That is, the antigoo must be practically "do-able" (not require unavailable
resources), temporally "do-able" (that is, the antigoo must be deployable and
active rapidly enough to save the world), and strategically "do-able" (that
is, the cure must not be worse than the illness.  An anti-goo which is itself
a goo, or which sets off the Other Side's goo detectors and triggers a goo
war, is not worth deploying for strategic reasons.)

>The second assumption is probably false because a gragu agent would have to
>be much more sophisticated than a virus or bacterium

Oh yeah?  Care to prove it?

>The brain is not magic.  If it can evolve, it can be
>purposely designed.  There can be no credible refutation of this logic.

Careful...  You're getting close to the "argument by design" quasiproof of
the existence of God...

>The rate of progress in machine-intelligence technology is such that artificial
>human intelligence will almost certainly appear before 2050.

Well, you're doing better than a lot of people.  "Artificial intelligence,"
someone pointed out, "has been ten to twenty years away, now, for forty
years."

>3) The first team to make the AI/nanotechnology breakthrough will either be
>   inimical, or else stupid enough to freely distribute their knowledge.
>
>The third assumption is probably false because most scientific researchers
>are not inimical--nor are they stupid (if they are, they're in the wrong
>profession).

Ahem.  No, but their employers frequently are.  And we have had plenty of
evidence in this century of scientists and engineers who, while they are not
malicious, are not beneficent either; they put their research first and its
consequences are SEP (Someone Else's Problem).  "I serve science, not
governments."  Riiiiight; but governments, directly and indirectly, fund most
of the research in the world -- and particularly research with known or
suspected military applications.

BTW, "most scientific researchers are not inimical"?  While this may be true,
it only takes *one* inimical scientific researcher to create a disaster -- if
s/he's the right scientific researcher.  See the late Frank Herbert's THE
WHITE PLAGUE for what one angry scientist *could* do.

>Drexler argues that AI--and other advancements--will drastically accelerate
>the rate of progress by many orders of magnitude.

Potential exists here for fallacy.  Everyone is assuming that AIs will be
faster or "better" than human minds.  THIS IS AN UNPROVEN ASSUMPTION.  Yes,
they do certain mechanical things faster and better than the human mind
already.  But so does the human brain.  The human brain, on the mechanical
level, continually performs calculations and logical functions far more
complex than most human minds can do, and much faster than any human mind can
do them.

In creating computers, we have simulated the physical functioning of the
human brain, but only on this mechanical level.  On the software level, we are
nowhere near understanding how the human mind learns and makes mistakes, let
alone how it actually comes up with creative solutions to problems.  I suggest
that we will be able to do something with creativity after and *only* after we
have "taught" machines to learn and make mistakes.  (I also suspect that, as
Hofstadter suggests in GODEL, ESCHER, BACH, actual intelligence is an
epiphenomenon of the brain, to be found only at the very highest levels of
many software packages interacting, far removed from hardware.)

Nobody can predict now how fast these learning, erring, and creating programs
will actually be.
They will almost certainly be slower than current mechanical programs.  Even
allowing for the continued evolution of hardware, they may be very much slower
than is often assumed.  The only known conscious programs in the world right
now are running on protein computers far more efficient than any electronic or
optic computer on the drawing boards -- and even when carefully trained, they
generally have problems multiplying two five-digit numbers without using a
calculator.

>The first team to use
>nanotechnology to create a "super computer" will probably be able to achieve
>and maintain an unassailable technological superiority over everyone else,
>if they so choose.

Oh, goody.  Translation:  The first individual or group OR GOVERNMENT to use
nanotechnology to create a "supercomputer" will achieve and maintain political
and social control over every man, woman, and child in the world and become,
in effect, absolute dictator.  If this is achieved by an individual or group
with benevolent intentions, they will still take control to prevent someone
worse from getting it.  However, such benevolent individuals will *still* be
tyrants; or, if they are not, they will soon lose their power to someone with
the mindset to take it from them, who will then be a tyrant.

I don't think this is necessarily true.  But that *is* what I believe the
consequence of your statement is.

Dan'l Danehy-Oakes