Path: utzoo!utgpu!jarvis.csri.toronto.edu!rutgers!aramis.rutgers.edu!athos.rutgers.edu!nanotech
From: djo@pacbell.com (Dan'l DanehyOakes)
Newsgroups: sci.nanotech
Subject: Goo
Message-ID:
Date: 24 Jun 89 04:41:53 GMT
Sender: nanotech@athos.rutgers.edu
Organization: Pacific * Bell, San Ramon, CA
Lines: 212
Approved: nanotech@aramis.rutgers.edu

alan@oz.nm.paradyne.COM (Alan Lovejoy) writes:

>The explosion of two small bombs does not a nuclear war make. It can be
>argued that the Nuclear Peace we have enjoyed since the end of WWII is
>partially a consequence of the Hiroshima and Nagasaki bombs.

Doesn't it, now? What about three small bombs? Or two large ones?

A war becomes nuclear when nuclear weapons are used. If there had been
more bombs available, they would have been used -- of this I think there
is no reasonable doubt. In the only historical case where a nation
holding nuclear weapons went to war with a nation it regarded as a
serious threat, the nuclear nation used its entire nuclear arsenal.

But this isn't about nanotechnology...

>To say that a problem is insolvable is the same thing as saying it will not
>be solved. To say that a problem is solvable is NOT the same thing as saying
>that a problem will be solved. A statement that something is impossible is
>much harder to prove than a statement that something is possible.

Granted. And my claim is that unless you can show that a gragu eruption
is impossible -- or at least *incredibly* unlikely -- we're better off
during the age of nuclear brinksmanship (which I liken to teenagers
playing "chicken") than we will be in the hypothetical nanoage.

A shield that shields against "most" gragu is *not* sufficient, any more
than an immune system that shields against "most" microorganisms is
sufficient for the survival of a body. If that one organism you aren't
shielded against gets at you -- that's it.

>My claim is simply "The fact that we have avoided nuclear war for forty
>years provides a basis for hoping that nuclear war--and similar disasters
>such as a biotech or gray-goo war--can be avoided long enough so that
>mankind can survive." My "arguments that the problem is solvable" are
>precisely that. They are not arguments that the problem is guaranteed to
>be solved.

And my counterclaim: "We have avoided nuclear war for forty years, but we
have to avoid it for many years more, at least until we have spread to
other planets, and probably until we have spread to other systems, before
we can reasonably conclude that nuclear war did not demonstrate that
technology (and, by extension, intelligence) is an evolutionary dead end.
Nanotech AIAs and the gragu problem simply offer us another possible way
to turn ourselves into an evolutionary dead end. Nothing less than a
convincing argument that a gooproof shield can be implemented faster than
goo will prevent nanotech from being, at least at first, a far greater
threat than boon."

>I think you overestimate our level of optimism. We are in great danger which
>may lead to our destruction. I think there is reason to hope that we will
>survive. I fear that we may not.

Complete agreement.

>Gray goo is almost impossible as an accident.

Again: Pish and tosh. Consider your requirements:

>Gragu requires nanomachines which:
>
>a) Can faithfully replicate themselves;
>b) Can disassemble and/or maliciously reassemble (in the sense of modification
>of molecular structure) almost anything, and/or which can assemble "poisons"
>in strategically sensitive locations.
>c) Can survive in most environments for significant periods of time;
>d) Can hide and/or fight off attack from active shields;
>e) Can obtain sufficient energy to perform their functions rapidly enough
>to pose a threat;
>f) Have sufficient intelligence (or receive sufficiently intelligent direction)
>to avoid strategic and/or tactical mistakes (such as devouring each other or
>consuming the energy supply before the job is finished).

Now consider any attempt to create a truly useful general AIA.

(A) will be required of such a machine.

(B) will be a likely concomitant -- you have to disassemble things to
find out how they're made, if you want to replicate them.

(C) is not required of an AIA -- but it isn't really required of gragu,
either; it has to survive only in the climate you want it to run amok in.
For example, if you wanted to wipe out LA, you'd have to make something
that could work in an oxygen-and-smog atmosphere, at temperatures from 70
to 100 degrees Fahrenheit (okay, so I exaggerated a little. It drops down
to 60 in the winter, sometimes), etc., etc. It need only survive under
conditions that humans survive in to be deadly to humans.

(D) is a serious consideration, but only if the goo is being put
somewhere that active shields already exist -- I'm mostly worried about
the first few years when talking about accidental gragu.

(E) is likely to be the case with any AIA. I'd imagine we want them
absorbing heat from their surroundings (or from the reactions they cause)
wherever possible. Light-powered AIAs seem a good first guess; LA's got
plenty of light.

(F) is nonsense. They just have to go around devouring everything in
sight. If they run out of energy, they "lose," but they've done some
serious damage in the meanwhile. Eating each other could cause more
trouble, but I'm given to believe they'd have protection against that
built in under a scenario like this:

Jho Nano decides to build a "complete" AIA system, one that can take a
general program from nanotape, find the atoms it needs to build the
desired object, and assemble it. This will be the first such AIA ever
built. After a great deal of fiddling, he decides he has a working
design, and grows his molecule.

One molecule isn't much use. He could grow more, but it seems more
valuable, as a test of his design, to give it instructions for building
more of itself. He programs a nanotape that translates as follows:

   "Build a copy of yourself."
   "Decrement the counter on this tape by 1."
   "Make a copy of this tape for the copy of yourself."

Say the counter starts at "5." The AIA will make a copy of itself,
decrement the tape to "4," and then there will be two AIAs with 4-tapes.
Then 4 with 3-tapes. Then 8 with 2-tapes. Then 16 with 1-tapes. You wind
up with 32 AIAs, all of them with used-up tapes.

But. Suppose the decrementer fails? Or the tape accidentally reads
"50000000"? Answer: Grey goo. (There's a toy simulation of exactly this
failure mode below.)

If this happens *after* we've got some kind of useful active shields
going, I'm not too worried. But if it happens in the next few years...
I'm worried.

>The more complex the machine, the more likely that
>"accidents" which introduce "bugs" are to occur --and the more likely it is
>that those "bugs" will simply prevent the machine from working.

Not necessarily. Humans being humans, the designers are likely to attempt
some modularity of design (it makes the whole thing easier to understand,
neh?), and it's possible for a module (say, the "decrementer" module, or
the "don't eat that, it might be human" module) to fail without the whole
failing.
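To make the arithmetic and the failure mode concrete, here's a toy
simulation. The code and all the names in it are purely illustrative --
nobody's claiming real nanotape works this way -- and it treats the
counter as gating replication, which is what the scenario intends. With a
working decrementer the population tops out at 2^5 = 32; with a broken
one it doubles every generation for as long as the raw material holds
out:

# Toy model of the nanotape scheme above. An "assembler" is modeled as
# nothing but its tape counter. Each generation, every assembler whose
# counter is still positive builds one copy, (tries to) decrement its
# own tape, and hands the copy an identical tape.

def generation(tapes, decrementer_works=True):
    next_tapes = []
    for counter in tapes:
        if counter <= 0:
            next_tapes.append(counter)   # tape used up; assembler sits idle
            continue
        new_counter = counter - 1 if decrementer_works else counter
        next_tapes.append(new_counter)   # the original, tape (maybe) decremented
        next_tapes.append(new_counter)   # the copy, with a copy of that tape
    return next_tapes

if __name__ == "__main__":
    # Working decrementer: 1 -> 2 -> 4 -> 8 -> 16 -> 32, then it stops.
    tapes = [5]
    for _ in range(10):
        tapes = generation(tapes)
    print(len(tapes))                    # 32, and it stays 32

    # Broken decrementer (or a tape that reads "50000000"): the
    # population doubles every generation until the feedstock runs out.
    tapes = [5]
    for _ in range(10):
        tapes = generation(tapes, decrementer_works=False)
    print(len(tapes))                    # 1024 after ten generations, still doubling

Same three-instruction tape in both runs; the only difference is whether
the decrementer module does its job. That's the whole point about one
module failing without the rest failing.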
Also, it's a totally normal human tendency to try to make machines as
robust as possible...

>...homo sapiens is living proof that "accidents" can and will lead to more
>advanced and capable replicators--but only over periods of billions, or at
>least millions, of years.

Ahem: when left to happen by themselves. Most of the "accidents" in the
development of nanotech and AIAs will not be accidents at all -- (human?)
intelligence will guide the process.

>Since disassemblers will not be replicators (UNLESS SOMEONE DELIBERATELY DESIGNS
>THEM THAT WAY),

Contrariwise: replicators *WILL* be disassemblers UNLESS SOMEONE MANAGES
TO DESIGN THEM NOT TO BE. That is, unless someone builds in intelligence
that directs them: "Don't use that for spare parts -- it might be part of
something."

>Nanosystems will be DESIGNED to make accidental gragu as unlikely as we know
>how.

Yes -- but how well do we know?

>This restriction
>is not as onerous as it seems if you use "idiot-savant" AI's which are
>brilliant molecular engineers AND OTHERWISE AS DUMB AS A CRAY-V to program
>your nanomachines--and to check the programs offered by your fully-intelligent
>AIs for "trojan horses".

Hmmmm... the Trojan horse got through in spite of Cassandra's warning,
didn't it? More to the point, this is an excellent place for Hofstadter's
"record-player-breaking records." There are, by definition, trojans that
can get past any given security system or set of systems.

>What I was trying to suggest is that we need to make a change in what we
>consider to be "acceptably sane."

Agreed.

>And we need to find out how to reliably
>cure and prevent the sort of "insanity" (or "antisocial behavior") which
>drives (or permits) people to purposely seek to harm others.

Again -- whose definition?

>May I suggest that "insanity" is any state of mind which engenders destructive
>anti-survival behavior? In light of nanotechnology, militarism and terrorism
>are insane states of mind under this definition.

That's culturocentric. Samurai, for example, often performed
anti-survival acts. Ditto car bombers. Are they insane?

>Both shields and goo have to overcome the "is it possible or practical?"
>hurdle. Why should this cause shields more difficulty than goo?

Because goo only has to attack one thing. Shields have to counter every
hypothetical goo. See above.

>And also, if virii and bacteria were gragu-class devices,
>why are we still here?

They aren't.

And you still haven't proven anything, in my book.

Dan'l Danehy-Oakes

[I think you are ascribing some magical powers to the goo that are not
likely in a real nanotech device. For example, it is almost certain that
the first assemblers (and most "industrial" assemblers thereafter) will
get their raw materials by floating in a soup of them, and will not be
able to take anything apart. Assemblers that live in an artificial
environment, needing to be "spoon fed", will be easier to design, will
work faster, and will be *safer* -- reason enough for people to design
them that way.
--JoSH]