Path: utzoo!utgpu!jarvis.csri.toronto.edu!rutgers!aramis.rutgers.edu!athos.rutgers.edu!nanotech
From: vorth%sybil@gatech.edu (Scott Vorthmann)
Newsgroups: sci.nanotech
Subject: Re: Ye olde matter duplicator
Message-ID:
Date: 21 Jun 89 03:11:19 GMT
Sender: nanotech@athos.rutgers.edu
Lines: 68
Approved: nanotech@aramis.rutgers.edu

"Ron_Fischer.mvenvos"@Xerox.COM writes:
>Could you send your assumptions and derivation for the order of magnitude
>mass increase?

I was making a rough guess, but let me see if I can make an argument...

Assume, for simplicity, that all atoms lie on lattice points of a 3D
lattice with 1 nm spacing.  Assume also that we are duplicating an object
whose composition is a random mixture of atoms of only 10 elements.  Both
of these assumptions are very optimistic, so we should be able to get some
sort of lower bound.  (The "random" assumption is actually quite
pessimistic, so the lower bound will apply only to such random
compositions.)

Now our program could use a "global" specification ("put an atom of
element X at position (x,y,z)") or a "local" one ("the next atom is
element X").  Since the x, y, and z values in the former will be VERY
large integers, requiring many bits to encode, I think the latter will be
more compact.  It is also easier to see how a local encoding might be
"executed", using something like nested for-loops building the object in
"row-major order".

So our encoding will have 10 different "data" values, plus a few "control"
values (like "start new row/face"), in a serial encoding.  Each position
on the program "tape" will therefore need to encode 4 bits of information.
Using a brute-force encoding, where each bit is represented by one of two
possible atoms forming a link in the chain, we get a 4-to-1 mass increase
of program over object (assuming all atoms are of a single, "average"
mass).  This is a safe lower bound.  In actual fact, the program "tape"
will likely require at least 4 atoms to encode a single bit (carbon
backbone, extra hydrogens, etc.).  Note that bits can also be encoded
structurally, via stereoisomers, etc.; this may help put a ceiling on the
number of atoms required to encode larger symbol sets.

>>Except that we may want all interfaces between different sub-assemblies to
>>be specified at the molecular scale...

>Using your previous statement regarding hierarchical design, I don't think
>this is an issue, since assemblers operating at the interfaces could use
>their knowledge of proper construction techniques to do this without an
>explicit encoding.

That "knowledge" may prove vast.  However, we could reduce the size of
the problem by having a set of standard "termination" compositions.  The
interface assemblers would then need only know how to connect the various
types of termination regions at plane, line, or point interfaces.  The
assemblers for the sub-assemblies would have specified "boundary"
subprograms, perhaps with coordination between levels of the hierarchy,
to terminate sub-assemblies in standard ways.

Scott Vorthmann
School of ICS, Georgia Tech
vorth@gatech.edu

[This analysis seems correct as far as it goes.  However, there seems to
be a way to "cheat" it at higher levels.  If we consider ordinary natural
or bulk-technology objects, one can use adaptive grids, octrees,
run-length encoding, hierarchical layering, and similar techniques to
reduce the amount of information needed to a very tiny amount.
For microscopic living organisms or the products of nanotechnology, it may
be necessary to specify at the level of "a place for every atom and every
atom in its place"; but it would be the rare macroscopic-sized object that
would need this detail.  (Macroscopic living organisms *do* have a
tremendously compact encoding for their structure...)  --JoSH]
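
To put numbers on the lower-bound argument above, here is a rough sketch
in Python (illustrative only: the object size and the count of control
symbols are assumptions, and the 4-atoms-per-bit figure is the one the
post itself suggests):

# Rough sketch of the lower-bound bookkeeping (illustrative only; the
# figures are the post's assumptions, not measured values).
import math

n_object_atoms    = 10**9   # arbitrary object size; the ratio is size-independent
n_data_symbols    = 10      # 10 elements in the random mixture
n_control_symbols = 4       # "start new row/face", etc. (assumed count)

bits_per_symbol = math.ceil(math.log2(n_data_symbols + n_control_symbols))  # -> 4

# Brute-force tape: one link atom per bit, all atoms of equal "average" mass.
tape_atoms_optimistic = n_object_atoms * bits_per_symbol * 1
# More realistic tape: ~4 atoms per bit (carbon backbone, extra hydrogens, ...).
tape_atoms_realistic  = n_object_atoms * bits_per_symbol * 4

print(tape_atoms_optimistic / n_object_atoms)   # 4.0  -> the 4-to-1 lower bound
print(tape_atoms_realistic  / n_object_atoms)   # 16.0 -> roughly order-of-magnitude

With the more realistic tape the ratio comes out around 16 to 1, i.e.
roughly the order-of-magnitude mass increase being discussed.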
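
The "cheat" in the moderator's note can be illustrated the same way: a
run-length encoding collapses the large homogeneous regions of a
bulk-technology object into a handful of runs, so the description shrinks
dramatically (the element names and sample row below are made up for the
example):

# Toy comparison: a serial per-atom tape versus a run-length encoded
# description of the same row of atoms.
def run_length_encode(row):
    runs = []
    for atom in row:
        if runs and runs[-1][0] == atom:
            runs[-1][1] += 1
        else:
            runs.append([atom, 1])
    return runs

# A bulk-technology object: a long row of one element with a thin coating.
row = ["Fe"] * 100000 + ["Cr"] * 10

per_atom_tape = len(row)                 # one symbol per atom: 100010 symbols
rle_tape      = run_length_encode(row)   # [["Fe", 100000], ["Cr", 10]]

print(per_atom_tape, len(rle_tape))      # 100010 symbols vs 2 runs

Octrees and hierarchical layering do the same kind of collapsing in three
dimensions, over uniform volumes rather than uniform rows.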