Path: utzoo!censor!geac!torsqnt!snitor!rmc
From: rmc@snitor.UUCP (Russell Crook)
Newsgroups: comp.arch
Subject: Re: 64-bits, How many years?
Message-ID:
Date: 21 Feb 91 15:06:38 GMT
References: <9102171510.AA24745@lilac.berkeley.edu> <1991Feb18.163010.31688@m.cs.uiuc.edu> <3209@crdos1.crd.ge.COM>
Sender: usenet@snitor.uucp (nntp news poster)
Organization: Siemens Nixdorf Information Systems Ltd.
Lines: 53

In article <3209@crdos1.crd.ge.COM> davidsen@crdos1.crd.ge.com (bill davidsen) writes:
>In article <1991Feb18.163010.31688@m.cs.uiuc.edu> gillies@m.cs.uiuc.edu (Don Gillies) writes:
>
>| So we conclude that because
>| 64-28 = 36, it will take 120 years to outgrow the 64-bit address
>| space.
>
>  We may never run out of 64 bits of address space. That's not to say we
>won't have problems larger than that, but there's a real possibility
>that some limitations of physics will hold us back.
>
 <<<>>
>
>  Therefore, I conclude that the speed of light makes 64 bits likely as
>the largest physical address space we will ever need. I have lots of
>faith in new development, but I have faith in relativity and physics,
>too.
>--
>bill davidsen (davidsen@crdos1.crd.GE.COM -or- uunet!crdgw1!crdos1!davidsen)
>    "I'll come home in one of two ways, the big parade or in a body bag.
>     I prefer the former but I'll take the latter" -Sgt Marco Rodrigez

All of this presupposes two-dimensional memory.  In 3D, 64 bits seems
to be within reach.  Some numbers:

2**64 = 2 * 2**63.  Assuming a cubic array of bits, that is 1.26*2**21
bits on a side, or about 2.6 * 10**6.  If we constrain our memory cells
to be one micron across (which isn't too far from current praxis, except
in 2D), this yields a cube about 2.6 metres on a side.  Large, but not
ridiculously so.  If you can get the cells down to .1 micron including
wires (i.e., 1000 angstroms, or roughly 10**8 to 10**9 atoms per cell),
the cube shrinks to about 26 cm on a side, which would fit on a
desktop...

The speed of light would, however, restrict the clock rate to some
fraction (say a third) of a gigahertz: about 3 nsec access time for the
desktop version, and 30 ns or so for the larger one.  So there could be
some performance limitations.  Even that applies only to completely
random access.  If you treat the memory like a current-day disk, you get
3-30 nsec of seek+latency, followed by some arbitrary transfer rate.

I won't argue about 128 bits being enough :->
--
------------------------------------------------------------------------------
Russell Crook, Siemens Nixdorf Information Systems, Toronto Development Centre
2235 Sheppard Ave. E., Willowdale, Ontario, Canada M2J 5B5   +1 416 496 8510
uunet!{imax,lsuc,mnetor}!nixtdc!rmc, rmc%nixtdc.uucp@{eunet.eu,uunet.uu}.net,
rmc.tor@nixdorf.com (in N.A.), rmc.tor@nixpbe.uucp (in Europe)
"... technology so advanced, even we don't know what it does."
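
P.S. For anyone who wants to check the arithmetic, here is a minimal
back-of-the-envelope sketch (Python, chosen only for convenience).  The
assumptions are the ones stated above: 2**64 one-bit cells packed in a
cube, cell pitches of 1 micron and 0.1 micron, and signals limited to the
speed of light; everything else is plain arithmetic, not a statement
about any real memory design.

    # Sanity check of the cube sizes and light-delay figures quoted above.
    # Assumptions (from the article): 2**64 one-bit cells in a cubic array,
    # cell pitches of 1 micron and 0.1 micron.

    bits = 2.0 ** 64
    side_cells = bits ** (1.0 / 3.0)       # cells along one edge: ~2.6e6, i.e. ~1.26 * 2**21

    c = 3.0e8                              # speed of light, m/s

    for pitch_m in (1e-6, 0.1e-6):         # 1 micron, 0.1 micron
        side_m = side_cells * pitch_m      # physical edge length of the cube
        diag_m = side_m * 3.0 ** 0.5       # worst-case corner-to-corner distance
        round_trip_ns = 2.0 * diag_m / c * 1e9   # request out + reply back, at light speed
        print("pitch %.1e m: edge %.2f m, worst-case round trip %.1f ns"
              % (pitch_m, side_m, round_trip_ns))

Running it reproduces the numbers in the post: roughly a 2.6 m cube with
a ~30 ns worst-case round trip at a 1 micron pitch, and roughly a 26 cm
cube with a ~3 ns round trip at 0.1 micron.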