Path: utzoo!utgpu!news-server.csri.toronto.edu!cs.utexas.edu!wuarchive!zaphod.mps.ohio-state.edu!ub!uhura.cc.rochester.edu!rochester!pt.cs.cmu.edu!gandalf.cs.cmu.edu!lindsay
From: lindsay@gandalf.cs.cmu.edu (Donald Lindsay)
Newsgroups: comp.arch
Subject: Re: Be Prepared...
Keywords: Lots Of Memory
Message-ID: <12064@pt.cs.cmu.edu>
Date: 23 Feb 91 00:25:25 GMT
References: <7517@uceng.UC.EDU>
Organization: Carnegie Mellon Robotics Institute, School of CS
Lines: 45

In article <7517@uceng.UC.EDU> dmocsny@minerva.che.uc.edu (Daniel Mocsny) writes:
>...let me emphasize that having ridiculous amounts of memory
>available could potentially speed up lots of things.
>So then, given unlimited memory, we can extend the notion of "caching"
>to include any potentially redundant calculation. *Many* application
>programs involve some element of redundancy. People don't solve
>completely unique problems every time they fire up a computer. So if
>I wanted to do a bunch of simulation runs, I would be happy to build
>up a set of large interpolation tables that could speed things up by
>a factor of 10 or 100.

This is reminiscent of the old "Godelization" jokes, whereby every
program output had to be registered under its Government-assigned
Godel number, so that no one would ever have to recompute it...

>By compiling statistics on the user's work habits, the computer could
>possibly anticipate the user's next likely command(s), and get a
>head-start during idle periods.

A recent Carnegie Mellon thesis was on anticipating user commands.
One of the major issues is ensuring that all uncommanded actions are
undoable. Given that, anticipation is definitely a winning idea in
selected problem domains.

The following was posted two years ago, but it seems relevant again:

Big memories may turn out to be useful in and of themselves. The group
at Sandia that won the Gordon Bell Award - the people with the 1,000 X
speedup - reported an interesting wrinkle. They had a program described
as: Laplace with Dirichlet boundary conditions using Green's function.
(If you want that explained, sorry, ask someone else.) They reduced the
problem to a linear superposition, and then as the last step, they did
a matrix multiply to sum the answers. This took 128 X as much memory as
"usual" (256 MB instead of 2 MB), but made the problem 300 X smaller in
terms of the FLOPs required.

One of the perennial topics in the OS world is the latest idea for
using memory. I don't see why other problem domains shouldn't also find
ways to spend memory.
--
Don D.C.Lindsay .. temporarily at Carnegie Mellon Robotics
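
Mocsny's interpolation-table point above is essentially memoization writ
large: pay for an expensive calculation once, keep the answers in a big
table, and let later queries interpolate instead of recomputing. Here is
a minimal C sketch of that trade; expensive_kernel(), the table size,
and the interpolation range are made-up stand-ins for illustration, not
anything from a particular simulation code.

/* Trade memory for repeated computation: fill a table once, then
 * answer later queries by linear interpolation. */

#include <stdio.h>
#include <math.h>

#define TABLE_SIZE (1 << 16)   /* make this as large as memory allows */
#define X_MIN      0.0
#define X_MAX      100.0

static double table[TABLE_SIZE];

/* Stand-in for a costly calculation that many runs would repeat. */
static double expensive_kernel(double x)
{
    double acc = 0.0;
    int i;
    for (i = 1; i <= 200; i++)
        acc += sin(x / i) * exp(-x / (i + 1));
    return acc;
}

/* Pay the cost once, up front. */
static void build_table(void)
{
    int i;
    for (i = 0; i < TABLE_SIZE; i++) {
        double x = X_MIN + (X_MAX - X_MIN) * i / (TABLE_SIZE - 1);
        table[i] = expensive_kernel(x);
    }
}

/* Later queries interpolate between neighboring entries. */
static double lookup(double x)
{
    double t = (x - X_MIN) / (X_MAX - X_MIN) * (TABLE_SIZE - 1);
    int    i = (int)t;
    double f = t - i;
    if (i < 0)               return table[0];
    if (i >= TABLE_SIZE - 1) return table[TABLE_SIZE - 1];
    return table[i] * (1.0 - f) + table[i + 1] * f;
}

int main(void)
{
    build_table();
    printf("f(42.5) ~= %g\n", lookup(42.5));
    return 0;
}

The build cost amortizes over every later run that reuses the table, and
the per-query cost drops to one array index and a couple of multiplies.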
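
The Sandia anecdote has the same shape. A generic sketch, and explicitly
not their code: precompute an influence matrix of Green's-function
values once (that is where the extra memory goes), and then each solve
is only a matrix-vector multiply that sums the superposition. The
geometry, source strengths, and the 2-D free-space kernel -ln(r)/(2*pi)
below are illustrative assumptions.

/* Superposition as a matrix multiply: g[i][j] is the contribution of
 * source element j at evaluation point i, precomputed once. */

#include <stdio.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define NPOINTS  4   /* evaluation points (tiny, for illustration) */
#define NSOURCES 3   /* source elements */

static double g[NPOINTS][NSOURCES];   /* precomputed influence matrix */

static double kernel(double xi, double yi, double xj, double yj)
{
    double r = hypot(xi - xj, yi - yj);
    return -log(r) / (2.0 * M_PI);
}

int main(void)
{
    double px[NPOINTS]  = { 0.0, 1.0, 2.0, 3.0 };
    double py[NPOINTS]  = { 0.5, 0.5, 0.5, 0.5 };
    double sx[NSOURCES] = { 0.0, 1.5, 3.0 };
    double sy[NSOURCES] = { 2.0, 2.0, 2.0 };
    double q[NSOURCES]  = { 1.0, -2.0, 0.5 };   /* source strengths */
    double u[NPOINTS];
    int i, j;

    /* Pay once: fill the influence matrix (the memory cost). */
    for (i = 0; i < NPOINTS; i++)
        for (j = 0; j < NSOURCES; j++)
            g[i][j] = kernel(px[i], py[i], sx[j], sy[j]);

    /* Per solve: one matrix-vector multiply sums the superposition. */
    for (i = 0; i < NPOINTS; i++) {
        u[i] = 0.0;
        for (j = 0; j < NSOURCES; j++)
            u[i] += g[i][j] * q[j];
        printf("u[%d] = %g\n", i, u[i]);
    }
    return 0;
}

Storing NPOINTS x NSOURCES doubles is the memory bill; in exchange, each
additional solve costs only about 2*NPOINTS*NSOURCES flops, which is the
flavor of the 128 X memory for 300 X fewer FLOPs trade reported above.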