Path: utzoo!utgpu!news-server.csri.toronto.edu!cs.utexas.edu!wuarchive!zaphod.mps.ohio-state.edu!caen!news.cs.indiana.edu!uceng!minerva!dmocsny
From: dmocsny@minerva.che.uc.edu (Daniel Mocsny)
Newsgroups: comp.arch
Subject: Re: Be Prepared...
Keywords: Lots Of Memory
Message-ID: <7517@uceng.UC.EDU>
Date: 21 Feb 91 18:20:23 GMT
References: <1991Feb13.160718.25759@visix.com> <3197@crdos1.crd.ge.COM> <46049@mips.mips.COM>
Sender: news@uceng.UC.EDU
Organization: University of Cincinnati, Cin'ti., OH
Lines: 61

In article <46049@mips.mips.COM> mash@mips.COM (John Mashey) writes:
>Elsewhere, there was a question about desktop applications that might
>want this. I thought I posted a long discussion on this, but
>let me resummarize:
> 1) Databases
> 2) Video
> 3) Image
> 4) CAD
> 5) G.I.S.
> 6) Technical number-crunch

Pardon me if I am repeating points you made earlier, but let me
emphasize that having ridiculous amounts of memory available could
potentially speed up lots of things.

For example, in chemical engineering we often simulate processes that
require a program to evaluate equations of state, physical property
correlations, etc., repeatedly at similar conditions. Depending on the
form of the equation of state or property correlation, the evaluation
may be somewhat lengthy and/or implicit (i.e., requiring numerical
convergence).

A good-sized simulation evaluates an equation of state many times
(once per grid point, per time step). Since conditions may not change
appreciably over a few grid points or time steps, much of the
calculation will be redundant. Researchers already exploit this to
speed up iterative calculations, e.g., by using the solution at the
previous grid point as the starting guess for the next one. However,
no matter how fast the equation-of-state routine is, a table-lookup
routine would be faster. Even with interpolation added for precision,
the lookup will usually win, especially when the alternative is a
routine that must iterate to convergence.

So, given unlimited memory, we can extend the notion of "caching" to
include any potentially redundant calculation. *Many* application
programs involve some element of redundancy. People don't solve
completely unique problems every time they fire up a computer. If I
wanted to do a batch of simulation runs, I would be happy to build up
a set of large interpolation tables that could speed things up by a
factor of 10 or 100. Since these tables would be multi-variable, and
more precision would always be welcome, no meaningful upper bound
exists on the amount of memory that might be useful. In principle,
I'd like to have lookup tables containing every potentially useful
computed result in my field.

Similarly, enormous amounts of memory could help personal computers
and workstations cope with the "bursty" workload of the typical user.
While the user is twiddling her thumbs, the CPU need not be idling.
Instead, it can be stockpiling its memory with all sorts of
potentially useful things. Later, when the user happens to request one
of those things, the CPU will get it much faster from memory than if
it had to build it up from scratch again. The more memory available,
the better such a "work-ahead" strategy could work. By compiling
statistics on the user's work habits, the computer could even
anticipate the user's next likely command(s) and get a head start
during idle periods.

--
Dan Mocsny
Internet: dmocsny@minerva.che.uc.edu
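
To make the table-lookup idea above concrete, here is a minimal sketch
in C. The property relation, the virial coefficient, and the 256x256
(T, P) grid bounds are illustrative assumptions of mine, not anything
from the post: it pre-solves a Newton iteration once over the grid,
then answers later queries by bilinear interpolation out of a ~0.5 MB
table instead of re-converging each time.

/* Sketch: trade memory for repeated equation-of-state solves.
 * All constants and grid sizes are illustrative assumptions. */
#include <stdio.h>
#include <math.h>

#define NT 256                 /* temperature grid points */
#define NP 256                 /* pressure grid points    */

static const double R    = 8.314;     /* J/(mol K)                       */
static const double B    = -1.0e-4;   /* toy second virial coeff, m3/mol */
static const double Tmin = 300.0, Tmax = 600.0;   /* K  */
static const double Pmin = 1.0e5, Pmax = 5.0e6;   /* Pa */

static double ztab[NT][NP];    /* ~0.5 MB of cached results */

/* The "expensive" call: an implicit relation Z = 1 + B*P/(R*T*Z),
 * solved by Newton iteration.  Stands in for any property routine
 * that needs numerical convergence. */
static double z_iterative(double T, double P)
{
    double k = B * P / (R * T);
    double z = 1.0;
    for (int it = 0; it < 50; it++) {
        double g  = z - 1.0 - k / z;      /* residual          */
        double dg = 1.0 + k / (z * z);    /* d(residual)/dZ    */
        double dz = g / dg;
        z -= dz;
        if (fabs(dz) < 1e-12)
            break;
    }
    return z;
}

/* Fill the table once, up front. */
static void build_table(void)
{
    for (int i = 0; i < NT; i++) {
        double T = Tmin + (Tmax - Tmin) * i / (NT - 1);
        for (int j = 0; j < NP; j++) {
            double P = Pmin + (Pmax - Pmin) * j / (NP - 1);
            ztab[i][j] = z_iterative(T, P);
        }
    }
}

/* The cheap replacement: bilinear interpolation in the table. */
static double z_lookup(double T, double P)
{
    double ft = (T - Tmin) / (Tmax - Tmin) * (NT - 1);
    double fp = (P - Pmin) / (Pmax - Pmin) * (NP - 1);
    int i = (int)ft, j = (int)fp;
    if (i < 0) i = 0;
    if (i > NT - 2) i = NT - 2;
    if (j < 0) j = 0;
    if (j > NP - 2) j = NP - 2;
    double u = ft - i, v = fp - j;
    return (1.0 - u) * (1.0 - v) * ztab[i][j]
         +        u  * (1.0 - v) * ztab[i + 1][j]
         + (1.0 - u) *        v  * ztab[i][j + 1]
         +        u  *        v  * ztab[i + 1][j + 1];
}

int main(void)
{
    build_table();
    double T = 450.0, P = 2.5e6;
    printf("iterative: %.6f  table: %.6f\n",
           z_iterative(T, P), z_lookup(T, P));
    return 0;
}

Refining the grid or adding more independent variables multiplies the
table size, which is exactly where the "no meaningful upper bound on
useful memory" claim comes from.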