Path: utzoo!utgpu!jarvis.csri.toronto.edu!mailrus!wuarchive!brutus.cs.uiuc.edu!ginosko!uunet!zephyr.ens.tek.com!tektronix!sequent!mntgfx!mbutts
From: mbutts@mentor.com (Mike Butts)
Newsgroups: comp.arch
Subject: Re: How Caches Work
Message-ID: <1989Sep13.164605.985@mentor.com>
Date: 13 Sep 89 16:46:05 GMT
References:
Organization: engr
Lines: 26

From article , by mccalpin@masig3.ocean.fsu.edu (John D. McCalpin):
> In message <3989@phri.UUCP> roy@phri.UUCP (Roy Smith) writes:
>
>>Here's a (possibly crazy) idea for cache design.  The current EUD
>>(Example Under Debate) shows that caches just don't work for sequential
>>access, but we knew that already.
>
> [ Roy describes a system for which only a portion of the
>   address space is run through the cache ]
>
>> So what do you think?  Has this been done before?
>
> Along a similar line, the Convex machines cache accesses going to the
> scalar unit only.  The load/store unit for the vector unit operates
> directly from main memory to the vector registers.  This is pretty
> close to what Roy describes, but it is not controlled by any tags,
> merely by which functional unit executes the load instruction.

It's not terribly hard to build a vector cache, or vector buffer, call
it what you will, into which vector elements are prefetched according to
a base address and stride detected by the compiler in a predictable loop.
-- 
Michael Butts, Research Engineer            KC7IT            503-626-1302
Mentor Graphics Corp., 8500 SW Creekside Place, Beaverton, OR 97005
...!{sequent,tessi,apollo}!mntgfx!mbutts  OR  mbutts@pdx.MENTOR.COM
Opinions are my own, not necessarily those of Mentor Graphics Corp.