Path: utzoo!utgpu!jarvis.csri.toronto.edu!mailrus!iuvax!purdue!tut.cis.ohio-state.edu!gem.mps.ohio-state.edu!ginosko!aplcen!uakari.primate.wisc.edu!paz.geology.wisc.edu!uwvax!rang
From: rang@cs.wisc.edu (Anton Rang)
Newsgroups: comp.arch
Subject: Re: flexible caches
Message-ID:
Date: 19 Sep 89 04:00:25 GMT
References: <224@qusunr.queensu.CA> <22151@cup.portal.com> <2115@munnari.oz.au> <22211@cup.portal.com> <12907@pur-ee.UUCP>
Organization: UW-Madison CS department
Lines: 22

About the whole idea of improving cache prediction with different
techniques...just make sure it doesn't slow the cache down on average.
A simple, fast algorithm beats a more impressive but slower one.
(Yes, I know everybody probably realizes this already.)

About neural nets or anything fancy...suppose you have a cache which is
100 times faster than main memory, but only a 90% hit rate.  You come up
with a magic algorithm which gives you a 100% hit rate.  With 90% hits
the average access time is 0.9*1 + 0.1*100 = 10.9 cache cycles, so your
magic cache can be no more than 10.9 times slower than the simple one or
the overall system speed will be worse.

Real caches are probably less than 100 times faster than main memory,
and typically hit better than 90%, so the margin is even smaller than
that.  I doubt a reasonable cache-learning algorithm can be implemented
to run in a 5-10 ns cycle, say...at least with current technology.
(Then again, I could be wrong :-)....

+----------------------------------+------------------+
| Anton Rang (grad student)        | rang@cs.wisc.edu |
| University of Wisconsin--Madison |                  |
+----------------------------------+------------------+
  "You are in a twisty little maze of Unix versions, all different."
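
P.S.  Here's the break-even arithmetic spelled out as a little C
program, in case anyone wants to plug in their own numbers.  The access
times and hit rate are the hypothetical ones from the post, not
measurements from any real machine.

  #include <stdio.h>

  int main(void)
  {
      double t_cache = 1.0;    /* fast-cache access time (time units) */
      double t_mem   = 100.0;  /* main-memory access time             */
      double hit     = 0.90;   /* hit rate of the simple cache        */

      /* Average access time of the simple cache:
         0.9 * 1 + 0.1 * 100 = 10.9 time units.                       */
      double amat_simple = hit * t_cache + (1.0 - hit) * t_mem;

      /* A perfect predictor never misses, so its average access time
         is just its own (slower) cycle time.  It breaks even when
         that cycle time equals the simple cache's average.           */
      double max_slowdown = amat_simple / t_cache;

      printf("simple cache average access time = %.1f units\n",
             amat_simple);
      printf("break-even slowdown for a 100%% hit rate cache = %.1fx\n",
             max_slowdown);
      return 0;
  }

With made-up but more realistic numbers (say a 98% hit rate and memory
only 20 times slower than the cache), the break-even factor drops to
about 1.4x, which is the "problem is probably worse" point above.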