Path: utzoo!censor!geac!jtsv16!uunet!aplcen!samsung!usc!snorkelwacker!spdcc!merk!xylogics!cloud9!jjmhome!m2c!umvlsi!dime!dime.cs.umass.edu!moss
From: moss@takahe.cs.umass.edu (Eliot &)
Newsgroups: comp.arch
Subject: Re: Software modularity vs. instruction locality
Message-ID:
Date: 6 Nov 89 13:32:14 GMT
References: <17707@watdragon.waterloo.edu> <23604@cup.portal.com> <1989Nov2.190900.29144@world.std.com> <1989Nov4.004529.10049@ico.isc.com> <1TMk2X#Qggn6=eric@snark.uu.net>
Sender: news@dime.cs.umass.edu
Reply-To: Moss@cs.umass.edu
Distribution: comp
Organization: Dept of Comp and Info Sci, Univ of Mass (Amherst)
Lines: 18
In-reply-to: eric@snark.uu.net's message of 5 Nov 89 16:24:29 GMT

While I would agree that many subroutine calls, or even a threaded
interpreter, would tend to undermine *sequential* instruction reference,
and thus substantially affect the performance of a sequential pre-fetch
buffer, a true cache is insensitive to jumps.  What matters for cache
performance is the *total volume* of instruction locations referenced
over a window of time, and whether they tend to fit in the cache (the
small simulation sketched after my signature illustrates this).
Processor performance may still be adversely affected, since it may
still be faster to buffer and deliver sequential instruction words than
to take a jump, especially on a fast pipelined machine, like many RISC
CPUs.
--
J. Eliot B. Moss, Assistant Professor
Department of Computer and Information Science
Lederle Graduate Research Center
University of Massachusetts
Amherst, MA 01003
(413) 545-4206; Moss@cs.umass.edu
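
P.S.  To make the working-set point concrete, here is a toy
direct-mapped instruction-cache simulator.  It is purely my own sketch
with arbitrary parameters (a 4 KB cache with 16-byte lines and 4-byte
instructions), not a model of any particular machine.  It runs two
traces over the same 2 KB code footprint: one sweeps the code
sequentially, the other hops among 64-byte "routines" in a scattered
order.

/* Toy direct-mapped instruction-cache simulator (illustration only).
 * Assumed parameters: 4 KB cache, 16-byte lines, 4-byte instructions.
 * Compares a purely sequential sweep over the code with a jumpy tour
 * of the same code broken into small routines visited out of order.
 */
#include <stdio.h>

#define CACHE_BYTES 4096
#define LINE_BYTES  16
#define NLINES      (CACHE_BYTES / LINE_BYTES)

static unsigned long tags[NLINES];
static int valid[NLINES];
static long hits, misses;

static void reset(void)
{
    int i;
    for (i = 0; i < NLINES; i++) valid[i] = 0;
    hits = misses = 0;
}

static void ref(unsigned long addr)      /* simulate one instruction fetch */
{
    unsigned long line = addr / LINE_BYTES;
    unsigned long set  = line % NLINES;
    if (valid[set] && tags[set] == line) hits++;
    else { misses++; tags[set] = line; valid[set] = 1; }
}

int main(void)
{
    unsigned long code_bytes = 2048; /* total code footprint (fits in cache) */
    unsigned long routine    = 64;   /* pretend each routine is 64 bytes     */
    int nroutines = (int)(code_bytes / routine);
    int pass, r;
    unsigned long a;

    /* Trace A: sequential sweeps, no jumps at all. */
    reset();
    for (pass = 0; pass < 100; pass++)
        for (a = 0; a < code_bytes; a += 4)
            ref(a);
    printf("sequential: %ld misses / %ld fetches\n", misses, hits + misses);

    /* Trace B: same footprint, but hop between routines in scattered order. */
    reset();
    for (pass = 0; pass < 100; pass++)
        for (r = 0; r < nroutines; r++) {
            unsigned long base = ((unsigned long)(r * 7) % nroutines) * routine;
            for (a = base; a < base + routine; a += 4)
                ref(a);
        }
    printf("jumpy:      %ld misses / %ld fetches\n", misses, hits + misses);
    return 0;
}

Both traces should report the same 128 compulsory misses out of 51,200
fetches, because the footprint fits and the cache does not care about
the order of references.  Raise code_bytes above CACHE_BYTES and both
miss counts climb together, which is the working-set effect rather than
a jump effect.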