Path: utzoo!utgpu!jarvis.csri.toronto.edu!mailrus!cornell!biar!trebor
From: trebor@biar.UUCP (Robert J Woodhead)
Newsgroups: comp.arch
Subject: Re: SPEC Bench-a-Thon
Message-ID: <681@biar.UUCP>
Date: 24 Jun 89 03:51:10 GMT
References: <22031@abbott.mips.COM> <22033@abbott.mips.COM> <670@biar.UUCP> <671@biar.UUCP> <5297@pt.cs.cmu.edu>
Reply-To: trebor@biar.UUCP (Robert J Woodhead)
Organization: Biar Games, Inc.
Lines: 31

In article <5297@pt.cs.cmu.edu> jps@cat.cmu.edu (James Salsman) writes:
>The composition of any compression algorithm with its self
>can not do better than a single application of the algorithm.

Well, first of all, in certain circumstances this is incorrect.  Every
so often, recompressing the compressed file gains you a couple of
percent.

This was, however, not the point.  I suggested doing the compression
N times merely to increase the amount of computation required, so one
could get a better timing.  I thought this was obvious, and the basic
idea (of getting an uncompress program and a file you have to pipe
through it 100 times in order to get the other benchmarks) appealed
to me.

>Take the first 10k of /etc/hosts, for instance.  The site
>differences really won't matter -- time compress on
>different disjoint equal-size sections of any large
>homogeneous file and see for yourself.

You wouldn't want to do this.  Some companies would spend thousands
of man-hours reorganising their /etc/hosts files for maximum
compression performance.  ;^)
-- 
(^;-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-;^)
Robert J Woodhead, Biar Games, Inc.  !uunet!biar!trebor | trebor@biar.UUCP
``I can read your mind - right now, you're thinking I'm full of it...''
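
As a rough illustration of the "run it N times so the job is long enough
to time" idea from the post above: the sketch below loops a compressor
over one input file and reports total and per-round time.  It is only a
sketch -- it uses Python's zlib as a stand-in for the Unix compress
utility the post refers to, and the file name and round count are just
placeholders.

    # Rough sketch only: times N rounds of compression on one file, so
    # that a coarse clock still yields a measurable number.  zlib stands
    # in for the LZW-based Unix `compress'; path and rounds are
    # placeholder values, not anything from the original post.
    import time
    import zlib

    def timed_rounds(path="/etc/hosts", rounds=100):
        data = open(path, "rb").read()
        start = time.perf_counter()
        for _ in range(rounds):
            zlib.compress(data)            # same input, repeated N times
        elapsed = time.perf_counter() - start
        print("%d rounds: %.3fs total, %.3f ms/round"
              % (rounds, elapsed, elapsed / rounds * 1e3))

    if __name__ == "__main__":
        timed_rounds()

Repeating the identical work N times only stretches the measurement; it
does not change what is being measured, which is exactly the point the
post makes against the "composition of a compressor with itself"
objection.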