Path: utzoo!attcan!utgpu!jarvis.csri.toronto.edu!rutgers!mit-eddie!uw-beaver!cornell!rochester!pt.cs.cmu.edu!cat.cmu.edu!jps
From: jps@cat.cmu.edu (James Salsman)
Newsgroups: comp.arch
Subject: Re: SPEC Bench-a-Thon
Message-ID: <5297@pt.cs.cmu.edu>
Date: 23 Jun 89 09:55:24 GMT
References: <22031@abbott.mips.COM> <22033@abbott.mips.COM> <670@biar.UUCP> <671@biar.UUCP>
Organization: Carnegie Mellon
Lines: 28

In article <671@biar.UUCP> trebor@biar.UUCP (Robert J Woodhead) writes:
> In article <670@biar.UUCP> trebor@biar.UUCP (Yours Truly) writes:
> > Well, if you want to get elegant, have compress compress its
> > own source code, then compress the compressed file, and repeat
> > the process N times... ;^)
>
> Idea #2 - make the benchmark not compression but uncompression.
> Send out the whole benchmark suite compressed N times.  The
> first benchmark is uncompressing the benchmark suite!

This is silly.  Composing a compression algorithm with itself
cannot do better than a single application of the algorithm: the
first pass squeezes out the redundancy, so its output looks
essentially random, and no lossless coder can shrink arbitrary
data.  If it could, iterating it would make every file vanishingly
small, which a simple counting argument rules out.  (A quick
demonstration appears below.)

If you want to use compress as a benchmark, run it on a large file
whose contents are statistically similar (if not identical) at
every site -- the first 10k of /etc/hosts, for instance.  The site
differences really won't matter: time compress on disjoint,
equal-size sections of any large homogeneous file and see for
yourself.  (A sketch of that experiment follows as well.)
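To see the first point concretely, here is a rough sketch in
Python, with zlib standing in for compress (both are lossless
LZ-family coders) and a made-up redundant input:

    import zlib

    # Very redundant input: roughly 11k of repeated hosts-style text.
    data = b"127.0.0.1\tlocalhost loopback\n" * 400

    # Feed each pass the output of the previous one.
    for pass_no in range(1, 6):
        data = zlib.compress(data)
        print("after pass %d: %5d bytes" % (pass_no, len(data)))

The first pass shrinks the file dramatically; every later pass
makes it slightly *larger*, since the coder can only add framing
overhead to data it cannot compress.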
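And here is the disjoint-sections experiment, under the same
assumptions (zlib in place of compress, and a synthetic homogeneous
file in place of a big /etc/hosts; each section is compressed many
times over so the timing is measurable):

    import time
    import zlib

    # A large, homogeneous file: 20,000 hosts-table-style lines.
    lines = [b"128.2.%d.%d\thost%d.example.edu\n"
             % (i // 256, i % 256, i) for i in range(20000)]
    blob = b"".join(lines)

    SECTION = 10 * 1024   # 10k sections, as suggested above
    REPS = 200            # repetitions per section

    for k in range(4):
        chunk = blob[k * SECTION:(k + 1) * SECTION]
        t0 = time.perf_counter()
        for _ in range(REPS):
            zlib.compress(chunk)
        print("section %d: %.4f s for %d reps"
              % (k, time.perf_counter() - t0, REPS))

The four timings should come out nearly identical, which is the
point: with a homogeneous input it does not matter which section
of the file a given site happens to feed the benchmark.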
:James
-- 
:James P. Salsman (jps@CAT.CMU.EDU)
--