Path: utzoo!utgpu!news-server.csri.toronto.edu!cs.utexas.edu!uunet!crdgw1!crdos1!davidsen
From: davidsen@crdos1.crd.ge.COM (Wm E Davidsen Jr)
Newsgroups: comp.arch
Subject: Re: CISC vs. RISC Code Sizes
Keywords: CISC RISC Size CRISP MIPS R3000
Message-ID: <3436@crdos1.crd.ge.COM>
Date: 18 Jun 91 18:11:54 GMT
References: <1991Jun18.132315.8202@cbnewsl.att.com> <1991Jun18.152303.1889@rice.edu>
Reply-To: davidsen@crdos1.crd.ge.com (bill davidsen)
Organization: GE Corp R&D Center, Schenectady NY
Lines: 36

In article <1991Jun18.152303.1889@rice.edu> preston@asta.rice.edu (Preston Briggs) writes:
| Again, we'd _really_ like to see instruction counts.
| I know the MIPS compiler unrolls loops, at least sometimes.
| This alone makes me doubt the validity of any static instruction counts.

  Speaking just for me, the instruction counts are of minimal interest,
since my real concern is not how many instructions the compiler
generates, or how high an instruction rate the machine sustains, but how
many CPU seconds it takes to run the program. I have no doubt that my
IPC has a higher instruction rate than my 486, but when I start a
program I don't care about anything but how long it takes and how much
memory it uses.

  And while the code size of one program is of little concern within the
limits we've been discussing, a 20% increase in stored program size
makes a difference on the fileserver. Say I have a total binary size
(including X) of 2.5GB; 20% of that is 500MB, which is about the size of
a reasonable news partition...

| An earlier poster noted that sometimes code size dominates all other
| considerations. In this case, we should consider an interpreter.
| Forth is one example. To an extent, CISC machines (with microcoded
| implementations) are another. Then there's table-driven scanners, parsers,
| code generators, and so forth. Choosing the right instruction set
| can lead to tremendous space compression (orders of magnitude).

  I'd like to see how you got that. Even if you had a CPU which used LZW
or Huffman codes as opcodes, I don't think you'd see even one order of
magnitude. Note that this is hard to measure, since object files almost
always contain data as well as code. Still, no compression method
operating on either RISC or CISC code will give anything like an order
of magnitude, so I don't see how using an interpreted language would
save that much. If it does, maybe you've hit on another data compression
scheme.
-- 
bill davidsen	(davidsen@crdos1.crd.GE.COM -or- uunet!crdgw1!crdos1!davidsen)
    GE Corp R&D Center, Information Systems Operation, tech support group
  Moderator comp.binaries.ibm.pc and 386-users digest.
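
[Editor's sketch on the Huffman point above, not part of the original
post.] The byte-frequency (order-0) entropy of a binary is a lower limit
on bits per byte for any symbol-by-symbol code, so 8 divided by that
entropy caps the ratio a Huffman-style opcode encoding could reach. The
minimal C program below counts byte frequencies and prints that bound;
the usage line and file handling are illustrative assumptions, and it
says nothing about dictionary methods such as LZW, which do better but,
on code, still fall well short of an order of magnitude.

/*
 * Estimate the order-0 (byte-frequency) entropy of a file and the
 * best compression ratio a per-byte Huffman-style code could reach.
 * Entropy is a lower bound on bits/byte for any such code, so the
 * printed ratio is an upper bound, not an achieved result.
 */
#include <stdio.h>
#include <math.h>

int main(int argc, char **argv)
{
    unsigned long count[256] = {0};
    unsigned long total = 0;
    double entropy = 0.0;
    FILE *fp;
    int c, i;

    if (argc != 2) {
        fprintf(stderr, "usage: %s binary-file\n", argv[0]);
        return 1;
    }
    if ((fp = fopen(argv[1], "rb")) == NULL) {
        perror(argv[1]);
        return 1;
    }
    while ((c = getc(fp)) != EOF) {   /* tally byte frequencies */
        count[c]++;
        total++;
    }
    fclose(fp);

    for (i = 0; i < 256; i++) {
        if (count[i] != 0) {
            double p = (double)count[i] / (double)total;
            entropy -= p * (log(p) / log(2.0));   /* bits per byte */
        }
    }
    if (total == 0 || entropy == 0.0) {
        printf("%lu bytes: nothing meaningful to code\n", total);
        return 0;
    }
    printf("%lu bytes, %.2f bits/byte entropy, best order-0 ratio %.2f:1\n",
           total, entropy, 8.0 / entropy);
    return 0;
}

Compile with something like "cc -o entcheck entcheck.c -lm" (the name is
made up) and run it over a few object files or stripped binaries to see
how far from 10:1 an order-0 opcode encoding stays.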