Path: utzoo!utgpu!news-server.csri.toronto.edu!cs.utexas.edu!rice!asta.rice.edu!preston
From: preston@asta.rice.edu (Preston Briggs)
Newsgroups: comp.arch
Subject: Re: CISC vs. RISC Code Sizes
Keywords: CISC RISC Size CRISP MIPS R3000
Message-ID: <1991Jun18.190453.6509@rice.edu>
Date: 18 Jun 91 19:04:53 GMT
References: <1991Jun18.132315.8202@cbnewsl.att.com> <1991Jun18.152303.1889@rice.edu> <3436@crdos1.crd.ge.COM>
Sender: news@rice.edu (News)
Organization: Rice University, Houston
Lines: 55

I wrote:

>| An earlier poster noted that sometimes code size dominates all other
>| considerations.  In this case, we should consider an interpreter.
>| Forth is one example.  To an extent, CISC machines (with microcoded
>| implementations) are another.  Then there's table-driven scanners,
>| parsers, code generators, and so forth.  Choosing the right
>| instruction set can lead to tremendous space compression (orders of
>| magnitude).

and, davidsen@crdos1.crd.ge.com (bill davidsen) writes:

> I'd like to see how you got that.  Even if you had a CPU which used
> LZW or Huffman codes as opcodes, I don't think you'd see even one
> order of magnitude.  Note that this is hard to measure, since object
> files almost always contain data.  Still, no compression method
> operating on either RISC or CISC code will give anything like even
> one order of magnitude, so I don't see that using an interpreted
> language will save that much.  If it will, maybe you've hit on
> another data compression scheme.

I'm perhaps thinking a little more indirectly than you.  It's nothing
too radical...  A little APL 1-liner is a very compact representation
of what might take several K if compiled into lots of (inlined)
assembly.

We can build a table-driven LR parser, using various table compression
ideas (eliminating duplicate rows, ...), that's much smaller
(table + interpreter) than a parser written directly in assembly.
Also much slower!
In the proceedings of the '86 Symposium on Compiler Construction,
Pennello gives an example where they built a very fast LR parser by
trading space for time, effectively hard-coding much of the parse
table.  The space cost was estimated at a factor of 4 if error
recovery was included.

The work on "super-combinators" is another example.

Then again, there's Forth.  It trades a factor of 10 (?) in time to
get a flexible threaded-code interpretation scheme that's very
compact.

One of the micro-software companies is supposed to deliver
applications with 2 kinds of code: machine code for the time-critical
sections, and some sort of interpreted code (like P-code?) for the
bulky, non-critical parts.  With such a scheme, they were able to fit
a lot more functionality into 640K (or whatever constraint they
faced).

So, people who've got very tight space constraints would do well to
consider using an interpreter of some sort.

Preston Briggs