Path: utzoo!utgpu!jarvis.csri.toronto.edu!rutgers!apple!sun-barr!ames!hc!lanl!jlg
From: jlg@lanl.gov (Jim Giles)
Newsgroups: comp.arch
Subject: Re: More RISC vs. CISC wars
Message-ID: <13982@lanl.gov>
Date: 11 Jul 89 21:02:35 GMT
References: <42550@bbn.COM>
Organization: Los Alamos National Laboratory
Lines: 33

From article <42550@bbn.COM>, by slackey@bbn.com (Stan Lackey):
< [...]
<>I don't know of any CISC machines with 'hardwired' instruction sets.
<>Microcoding slows the machine down.
<
< This is an interesting statement.  As I recall hearing, Cray started
< this perception back in the 70's.  I thought it had been proven wrong.
< For example, the Alliant executes the instruction:
<
<     add.d (an)+, fp0
<
< in one cycle (yes, that's double precision memory-to-register add,
< auto increment), and it's microcoded.  Are you saying that it would be
< done in zero cycles if we got rid of the microcode?  Gee, and after
< spending so much real estate on those microcode RAM's...

And how many microcycles does 'one cycle' on the Alliant correspond to?
You don't suppose that a smaller instruction set would allow instructions
to run closer to the gate delay times rather than be multiple microcycles
long?  Seems to me that a RISC machine might have _cycle_ times equal to
the _microcycle_ of your CISC machine.  The real estate for your microcode
ROM could better be used as a high-speed instruction buffer.  With the
instruction set hardwired, the individual instructions would operate at
gate delay speeds.  This could all be done for a machine with _fewer_
instructions.  And, as everyone seems to agree, compilers for CISCs don't
use all those extra instructions anyway.  Seems like a good idea to get
rid of them and speed up the machine!

The Alliant is obviously fairly slow, since it can do something to an
arbitrary memory location in one cycle.  The cycle time is apparently
longer than the memory delay time.
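
For what it's worth, here is a rough C-level sketch (my own illustration,
not anything from an Alliant manual) of what the quoted instruction does
and of the three simple operations a RISC would use in its place; the
names 'an' and 'fp0' are just placeholders for the address register and
FP accumulator in the quoted example.

    #include <stdio.h>

    int main(void)
    {
        double a[4] = {1.0, 2.0, 3.0, 4.0};
        double *an = a;     /* plays the role of the address register (an) */
        double fp0 = 0.0;   /* plays the role of the FP accumulator (fp0)  */
        int i;

        /* CISC: one microcoded "add.d (an)+, fp0" per element.
           RISC: the same work as three simple steps -- load, add,
           pointer increment -- each hardwired and each able to run
           at (or near) gate-delay speed. */
        for (i = 0; i < 4; i++) {
            double tmp = *an;   /* load from memory            */
            fp0 += tmp;         /* double-precision add        */
            an++;               /* auto-increment the address  */
        }

        printf("sum = %g\n", fp0);
        return 0;
    }

If the RISC cycle is about one CISC microcycle, three simple instructions
here need not take any longer than one microcoded one.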