Xref: utzoo comp.arch:10603 comp.misc:6534
Path: utzoo!utgpu!jarvis.csri.toronto.edu!mailrus!tut.cis.ohio-state.edu!cs.utexas.edu!ico!ism780c!news
From: news@ism780c.isc.com (News system)
Newsgroups: comp.arch,comp.misc
Subject: Re: TRON (a little long)
Message-ID: <29418@ism780c.isc.com>
Date: 11 Jul 89 21:00:57 GMT
References: <32424@apple.Apple.COM> <226@arnor.UUCP> <33015@apple.Apple.COM>
Reply-To: marv@ism780.UUCP (Marvin Rubenstein)
Organization: Interactive Systems Corp., Santa Monica CA
Lines: 101

In article <33015@apple.Apple.COM> baum@apple.UUCP (Allen Baum) writes:
>[]
>>More specific, please? Efficiency,performance,instruction set (oh, it's CISC,
>>but you probably know how WIDE this term is now, don't you), features? Is it
>>something like iAPX-432 (I mean - ideas)?
>>
>>P.S. Don't waste time flaming me - I'd rather like a technical answer.
>
>Well, it appears that the request for no flaming didn't work. Oh, well, sorry.
>
>To answer your question, the TRON architecture has lotsa opcodes, and many more
>lotsa addressing modes. Instructions range from 16-160 bits (this is probably
>an exaggeration, but I can't find my documentation on it right now, and it is
>a nice round order of magnitude :-) ) I seem to recall that addressing modes
>are more like addressing expressions.
>{decwrl,hplabs}!amdahl!apple!baum

As an implementor of an assembler and a C compiler for the machine, I can
make some comments.

The assembler recognizes 442 opcode names, but the actual number of machine
instructions is greater than that.  For an opcode name like 'mov' the
assembler selects one of 7 different machine instructions depending on the
operands.

Allen Baum is correct about instruction complexity, but his 160-bit estimate
was conservative.  Here is an example of a single instruction that is 416
bits long, and there are instructions even longer than this one:

    mov @(@(@(@(a,r1),b,r2),c,r3),d,r4), @(@(@(@(a,r1),b,r2),c,r3),d,r4)

    D20B 4412 0003 0D40 4812 0004 93E0 4C12 0006 1A80 9012 0000 C350
    8A0B 4412 0003 0D40 4812 0004 93E0 4C12 0006 1A80 9012 0000 C350

The block of hex digits is the object form of the assembly source line.
This form of 'mov' does a memory-to-memory move.  The operand address is
computed as follows:

1. The displacement 'a' is added to the contents of r1.  The contents of the
   32-bit word at this memory location become the base address for the next
   step.
2. The displacement 'b' and the contents of r2 are added to the base to form
   a new memory address.  The contents of this memory location become the new
   base.
3. Step 2 is repeated twice more, using (c,r3) and then (d,r4).
4. The result of all this is the address of the operand to be moved.
   (A rough C model of this computation appears a little further down.)

Arithmetic instructions come in two flavors, signed and unsigned.  For
example:

    add  -- signed add
    addu -- unsigned add

The instruction

    add @x.b, r9.w

will add the byte at memory location 'x' to register r9 treated as a word.
The source operand is sign extended; addu does zero extension.  Add and addu
also produce different condition codes for the same inputs.  In general the
source operand may be a byte, two bytes, four bytes, or eight bytes wide,
and the destination may be any of those sizes (some versions of the machine
do not provide all sizes).
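To make the add/addu distinction concrete, here is a minimal C sketch of the
byte-to-word widening just described.  The function names and types are mine,
invented for illustration; it only models the widening, not the real
instructions, and it says nothing about the condition codes.

    /* Sketch only: models the widening of a byte source operand into a
       32-bit word, assuming 32-bit longs.  Condition codes not modeled. */

    unsigned long add_signed(signed char src, unsigned long r9)
    {                                       /* add  @x.b, r9.w */
        return r9 + (long)src;              /* sign extend, then add */
    }

    unsigned long add_unsigned(unsigned char src, unsigned long r9)
    {                                       /* addu @x.b, r9.w */
        return r9 + (unsigned long)src;     /* zero extend, then add */
    }

With a source byte of 0xFF the first routine effectively adds -1 and the
second adds 255, which is the visible difference between the two flavors.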
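Going back to the long 'mov' above, here is a rough C model of the chained
address computation in steps 1 through 4.  It follows my numbered description
literally (each of the four levels adds a displacement and a register to the
base and then fetches a 32-bit word); mem32(), memory[] and the disp/reg
arrays are made up for the sketch, so take it as the idea rather than a
statement of the hardware's exact behavior.

    /* Sketch only: a toy model of the chained addressing mode. */

    unsigned long memory[4096];                 /* stand-in for real memory */

    unsigned long mem32(unsigned long addr)     /* fetch a 32-bit word */
    {
        return memory[(addr & 0x3fff) >> 2];    /* toy, word-aligned fetch */
    }

    unsigned long chained_address(unsigned long disp[4], unsigned long reg[4])
    {
        unsigned long base = 0;
        int i;

        /* (a,r1), (b,r2), (c,r3), (d,r4): each level adds a displacement
           and a register to the base and fetches the word found there. */
        for (i = 0; i < 4; i++)
            base = mem32(base + disp[i] + reg[i]);

        return base;                            /* address of the operand */
    }

Since the 'mov' above is memory to memory with the same form on both sides,
it walks two such chains, one per operand, before it actually moves anything.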
There are instructions for accessing bit fields, for operating on doubly
linked lists, and for operating on count strings and on strings terminated
by a program-defined sentinel.

Conditional branches are done by adding a signed displacement to the program
counter.  The displacement may be 8, 16, or 32 bits.  For the case of 8 bits,
the displacement appearing in the instruction is shifted left one bit before
being added.  The 16- and 32-bit displacements are not shifted, but they must
be even numbers.

One interesting point: our compiler generated a sequence like

    mov @(20,r1*4), r0
    mov r0, @(x)

and the peephole optimizer replaced the two instructions with the single
instruction

    mov @(20,r1*4), @(x)

It turns out that the original pair executes just as fast as the single
instruction.  Furthermore, the single instruction takes just as much memory
as the two that it replaced.  Because a lot of the instruction complexity is
hidden by the assembly language, we did not notice this anomaly until after
we wrote the peephole optimizer.

As to code density, our measurements show that TRON programs are smaller
than corresponding 386 programs (a similar compiler was used for both
machines).  I will make no comment about performance because the machine
comes in many models.

I have written assemblers for over a dozen machines, and this one is by far
the most complex.  However, as seen by the assembly language programmer (or
the compiler writer) it looks very orthogonal.

                                        Marv Rubinstein