Path: utzoo!utgpu!news-server.csri.toronto.edu!bonnie.concordia.ca!uunet!tut.cis.ohio-state.edu!zaphod.mps.ohio-state.edu!rpi!uupsi!cmcl2!kramden.acf.nyu.edu!brnstnd
From: brnstnd@kramden.acf.nyu.edu (Dan Bernstein)
Newsgroups: comp.arch
Subject: Re: bizarre instructions
Message-ID: <22708:Feb2608:19:5191@kramden.acf.nyu.edu>
Date: 26 Feb 91 08:19:51 GMT
References: <10244@dog.ee.lbl.gov> <1991Feb25.203629.5059@linus.mitre.org> <10278@dog.ee.lbl.gov>
Organization: IR
Lines: 37

In article <10278@dog.ee.lbl.gov> torek@elf.ee.lbl.gov (Chris Torek) writes:
> Do you want machine-independent semantics, given some syntax, for
> (A * B + C) divrem D?

Yes. That's exactly the problem here: given an operation glorp, we
need to express glorp in a portable way with machine-independent
semantics.

Worse than that: when a chip supports glorp, we should be able to
write the code so that it will be compiled the right way for that
chip---but we shouldn't have to lose portability.

Worse than that: the language designer probably never heard of glorp.
No matter how nearsighted the language designer was, the chip designer
has to be able to expose his chip's support for glorp. And no matter
how nearsighted the language designer and chip designer were, the
programmer has to be able to write glorp portably yet efficiently.

Is this too much to ask? I don't think so. See my previous article for
my thoughts on how this can and should be done.

> I further claim that if you require an <...> to exist and
> to define a mul_add_div_rem operation, and make constructing a proper
> <...> part of `building the compiler' (just as constructing a proper
> <...> and <...> and <...> is already part of building any
> hosted ANSI C compiler), that your mul_add_div_rem operation will be
> `built in'.

That's exactly the wrong answer. Programmers can't wait for chip
designers, and nobody can wait for language designers. It has to be
possible to set up a good enough structure beforehand that (1) chip
designers can make innovations available without changing the
language, (2) programmers can use those innovations without changing
the chips or the language, and (3) programmers don't have to sacrifice
portability for efficiency.

I believe this is possible, and requires only a small addition to
languages and a bit of discipline on the part of compiler writers.

---Dan
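
P.S. For concreteness, here is one shape such a structure might take
in C. Everything below is a hypothetical illustration, not the actual
proposal: the header name <machops.h>, the MULADDDIVREM macro, and the
fallback routine are all made up. The idea is that a compiler or chip
vendor whose hardware has a single (a * b + c) divrem d instruction
ships a <machops.h> defining MULADDDIVREM in terms of it; everyone
else compiles the portable fallback, so the same source runs
everywhere and still hits the fast instruction where one exists.

   #include <stdio.h>

   /* If the platform's (hypothetical) <machops.h> defines MULADDDIVREM
      using a native instruction, that definition wins; otherwise this
      portable fallback supplies the same semantics, just slower. */
   #ifndef MULADDDIVREM
   static void muladddivrem(unsigned long a, unsigned long b,
                            unsigned long c, unsigned long d,
                            unsigned long *q, unsigned long *r)
   {
       unsigned long t = a * b + c;  /* a real version would use a
                                        double-width intermediate so
                                        the product can't overflow */
       *q = t / d;                   /* quotient */
       *r = t % d;                   /* remainder */
   }
   #define MULADDDIVREM(a, b, c, d, q, r) muladddivrem(a, b, c, d, q, r)
   #endif

   int main(void)
   {
       unsigned long q, r;

       MULADDDIVREM(7UL, 9UL, 5UL, 10UL, &q, &r);  /* (7*9+5) divrem 10 */
       printf("%lu %lu\n", q, r);                  /* prints "6 8" */
       return 0;
   }

Note that the program never mentions the chip: adding a new machine
means writing a new <machops.h>, not changing the language or the
source.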