Xref: utzoo comp.lang.misc:3490 comp.arch:11450
Path: utzoo!utgpu!jarvis.csri.toronto.edu!mailrus!iuvax!rutgers!mit-eddie!uw-beaver!ubc-cs!alberta!uqv-mts!ualtavm.bitnet!ECULHAM
From: ECULHAM@UALTAVM.BITNET (Earl Culham)
Newsgroups: comp.lang.misc,comp.arch
Subject: Re: Fast conversions, another urban myth?
Message-ID: <688@UALTAVM.BITNET>
Date: 19 Sep 89 16:37:47 GMT
References: <832@dms.UUCP>
Reply-To: ECULHAM@UALTAVM.BITNET
Organization: University of Alberta VM/CMS
Lines: 64
Disclaimer: Author bears full responsibility for contents of this article

In article <832@dms.UUCP>, albaugh@dms.UUCP (Mike Albaugh) writes:
>
>     Sorry for the cross-post, but I'm looking for data from
>comp.lang.misc or algorithms from there or comp.arch. Besides,
>this question stems from a recent bout on comp.arch.
>
>     Anyway, the "conventional wisdom" on comp.arch was that:
>
>     BCD math is inherently slow
>     Cobol doesn't do much math anyway
>     ergo: "modern" processors don't need BCD math, just convert to
>          binary and do the math there, then back to BCD for output.
>
> ...
>
>with have the following sort of "costs" (where a 32-bit binary add is 1)
>
>     BCD add        2-8
>     BCD->Binary    9-30
>     Binary->BCD    16 (_very_ optimistic)-100
>
>so to "pay back" the cost of conversion, an 8-digit BCD number would
>need to participate in at least 3 additions (fastest convert, slowest
>add), and this assumes that the 1-cycle multiply needed for the fastest
>convert cannot be used in the BCD add. Realistic guesses are more like
>10 additions to pay for conversion. When all those numbers like SSN and
> ...
>     So, where did I go wrong, or should I say, where do I find
>that blazing Binary->BCD routine? :-)

This is such a classic RISC discussion that I couldn't resist putting it
in the news instead of in direct mail.

First, most machines that provide decimal hardware also provide the
hardware to convert to and from binary. This skews your numbers slightly.
(Numbers from an older Amdahl.)

     Binary add           1
     Decimal add          6.5-23.5   (high end is for 16-digit numbers)
     Binary to decimal    3.5-20.5
     Decimal to binary    21

This gives us a worst case of about 7 to 1. (Every add is first converted
to binary, done, then converted back. Like I say, this is the *WORST*
case. :-) )

Now, how much time would be spent doing decimal arithmetic (even on a
machine with fast decimal hardware)? Likely less than 1%, but certainly
less than 10%.

If we throw away the hardware and do the decimal arithmetic in software,
we get a much simpler machine to build. But what do the performance
numbers look like?

     Hardware ==> .9 + 1(.1) = 1
     Software ==> .9 + 7(.1) = 1.6

I've clearly exaggerated the performance gain of the decimal hardware.
Yet even with that, all that extra hardware cannot even double the
performance of the system.

Clearly, decimal arithmetic is one of those high cost, low payback
extensions. We should direct our efforts elsewhere.
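
As an aside to the quoted question: here is a minimal C sketch of the two
conversions being costed above, done the straightforward software way
(repeated division by 10 in one direction, a multiply-and-add per digit in
the other). The names bin_to_bcd and bcd_to_bin are purely illustrative;
this is the naive baseline those cycle estimates are measuring, not the
blazing routine the quoted post asks about.

    #include <stdio.h>

    /*
     * Binary -> packed BCD by repeated division by 10.  Each decimal
     * digit costs a divide (or a multiply-by-reciprocal on machines with
     * a fast multiplier), which is where the 16-100 cycle estimate in
     * the quoted article comes from.  Assumes the value fits in the
     * packed result (8 digits on a 32-bit long).
     */
    unsigned long bin_to_bcd(unsigned long bin)
    {
        unsigned long bcd = 0;
        int shift = 0;

        while (bin != 0) {
            bcd |= (bin % 10) << shift;   /* extract low decimal digit  */
            bin /= 10;                    /* drop it from the binary    */
            shift += 4;                   /* next packed-BCD nibble     */
        }
        return bcd;
    }

    /* BCD -> binary is the cheaper direction: one multiply-add per digit. */
    unsigned long bcd_to_bin(unsigned long bcd)
    {
        unsigned long bin = 0;
        int shift;

        /* walk nibbles from most significant to least significant */
        for (shift = (int)(sizeof bcd * 8) - 4; shift >= 0; shift -= 4)
            bin = bin * 10 + ((bcd >> shift) & 0xF);
        return bin;
    }

    int main(void)
    {
        printf("%lx\n", bin_to_bcd(12345678UL));   /* packed image: 12345678 */
        printf("%lu\n", bcd_to_bin(0x12345678UL)); /* prints 12345678        */
        return 0;
    }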
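
And a short sketch of the payback arithmetic itself, plugging in the Amdahl
timings above. The 10% decimal fraction and the ratio rounded up to 7 are
the same deliberately pessimistic assumptions as in the text; the variable
names are only for illustration.

    #include <stdio.h>

    /*
     * Cost arithmetic from the article, with a 32-bit binary add = 1.
     * The software case charges every decimal add the full
     * convert / add / convert-back round trip -- the worst case.
     */
    int main(void)
    {
        double dec_to_bin = 21.0;   /* decimal -> binary                    */
        double bin_add    =  1.0;   /* binary add (the unit)                */
        double bin_to_dec = 20.5;   /* binary -> decimal, 16-digit worst    */
        double dec_add_hw =  6.5;   /* decimal add in hardware (short case) */

        double round_trip = dec_to_bin + bin_add + bin_to_dec;  /* 42.5 */
        double ratio      = round_trip / dec_add_hw;            /* ~6.5 */
        double frac       = 0.10;   /* share of run time that is decimal math */

        /* the text rounds the ratio up to 7, hence .9 + 7(.1) = 1.6 */
        double hw = (1.0 - frac) + frac * 1.0;
        double sw = (1.0 - frac) + frac * 7.0;

        printf("worst-case software/hardware ratio: %.1f to 1\n", ratio);
        printf("relative run time, decimal hardware: %.1f\n", hw);
        printf("relative run time, software decimal: %.1f\n", sw);
        return 0;
    }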