Xref: utzoo comp.arch:11416 comp.lang.misc:3479
Path: utzoo!attcan!utgpu!jarvis.csri.toronto.edu!rutgers!apple!motcsd!dms!albaugh
From: albaugh@dms.UUCP (Mike Albaugh)
Newsgroups: comp.arch,comp.lang.misc
Subject: Fast conversions, another urban myth?
Keywords: BCD, radix-conversion, COBOL
Message-ID: <832@dms.UUCP>
Date: 15 Sep 89 19:29:50 GMT
Organization: Atari Games Inc., Milpitas, CA
Lines: 58

Sorry for the cross-post, but I'm looking for data from comp.lang.misc,
and algorithms from there or comp.arch. Besides, this question stems
from a recent bout on comp.arch. Anyway, the "conventional wisdom" on
comp.arch was that:

	BCD math is inherently slow.
	COBOL doesn't do much math anyway.
	Ergo: "modern" processors don't need BCD math; just convert to
	binary, do the math there, then convert back to BCD for output.

A counter-example was given (the HPPA, which has decimal-correction
instructions), but the general attitude seemed to be as expressed
above. This seems to imply either that the conversion to binary (and
back) is free, or at least cheap, or that, contrary to the conventional
wisdom, COBOL _does_ do enough math to "buy back" the cost of
conversion. If the latter, I'm only curious and would welcome
supporting statistics. If the former, then someone, somewhere has some
really spiff BCD->binary->BCD routines, because the best I can come up
with have the following sort of "costs" (where a 32-bit binary add is 1):

	BCD add		 2-8
	BCD->binary	 9-30
	binary->BCD	16 (_very_ optimistic) - 100

So to "pay back" the cost of conversion, an 8-digit BCD number would
need to participate in at least 3 additions (fastest convert, slowest
add: the 9 + 16 = 25 cost of converting both ways is recouped at
8 - 1 = 7 per addition done in binary instead of BCD), and this assumes
that the 1-cycle multiply needed for the fastest convert cannot also be
used to speed up the BCD add. Realistic guesses are more like 10
additions to pay for conversion. When all those numbers like SSN and
part-number, which are not really arithmetic at all, get averaged in,
it starts looking pretty unlikely from where I sit. Yes, I know an SSN
_could_ be stored as a string, but consider that a lot of those COBOL
programs were written when 10 Meg was a _big_ disk, so wanton waste of
4.5 bytes on each of some large number of records was not condoned.

I realize that not all (many?) of the numbers involved are 8 BCD
digits, but all the faster algorithms I am aware of "scale" within the
range of my approximations, and all are hit in proportion (i.e.
conversion speeds up about as much as BCD add does). Actually, the
"sawtooth" of time vs. number of digits hits binary->BCD a bit harder
than the other two once we exceed the word length.

To forestall possible flames: I am _not_ contending that "modern"
processors need BCD in hardware. HP's mix may not be yours, and BCD
library routines are not really tough on most processors. So, where did
I go wrong? Or should I say, where do I find that blazing binary->BCD
routine? :-)

				Mike (I'll take my answer off the air)

| Mike Albaugh (albaugh@dms.UUCP || {...decwrl!pyramid!}weitek!dms!albaugh)
| Atari Games Corp (Arcade Games, no relation to the makers of the ST)
| 675 Sycamore Dr. Milpitas, CA 95035		voice: (408)434-1709
| The opinions expressed are my own (Boy, are they ever)
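
P.S. For concreteness, here are C sketches of the sort of routines
behind those cost numbers. They are the best shapes I know of, not
claimed optimal (finding better ones is the whole question!), and the
names and the assumption of 8 packed digits in a 32-bit unsigned long
are mine.

First, the BCD add I price at 2-8: nibble-parallel, biasing every digit
by 6 so that decimal carries ride the binary carry chain, then
un-biasing the digits that didn't carry. The 64-bit temporary is not
portable C; it just keeps the carry out of the top digit visible, a job
the carry flag does for free in assembly.

	unsigned long bcd_add(unsigned long a, unsigned long b)
	{
	    /* bias each digit by 6: any decimal carry is now a binary carry */
	    unsigned long long t = (unsigned long long)a + 0x66666666ULL + b;
	    /* recover the carry vector: bit 4i+4 set => digit i carried */
	    unsigned long long c = t ^ a ^ b ^ 0x66666666ULL;
	    /* digit boundaries that saw no carry still hold their bias */
	    unsigned long long fix = ~c & 0x111111110ULL;
	    /* (x>>2)|(x>>3) turns a 1 at bit 4i+4 into a 6 in digit i,
	       so this subtracts the bias from each un-carried digit */
	    return (unsigned long)(t - ((fix >> 2) | (fix >> 3)));
	}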
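Second, the BCD->binary direction. The obvious loop is Horner's rule
(bin = bin*10 + digit, once per digit); the sketch below instead
combines digit pairs, then byte pairs, then halfword pairs, so it needs
only three multiply stages for all 8 digits. This is where that 1-cycle
multiply earns its keep.

	unsigned long bcd_to_bin(unsigned long x)
	{
	    /* each byte becomes tens*10 + units, i.e. 0..99 */
	    x = (x & 0x0F0F0F0F) + ((x >> 4) & 0x0F0F0F0F) * 10;
	    /* each halfword becomes hi_byte*100 + lo_byte, i.e. 0..9999 */
	    x = (x & 0x00FF00FF) + ((x >> 8) & 0x00FF00FF) * 100;
	    /* whole word: hi_halfword*10000 + lo_halfword */
	    x = (x & 0x0000FFFF) + (x >> 16) * 10000;
	    return x;
	}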
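Finally, the direction I can't make fast: binary->BCD. The
straightforward loop below does one divide per digit; even granting a
cheap divide-by-constant (reciprocal multiply, a couple of ops per
digit) for all 8 digits, you land at the "16, _very_ optimistic" end of
my estimate. If you know how to beat this by a lot, that is exactly the
routine I'm fishing for.

	unsigned long bin_to_bcd(unsigned long bin)
	{
	    /* peel off decimal digits low-to-high: 8 divides for 8 digits */
	    unsigned long bcd = 0;
	    int i;
	    for (i = 0; i < 32; i += 4) {
	        bcd |= (bin % 10) << i;
	        bin /= 10;
	    }
	    return bcd;
	}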