Path: utzoo!utgpu!news-server.csri.toronto.edu!cs.utexas.edu!yale!mintaka!spdcc!iecc!johnl
From: johnl@iecc.cambridge.ma.us (John R. Levine)
Newsgroups: comp.arch
Subject: Re: machines with some loadable microcode are easier to fix
Summary: but not the IBM 360
Keywords: microcode hardware bugs
Message-ID: <1991Jan04.205635.16420@iecc.cambridge.ma.us>
Date: 4 Jan 91 20:56:35 GMT
References: <71537@bu.edu.bu.edu> <1991Jan04.035359.12547@kithrup.COM> <777@TALOS.UUCP>
Organization: I.E.C.C.
Lines: 53

In article <777@TALOS.UUCP> jerry@TALOS.UUCP (Jerry Gitomer) writes:
>Given these circumstances the solution was to use loadable microcode to
>make a group of dissimilar computers look alike to the programmer. The
>best illustration was the IBM 360 family. To the programmer each 360 was
>a 32-bit word machine with 16 registers. The 360/30 was an 8-bit machine,
>the 360/40 was a 16-bit machine, the 360/50 was the only 32-bit machine in
>the family, the 360/65 was a 64-bit(?) machine, and the 360/75 was even
>bigger.

No full member of System/360 had loadable microcode. The models 30 through
67 had microcode on little cards that could, I suppose, be changed if you
had a screwdriver. The models 85, 91, and, I believe, 75 were hard-wired.
The model 44 was a hard-wired subset with extra I/O instructions, intended
for real-time applications.

The model 20 was a strange case: a desk-sized machine whose architecture
was an almost compatible subset of the larger machines, although its I/O
was entirely different. The slower models had ROM microprograms, but the
submodel 5, the fastest one, stored its microcode in the same core memory
as the application code. If the microcode got frotzed (infrequent, since
core survives power-off), there was a large deck of cards in the back from
which you could reload it. I know people who hacked in extra instructions,
though that was certainly not sanctioned by IBM. Since I/O was heavily
assisted by the microcode (e.g. there was a single "read card and translate
to EBCDIC" instruction), they may have distributed extra microcode to go
with optional peripherals. The 25 was the same engine running as a real
360, so I presume the same tricks could be played, and the model 22 was a
model 30 renumbered late in its career with a lower price.

I bet IBM wished they all did have loadable microcode; all early models
with floating point had to be recalled to fix a rather serious design error
that caused extremely inaccurate results.

Many (all?) models of the follow-on 370 series did indeed have loadable
microcode. Indeed, the now-ubiquitous diskette first appeared as the boot
device for 370 microcode. As the 370 series evolved, lots of features were
added, most notably the "DAT box" that provides virtual memory, but also
various I/O improvements and "assists" for virtual machines. (On the lower
models of the 360 and 370, the I/O channel was implemented in microcode in
the CPU; it didn't have its own processor.) I expect that the ability to
ship new microcode disks was key to allowing these incremental upgrades
without totally alienating existing customers.

I'm not sure whether the 370 was the first commercial use of loadable
microcode. I doubt it, but I don't know of earlier uses. (Was the B1700
earlier? I can't tell.) Microcode per se dates back to the EDSAC 2, which
Wilkes first envisioned in 1951 and which first started to do useful work
in 1958, but it used a ROM implemented in core that was not easily changed.
-- 
John R. Levine, IECC, POB 349, Cambridge MA 02238, +1 617 864 9650
johnl@iecc.cambridge.ma.us, {ima|spdcc|world}!iecc!johnl
" #(ps,#(rs))' " - L. P. Deutsch and C. N. Mooers