Path: utzoo!utgpu!news-server.csri.toronto.edu!rutgers!usc!zaphod.mps.ohio-state.edu!mips!synoptics!unix!garth!fouts
From: fouts@bozeman.ingr.com (Martin Fouts)
Newsgroups: comp.arch
Subject: Speed Kills (long)
Message-ID: <447@garth.UUCP>
Date: 9 Jun 90 15:55:23 GMT
Sender: fouts@garth.UUCP
Distribution: comp
Organization: INTERGRAPH (APD) -- Palo Alto, CA
Lines: 157

A) Some apparently random thoughts:

1) Processors are much faster than programmers:

In 1979, Kuck published a textbook on computer architecture (sorry, I can't remember the reference) in which he claimed that between Eniac and then-current machines there was a (roughly) six order of magnitude performance improvement in computer systems, based on the observation that Eniac took (his claim) 300 milliseconds to do a floating point add and then-current supercomputers (I forget which one) took 300 nanoseconds.  Since then the add time has come down to about 3 ns (actually 4, but we're into orders of magnitude here), so I would claim that another decade has added two more orders of magnitude, for a total of 10^8 improvement in raw compute speed.

In the same chapter he recalled that it took "about an afternoon" to wire up Eniac to solve a simple system of linear equations.  (Let's call that 4 hours.)  I would claim that it currently takes the same length of time to write the program and/or enter the data needed to solve a system of equations of about the same size.  However, I'm willing to grant that a "power user" with the data on line and a good canned system can solve the problem in 0.4 hours (24 minutes).  At most one order of magnitude.  I would argue that it couldn't be done in 0.04 hours (2.4 minutes = 144 seconds), and I think everyone would agree that it can't be done in 0.004 hours (14.4 seconds).
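The arithmetic above can be checked with a short sketch.  All of the figures are the claims quoted in this post (Kuck's Eniac numbers, my rounding of 4 ns down to 3 ns), not measurements:

```python
import math

# Claimed floating-point add times, per the argument above.
eniac_add      = 300e-3   # ~300 ms on Eniac (Kuck's claim)
super_1979_add = 300e-9   # ~300 ns on a late-70s supercomputer
cpu_1990_add   = 3e-9     # ~3 ns today (4 ns, rounded for order-of-magnitude)

def orders(old, new):
    """Orders of magnitude of improvement going from `old` to `new`."""
    return round(math.log10(old / new))

print(orders(eniac_add, super_1979_add))  # hardware, Eniac -> 1979: 6
print(orders(eniac_add, cpu_1990_add))    # hardware, Eniac -> 1990: 8

# Programmer time to set up a small linear system, same comparison:
# "an afternoon" (4 hours) down to a power user's 0.4 hours.
print(orders(4.0, 0.4))                   # people, 1946 -> 1990: 1
```

Eight orders of magnitude on the hardware side against one on the human side is the whole point of the section.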
2) User friendly systems aren't getting any friendlier:

Lee Felsenstein gave an interview to a trade rag in which he pointed out that, for his purposes, fancy laser printers and hot 386 systems had almost reached the power and quality where they could produce the same word processing quality in the same time as his old Kaypro/Diablo combination, at only twice the cost and requiring only twice the time to do anything.  He was very unhappy with what he saw as a regression in the usability of computer systems.

3) Programmers aren't doing new work, they are doing old work on new machines:

I've asked several Un*x development managers how long it had taken to multithread their SVR3 kernels, and they've all given me the same length of time.  When I asked how long they thought it would take to multithread SVR4, they all thought it would take the same time as SVR3 had taken, even though they would be using people who "knew how to do it" the second time.

B) Observations:

1) Currently, most programming is porting old code to new machines.

I don't have the precise numbers handy, but something like 90% of all programmers are at work either porting tools to new machines or upgrading software to work under new releases of operating systems.  Most of the rest are working on adding "new" features to operating systems, in an apparent attempt to make the systems more useful.

2) CPU vendors don't help.

Next year's Widget-X is going to go X times as fast as this year's Widget, *but* the time it will take to port and tune all of your codes is on the same order of magnitude as the time it is going to take to introduce Widget-X2 to replace Widget-X.  [This is not an exaggeration.  At one vendor's shop I know of, they are already porting to an "X2" simulator while still porting to real "X" hardware.]

3) We don't know how to write reusable code.
It takes so long to "fix" Un*x to be multithreaded because it is hard to follow the path:

                 +- multithread --+
original tree ---+                +- merge --
                 +- new stuff ----+

especially when the upper and lower branches are being followed by different organizations.  I would be willing to bet that the effort needed to add sockets to SV was duplicated by more than 100 separate shops...

C) Prediction:

As the rate of introduction and obsolescence of new generations of hardware increases, the development of truly new software functionality will decrease, dropping to zero.  [I claim that this is an observation.  There hasn't been any "new" software since the middle 70s.]  Further, the rate of introduction may increase to the point that it will be impossible to utilize the speed of the next processor before it is outdated.  The result would be complete stagnation.

D) PLEA:

Speed Kills.  This whole problem stems from the need to port software to new generation machines which were made incompatible with old generation machines because that was the way to make them faster.  Most of the programming talent is going into supporting the SOS (same old stuff) on YAHC (yet another hot CPU).

I propose that a use for comp.arch which is better than "my (insert noun) is (insert comparative adjective) than your (insert noun)" arguments would be to discuss the topic: What architectures can be proposed now which:

1) are extensible in ways which allow high performance implementations without loss of compatibility?  (It can be done.  IBM did it in the 60s and got 20 years of extensibility from an architecture.  They blew the next part, though.)

2) enhance programmer productivity by supporting reusability?

3) support improved *user* speed rather than hot *CPU* speed?

E) Why bother?

There is a huge untapped market for "computrons" which will remain untapped until they are as easy to use as toasters.  They aren't going to get easy to use if all of our effort goes into porting the SOS to YAHC.
Besides, I've had a lot of usable functionality at one time or another, on one box or another, and I would rather have it all in one place at a usable speed than a little bit of it in each of a lot of places, with each one fast.

F) Don't flame?

Before you decide to attack the "nothing new under the sun" premise, consider your computer history very carefully.  In operating systems, languages, networking, and programming environments, I can find 15 to 20 year old systems which between them had all of the features you are going to think up.  (Including networks and multiprocessors.)  The only things you are going to be able to point out as advances are speed, cost, and some kinds of 3d graphics.
--
Martin Fouts
UUCP:  ...!pyramid!garth!fouts    ARPA:  apd!fouts@ingr.com
PHONE: (415) 852-2310             FAX:   (415) 856-9224
MAIL:  2400 Geng Road, Palo Alto, CA, 94303

If you can find an opinion in my posting, please let me know.  I don't have opinions, only misconceptions.