Path: utzoo!utgpu!news-server.csri.toronto.edu!cs.utexas.edu!tut.cis.ohio-state.edu!att!emory!hubcap!eugene
From: eugene@nas.nasa.gov (Eugene N. Miya)
Newsgroups: comp.parallel
Subject: Re: Critical Issues in Distributed-Memory Multicomputing.
Message-ID: <12460@hubcap.clemson.edu>
Date: 3 Jan 91 13:46:54 GMT
References: <12235@hubcap.clemson.edu> <12401@hubcap.clemson.edu>
Sender: fpst@hubcap.clemson.edu
Reply-To: eugene@wilbur.nas.nasa.gov (Eugene N. Miya)
Organization: NAS Program, NASA Ames Research Center, Moffett Field, CA
Lines: 138
Approved: parallel@hubcap.clemson.edu

In article <12401@hubcap.clemson.edu> bcsaic!carroll@cs.washington.edu
(Jeff Carroll) writes:
>	In my experience we now have two "generations" of practicing
>engineers - those who have learned how to use the computer as an
>engineering tool, and those who have not.

Actually, I suggest you subdivide those who have learned computers into
two cases (pre-card-deck and post-card-deck). I also suggest that those
engineers who lack computer experience have valuable practical and
theoretical experience which makes them knowledgeable computer skeptics.
Older computer people are just now making it into the management ranks
of organizations. Taking a FORTRAN/batch-oriented view of programming
has some distinct disadvantages; it can be MORE detrimental than total
computer naivete. I say this from earlier attempts to bring NASA out of
the card age, but that is a political discussion. We must not be
computer bigots toward those who don't know computers.

>	I think that it must be admitted that HW is at least way ahead
>of SW. The economics of the situation dictate that unless there are

Again, this is where I suggested Fred Brooks's The Mythical Man-Month.
See his 1960s S-curve on systems development, where HW cost and
development initially dominate.

>	The DMMP manufacturers have done a good job of using standards
>when it comes to storage and IO.
>What is needed now is an interprocessor
>interface standard - the transputer link is nearly perfect except that
>it is *way too slow*. It has all the other goodies - it's simple, it's
>cheap, and it's public domain.

"Good job" is a bit strong. The first hypercubes had all I/O going
through a single node; they discovered the need for balance quickly. We
don't have standards, or even good concepts, for parallel disk systems
like this (infancy). It does have to be "public domain," but that's a
touchy issue. Again, balance: see Amdahl's article.

>	There are not enough systems programmers to go around in the
>industry at large, and even fewer (I guess) who can deal with MIMD
>boxes. Nonetheless the vendors push the hardware because that's
>ultimately where the performance is (you can always machine-code a sexy
>demo to take around to trade shows. This is not unique to parallel
>systems. I understand that that's what IBM did with the RS6000.)

When the CM made Time, Steve Squires (then head of DARPA's office on
this) was quoted as saying that only 1 in 3 programmers will make the
transition to parallel programming architectures. Now, where did he
pull that figure out of his hat? I'm trying to maintain a comprehensive
bibliography in the field, and that's certainly a topic to interest me.
Performance appears to be in the hardware; it sets bounds, but I would
not discount optimizing software.

>	There are lots and lots of books containing papers about and
>algorithms for machines that don't exist any more. What is lacking (as
>far as I know) is a broadly accepted taxonomy upon which parametrizable
>algorithms can be developed. Unfortunately there are only a handful of
>people around with the mental equipment to do this sort of thing (I
>certainly don't claim to be one of them).

Actually, there are fairly few books on parallel algorithms, and only a
few books on machines. Knowledge gained from most failed projects is
lost, and "man" learns from failure.
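The "again, balance" point above (Amdahl's article) can be made
concrete. A minimal sketch of Amdahl's law in its standard
fixed-fraction form; the numbers are my own illustration, not from the
article:

```python
def amdahl_speedup(p, n):
    """Amdahl's law: speedup from running a fraction p of the work
    on n processors, with the remaining (1 - p) strictly serial."""
    return 1.0 / ((1.0 - p) + p / n)

# A perfectly parallel job scales linearly:
print(amdahl_speedup(1.0, 8))           # 8.0

# But even with 1024 processors, a 5% serial fraction (e.g. a single
# I/O node, as on the first hypercubes) caps speedup near 20x:
print(round(amdahl_speedup(0.95, 1024), 1))
```

The second number is the whole balance argument in miniature: past a
point, faster links or more nodes buy almost nothing until the serial
bottleneck is attacked.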
I admit I can't make it to Computer Literacy Bookshop** every week [an
ace card for Silicon Valley, next to Fry's Electronics (groceries next
to electronic chips in a building decorated like a chip); I think
Stevenson (your moderator) once walked out with $300 in books 8^)], and
I can't impose on them for a special favor on every new book on
parallelism, but I could, and I know they would say yes. A taxonomy
would help, but it's only one step. I don't claim the mental equipment
either, BTW.

>	I often wonder what happened to the flowchart. We are just now
>developing the technology (GUIs, OOP) which will enable us to make
>serious use of the flowchart as a programming tool.

I know of at least two attempts, and probably more, to make dataflow
languages using a GUI (one of the first and crudest used a language
called Appleflow, written as a demo on a Mac in 1985). I recall a joke
about how hard it was to program the factorial function n!. Geometry is
filled with wonderful analogies, but they fall down when it comes to
certain aspects of timing and synchronization. Certainly more work
needs to be done, but it is no relief to the "dusty deck."

>	I think a generally useful DMMP system has to support node
>multitasking at least to the level of permitting the "virtual processor"
>concept. You shouldn't (within practical limits) have to size the
>problem to fit the machine. I'm not so concerned about file system
>access; you can always build a postprocessor to interpret the answer for
>you once it exists in some algorithmically convenient form.

The VP concept is something that was added to the Connection Machine,
and it certainly helps. We shouldn't have to size the problem to the
machine, but we do: the US builds and funds special-purpose,
one-of-a-kind machines. I fear your "post-processor" because it seems
to leave too much until after the fact. It shrugs off potentially
important details, and computer users frequently don't like that.
Machines/architectures have "failed" because they lack appropriate
balance and attention to detail.
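The "virtual processor" idea above can be sketched as a toy model (my
own illustration, not actual CM software; the function name and the
block-assignment scheme are assumptions): each of P physical nodes
serves a block of ceil(N/P) virtual processors, so a problem of size N
need not be cut to fit a machine of size P:

```python
import math

def vp_assignment(n_virtual, n_physical):
    """Toy virtual-processor model: block-assign n_virtual VPs to
    n_physical nodes.  Each node time-multiplexes over its own slice
    of VPs; the slice size is the so-called VP ratio."""
    ratio = math.ceil(n_virtual / n_physical)
    return [list(range(p * ratio, min((p + 1) * ratio, n_virtual)))
            for p in range(n_physical)]

# 10 virtual processors on 4 physical nodes -> VP ratio of 3,
# with the last node left partially loaded:
print(vp_assignment(10, 4))   # [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]
```

Each physical node simply loops its per-step work over its slice, which
is exactly the node multitasking Carroll asks for, in its simplest form.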
This is why computer architecture is an art.

>	Again, I expect to see the resurgence of the flowchart in the
>form of graphical programming aids. What good is that Iris on your desk
>if it can't help you program? If all you can do is display the results
>of your work, it might as well be a big-screen TV in the conference
>room.
>
>	Once we have the right taxonomy and flowcharting concepts, we
>can start to merge the various parallel technologies that have developed
>(MIMD, SIMD, VLIW, dataflow, ...) into systems that can be
>architecturally parametrized to suit the application/range of
>applications desired.

You need to want to walk into a room filled with non-parallelism
people. I had a discussion with a former editor of mine (Peter Neumann,
SRI). The software engineers are trying to get away from flowcharts
(which have their problems). As some of us who have read and discussed
the Pancake/Bergmark paper know, we have too much confusion and too
many differing assumptions about parallelism. Users make too many
assumptions about synchronization. The notation for parallelism does
not completely scale at this time.

>	Maybe I should go back to grad school.

I need to, too. 8^) We all probably do, and we need to go back to
schools filled with collections of different parallel computers (hence
the 40s analogy). But our educational system lacks the money for this
hardware, which has greater architectural diversity than in the days of
the ENIAC and the Harvard machines. Students should have access to any
architecture they want, get a chance to try different things, .... but
that is education.

Too much said.

--e. nobuo miya, NASA Ames Research Center, eugene@orville.nas.nasa.gov
  {uunet,mailrus,other gateways}!ames!eugene

**If you are unfamiliar with Computer Literacy Bookshop, you shouldn't
be: you can even buy UMI Press PhD theses in hardcover form there.
(408-435-1118)