Path: utzoo!utgpu!news-server.csri.toronto.edu!cs.utexas.edu!sdd.hp.com!samsung!emory!hubcap!fsset
From: fsset@bach.lerc.nasa.gov (Scott E. Townsend)
Newsgroups: comp.parallel
Subject: Re: Critical Issues in Distributed-Memory Multicomputing.
Message-ID: <12061@hubcap.clemson.edu>
Date: 4 Dec 90 14:39:04 GMT
References: <12027@hubcap.clemson.edu>
Sender: fpst@hubcap.clemson.edu
Reply-To: bach!fsset@uunet.UU.NET (Scott E. Townsend)
Organization: NASA/Lewis Research Center, Cleveland
Lines: 65
Approved: parallel@hubcap.clemson.edu

In article <12027@hubcap.clemson.edu> wangjw@usceast.cs.scarolina.edu (Jingwen Wang) writes:
>
>  Distributed-memory multicomputers have now been installed at many
> research institutions and universities. The most exciting time for such
> architectures (I guess from 1984-1988) seems to have come to an end, and
> the exploratory stage is in fact complete. Many scientists who engaged
> with these architectures earlier have now switched to other directions
> (such as those at Caltech). The hypercube computers still seem far from
> mature enough for widespread engineering usage. Software is always a
> vital problem for any parallel computer, and is particularly a problem
> for multicomputers; there is still much to be done in this area. Many of
> the researchers are numerical scientists who develop parallel algorithms
> for such machines, so the software and programming problems are given
> less attention.
>  What on earth are the most critical issues in this area? What work is
> under way to cope with these issues? I am not so sure. We seem to be in
> an endless loop, designing endless algorithms for an endless variety of
> applications. When will the usual engineer (not a computer engineer) be
> able to use such a machine with ease?
>  May I suggest that interested experts contribute their ideas to us. Of
> course, we don't expect anyone to bring a complete answer to all
> problems. Any comments are welcome!
>
> Jingwen Wang

I'm no expert, but it seems to me that distributed-memory multicomputers
are at a point where they need some standardization at the user level,
de facto or otherwise. We need standard, portable languages and tools
that let users worry about their problem, not whether they are running
on an NCube or an iPSC or some other machine.

At a low level, this implies a portable message-passing library with
tools for debugging and monitoring. People here, at Argonne, at Oak
Ridge, and at many other facilities are working on this, but a common
standard that a user can expect to exist on whatever machine they run on
isn't here yet.

At the next level, I think you want to hide the details of message
passing. This might be through distributed shared memory, or maybe by
having the compiler generate message-passing calls in the object code.
I don't expect great efficiency on 'dusty decks', but an algorithm
designer should only be concerned with parallelism, not the details of
each little message. There are a number of research efforts in this
area, but the tools/systems developed are all just a bit different. If
the low level gets standardized, then maybe something like a parallel
gcc could be developed.

Unfortunately, I don't think enough experience has been gained with
these systems for people to agree on a standard. We all know about SEND
and RECV, but we're still playing around with the details.
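Just to make that concrete, here is a rough sketch of what such a
portable layer might look like. None of these names (msg_init, msg_send,
msg_recv, msg_mynode, msg_nodes) come from a real library -- they are
purely illustrative -- and the "network" here is a single-node loopback
stub so the fragment stands alone and compiles. On a real machine each
routine would simply wrap the vendor's native send/receive calls, and
the application code above it wouldn't change.

/*
 * Hypothetical sketch of a portable message-passing interface.
 * The loopback "implementation" below only exists so the example
 * is self-contained; a real version would call the native
 * communication routines of whatever machine it was built for.
 */
#include <stdio.h>
#include <string.h>

#define MSG_MAX 1024

static char   loopback_buf[MSG_MAX];   /* stand-in for the real network */
static size_t loopback_len = 0;

int msg_init(void)   { return 0; }     /* join the machine              */
int msg_mynode(void) { return 0; }     /* this process's node number    */
int msg_nodes(void)  { return 1; }     /* number of nodes allocated     */

/* Send 'len' bytes of 'buf' to 'node', tagged with message 'type'. */
int msg_send(int node, int type, const void *buf, size_t len)
{
    (void)node; (void)type;            /* unused in the loopback stub   */
    if (len > MSG_MAX)
        return -1;
    memcpy(loopback_buf, buf, len);
    loopback_len = len;
    return 0;
}

/* Receive a message of 'type' from 'node'; returns the byte count. */
long msg_recv(int node, int type, void *buf, size_t maxlen)
{
    size_t len;

    (void)node; (void)type;
    len = (loopback_len < maxlen) ? loopback_len : maxlen;
    memcpy(buf, loopback_buf, len);
    return (long)len;
}

int main(void)
{
    char out[] = "hello from node 0";
    char in[MSG_MAX];

    msg_init();
    /* Algorithm code is written against msg_send/msg_recv only and     */
    /* recompiles unchanged for whichever machine it happens to run on. */
    msg_send(msg_mynode(), 1, out, sizeof out);
    msg_recv(msg_mynode(), 1, in, sizeof in);
    printf("node %d of %d received: %s\n", msg_mynode(), msg_nodes(), in);
    return 0;
}

The point is only that the application sees one interface; whether the
bytes actually move over a hypercube channel or an Ethernet is the
library's problem, not the algorithm designer's.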
And I don't think a system will really be usable for general problem
solving until the programmer can concentrate on parallel algorithm
design rather than passing individual messages.

(You might compare this with the stdio package for the C language. I/O
is often different on different operating systems, e.g. UNIX, VM, MS-DOS,
yet I can expect fprintf to exist and always work the same way. It also
hides the underlying buffering from me.)
-- 
------------------------------------------------------------------------
Scott Townsend                 |  Phone: 216-433-8101
NASA Lewis Research Center     |  Mail Stop: 5-11
Cleveland, Ohio  44135         |  Email: fsset@bach.lerc.nasa.gov
------------------------------------------------------------------------