Path: utzoo!utgpu!news-server.csri.toronto.edu!cs.utexas.edu!swrinde!zaphod.mps.ohio-state.edu!wuarchive!emory!hubcap!eugene
From: eugene@nas.nasa.gov (Eugene N. Miya)
Newsgroups: comp.parallel
Subject: Re: Pancake and Bergmark's paper
Message-ID: <12399@hubcap.clemson.edu>
Date: 27 Dec 90 14:33:10 GMT
References: <12382@hubcap.clemson.edu>
Sender: fpst@hubcap.clemson.edu
Reply-To: eugene@wilbur.nas.nasa.gov (Eugene N. Miya)
Organization: NAS Program, NASA Ames Research Center, Moffett Field, CA
Lines: 79
Approved: parallel@hubcap.clemson.edu

Doug Pase, Doreen Cheng, and I just hashed over some of the topics of this
paper for 2 hours.  We went down their table line by line.  On the table
typo: I believe the examples "critical sections and barriers" appeared on
the wrong side and should be swapped with "routine calls."

In article <12382@hubcap.clemson.edu> johnson@cs.uiuc.edu (Ralph Johnson) writes:
>One thing that I did NOT like was their assumption that
>Fortran was the only way to go.  Fortran is a horrible
>language.  It has very good implementations, it is true.
>Scientific computation would progress a lot faster if
>people managed to get away from Fortran.

Please do not blame the bearer of "bad news."  In fact, I think you will
find a lot of people wondering why you think it's horrible.  Pancake and
Bergmark described the situation well, in my opinion.  I did not detect a
particular bias, just a report of the situation.  Other languages in common
scientific use include BASIC (for better or worse), Pascal, and C for PCs.
I think C predominated because of Unix as a first OS and its use in
graphics packages (see P&B's note on other languages).

Several parts of the problem: the horrors "evident" to computer
scientists, such as "GOTO considered harmful," and the value of
user-defined structures and types, are not apparent to scientific
programmers/computational scientists.  We have a real education problem
here.  Education takes valuable time and money.
It also uses scientific personnel resources.  These constructs (like
GOTOs) are not problems in the small; they are problems "in the large."
That's what makes them deceptive problems.  Everyone must face the fact
that science has amassed knowledge in the form of these programs (big
programs).  Changing them is non-trivial.  Our users want to solve their
problems, not computer science's problems.  We need some give and take.

>Of course, I have no immediate candidates to replace
>Fortran for scientific computation.  A lot of scientists
>around here are fans of Mathematica, and claim that it
>is very well suited to scientific computation.  Of course,
>like all new languages, Mathematica is pretty slow.  It
>seems to take a long time for people to make new languages
>as fast as old ones.  Fortran's domination of the scientific
>programming marketplace seems to have stifled the development
>of competitors.

I can think of languages I would find interesting, but it takes the
accumulated experience of a user community to find the problems with a
language.  It's an art.  I'd like to see more work with SISAL, a dataflow
language.  I know people who would like LISP variants like *LISP and
Scheme, but I can't convince one physicist of the value of LISP; yet he
just wrote an FFT in awk (and uses yacc and lex for physics problems, so
he's not your average physicist).

I've had many discussions with many end users, and they wonder why
computer scientists 1) dump on Fortran, 2) propose new languages, etc.
Why not add features to existing languages?  (It's hard for some of them
to understand the difference between syntax and semantics; one doesn't
compare languages simply by counting KEYWORDS.)  It has also been said
that Ada has set programming language development back ten years, and
that Unix has set operating system development back as well.  Our problem
is that it takes time to identify problems.  What is slow about
Mathematica is not the language but the implementation.
I would have added a couple of other observations.  Scientific
programmers seem to run more on IBM PCs, Macs, and SGI Irises, whereas
more computer scientists run on Suns: a real workstation preference
(with a smattering of H-P and other machines).  It goes with cost and
graphics.  I would also add that Pancake and Bergmark should read
Knuth's article on algorithmic versus mathematical thinking, American
Math Monthly, March or April 1985.

Doug Pase noted that parallelism is a concern everywhere but the
abstract math modeling phase of P&B's hierarchy.  So the paper is a
good springboard for discussion.

--e. nobuo miya, NASA Ames Research Center, eugene@orville.nas.nasa.gov
  {uunet,mailrus,other gateways}!ames!eugene