Xref: utzoo comp.edu:2737 comp.software-eng:2584
Path: utzoo!utgpu!jarvis.csri.toronto.edu!mailrus!tut.cis.ohio-state.edu!zaphod.mps.ohio-state.edu!samsung!uunet!ncrlnk!ncrcae!hubcap!billwolf%hazel.cs.clemson.edu
From: billwolf%hazel.cs.clemson.edu@hubcap.clemson.edu (William Thomas Wolfe, 2847 )
Newsgroups: comp.edu,comp.software-eng
Subject: Re: CS education
Message-ID: <7290@hubcap.clemson.edu>
Date: 2 Dec 89 20:40:52 GMT
References: <550@kunivv1.sci.kun.nl>
Sender: news@hubcap.clemson.edu
Reply-To: billwolf%hazel.cs.clemson.edu@hubcap.clemson.edu
Lines: 28

From ge@kunivv1.sci.kun.nl (Ge' Weijers):
>> Enough to understand how to use the tool, and no more.
>
% I can't agree with that one.  It is true that you can describe the
% semantics of operations without using internal algorithms.  But to
% analyse the performance of file systems you need to know quite a lot
% about them, and in DP applications performance is critical.  I recently
% heard a story about a big database that was split into a number of
% smaller parts, one for each of the client organisations of the
% database.  This seemed a good decision, but the OS did not support
% multiple processes sharing code.  Swapping bogged the system down
% critically, because there was much less locality of reference on the
% system.

   Performance should be documented by the OS vendor.  Compilers should
   document their performance with respect to the OSes they run under.
   If this is not done, the information can be developed by specialists
   in performance issues.  There is no need for application developers
   to do the work of the vendors or of the performance analysts.

   I repeat: Enough to understand how to use the tool, and no more.
   It may well be necessary to read and analyze tool performance data
   in order to use a tool, but this in no way implies that one must
   understand the internal details.


   Bill Wolfe, wtwolfe@hubcap.clemson.edu