Path: utzoo!utgpu!jarvis.csri.toronto.edu!cs.utexas.edu!uunet!mcsun!cernvax!chx400!ugun2b!ugobs!bartho
From: bartho@obs.unige.ch (PAUL BARTHOLDI)
Newsgroups: comp.arch
Subject: Re: X-terms v. PCs v. Workstations
Message-ID: <457@obs.unige.ch>
Date: 29 Nov 89 09:31:02 GMT
References: <1128@m3.mfci.UUCP> <1989Nov22.175128.24910@ico.isc.com> <3893@scolex.sco.COM> <39361@lll-winken.LLNL.GOV> <17305@netnews.upenn.edu> <1989Nov25.000120.18261@world.std.com> <1989Nov27.144016.23181@jarvis.csri.toronto.edu>
Organization: University of Geneva, Switzerland
Lines: 78

In article <1989Nov27.144016.23181@jarvis.csri.toronto.edu>, jdd@db.toronto.edu (John DiMarco) writes:

This discussion is quite interesting on a very practical point: what shall I (or you) get next time I need to update my computing facilities?  I find some of the arguments quite to the point, but disagree with others.  I will assume a non-tyrannical central authority that is open to any solution that is best for users, easiest to support, and cheapest ...  I will also assume that all facilities, centralized or not, are interconnected through some network.

> Resource duplication: If every group must purchase its own resources,
>        resources which are used only occasionally will
>        either be unavailable because no small group can afford the requisite
>        cost (eg. Phototypesetters, supercomputers, Put_your_favourite_expen-
>        sive_doodad_here), or be duplicated unnecessarily (eg. laser printers,
>        scanners, ...)

Completely true for supercomputers and phototypesetters (see also maintenance below), but not for laser printers.  For a given throughput, I find the small laser printers cheaper, of better quality, of more modern technology, and even easier to use (downloaded fonts, PostScript, etc.) than the larger ones.  It is also very nice to have a printer near your office, and duplication means that there is always another printer running if the first one is down.
> Maximum single-point usage: If each group must purchase its own
>        computing equipment, ...
>        ... If you have a distributed computing environment, imagine
>        putting all the memory and CPUs of your workstations into one massive
>        multiprocessing machine, ...

Two points:

- I just compared the prices of large disks (>600 MB) and memory boards.  Why is it up to 5 times cheaper, with faster access, for a PC or WS than for a microVAX, for example?  Five independent units also mean much better throughput.

- A massive multiprocessing machine means a lot of overhead, which you have to pay for!  Again, there are a lot of situations where you need a lot of central memory, a lot of CPU resources, or disk resources, etc., which can be provided only by a central system, but do not underestimate its cost.

> Expertise: Distributed sites tend to have too many people playing
>        at system administration in their own little fiefdoms,
>        few of whom know what they are doing. (But guess who goes screaming
>        to whom when something goes wrong...) In a centralized environment,
>        it is much easier to ensure that the people who are in charge of
>        the computers are competent and capable.

Expertise is a very costly and scarce resource.  At the same time, I cannot see anything but a tree-like distribution of expertise, with a central root and local leaves.  This is true even if all other facilities are centralized.  Remember that the leaves must be fed from the root!

> Maintenance: More things go wrong with many little machines than
>        with few big ones, because there are so many more machines
>        around to fail. Repair bills? Repair time? For example, I'd be
>        surprised if the repair/maintenance cost for a set of 100 little SCSI
>        drives on 100 different workstations is less than for half-a-dozen
>        big SMD drives on one or two big machines, per year.

For the price of a year's maintenance on the larger drives (assuming the same total capacity), I could almost buy a new SCSI drive every year.
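The price-and-throughput comparison above can be sketched with a bit of arithmetic.  All numbers below are hypothetical placeholders chosen only to mirror the "up to 5 times cheaper per MB" claim; they are not actual 1989 prices or transfer rates.

```python
# Back-of-envelope: N small distributed disks vs. one large central disk
# of the same total capacity.  All figures are illustrative assumptions.

def cost_and_throughput(units, capacity_mb, price_per_mb, xfer_mb_s):
    """Total price and aggregate sequential throughput for `units` drives."""
    total_price = units * capacity_mb * price_per_mb
    total_throughput = units * xfer_mb_s  # independent spindles add up
    return total_price, total_throughput

# 5 small SCSI drives on 5 workstations (assumed 5x cheaper per MB)
small_price, small_tput = cost_and_throughput(
    units=5, capacity_mb=600, price_per_mb=2.0, xfer_mb_s=1.0)

# 1 large SMD-class drive on a central machine, same 3000 MB total
big_price, big_tput = cost_and_throughput(
    units=1, capacity_mb=3000, price_per_mb=10.0, xfer_mb_s=2.5)

print(f"distributed: ${small_price:.0f}, {small_tput:.1f} MB/s aggregate")
print(f"central:     ${big_price:.0f}, {big_tput:.1f} MB/s")
```

Under these assumed figures the five small drives cost a fifth as much and offer twice the aggregate throughput, which is the shape of the argument being made, although any real comparison would also have to price in the maintenance contracts discussed below.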
I would guess that the large mass production of 'small' (are 600 MB small?) disks makes them more reliable than those of the larger machines, and many small SCSI drives (on different machines) tend to have a larger aggregate throughput, etc.

Software maintenance seems to me more critical.  Keeping the OS up to date in a compatible and coherent way on 100 WSs is a lot more difficult and time consuming than on a few larger machines.  How about multiple copies of databases, etc.?

> These are some very good reasons to favour a centralized computing authority.

From my experience, both centralized and local facilities (including maintenance) are not only a fact of life but also necessary.  The real problem is to build up the hardware, administrative and human connections in such a way as to minimize problems and cost while improving the resources available to users.  I essentially agree with the end of the discussion, and will not comment any further on it.

Regards,

Paul Bartholdi, Geneva Observatory, Switzerland