Path: utzoo!utgpu!jarvis.csri.toronto.edu!db.toronto.edu!jdd
Newsgroups: comp.arch
From: jdd@db.toronto.edu (John DiMarco)
Subject: Re: X-terms v. PCs v. Workstations
Message-ID: <1989Nov27.144016.23181@jarvis.csri.toronto.edu>
References: <1128@m3.mfci.UUCP> <1989Nov22.175128.24910@ico.isc.com> <3893@scolex.sco.COM> <39361@lll-winken.LLNL.GOV> <17305@netnews.upenn.edu> <1989Nov25.000120.18261@world.std.com>
Date: 27 Nov 89 19:40:16 GMT

This posting is more than 140 lines long.

A centralized authority -- if it is responsive to the needs of its users --
has the capability to offer better facilities and support at a lower price.
Just consider some of the relevant issues:

Resource duplication: If every group must purchase its own resources,
resources which are used only occasionally will either be unavailable
because no small group can afford the requisite cost (e.g. phototypesetters,
supercomputers, put_your_favourite_expensive_doodad_here), or be duplicated
unnecessarily (e.g. laser printers, scanners,
put_your_favourite_less_expensive_doodad_here). A centralized authority can
provide access to computing resources which are otherwise unavailable, and
can provide more economical access to resources which would otherwise be
unnecessarily duplicated.

Maximum single-point usage: If each group must purchase its own computing
equipment, at no point in time can any group utilize more computing
resources than that group owns. But in a centralized environment, the
maximum amount of computing resources available to any one group increases
to the total computing resources available to the centralized authority, a
much greater amount. If you have a distributed computing environment,
imagine putting all the memory and CPUs of your workstations into one
massive multiprocessing machine, for example. Imagine if your group could
use the idle cycles of the group down the hall. Wouldn't that be nice?

Security: It is much easier to keep a centralized computing environment
secure than a distributed one. Responsibilities are clearer, superuser
privileges are better defined, and response time is better. Imagine having
to respond to a security threat when you have to notify umpteen million
sysadmins, all of whom have to respond correctly to eliminate the threat,
but none of whom are responsible to any central authority. Imagine not
knowing who is in charge of each machine.

Expertise: Distributed sites tend to have too many people playing at system
administration in their own little fiefdoms, few of whom know what they are
doing. (But guess who goes screaming to whom when something goes wrong...)
In a centralized environment, it is much easier to ensure that the people
who are in charge of the computers are competent and capable.

Emergencies: If something goes wrong in a centralized system, it is
invariably obvious who should be called. Lots of highly qualified people
will jump on the problem and fix it PDQ. If something goes wrong on some
little group's machine, who do you call? It's often not clear. And who will
fix the problem? That's often not clear either. Frequently the problem
doesn't get fixed quickly.

Backups: They're a pain to do. Why not have a centralized authority do
backups for everybody automatically, rather than have everybody worry about
their own backups? Otherwise someone, somewhere will be lazy and/or make a
mistake and lose something crucial.

Complexity: Who's going to keep track of a big distributed network mishmash
with no central authority?
Who's going to answer the question "How do I
get there from here?" if getting there from here involves passing through
any number of uncooperative little domains? Who's going to track down the
machine which throws bogus packets onto the network, fouling up all sorts
of other machines? In a centralized environment, things are generally less
complex, and those in charge have a much better understanding of the whole
shebang.

Downtime: Centralized computing authorities tend to do their best to keep
their machines running all the time. And they generally do a good job at
it, too. If a central machine goes down, lots of good, qualified people
jump into the fray to get the machine up again. This doesn't usually happen
in a distributed environment.

Maintenance: More things go wrong with many little machines than with few
big ones, because there are so many more machines around to fail. Repair
bills? Repair time? For example, I'd be surprised if the yearly
repair/maintenance cost for a set of 100 little SCSI drives on 100
different workstations is less than that for half-a-dozen big SMD drives on
one or two big machines.

Compatibility: If group A gets machine type X and group B gets machine type
Y, and they subsequently decide to work together in some way, who is going
to get A's Xs and B's Ys talking together?

These are some very good reasons to favour a centralized computing
authority.

bzs@world.std.com (Barry Shein) writes:
>The most important issues quickly become politics and administration.
>Who tells me what I can do with my system, who administers it?

Yes, sometimes politics can override technology in system implementation
decisions. If politics is a problem, fix it. But politics differs from
organization to organization, so any politically-motivated system decision
at one organization will most probably not be applicable to another.

> [ Barry writes about how centralized authorities limit users to very
> little disk space, when these very users can buy cheap disks which give
> them all the disk space they want. ]

If a centralized computing authority is not responsive to the needs of its
users, it's got a problem. If users need more disk space, they should get
it. A centralized computing authority's sole raison d'etre is to serve its
users. There's no reason why a centralized computing authority can't buy
cheap SCSI disks, for example, and hang them off one of its central
machines, if that is what the users need. A sick centralized computing
authority which is not responsive to user needs should be cured, not
eliminated. If you're stuck with a defective centralized computing
authority, then perhaps a move to a distributed computing environment could
be justified. Nevertheless, IMHO, a distributed computing environment is
still inferior to a well-run centralized environment.

>Well, in a lot of cases, who cares. Escaping out from under
>centralized tyranny is more important (at the moment of decision) than
>who's going to make the trains run on time once you're free. Put your
>important data files to floppies or cartridge tapes (it's easy, don't
>need $100K worth of operators to do that) and pray for the best.

And enjoy the disasters. These come thick and fast when every Tom, Dick,
and Harry (Tammy, Didi, and Harriet?) tries to run his (her) own computing
system.

>That's why so many centralized facilities are jumping at becoming
>network administrators and proselytes of X-terminals.

Maybe because they're sick of trying to get people out of messes?
People who are over their heads in complexity? People who are panicky,
worried, and desperate?

>Not that there's anything wrong with X-terminals, I like them, but
>let's be honest about motives: How ya gonna keep them all down on the
>farm once they've been to Paree'?

That's not a problem. They'll all be a'streamin back from Paree' wid nuttin
in their pockets an' wid bumps an' bruises all over.

> -Barry Shein
>Software Tool & Die, Purveyors to the Trade | bzs@world.std.com
>1330 Beacon St, Brookline, MA 02146, (617) 739-0202 | {xylogics,uunet}world!bzs

John
--
John DiMarco            jdd@db.toronto.edu or jdd@db.utoronto.ca
University of Toronto, CSRI    BITNET: jdd%db.toronto.edu@relay.cs.net
(416) 978-8609                 UUCP: {uunet!utai,decvax!utcsri}!db!jdd