Path: utzoo!utgpu!jarvis.csri.toronto.edu!db.toronto.edu!jdd
Newsgroups: comp.arch
From: jdd@db.toronto.edu (John DiMarco)
Subject: Re: X-terms v. PCs v. Workstations
Message-ID: <1989Nov28.134540.7131@jarvis.csri.toronto.edu>
References: <1128@m3.mfci.UUCP> <1989Nov22.175128.24910@ico.isc.com> <3893@scolex.sco.COM> <39361@lll-winken.LLNL.GOV> <17305@netnews.upenn.edu> <1989Nov25.000120.18261@world.std.com> <1989Nov27.144016.23181@jarvis.csri.toronto.edu> <536@kunivv1.sci.kun.nl>
Date: 28 Nov 89 18:45:40 GMT

ge@kunivv1.sci.kun.nl (Ge' Weijers) writes:
>jdd@db.toronto.edu (John DiMarco) writes:
>>A centralized authority -- if it is responsive to the needs of its users --
>>has the capability to offer better facilities and support at a lower price.
>- I've once seen a bill for use of a VAX 780 for one year with 10 people.
>  It was about $500,000.- (1987). You could buy a faster Sun-3 AND hire
>  an operator for less ($1,500,000.- in three years!)

Can the Vax and get a machine that's cheaper to run. Centralized computing
doesn't need to be done on Vax 780s, which are extremely expensive to
maintain these days.

>> Resource duplication:
>In the case of VERY expensive systems you may have a point. Let us look at it:
>- laser printers: low-volume high-quality laser printers cost about $30,000.-
>  Paying someone to manage a central high-volume printer (more expensive,
>  less reliable) costs a lot more, and you need to walk over there to get
>  your output. A local printer is more convenient.

Half a dozen LaserWriters cost a lot more than $30K. And they're slower,
too. True, it's handy not to have to walk far to get your output.

>- Supercomputers: having your own mini-super (1/10th Cray) gives a
>  better turnaround time than sharing a Cray with > 10 people. And it's
>  more predictable.

Maybe. But there are large jobs that won't run on mini-supers.
Note that sharing a resource (like a supercomputer) doesn't necessarily mean
sharing it with lots of people at the same time. If group A and group B both
occasionally need to run jobs which max out a Cray, there's no reason why
they can't take turns.

>> Maximum single-point usage:
>- the problem with central 'mainframes' is that you can't predict the response
>  time (turnaround time). If I know something is going to take 2 hours I can
>  go off and do something useful. If it might take 1 to 5 hours I can't plan
>  my day, or I must assume it takes 5.

Good point. But if your choices are 10 hours on a private machine or 1-5
hours on a mainframe, which are you going to pick? It clearly depends on the
job, the machines available, etc. Predictable response time isn't by itself
a good reason to move to a distributed system.

>> Expertise:
>- I've spent enough time explaining to evening-shift operators what to do to
>  know that only one or two people in the Computer Center really know what
>  they are talking about, and they never do night shifts. If I've offended
>  someone, sorry, but that is usually the case.

I don't see how this would be improved under a distributed setup. At least
in a centralized system, you have a computer center to call. But if you
belong to a small group with its own small computing facilities and no
resident guru, who are you going to call, at night or during the day?

>> Backups: They're a pain to do. Why not have a centralized authority
>>          do backups for everybody automatically, rather than have
>>          everybody worry about their own backups?
>- With 2Gbyte DAT tapes, backing up is just starting the backup and going
>  home. They're not even expensive, and an extra backup fits in your pocket
>  so you can store it at home. (Assuming your data is not very sensitive.)

Then every little group needs to have its own DAT/Exabyte/CD RAM/whatever
backup unit.
Why not buy just a couple of (large) backup units for a centralized facility
and spend the rest of the money on something else?

>> Downtime: Centralized computing authorities tend to do their
>>           best to keep their machines running all the time. And
>>           they generally do a good job at it, too. If a central machine
>>           goes down, lots of good, qualified people jump into the fray to
>>           get the machine up again. This doesn't usually happen in a
>>           distributed environment.
>- A central machine going down stops all work in all departments. If my
>  workstation quits, I alone have a problem.

It doesn't matter whether your personal machine goes down or the central
machine does: YOU still can't get any work done. But central machines will
be down less than personal machines, because so many good people are working
to keep them up. So total machine downtime for YOU will be less under a
centralized system. The only difference between centralized and distributed
downtime is that under a centralized system downtime hits everybody at the
same time, while under a distributed system it hits people at different
times.

>> Maintenance: More things go wrong with many little machines than
>>              with few big ones, because there are so many more machines
>>              around to fail. Repair bills? Repair time? For example, I'd
>>              be surprised if the repair/maintenance cost for a set of 100
>>              little SCSI drives on 100 different workstations is less than
>>              for half-a-dozen big SMD drives on one or two big machines,
>>              per year.
>- It is. You throw them away and take a new one.

OK, if that's the case, put 100 little SCSIs on a big machine. Or even make
the 'machine' operated by the centralized authority a network of
interconnected workstations, each with its own little SCSI drive. There's
nothing stopping a central site from doing that. The central site is free to
take the cheapest route. The distributed system is forced to go the SCSI
route.
If SCSI is cheaper, then the centralized system is no worse than the
distributed system. If SMD is cheaper, then the centralized system is better
than the distributed system. Either way, you can't lose with the centralized
system.

>> Compatibility: If group A gets machine type X and group B gets machine
>>                type Y, and they subsequently decide to work together in
>>                some way, who is going to get A's Xs and B's Ys talking
>>                together?
>- Know of one Killer Micro that does NOT run a brand of Unix with TCP/IP and
>  NFS on it? (even Macintoshes and PCs support these protocols nowadays)

That's often not good enough. How about source or binary compatibility? Ever
try to port a C program with all sorts of null dereferences from a Vax to a
68k machine? It's tough for Joe in Graphics to work with Mary in AI on a
joint project if she does everything on a Sun, but he does everything on a
Personal IRIS.

> If >20% of your clients know as much as you do it's a losing game.

Even if many users are knowledgeable, a centralized facility can still make
sense.

> A central facility can't adapt as easily to the wishes of their clients.

Sure it can. If the users want some resource, buy it for them. It's as easy
as that. A responsive central facility can be amazingly adaptive.

>Ge' Weijers                                  Internet/UUCP: ge@cs.kun.nl
>Faculty of Mathematics and Computer Science, (uunet.uu.net!cs.kun.nl!ge)
>University of Nijmegen, Toernooiveld 1
>6525 ED Nijmegen, the Netherlands            tel. +3180612483 (UTC-2)