Path: utzoo!utgpu!jarvis.csri.toronto.edu!mailrus!uunet!snorkelwacker!bloom-beacon!bu-cs!encore!pinocchio.encore.com
From: jdarcy@pinocchio.encore.com (Jeff d'Arcy)
Newsgroups: comp.arch
Subject: Re: fad computing
Message-ID: <10446@encore.Encore.COM>
Date: 25 Nov 89 14:05:00 GMT
References: <89Nov25.051946est.2233@neat.cs.toronto.edu>
Sender: news@Encore.COM
Distribution: usa
Lines: 25

rayan@cs.toronto.edu (Rayan Zachariassen):
> The disadvantage that is always brought up as a counter argument
> is ROBUSTNESS. Well, surprise surprise, a distributed environment
> is just as fragile as a centralized one with the same functionality
> (that's the consensus around here after years of observation), but
> it is MUCH more complex. Centralized environments can be made
> very robust if well thought out.

I've heard this one time and time again, usually in the form "if my
workstation goes down, one person is unable to work; if the central
computer goes down, nobody can work". If each workstation is down 2%
of the time and the central computer is down 1% (unreasonably large
figures, I know), this argument falls flat on its face. Given that
large systems are more likely than small ones to be administered by
people who know what they're doing, and that they live in a better
environment (UPS, A/C, etc.), workstations are probably down *more*
than twice as much as larger hosts. There's also the issue of backups,
and the possibility of a misconfigured workstation spitting up on the
network or otherwise acting antisocially. In general I think that a
bunch of medium-sized hosts (large Vaxen, Multimaxes, Symmetries) and
a plethora of X-terms is the best way to go for *most* environments.

Jeff d'Arcy     OS/Network Software Engineer     jdarcy@encore.com
  Encore has provided the medium, but the message remains my own
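
[Editor's note: the downtime arithmetic in the post can be made explicit.
A minimal sketch follows; only the 2%/1% downtime figures come from the
post, and the user count is an arbitrary assumption for illustration.]

```python
# Expected aggregate idle time under the two models in the post.
# Assumption (not from the post): a population of 50 users.

N_USERS = 50             # hypothetical user count
WS_DOWNTIME = 0.02       # each workstation down 2% of the time (from post)
CENTRAL_DOWNTIME = 0.01  # central machine down 1% of the time (from post)

# Workstation model: a user is idle only while their own machine is down,
# so the expected number of idle users at any instant is N * 2%.
ws_idle = N_USERS * WS_DOWNTIME

# Central model: every user is idle whenever the shared machine is down,
# so the expected number of idle users at any instant is N * 1%.
central_idle = N_USERS * CENTRAL_DOWNTIME

print(f"workstations: {ws_idle:.1f} users idle on average")   # 1.0
print(f"central host: {central_idle:.1f} users idle on average")  # 0.5
```

With those figures the central machine wastes half as much aggregate user
time as the workstations do, which is the point the post is making: per-user
expected downtime is just the machine's downtime fraction, regardless of how
many users share it.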