Path: utzoo!attcan!utgpu!jarvis.csri.toronto.edu!cs.utexas.edu!tut.cis.ohio-state.edu!ucbvax!hplabs!hpda!hpcupt1!hpisod2!dhepner
From: dhepner@hpisod2.HP.COM (Dan Hepner)
Newsgroups: comp.databases
Subject: Re: Performance Data (was Re: Client/Server processes and implementations)
Message-ID: <13520006@hpisod2.HP.COM>
Date: 1 Dec 89 00:17:20 GMT
References: <7169@sybase.sybase.com>
Organization: Hewlett Packard, Cupertino
Lines: 58

From: jkrueger@dgis.dtic.dla.mil (Jon)
>
> >1. Is it your experience that more than 10% of the work is done by
> >    the clients?
>
> Sometimes.  If it's only 10%, we may then assign 10 clients per server,
> thus balancing the load.  Yes, the server load increases too, but not
> proportionately; balance might be 12 or 15 clients per server.

In the example, if one moved 10 clients taking 10% of a fully used CPU,
we would simplistically end up with the client CPU 10% used and the
server CPU still at 90%.  Adding one more client, we would end up with a
saturated system: 11 clients on an 11% utilized client machine, while the
server was now 99% used.  If this were so, it wouldn't seem all that
balanced, and it would probably be an economically unjustifiable move: a
100+% increase in hardware cost yielding roughly a 10% increase in
throughput.  I don't see where the 12 or 15 came from, but even if
accurate, they don't seem on the surface to be all that good a deal.

> >2. Is it your experience that remote communication costs don't end
> >    up chewing into the savings attained by moving the clients
> >    somewhere else?
>
> No, the lower bandwidth is more than offset by multiprocessing.

Let's assume you have plenty of bandwidth, but not plenty of CPU cycles
at the server.  Remote communication, especially reliable remote
communication, is more expensive than local communication.  The extreme
of my concern would be illustrated if the remote communication costs at
the server end exceeded the processing/terminal handling offloaded to
the client, in which case one would actually lose by adding a remote
machine for the clients.

> >>(and in the extreme (and not at all impractical) case, you run each
> >> client and each server on its own machine).  This model is simple,
> >> elegant, and fundamentally right.
>
> This isn't the extreme case.  Multiple processors can divide work
> with better granularity than client and server processes.

Maybe you can clarify.  The case in question was how frequently it would
be practical to put each client and each server on its own machine, with
the assertion that if the client/server workload split weren't near
50-50, it wouldn't be practical.  The points of confusion:

1) "Multiple processors" can be ambiguous as to remoteness, but given
   the context I'll assume remoteness.  (Right?)

2) Granularity.  Are you postulating a flexible division of the work
   between client and server?  A server which is flexibly divisible over
   both machines?

I think all of these questions are facets of the same underlying
question: how much of the typical application can be done at the client?

> -- Jon

Dan Hepner
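
P.S.  To make the back-of-envelope arithmetic above concrete, here is a
minimal sketch in C.  Every number in it is an assumption taken from the
example, not a measurement: the server CPU is the only bottleneck, 10%
of the work is in the clients, throughput scales with the server work
freed up, and remote communication overhead is ignored entirely.

#include <stdio.h>

int main(void)
{
    /* Illustrative figures from the example above, not measurements. */
    double client_fraction = 0.10;               /* work done in clients */
    double server_share    = 1.0 - client_fraction;

    /* With the client work moved to its own machine, the previously
       saturated server CPU can run 1/server_share units of work before
       it saturates again; the client machine stays nearly idle. */
    double throughput_ceiling = 1.0 / server_share;   /* ~1.11x */

    printf("throughput ceiling: %.2fx of the original\n", throughput_ceiling);
    printf("hardware added:     a second machine (100+%% of the cost)\n");
    return 0;
}

On those assumptions the ceiling works out to about 1.11x, which is where
the "11 clients on an 11% utilized client machine, server 99% used"
figure above comes from.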