Path: utzoo!utgpu!jarvis.csri.toronto.edu!mailrus!uwm.edu!rpi!brutus.cs.uiuc.edu!uakari.primate.wisc.edu!xanth!mcnc!rti!xyzzy!aquila.rtp.dg.com!harrism
From: harrism@aquila.rtp.dg.com (Mike Harris)
Newsgroups: comp.databases
Subject: Re: Client/Server processes and implementations
Message-ID: <713@xyzzy.UUCP>
Date: 28 Nov 89 13:19:13 GMT
Sender: usenet@xyzzy.UUCP
Reply-To: harrism@aquila.rtp.dg.com (Mike Harris)
Distribution: usa
Organization: Data General Corporation, Research Triangle Park, NC
Lines: 92

>In article <510@xyzzy.UUCP> harrism@aquila.DG.COM (Mike Harris) writes:
>[a lot of good stuff about processes, servers, and CPU usage]
>
>As I understand the issue, the main reason why we don't use multiple
>processes is because of locking and synchronization issues, and because
>of the lack of portability inherent in any solutions to these problems.
>We have no religious disagreements with using multiple processes.
>But, first of all, by controlling locking and synchronization in
>a portable single O.S. process environment, I think we can do a better
>job than trying to understand the effects of each O.S.'s methods (if any)
>of doing locking and synchronization.

The locking and synchronization issues (the implementation thereof) aren't
trivial.  I wrote the kernel portion of our Server Manager product, and
these issues were the most difficult part - especially when performance is
required.  Timing problems are murder.  Please understand that I'm not
slamming the Sybase product.  I just believe that an MP-style architecture
is required for larger, faster machines - for CPU utilization, for I/O
bandwidth, and for tuning.

I do take issue with "lack of portability."  By implementing your own
locking and sync routines, some portability is lost.  True.  But a port of
that subsystem shouldn't take more than a day or two.  As an example, we
have a "Server Manager" product which manages multiple servers, etc.
The "os/lck" subsystem code is about 300 lines of C; only about 50 required
modification to port it.  Yes, it had to be ported, but the effort was
minimal.  As for the methods for locking & sync, those are part of the
porting effort.  For hitting the highest number of platforms the fastest,
with a quick product, their (Sybase's) approach was very effective.  Now
that that has been accomplished, it's time to make it run faster on the
bigger machines.

>
>Don't forget that our limitation is only that one database can't
>be accessed by more than one server at a time.  You can run any number
>of servers on one machine, as long as you have the resources to support
>the servers.  Plus, given the fact that servers can talk to other servers
>by using RPCs, the end effect isn't that much different than having
>multiple servers talking to the same database.
>

This is a VERY significant limitation.  The single server against the one
database will be a bottleneck in any high-volume application.  Take DG's
DG/SQL product.  It runs twenty times as fast as Oracle or Ingres - and
this is with journaling, synchronous commits, etc.  We can't compete with
it where 4GLs are required, but all of our large customers choose it for
their high-volume applications because nobody else can keep up.  Not
Sybase.  Not Oracle.  Not Ingres.  It is a multi-server architecture, by
the way.

>>Jon mentions (elsewhere in the net) a new MP architecture.  Is Sybase
>>working on this solely to take advantage of multiple processors?
>>Wouldn't this architecture allow multiple servers on a single
>>architecture?
>
>I think as more MP servers come on the market from us and our competitors
>there will be some discussion about just what constitutes a single server.

Please clarify this question.

>I also think this will be true for any kind of server that runs in
>an MP environment, and not just a database server.
>Other than this I have no comment about MP servers.
>
>[a lot of good comments about dedicating resources to servers]
>
>There's no doubt that people will use servers of various kinds on
>machines that must perform other work.  A dedicated machine for each
>server is an ideal to strive for, but isn't always practical.

Very often the case.  Nor is it realistic when, for some applications (I
can give examples), the application as a whole must run on the one
machine.  Otherwise the pieces must communicate over the net to collect
the required information, which slows the application down too much.  I
would state that the goal is to have the application be self-contained on
one machine.  This yields the highest performance for the application.
It does, however, require proper tools and support for tuning, from the
application and its servers, to extract the highest performance from the
system.  An MP architecture (as in multi-process - not multiprocessor) is
one of these requirements.

>So, the performance of any server will deteriorate as a function
>of what else is being done on the machine.  When the server performance
>becomes unacceptable either the competing work or the server itself
>will get moved off to a different machine.  This is normal and although
>I grant that this might not be an axiom of client/server architecture
>it is certainly a corollary.

When the application gets too large, they will buy a bigger machine or
faster components to support it.

Mike Harris - KM4UL                     harrism@dg-rtp.dg.com
Data General Corporation                {world}!mcnc!rti!dg-rtp!harrism
Research Triangle Park, NC