Path: utzoo!utgpu!news-server.csri.toronto.edu!cs.utexas.edu!uwm.edu!spool.mu.edu!sdd.hp.com!zaphod.mps.ohio-state.edu!rpi!batcomputer!theory.tn.cornell.edu!dailey
From: dailey@theory.tn.cornell.edu (John H. Dailey)
Newsgroups: comp.databases
Subject: Resizing database
Message-ID: <1991Feb26.181017.17028@batcomputer.tn.cornell.edu>
Date: 26 Feb 91 18:10:17 GMT
Sender: news@batcomputer.tn.cornell.edu
Organization: Cornell Theory Center
Lines: 18
Nntp-Posting-Host: theory.tn.cornell.edu

A friend of mine presented me with the following problem and I am not
sure how to proceed. Any pointers or a guide to the literature would be
helpful. The problem is as follows (and is based on a real problem):

The system is an old network-style database optimized for writes. When
a new record needs to be written (and records can vary in size), the
most current page in the buffer is examined. If there is room to store
the record, it is placed there; otherwise the other pages in the buffer
are examined. If there is still no room, a page is RANDOMLY pulled in
from disk. This last step is iterated until room is found. There is no
free-space list or the like, so data is scattered randomly throughout
the page range.

The problem arises when the database reaches a certain level of fill
(90 percent?). At that point much time is spent in I/O, pulling in
pages to find room to store new information. There is a procedure to
expand the page range and the addressable space, but it requires a
lengthy outage.

Are there any studies that link, say, database fill with performance,
or that model database performance as a function of fill? Forget the
manuals that come with the system; it is apparently a very old DBMS
with many homegrown modifications.

Thanks for any help.
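
P.S. To make sure I have the algorithm straight, here is a toy
simulation of the placement scheme as I understand it. Everything in
it (names, page and record sizes, buffer size, eviction policy) is my
own invention for illustration; I have not seen the real system's
code. It just inserts random-sized records and reports how many random
page pulls each insert costs as the fill level climbs.

/*
 * Toy simulation of the placement scheme described above, to watch
 * random-pull I/O grow with fill.  All names and numbers here are
 * invented for illustration, not taken from the real system.
 */
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE 1024
#define NPAGES    4096          /* pages in the page range          */
#define BUFPAGES  8             /* pages held in the buffer         */
#define MAX_PULLS 100000L       /* give-up threshold for one insert */

static int freesp[NPAGES];      /* free bytes in each "disk" page   */
static int buffer[BUFPAGES];    /* page numbers currently buffered  */

/* RANDOMLY pull one page in from disk (simple FIFO eviction). */
static int read_random_page(void)
{
    int i, pg = rand() % NPAGES;
    for (i = 0; i < BUFPAGES - 1; i++)
        buffer[i] = buffer[i + 1];
    buffer[BUFPAGES - 1] = pg;
    return pg;
}

/* Returns the number of random pulls needed, or -1 on give-up. */
static long store_record(int size)
{
    long pulls = 0;
    int i, pg;

    /* 1. the most current page in the buffer */
    pg = buffer[BUFPAGES - 1];
    if (freesp[pg] >= size) { freesp[pg] -= size; return 0; }

    /* 2. the other pages in the buffer */
    for (i = 0; i < BUFPAGES - 1; i++) {
        pg = buffer[i];
        if (freesp[pg] >= size) { freesp[pg] -= size; return 0; }
    }

    /* 3. pages RANDOMLY pulled in from disk, iterated until room */
    while (pulls < MAX_PULLS) {
        pg = read_random_page();
        pulls++;
        if (freesp[pg] >= size) { freesp[pg] -= size; return pulls; }
    }
    return -1;                  /* database is effectively full     */
}

int main(void)
{
    long used = 0, pulls, dec_pulls = 0, dec_recs = 0;
    long capacity = (long)NPAGES * PAGE_SIZE;
    int i, next_pct = 10;

    for (i = 0; i < NPAGES; i++) freesp[i] = PAGE_SIZE;
    for (i = 0; i < BUFPAGES; i++) buffer[i] = i;

    for (;;) {
        int size = 32 + rand() % 224;   /* records of 32..255 bytes */
        pulls = store_record(size);
        if (pulls < 0) break;
        used += size;
        dec_pulls += pulls;
        dec_recs++;
        if (100 * used / capacity >= next_pct) {
            printf("%3d%% full: %.2f random pulls per insert\n",
                   next_pct, (double)dec_pulls / dec_recs);
            dec_pulls = dec_recs = 0;
            next_pct += 10;
        }
    }
    if (dec_recs > 0)
        printf("past %d%%: %.2f random pulls per insert\n",
               next_pct - 10, (double)dec_pulls / dec_recs);
    printf("gave up at %ld%% full\n", 100 * used / capacity);
    return 0;
}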
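
On the modeling question itself, a crude analysis may already explain
why 90 percent is the knee. Suppose a fraction f of the pages are too
full to take a typical new record, and the pulled pages are uniform
and independent (assumptions of mine, but that is roughly what random
placement with no free-space list tends toward). Then the number of
random pulls per insert is geometric with mean

    E[pulls] = 1 / (1 - f)

That is 2 pulls at 50 percent fill, 10 at 90, and 20 at 95: nearly
free for most of the database's life, then a sudden wall, with each
pull being a random disk read. The same 1/(1 - f) blow-up appears in
the analysis of unsuccessful search in open-addressing hash tables
(e.g. Knuth, The Art of Computer Programming, Vol. 3), so that
literature may be worth mining even though this system is not a hash
file. Is this a reasonable way to model it, or is there something
better?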