Xref: utzoo comp.periphs:3498 comp.sys.sgi:8443 comp.periphs.scsi:1955
Path: utzoo!utgpu!news-server.csri.toronto.edu!cs.utexas.edu!swrinde!zaphod.mps.ohio-state.edu!rpi!uupsi!sunic!chalmers.se!fy.chalmers.se!co
From: co@fy.chalmers.se (Christer Olsson)
Newsgroups: comp.periphs,comp.sys.sgi,comp.periphs.scsi
Subject: Re: exabyte record size limit
Message-ID: <1991Feb22.165448.11408@fy.chalmers.se>
Date: 22 Feb 91 16:54:48 GMT
References: <1991Feb8.152435.10875@helios.physics.utoronto.ca> <1991Feb14.223941.4318@b11.ingr.com>
Organization: Chalmers University of Technology, Göteborg, Sweden
Lines: 19

In article <1991Feb14.223941.4318@b11.ingr.com> mcconnel@b11.UUCP (Guy McConnell) writes:
>In article <1991Feb8.152435.10875@helios.physics.utoronto.ca> sysmark@physics.utoronto.ca (Mark Bartelt) writes:
>

We have a uVAX II/GPX with an IBIS 1.2-Gigabyte videodisc. The image-processing system backs up the disk into several hundred savesets (each saveset contains one picture and a descriptor file). Today we use a TK50 tape drive (95 MB), and a complete backup needs half a week of continuous running... So we have plans to buy an Exabyte tape drive and connect it to a SCSI host on the machine.

But can an Exabyte handle so many savesets without losing capacity? I think a complete backup generates more than 600-700 savesets. Is it possible for a slow uVAX II with VMS 4.7 BACKUP to feed an Exabyte with enough data without losing space? I think an Exabyte needs at least 100 Kbytes per second while writing a saveset? (No saveset is smaller than 400 KB; some savesets are 300 MB large.)
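
For reference, here is the rough arithmetic behind my worry, as a little C program. The 2.3-GB tape capacity and the per-saveset overhead figure are my own guesses, not numbers from Exabyte:

/* Back-of-the-envelope check. The tape capacity and per-saveset
 * overhead below are guesses, not manufacturer figures. */
#include <stdio.h>

int main(void)
{
    double disk_mb     = 1200.0; /* IBIS videodisc, 1.2 GB            */
    double tape_mb     = 2300.0; /* guessed capacity of an 8mm tape   */
    double overhead_mb = 0.1;    /* guessed loss per saveset (gaps,
                                    filemarks)                        */
    int    savesets    = 700;    /* upper estimate from above         */
    double rate_kbs    = 100.0;  /* rate I think is needed to stream  */

    double lost_mb = savesets * overhead_mb;
    double hours   = disk_mb * 1024.0 / rate_kbs / 3600.0;

    printf("capacity lost to %d savesets: %.0f MB (%.0f%% of tape)\n",
           savesets, lost_mb, 100.0 * lost_mb / tape_mb);
    printf("time to stream %.0f MB at %.0f KB/s: %.1f hours\n",
           disk_mb, rate_kbs, hours);
    return 0;
}

With those guesses the per-saveset loss is only a few percent of the tape, and a streaming backup would take about 3.5 hours instead of half a week; the real question is whether the uVAX II can keep the drive streaming at all.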