Path: utzoo!utgpu!news-server.csri.toronto.edu!cs.utexas.edu!sdd.hp.com!zaphod.mps.ohio-state.edu!pacific.mps.ohio-state.edu!linac!att!bellcore!salt!tpc!dcf
From: dcf@tpc..bellcore.com (Dave Feldmeier)
Newsgroups: comp.multimedia
Subject: Re: Data Transmission Rates
Summary: Gigabit transmission rates
Message-ID: <262@salt.bellcore.com>
Date: 26 Feb 91 04:43:55 GMT
References: <1991Feb23.135610.5253@cs.fau.edu> <64570001@otter.hpl.hp.com>
Sender: news@salt.bellcore.com
Reply-To: dcf@bellcore.com
Organization: Bellcore MRE
Lines: 63

When discussing gigabit transmission, one must define where the rate is
measured and over what period.  For example, multi-gigabit digital
transmission lines already carry telephone conversations among major
cities.  However, a file transfer of 1 gigabit that takes one second from
the user's point of view is an entirely different thing.

Sending bits down a fiber at multi-gigabit rates is not the hard part.
The hard part is controlling the gigabit transmission.  The reason that a
telephone network can use gigabit lines is that few devices in the network
must operate at high speed, because the system is circuit switched.  The
only control that occurs is when a telephone call is set up or shut down.
Otherwise, every nth bit in the wire is yours (a toy sketch of this idea
appears below).

Packet switching is much more difficult because each packet of data must
be interpreted in the time that it takes to transmit one packet.  So why
do we use packet switching?  Packet switching is a more efficient method
of resource allocation than circuit switching for traffic sources that
are bursty, such as computers.  Thus it is cheaper.

What do we have to do besides send the bits?  The efficiency of packet
switching is achieved with dynamic resource allocation.  This means that
we now need congestion control and flow control to ensure that the
transmission rate of our data does not exceed the processing rate of the
network and the receiver, respectively.  This is a difficult control
problem, particularly on fast and large networks.

Another problem is retransmission.  If our data must be perfectly
replicated at the receiver, then we must retransmit any data that was
corrupted or lost in transit.  Deciding whether a packet has been lost or
is merely delayed is a difficult problem in a packet-switching network.
Once again, the network dynamics make it difficult.  If we are slow to
respond to loss, the transfer time increases because we let the network
go idle even though we could be retransmitting data.  If we retransmit
too quickly, then we may congest the network with redundant packets.
Determining when to retransmit is another tricky control problem (see the
timer sketch below).

Even if we get the control right, few computers exist today that could
use a gigabit network.  Most buses are slower than this (a gigabit per
second is 125 megabytes per second, before any protocol overhead), and
the buses aren't faster because the CPU doesn't need a faster bus.  As
CPU speed increases, bus speed will increase also.  The point is that few
people need a gigabit data rate now, and so the appropriate machines are
rare.  Another problem is that if protocol processing is done with a
conventional CPU, the CPU may be unable to process data at a gigabit.

In summary, packet-switching networks are cheaper because they are more
efficient, and they are more efficient because they allocate resources
dynamically.  The network dynamics make high-speed transfer difficult
because our control systems are not good enough.  If one were willing to
spend the money to use a circuit-switched network instead, then these
control problems would be simplified.
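To make the circuit-switching point concrete, here is a toy model of
time-division multiplexing in C.  The channel count, frame length, and
data are made-up values for illustration only; the point is that once the
circuit (the slot assignment) is set up, demultiplexing is plain indexing
with no per-packet decisions.

/* Toy model of time-division multiplexing: with N_CHANNELS interleaved
 * byte slots, channel c owns every N_CHANNELS-th byte of the aggregate
 * stream.  No per-packet interpretation is needed once the slot
 * assignment has been made. */
#include <stdio.h>

#define N_CHANNELS  4        /* made-up number of circuits on the line */
#define FRAME_BYTES 16       /* made-up length of the aggregate stream */

int main(void)
{
    unsigned char line[FRAME_BYTES];
    int i, c;

    /* Fill the aggregate stream: slot i carries a byte of channel
     * i % N_CHANNELS. */
    for (i = 0; i < FRAME_BYTES; i++)
        line[i] = (unsigned char)('A' + i % N_CHANNELS);

    /* Pulling out one circuit is just indexing -- no headers to parse. */
    for (c = 0; c < N_CHANNELS; c++) {
        printf("channel %d:", c);
        for (i = c; i < FRAME_BYTES; i += N_CHANNELS)
            printf(" %c", line[i]);
        printf("\n");
    }
    return 0;
}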
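And to make the retransmission trade-off concrete, here is a minimal
sketch of the usual compromise: keep a smoothed round-trip-time estimate
and retransmit only when an acknowledgment is overdue by that estimate
plus a safety margin.  The gains and the sample values below are
illustrative assumptions (in the style of TCP's adaptive timer), not a
description of any particular protocol's parameters.

/* Sketch of an adaptive retransmission timer: a smoothed round-trip-time
 * estimate (exponentially weighted moving average) plus a deviation
 * term.  Retransmitting before rto_ms expires risks duplicate packets;
 * waiting much longer than rto_ms leaves the network idle after a loss. */
#include <stdio.h>

struct rtt_estimator {
    double srtt_ms;    /* smoothed round-trip time */
    double rttvar_ms;  /* smoothed mean deviation of the RTT */
    double rto_ms;     /* current retransmission timeout */
};

/* Fold one new RTT measurement into the estimate.  The gains (1/8, 1/4)
 * and the factor of 4 on the deviation are commonly used values; they
 * are assumptions here, not requirements. */
static void rtt_update(struct rtt_estimator *e, double measured_ms)
{
    double err = measured_ms - e->srtt_ms;

    e->srtt_ms   += 0.125 * err;
    e->rttvar_ms += 0.25 * ((err < 0 ? -err : err) - e->rttvar_ms);
    e->rto_ms     = e->srtt_ms + 4.0 * e->rttvar_ms;
}

int main(void)
{
    /* Hypothetical RTT samples (ms), with one delayed sample standing in
     * for transient congestion. */
    double samples[] = { 30.0, 32.0, 31.0, 90.0, 33.0, 31.0 };
    struct rtt_estimator e = { 30.0, 5.0, 50.0 };   /* rough initial guesses */
    int i;

    for (i = 0; i < (int)(sizeof samples / sizeof samples[0]); i++) {
        rtt_update(&e, samples[i]);
        printf("sample %5.1f ms -> srtt %5.1f ms, rto %6.1f ms\n",
               samples[i], e.srtt_ms, e.rto_ms);
    }
    return 0;
}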
Even if you have a gigabit network, you will probably need a
supercomputer to accept and process data at that rate.  The other
possibility is to design a system with custom VLSI protocol processors
and a high-speed bus.  This approach is not cheap either.  As has been
noted previously, running an application at a gigabit today is possible,
but expensive.

			-Dave Feldmeier

P.S.  I am exploring multimedia transport protocol design (layer 4) for
the AURORA gigabit network.