Xref: utzoo sci.electronics:18259 comp.dsp:1353
Path: utzoo!utgpu!news-server.csri.toronto.edu!cs.utexas.edu!swrinde!mips!pacbell.com!tandem!netcom!mcmahan
From: mcmahan@netcom.COM (Dave Mc Mahan)
Newsgroups: sci.electronics,comp.dsp
Subject: Re: A question about the Nyquist theorm
Message-ID: <27194@netcom.COM>
Date: 7 Mar 91 04:25:29 GMT
References: <11515@pasteur.Berkeley.EDU> <1180@aviary.Stars.Reston.Unisys.COM>
Organization: Dave McMahan @ NetCom Services
Lines: 80

In a previous article, gaby@Stars.Reston.Unisys.COM (Jim Gaby - UNISYS) writes:
>>jbuck@galileo.berkeley.edu (Joe Buck) writes:
>>
>>Example of CD salespeak: pushing oversampling as an advanced technical
>>feature.  Oversampling is simply inserting zeros between the digital
>>samples and thus increasing the sampling rate.  It's used because then you
>>can use cheaper, less complex analog filters; it reduces the system cost.
>
>I think the oversampling is not time interpolation (which by Nyquist
>does not add any more information to the original signal), but more
>error correction oversampling.  I.e. the same bit is sampled multiple
>times to determine its value.

I think (once again, I too am no CD expert) that this is incorrect.  A
little thought on the subject should show why.  Data on a CD follows an
industry standard format.  It has to, or nobody could use the same CD
player for all the variety of CDs that have been released.  This alone
indicates that you can't "sample the same bit multiple times to determine
its value".  I suppose you could spin the CD twice as fast, read the same
track twice in the same amount of time, and then do some kind of voting to
decide which bit is correct, but I doubt that is the case either.  CD
drive motors are all standard to keep costs down.  It is much more
effective to use the built-in error correction coding on a CD (a
cross-interleaved Reed-Solomon code) to correct the random bit flips that
occur.  That scheme is powerful enough to handle all but the worst
scratches.

It is my opinion that 'over-sampling' means exactly that: creating more
samples than were originally read off the disk.  How can they do that, you
ask?  It's quite simple.  They just stuff in 3 zeros after every value
read off the disk.  Why do they do that, you ask?  Again, the answer is
simple.  Doing this raises the effective sample rate seen by the FIR
digital filters inside the CD player.  They then use a sin(x)/x shaped
filter (the sinc function) to 'smooth' the data at the higher sample rate.
This effectively increases the sample rate into the DAC and lets you push
the corner of your analog low-pass filter farther out, so it distorts the
music less.  You STILL need the final analog low-pass filtering, but now
you don't need such a critical filter to get the same performance.

I have used exactly this technique for ECG waveform restoration, and it
works amazingly well.  You can take very blocky, crummy-looking data that
has been properly sampled (it meets the Nyquist criterion) and turn it
into a much smoother, better-looking waveform.  The technique makes the
steps between samples smaller and restores the peaks of the original
waveform.  This matters when the original samples didn't happen to land
exactly on a peak, which is almost always the case.  A side benefit is
that you get automatic scaling of the data to take full advantage of the
range of your D-to-A converter.
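To make the zero-stuffing idea concrete, here is a small C sketch of 4x
interpolation.  The tap count, window, and test data are just made-up
values for illustration, not anything out of a real CD player or my ECG
box: stuff three zeros after each input sample, then run the stream
through a windowed sin(x)/x FIR low-pass filter cut off at the original
Nyquist frequency.

    /*
     * 4x oversampling by zero-stuffing plus FIR low-pass filtering.
     * Toy sketch only: tap count, window, and test data are illustrative
     * guesses, not any particular CD player's design.
     */
    #include <stdio.h>
    #include <math.h>

    #define PI      3.14159265358979323846
    #define L       4               /* oversampling factor                 */
    #define NTAPS   33              /* FIR length (odd, so it is symmetric) */
    #define NSAMP   8               /* length of the little test signal    */

    /* Windowed-sinc low-pass taps with cutoff at the ORIGINAL Nyquist
     * frequency (pi/L at the new rate).  Because h[k] = sinc(k/L), the
     * original samples pass through at their original amplitude.        */
    static void make_taps(double h[NTAPS])
    {
        int mid = (NTAPS - 1) / 2;
        for (int n = 0; n < NTAPS; n++) {
            int    k    = n - mid;
            double x    = PI * k / (double)L;
            double sinc = (k == 0) ? 1.0 : sin(x) / x;
            double hamm = 0.54 - 0.46 * cos(2.0 * PI * n / (NTAPS - 1));
            h[n] = sinc * hamm;
        }
    }

    /* Conceptually: stuff L-1 zeros after every input sample, then convolve
     * with h[].  The inner test skips the stuffed zeros, since multiplying
     * by them is pointless.  Output is delayed by (NTAPS-1)/2 samples.    */
    static void interpolate(const double *x, int n, double *y,
                            const double h[NTAPS])
    {
        for (int i = 0; i < n * L; i++) {
            double acc = 0.0;
            for (int t = 0; t < NTAPS; t++) {
                int j = i - t;              /* index into zero-stuffed stream  */
                if (j >= 0 && j % L == 0)   /* non-zero only at multiples of L */
                    acc += h[t] * x[j / L];
            }
            y[i] = acc;
        }
    }

    int main(void)
    {
        double h[NTAPS];
        double x[NSAMP] = { 0, 38, 71, 92, 100, 92, 71, 38 }; /* coarse half sine */
        double y[NSAMP * L];

        make_taps(h);
        interpolate(x, NSAMP, y, h);

        for (int i = 0; i < NSAMP * L; i++)
            printf("%7.1f%s", y[i], ((i + 1) % L == 0) ? "\n" : "");
        return 0;
    }

A real player would use a polyphase arrangement so the multiplies by the
stuffed zeros are never actually performed; the straight convolution above
is just the easiest way to show what is going on.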
This automatic scaling is probably not a big deal for a CD player, since
the original samples are meant to be played back exactly as they were
recorded, but for my ECG reconstruction it works great.  Samples come to
me as 7-bit digital values, and with no extra overhead (other than scaling
the FIR filter weights properly when I first design the filter) I get
samples out that use the full 10-bit range of the playback DAC I have
selected.  The oversampling interpolates between the original samples to
make the full range useful.  The original samples are scaled as well and
come out of the FIR filter properly scaled along with all the interpolated
data.  The chunkiness of the data steps is much reduced, and the whole
thing looks better than it did.

What is the cost of this technique?  This type of over-sampling requires
you to do multiplications and additions at a fairly high rate, and that is
the limiting factor.  With some careful selection of FIR tap weights and
creative hardware design, you can turn the required multiplications into a
few ROM table lookups that can be implemented quite cheaply (a rough
sketch of the idea is at the end of this post).  Adders are still needed,
but those are relatively simple to build (compared to a 'true'
multiplier).  You shift data in at one clock rate and shift it out at four
times that rate for a 4x oversampling rate.  The last step is the final
D-to-A conversion and analog low-pass filtering with a much less
complicated filter.

So what do you think?  Is that how it is done?  Does anybody out there
REALLY know, and can they shed some light on this question?

>- Jim Gaby
>
>   gaby@rtc.reston.unisys.com

   -dave

-- 
Dave McMahan                                 mcmahan@netcom.com
                              {apple,amdahl,claris}!netcom!mcmahan
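Here is the rough sketch of the ROM-lookup idea promised above.  It is a
toy illustration, not my actual ECG hardware: the 7-bit unsigned inputs,
10-bit DAC, 8-tap example filter, and Q8 fixed point are all assumptions
picked just to show the shape of the trick.  Each tap gets a small
precomputed table of weight-times-sample products, with the 7-bit-to-
10-bit gain folded into the weights, so producing an output sample takes
only lookups and adds, no run-time multiplier.

    /*
     * Toy sketch of replacing FIR multiplies with ROM table lookups.
     * Assumptions: 7-bit unsigned input samples, 10-bit DAC output,
     * a short 8-tap example filter, Q8 fixed-point table entries.
     */
    #include <stdio.h>
    #include <math.h>

    #define NTAPS      8
    #define IN_LEVELS  128             /* 7-bit input: 0..127        */
    #define FRAC_BITS  8               /* Q8 fixed-point products    */

    /* One small "ROM" per tap: rom[t][v] = h[t] * v, prescaled so a
     * full-scale 7-bit input lands near full scale on a 10-bit DAC. */
    static long rom[NTAPS][IN_LEVELS];

    static void build_roms(const double h[NTAPS])
    {
        double gain = 1023.0 / 127.0;  /* fold 7-bit -> 10-bit scaling in */
        for (int t = 0; t < NTAPS; t++)
            for (int v = 0; v < IN_LEVELS; v++)
                rom[t][v] = lround(h[t] * gain * v * (1 << FRAC_BITS));
    }

    /* Filter one output sample from the last NTAPS inputs (newest first):
     * no multiplier needed at run time, just NTAPS lookups and adds.    */
    static int fir_lookup(const unsigned char history[NTAPS])
    {
        long acc = 0;
        for (int t = 0; t < NTAPS; t++)
            acc += rom[t][history[t]];
        long out = acc >> FRAC_BITS;
        if (out < 0)    out = 0;       /* clamp to the 10-bit DAC range */
        if (out > 1023) out = 1023;
        return (int)out;
    }

    int main(void)
    {
        /* A crude low-pass: equal weights summing to 1 (moving average). */
        double h[NTAPS];
        for (int t = 0; t < NTAPS; t++)
            h[t] = 1.0 / NTAPS;
        build_roms(h);

        unsigned char history[NTAPS] = { 100, 100, 96, 90, 80, 70, 60, 50 };
        printf("10-bit output sample: %d\n", fir_lookup(history));
        return 0;
    }

Real hardware can go further (e.g. distributed arithmetic, where the
lookups are addressed by bit-slices of several samples at once), but the
flavor is the same: trade multipliers for cheap ROMs and adders.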