Path: utzoo!utgpu!jarvis.csri.toronto.edu!mailrus!wuarchive!wugate!uunet!zephyr.ens.tek.com!orca!ka7axd.WV.TEK.COM!mhorne
From: mhorne@ka7axd.WV.TEK.COM (Michael T. Horne)
Newsgroups: comp.dsp
Subject: Re: FFTs of Low Frequency Signals (really: decimation)
Message-ID: <5340@orca.WV.TEK.COM>
Date: 14 Nov 89 19:49:56 GMT
References: <5624@videovax.tv.tek.com> <5619@videovax.tv.tek.com> <10208@cadnetix.COM> <2586@irit.oakhill.UUCP> <5305@orca.WV.TEK.COM>
Sender: nobody@orca.WV.TEK.COM
Reply-To: mhorne%ka7axd.wv.tek.com@relay.cs.net
Followup-To: comp.dsp
Organization: Visual Systems Group, Tektronix, Inc., Wilsonville, OR
Lines: 45

In a recent article by Bart Massey:
> >
> > You *can* get better resolution by doing 1) zero padding of the data, or
> > 2) sampling longer at F and then decimating to reduce the data set...
>
> No! (2) will increase your resolution, (1) will not!
>
> As it turns out, zero-padding the DFT input is equivalent to applying
> sin(x)/x interpolation to the DFT output.  Zero-padding thus can't be
> adding resolution, since you can obtain the same effect by applying a
> post-transformation to the output of the original DFT on the original data.
> What the zero-padding *does* do is increase the *accuracy* of the
> *representation* of the data!

I stand corrected.  Some quick math shows that the apparent resolution gain
from zero-padding is really just a finer spacing between samples in the
frequency domain, i.e. interpolation, as Bart has described above.  The
zero-padding simply decreases the frequency-domain sampling increment,
providing more frequency-domain samples, yet no new information has been
added.

It would appear that for any finite time sequence that captures less than a
full cycle of the frequency components of interest, discerning those
components with the FFT is difficult, apparently because these signals lie
so close to w=0 that the negative- and positive-frequency components tend
to `interact'.

> But, if you just want to better use the resolution you have, input
> zero-padding, a cheap form of sin(x)/x interpolation for DFT output, is a
> useful technique.

This is interesting.  Considering that the complexity of the FFT is
O(N log N), wouldn't some sort of direct interpolation of the DFT output be
faster?  At first glance, it would appear that sample-rate conversion
techniques can be applied directly to frequency-domain data, much as is done
for time-domain data.  It would seem that if the time-domain data set has met
the Nyquist criterion, then so has the frequency-domain data (that is, the
real and imaginary parts, not the magnitude sqrt(r^2 + i^2)), and for large
interpolation factors L, you might get a big win in computation rate compared
to a zero-padded DFT.  Has anyone looked into this, or is there something
fundamental that I am overlooking?

Mike Horne
mhorne%ka7axd.wv.tek.com@relay.cs.net
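
P.S.  For anyone who wants to see Bart's point numerically, here is a minimal
sketch (assuming Python with NumPy; the signal, lengths, and variable names
are just illustrative, not anything from the articles above).  It computes the
zero-padded FFT of a short record and, separately, interpolates the original
N DFT samples onto the same dense frequency grid with the periodic-sinc
(Dirichlet) kernel, which is the exact finite-length form of the sin(x)/x
interpolation described above.  The two results agree to machine precision:
zero-padding adds frequency samples but no new information.

import numpy as np

# An N-point record containing a tone that does not fall on a DFT bin,
# and its ordinary N-point DFT.  (Example signal and sizes only.)
N, M = 16, 128                          # original length, zero-padded length
n = np.arange(N)
x = np.cos(2 * np.pi * 3.3 * n / N)
X = np.fft.fft(x)                       # the N original frequency samples

# (1) Zero-pad the input to M points and transform.
X_padded = np.fft.fft(x, M)

# (2) Interpolate the N original DFT samples onto the same M-point grid
#     with the periodic-sinc (Dirichlet) kernel.  No time-domain data
#     beyond the original N points is used.
w = 2 * np.pi * np.arange(M) / M        # dense frequency grid (rad/sample)
X_interp = np.zeros(M, dtype=complex)
for k in range(N):
    theta = w - 2 * np.pi * k / N
    with np.errstate(divide='ignore', invalid='ignore'):
        D = (np.exp(-1j * theta * (N - 1) / 2)
             * np.sin(N * theta / 2) / np.sin(theta / 2))
    D = np.where(np.isclose(np.sin(theta / 2), 0.0), N, D)  # theta -> 0 limit
    X_interp += X[k] * D / N

print(np.allclose(X_padded, X_interp))  # True: the two methods agree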