Path: utzoo!utgpu!jarvis.csri.toronto.edu!mailrus!wuarchive!cs.utexas.edu!uunet!zephyr.ens.tek.com!tekcrl!tekfdi!videovax!bart
From: bart@videovax.tv.tek.com (Bart Massey)
Newsgroups: comp.dsp
Subject: Re: FFTs of Low Frequency Signals (really: decimation)
Message-ID: <5622@videovax.tv.tek.com>
Date: 11 Nov 89 23:07:30 GMT
References: <5619@videovax.tv.tek.com> <10208@cadnetix.COM> <2586@irit.oakhill.UUCP> <5305@orca.WV.TEK.COM>
Reply-To: bart@videovax.tv.tek.com (Bart Massey)
Organization: Tektronix TV Measurement Systems, Beaverton OR
Lines: 26

In article <5305@orca.WV.TEK.COM> mhorne@ka7axd.wv.tek.com (Mike Horne) writes:
> I agree in part with Bart's comments, however decimating the data will not
> provide you with any better frequency domain resolution directly.

He's right, you know.

The original question was about highly oversampled VLF data, and how to
cope with it without doing a million-point FFT.  I was responding to a
posting which (in my possibly flawed reading) claimed that the best way
to get around this was to instead use an ARMA model on the data, so that
you could pick the signal out using a short record length and save
computation.  My claim is that, *given that one is allowed to accumulate
a long enough input record*, accumulating more data and decimating is a
better way to reduce the computation for this measurement, regardless of
what technique is finally used to actually make the measurement.

Or, as I said in my original posting, ARMA estimators are certainly
better than DFTs at picking a single sinusoid out of white noise,
regardless of the ratio of input frequency to sample rate, and
regardless of record length.  The chief disadvantages of using this
class of techniques over the FFT are that (1) they may be more
computationally expensive than an FFT, and (2) they make stronger
assumptions about the form of their input than the FFT, and thus tend
to give wrong or misleading answers in cases of unexpected input.

					Bart Massey
					..tektronix!videovax.tv.tek.com!bart
					..tektronix!reed.bitnet!bart
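
As a concrete (if anachronistic) sketch of the decimate-then-transform
idea, something like the following Python/SciPy fragment captures it.
The sample rate, tone frequency, and total decimation factor of 1000
below are made-up numbers for illustration, not values from the
original question:

    # Sketch only: decimate an oversampled record, then do a short FFT.
    # All rates and factors here are assumptions for illustration.
    import numpy as np
    from scipy.signal import decimate

    fs = 1_000_000                # assumed original sample rate, Hz
    f_sig = 50.0                  # assumed VLF tone of interest, Hz
    t = np.arange(2**20) / fs     # ~1M-sample record (about 1 second)
    x = np.sin(2*np.pi*f_sig*t) + 0.1*np.random.randn(t.size)

    # Decimate in stages (large single factors strain the anti-alias
    # filter); total factor 1000 brings the rate down to 1 kHz.
    y = x
    for q in (10, 10, 10):
        y = decimate(y, q)        # low-pass filter, keep every q-th sample

    # Now a ~1k-point FFT resolves the tone instead of a ~1M-point one.
    fs_dec = fs / 1000
    spectrum = np.abs(np.fft.rfft(y * np.hanning(y.size)))
    freqs = np.fft.rfftfreq(y.size, d=1.0/fs_dec)
    print("peak near", freqs[spectrum.argmax()], "Hz")

The point isn't the FFT per se: whatever estimator you run afterwards
(FFT, ARMA, whatever) sees a thousand-fold shorter record, which is
where the computational saving comes from.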