Path: utzoo!utgpu!jarvis.csri.toronto.edu!cs.utexas.edu!tut.cis.ohio-state.edu!snorkelwacker!bloom-beacon!atrp.mit.edu!ashok
From: ashok@atrp.mit.edu (Ashok C. Popat)
Newsgroups: comp.dsp
Subject: Re: FFT vs ARMA (was FFTs of Low Frequency Signals (really: decimation))
Message-ID: <1989Nov28.185555.4259@athena.mit.edu>
Date: 28 Nov 89 18:55:55 GMT
Sender: root@athena.mit.edu (Wizard A. Root)
Organization: Massachvsetts Institvte of Technology
Lines: 54

In article <99691@ti-csl.csc.ti.com> Stephen Oh writes:
>In article <1989Nov22.170850.21777@athena.mit.edu> ashok@atrp.mit.edu (Ashok C. Popat) writes:
>>Suppose I gave you some data (say 10^6 samples) and told you that the
>>source was ergodic, but nothing else.  How would you estimate the
>>spectrum?  If you used an ARMA model, how would you decide what the
>>order of the model should be?  Wouldn't you have much more confidence
>>in an averaged-periodogram (i.e., DFT-based) estimate?  I would.
>
>Your assumption is too strong.  You have 10^6 samples with ergodicity?
>What if you have 10^6 samples with only wide-sense stationarity?
>What if you have 10^6 samples that are only partially w.s.s.?

I'm not exactly sure what you mean by "too strong" --- it's a "given"
in the problem.  Are you saying that in many applications, waveforms
cannot be usefully modeled as ergodic?  If so, I'll buy that.

I guess I shouldn't have used "ergodic," since that lumps too many
assumptions together.  How about agreeing that any piecewise
stationary process we discuss is ergodic over each stationary piece
(if I hadn't brought up ergodicity in the first place, this would not
have been worth mentioning, since we'd have to assume piecewise
ergodicity to infer anything at all).  That leaves the issue of
stationarity.  Now from what I remember of stochastic processes,
wide-sense and strict-sense stationarity amount to the same thing if
you're dealing exclusively with second-order statistics (e.g., the
power spectrum).
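For concreteness, the averaged-periodogram estimate I had in mind can
be sketched in a few lines (a modern illustration, not period code:
it assumes NumPy and uses Bartlett's method of averaging periodograms
over non-overlapping segments; the segment length of 1024 is an
arbitrary choice):

```python
import numpy as np

def averaged_periodogram(x, seg_len=1024):
    """Estimate the power spectrum by averaging the periodograms of
    non-overlapping segments of x (Bartlett's method)."""
    n_seg = len(x) // seg_len
    psd = np.zeros(seg_len)
    for k in range(n_seg):
        seg = x[k * seg_len:(k + 1) * seg_len]
        # Periodogram of one segment: |DFT|^2 / N
        psd += np.abs(np.fft.fft(seg)) ** 2 / seg_len
    return psd / n_seg

# Sanity check on 10^6 samples of unit-variance white noise:
# the estimate should hover around the noise variance, 1.0.
rng = np.random.default_rng(0)
x = rng.standard_normal(10**6)
psd = averaged_periodogram(x)
print(psd.mean())  # close to 1.0
```

Averaging ~10^3 segments trades frequency resolution for variance
reduction, which is exactly the bargain you can afford when the source
is stationary over all 10^6 samples.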
I guess then I could have described my hypothetical source as being
WSS over 10^6 samples.  A poor model for speech and images, but
realistic in other applications.

>BTW, I said that parametric approaches are better than FFTs in terms of
>resolution.  If we have only 100 samples and the separation of two frequencies
>is less than 0.01, there is no way to resolve the two frequencies using any
>FFT-based method.  But AR or ARMA can. :-) :-)

Good point.  I thought about this and here's what I came up with.  The
duration-bandwidth uncertainty principle says (for continuous-time
waveforms) that

	delta_t * delta_f >= 1/pi

where delta_t is the time window size and delta_f is the frequency
resolution (see William Siebert, _Circuits, Signals, and Systems_).
I'm sure a similar result applies in the discrete-time case, but I
don't have a reference offhand --- I'll assume it has the same form.
Now if you're starting with only 100 samples, the uncertainty
principle says that there's simply not enough information in the data
to get a high-resolution spectrum.  If you do manage to get a
high-resolution spectrum, the necessary added information must have
come from the model, not the data.  What do you think?

>Also, there are several methods to determine the order of the model such as
>AIC, MDL, CAT, etc.

Any recommended reading on these techniques?

Ashok Chhabedia Popat   MIT Rm 36-665  (617) 253-7302
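The resolution limit you describe is easy to demonstrate numerically
(an illustrative sketch, assuming NumPy; frequencies are normalized to
the sampling rate, and the tone spacing of 0.005 and padding length of
8192 are my own choices): with 100 samples, two tones spaced closer
than 1/100 collapse into a single spectral peak no matter how finely
you interpolate the DFT by zero-padding.

```python
import numpy as np

n = 100
t = np.arange(n)
f1, f2 = 0.200, 0.205          # spacing 0.005 < 1/n = 0.01
x = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)

# Heavily zero-padded DFT: this interpolates the spectrum but adds no
# resolution, since the 100-sample window fixes the mainlobe width.
spec = np.abs(np.fft.rfft(x, 8192))

# Count local maxima above half the peak level near the tones.
band = spec[int(0.15 * 8192):int(0.25 * 8192)]
thr = 0.5 * spec.max()
interior = band[1:-1]
peaks = int(np.sum((interior > band[:-2]) &
                   (interior > band[2:]) &
                   (interior > thr)))
print(peaks)  # 1: the two tones appear as one merged peak
```

Any extra resolution an AR/ARMA fit delivers on the same 100 samples
is, per the argument above, information contributed by the model.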
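On order selection: the AIC criterion you mention can at least be
stated concretely (again a modern sketch, assuming NumPy; the
least-squares AR fit and the synthetic AR(2) test signal are my own
illustrative choices, not a prescription).  One fits AR(p) for a range
of p and keeps the order minimizing AIC = N*ln(sigma^2) + 2p, where
sigma^2 is the residual variance:

```python
import numpy as np

def ar_order_by_aic(x, max_order=20):
    """Fit AR(p) models by least squares for p = 1..max_order and
    return the order that minimizes AIC = N*ln(sigma^2) + 2p."""
    n = len(x)
    aics = []
    for p in range(1, max_order + 1):
        # Predict x[t] from the p previous samples, t = p..n-1.
        X = np.column_stack([x[p - k - 1:n - k - 1] for k in range(p)])
        y = x[p:]
        a, *_ = np.linalg.lstsq(X, y, rcond=None)
        sigma2 = np.mean((y - X @ a) ** 2)
        aics.append((n - p) * np.log(sigma2) + 2 * p)
    return 1 + int(np.argmin(aics))

# Synthetic AR(2) process: x[t] = 1.5 x[t-1] - 0.8 x[t-2] + e[t].
rng = np.random.default_rng(1)
e = rng.standard_normal(5000)
x = np.zeros(5000)
for t in range(2, 5000):
    x[t] = 1.5 * x[t - 1] - 0.8 * x[t - 2] + e[t]
print(ar_order_by_aic(x))  # usually 2 (AIC can occasionally overfit a little)
```

MDL replaces the 2p penalty with p*ln(N), which penalizes extra
parameters more heavily as N grows; that is the whole difference in
this sketch.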