Path: utzoo!utgpu!jarvis.csri.toronto.edu!mailrus!cs.utexas.edu!tut.cis.ohio-state.edu!quanta.eng.ohio-state.edu!kaa.eng.ohio-state.edu!rob
From: rob@kaa.eng.ohio-state.edu (Rob Carriere)
Newsgroups: comp.dsp
Subject: Re: FFT vs ARMA
Message-ID: <3622@quanta.eng.ohio-state.edu>
Date: 29 Nov 89 02:45:33 GMT
References: <5619@videovax.tv.tek.com> <10208@cadnetix.COM> <1989Nov22.170850.21777@athena.mit.edu> <3589@quanta.eng.ohio-state.edu> <1989Nov26.194904.1376@athena.mit.edu>
Sender: news@quanta.eng.ohio-state.edu
Reply-To: rob@kaa.eng.ohio-state.edu (Rob Carriere)
Organization: Ohio State Univ, College of Engineering
Lines: 78

In article <1989Nov26.194904.1376@athena.mit.edu> ashok@atrp.mit.edu (Ashok C. Popat) writes:
>In article <3589@quanta.eng.ohio-state.edu> rob@kaa.eng.ohio-state.edu (Rob Carriere) writes:
>>In article <1989Nov22.170850.21777@athena.mit.edu>, ashok@atrp.mit.edu (Ashok
>>C. Popat) writes:
>>> Unless you have a formal model that's *useful* for your application,
>>> parametric estimation is worthless.
>>> Suppose I gave you some data (say 10^6 samples) and told you that the
>>> source was ergodic, but nothing else.
>>Not necessarily.  The DFT is quite good at some things, not at others.  If you
>>give me recorded data that I can play with for a while, I would probably run
>>FFT, several different periodograms, ARMA or Prony models of several orders
>>and whatever else the data made me feel like.  After doing all that, I'd feel
>>reasonably confident I could tell you something about your data.
>
> Sounds reasonable on the surface --- try a few well known techniques,
> then sort of mentally average the results to conclude something about
> the data.  The problem is that there is absolutely no justification
> for trying some of the techniques.  What you want is a consistent,
> unbiased estimate of the spectrum of an (unknown) ergodic random
> process, given a bunch of samples.  Averaging periodograms (e.g.,
> Welch's method) gives you a consistent, asymptotically unbiased
> estimate.  What a parametric technique gives you depends strongly on
> the assumed model (which isn't given as part of the problem).

Well, that's good.  It seems the surface agrees with the inside here :-)
What I want is something that gives me a good idea of what is going on.
Depending on the circumstances, a consistent, unbiased estimate may or may
not cut it as a good idea.  The standard counterexample is to try ML
estimation on data with two closely spaced spectral peaks.  It is not at
all hard to set the stage so that ML miserably fails to separate the peaks.
If my interest is primarily in the number of spectral peaks present, as it
is in some applications, then it is going to be small consolation indeed
to know that at least the variance has been minimized.

If you are saying that we should know more about what parametric methods do
when the model doesn't fit reality, I entirely agree.  There is a body of
knowledge, but it is entirely empirical and ad hoc.

>>If averaged periodograms showed different behavior in different segments of
>>the data, that means you also want to look at parametric models over subsets
>>of the data.
>
> Nope, ergodicity implies stationarity.  You'd have to attribute the
> behavior to chance.

More probably, I'd attribute it to an unwarranted assumption of ergodicity.
I don't know how these things are done elsewhere, but I've seen too many
cases where ergodicity or even stationarity was assumed just because.  If I
saw clear trends between segments of the data, I'd be _very_ unlikely to
attribute them to chance.
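
To put the two-peaks example in concrete terms, here is a rough sketch in
present-day Python (NumPy/SciPy) of an averaged-periodogram estimate next to
a Yule-Walker AR estimate on the same short record.  The record length, peak
spacing, noise level, AR order, and segment length are all arbitrary
illustrative choices, not anything taken from this thread.

import numpy as np
from scipy.linalg import toeplitz
from scipy.signal import welch

n = 256                                    # deliberately short record
t = np.arange(n)
rng = np.random.default_rng(0)
x = (np.sin(2*np.pi*0.200*t) + np.sin(2*np.pi*0.215*t)
     + 0.5*rng.standard_normal(n))         # two peaks 0.015 cycles/sample apart

# Non-parametric: windowed, averaged periodograms (Welch's method).
# A 64-sample segment limits resolution to roughly 1/64 ~ 0.016, so the
# two peaks merge no matter how many segments are averaged.
f_w, P_w = welch(x, nperseg=64)

# Parametric: AR(p) fitted with the Yule-Walker equations.
p = 12                                            # assumed model order
r = np.correlate(x, x, mode='full')[n-1:] / n     # biased autocorrelation estimate
a = np.linalg.solve(toeplitz(r[:p]), -r[1:p+1])   # AR coefficients a_1 .. a_p
sigma2 = r[0] + a @ r[1:p+1]                      # driving-noise variance
f_ar = np.linspace(0.0, 0.5, 512)
A = 1 + sum(a[k] * np.exp(-2j*np.pi*f_ar*(k+1)) for k in range(p))
P_ar = sigma2 / np.abs(A)**2                      # AR spectral estimate

# Whether P_ar shows one peak or two depends on p and the noise level;
# P_w with nperseg=64 never separates them.  That is the trade-off.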
>>In short, if I knew that little no ONE technique would make me happy.  And
>
> Any consistent, unbiased, and efficient estimate should make you
> happy.  An estimate based on unfounded assumptions should not.

If minimal variance is what I'm after, yes.  However, all these things tend
to have the word "asymptotically" before all the good stuff.  All too often
that means "whenever you have about 10 times more data."  Consider also that
since the speed of convergence typically depends on the (unknown)
characteristics of the data, there are "unfounded assumptions" no matter
where you turn.

>>finally, the fact that the DFT is non-parametric does not mean that you aren't
>>making assumptions about the data (in fact, you're assuming periodicity --
>>something that doesn't always make sense either)
>
> You are making assumptions, but periodicity isn't one of them.
> Remember, DFT-based spectral estimation *doesn't* mean simply
> computing the DFT of the data.  In fact, it is well known that a

Yes.  I goofed.  Apologies for spreading disinformation.

SR
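
A rough sketch of the distinction in that last quoted point -- DFT-based
spectral estimation being more than |DFT|^2 of the whole record -- again in
present-day Python; the white-noise input and the record and segment lengths
are arbitrary choices made only for illustration.

import numpy as np
from scipy.signal import periodogram, welch

rng = np.random.default_rng(1)
x = rng.standard_normal(2**16)       # white noise; the true spectrum is flat

f1, P_raw = periodogram(x)           # "just the DFT": one |DFT|^2/N of the whole record
f2, P_avg = welch(x, nperseg=1024)   # 127 windowed, overlapped segments, averaged

# The raw periodogram's spread stays as large as its mean no matter how
# long the record gets; the averaged estimate's spread shrinks roughly as
# 1/sqrt(number of segments).  Same DFT underneath, very different estimators.
print(P_raw.std() / P_raw.mean(), P_avg.std() / P_avg.mean())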