Path: utzoo!attcan!uunet!microsoft!brianw
From: brianw@microsoft.UUCP (Brian Willoughby)
Newsgroups: comp.dsp
Subject: Re: Adjust-Speed CD player??
Summary: Continuous FFTs?
Message-ID: <7813@microsoft.UUCP>
Date: 23 Sep 89 02:48:17 GMT
References: <6028@jpl-devvax.JPL.NASA.GOV> <89255.105143P85025@BARILVM.BITNET> <7767@microsoft.UUCP> <89264.171306P85025@BARILVM.BITNET>
Reply-To: brianw@microsoft.UUCP (Brian Willoughby)
Organization: Microsoft Corp., Redmond WA
Lines: 67

In article <89264.171306P85025@BARILVM.BITNET> P85025@BARILVM.BITNET (Doron Shikmoni) writes:
[...]
>
>Others suggested to drop samples (to change output speed). To change
>by 1%, drop 1 out of each 100. Of course, this is doable; but what
>will happen to the music? Try to draw the new curve when you drop
>30% of the samples (or double 30% of them to achieve the opposite
>effect). Is this hi-fi? Not to my opinion...

This also changes duration along with pitch, and it is the poor man's
resampling method. My first attempts at a variable-speed sample player
on my Apple II used it. If the original sample data were taken at a
much higher rate than the playback rate, then this method isn't *too*
bad. Distortion is usually audible, though - more so for non-integral
changes in sampling rate. For example, dropping *exactly* half the
samples to raise the pitch an octave causes little distortion, but a
semitone up or down sounds horrible.

>Others suggested spectrum analysis and FFT to move from time domain
>to frequency domain and vice versa. (1) Can this really be done in
>real time with today's DSP technology? I would doubt that, although
>I'm not very familiar with state of the art DSP chips, I must admit.
>and (2) as I understand it (I might be wrong here - Fourier stuff
>is not one of my stronger parts), this process should be made on
>a "quantum" at a time - it's not a continuous process.
>You will still have distortion when you connect the reconstructed
>parts in the time domain; either you will introduce new harmonics or
>you will lose information. This is in the *theoretical* view; I don't
>know about tolerance - that is, if you can make this process "good
>enough" for hi-fi music processing.
>
>Doron

You're right. The problem with FFTs is that they need a block of
points to work on. No matter how fast your 1000-point FFT is, you
still have to wait until another 1000 points are available. On that
assumption you don't get a continuously changing spectrum, only one
which is updated after every N new sample points.

I read about a technique for a sliding-window FFT, though. It was
still an N-point FFT (say 1000 points), but the transform is
recalculated as each new sample is input. This method is also much
faster for continuous data input, because only the end points figure
into the calculation: the new transform is computed as a function of
just the newest point added and the oldest point which "falls out" of
the 1000-point buffer.

The author mentioned that initializing the running data was a
problem, but for music I didn't see one. He gave two methods for
starting the conversion:

A - Execute a normal 1000-point FFT after filling the array with 1000
samples, then compute new FFTs by the sliding-window technique as
each new sample arrives.

B - Start with an array of zeroes, and treat the FFT as not a true
reflection of the input data until 1000 sliding-window FFTs have been
computed.

The latter approach basically generates FFT output as if the signal
had been preceded by 1000 zero-valued samples. For musical
applications I think the startup delay of N/(sample rate) would be
unnoticeable, and the FFT output would appear to be valid instantly.

I believe this article was in Electronic Design News.
Brian Willoughby
UUCP:     ...!{tikal, sun, uunet, elwood}!microsoft!brianw
InterNet: microsoft!brianw@uunet.UU.NET
      or: microsoft!brianw@Sun.COM
Bitnet:   brianw@microsoft.UUCP