Xref: utzoo comp.dsp:1296 sci.math:15358
Path: utzoo!mnetor!tmsoft!torsqnt!news-server.csri.toronto.edu!cs.utexas.edu!wuarchive!emory!att!pacbell.com!tandem!netcom!mcmahan
From: mcmahan@netcom.COM (Dave Mc Mahan)
Newsgroups: comp.dsp,sci.math
Subject: Re: resampling problem
Message-ID: <25328@netcom.COM>
Date: 23 Feb 91 10:53:55 GMT
References: <1991Feb13.234510.22488@nuchat.sccsi.com>
Organization: Dave McMahan @ NetCom Services
Lines: 156

In a previous article, steve@nuchat.sccsi.com (Steve Nuchia) writes:
>2) I'm trying to analyse the following situation:
>
>   A package of measuring instruments (we can assume the singular
>   without loss of generality) traverses some physical environment
>   at an unregulated speed, with its position indirectly measured.
>
>   The instruments are sampled at intervals.  Some instruments are
>   designed for fixed-frequency sampling, others are sampled on a
>   temporal basis for historical reasons, and still others are
>   sampled on a very roughly spatial basis (first opportunity
>   following floor(pos/interval) changes, usually).
>
>   The underlying phenomena are assumed to be spatially band-limited.
>
>   Output samples of an estimate of the underlying phenomena are
>   required at regular spatial intervals, in real time (but a
>   constant lag is OK -- phase linear filters and all that).
>
>   I want to understand the theory well enough to make a sound
>   recommendation on the design of digital filters (to be
>   implemented on a general-purpose CPU in real time) to compute
>   the estimate samples from the available sample data.
>
>Anyway, I'm thinking of proceeding like this:  Normalize all cases
>to depth-referenced samples on a fine (high spatial frequency) comb.
>The synthetic sampling spatial frequency would be selected based on
>the output spatial frequency -- say 2x?  1x?  4x?

First off, I assume that you KNOW the time at which samples were taken
with a 'large' (relative to the overall system) degree of accuracy.
Your problem is that you just can't control WHEN the samples are taken.
The data you take (in terms of the measured position) is also of
sufficient accuracy that you don't have to worry about positional
uncertainty.  For the sake of argument, you 'know' with infinite
precision both the time at which a sample was taken and the position
of the sample.

Next, I assume you wish to synthesize some time- or position-related
output describing what happened between samples.  This may mean that
you wish to guess-timate the position at any time between two samples,
or you may wish to calculate the velocity of the thing being sampled
at some point in time (or space).  Assuming that you can guess-timate
the position at any desired time, I assume you can derive any other
desired information such as velocity or acceleration.  I also assume
that if some of your measurements are taken as velocity, others as
acceleration, and still others as displacement, you can convert (with
enough math) to a consistent base for the problem.  From here on, I
will assume you can convert to, or have taken, all measurements in
terms of displacement.

Finally, we shall assume that you have enough time between measurements
and enough CPU horsepower to do a reasonable amount of calculation, and
are therefore able to obtain the desired answers quickly enough to be
useful.  I'm not sure how many measurements per second you are taking,
but I assume it is not more than about 500 or 1000 samples per second,
and probably much less than that.

For the sake of argument, I'm going to assume that you have something
on the order of an 80386 (or larger) available to do your desired math,
plus all the I/O hardware and instrumentation required to take the
samples, timestamp them, feed them to the CPU, and then distribute the
results in whatever form you need.  I also assume that you don't need
answers with 16 digits of precision and that your data doesn't span
several orders of magnitude in range.

If any of my assumptions or problem re-statements above are wrong,
please let me know.  I'm having a bit of trouble visualizing exactly
what you are trying to accomplish.  I think I understand, but I am not
certain.

I would say that it is quite possible for you to get good results with
your synthetic sampling interval approach.  It strikes me that the
spacing between the points you wish to synthesize should be at least
as small as the shortest time between actual sample points.  Since this
approach will leave most of your synthetic samples (the ones you create
just to fill in between the points you actually know) equal to zero,
you can write a program smart enough to skip the multiplications by
zero and save lots of CPU time.

One problem with this is that if your samples are more-or-less evenly
spaced but the spacings are not exact multiples of each other, you can
end up generating lots of synthetic samples that later need to be
discarded.  For example, if your samples are sometimes .9 seconds
apart, sometimes 1 second apart, and sometimes 1.05 seconds apart, you
can either wave your hands and assume that all are 1 second apart
(generating the fewest synthetic samples but losing the most
information), assume that samples should fall on .1 second intervals
(preserving more information about your data, but forcing you to decide
whether a 1.05 second sample gets binned up to 1.1 seconds or down to
1.0 seconds), or assume that samples should fall on .05 second
intervals (extracting the most information from your data at the
expense of more CPU time spent crunching numbers).

Your synthetic sample interval will also be dictated by your output
requirements.  In the above example, what should you use if you need a
position estimate every .025 seconds?  I guess the answer is, "It
depends."  (It depends on your other system requirements, which I don't
really know.)
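Just to make the bin-and-zero-fill bookkeeping concrete, here is a
rough sketch of the kind of thing I mean.  I'm writing it in Python
with numpy purely for illustration; the function name, the sample
values, and the .05 second grid are all invented for the example and
aren't meant to match your actual setup:

import numpy as np

def snap_to_grid(times, values, dt):
    # Place irregularly-timed samples onto a uniform grid of spacing dt.
    # Each sample lands in its nearest grid bin; every other bin stays
    # zero (these are the 'synthetic' samples).  A parallel mask marks
    # which bins hold real data so a later filtering pass can skip the
    # multiplications by zero.
    times = np.asarray(times, dtype=float)
    values = np.asarray(values, dtype=float)
    t0 = times.min()
    n_bins = int(round((times.max() - t0) / dt)) + 1
    grid = np.zeros(n_bins)
    have_data = np.zeros(n_bins, dtype=bool)
    idx = np.round((times - t0) / dt).astype(int)
    grid[idx] = values          # if two samples hit one bin, the last one wins
    have_data[idx] = True
    return grid, have_data, t0

# Samples roughly one second apart, snapped to a .05 second grid:
t = [0.0, 0.9, 1.9, 2.95, 4.0]
x = [1.00, 1.20, 0.85, 1.10, 0.95]
grid, mask, t0 = snap_to_grid(t, x, dt=0.05)

The finer you make dt, the less each real sample has to be moved to
land on the grid, but the more zeros you end up dragging through
whatever filter comes next.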
In this case, you can probably get by with just using the standard
'sinc' function method of reconstructing data between known sample
points.  This method is outlined in the book "Digital Signal
Processing" by Oppenheim & Schafer on page 29.  It essentially amounts
to creating a low-pass filter and passing your data through it by
convolution.  This can be done in the time domain and is fairly
straightforward.  You need to pick a finite number of filter points
that will give you good results; without knowing the nature of your
data, I haven't a clue as to how many that is.  If you do pick this
approach, I would suggest that you use an appropriate windowing
function to smooth the truncated sinc before you pass your data
through it.  I have found that with a finite number of points in the
sinc function, the output results look MUCH better when the sinc has
been windowed.
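Again purely as a sketch (same disclaimer as before -- the half-width
of the kernel and the choice of a Hamming window are mine, picked just
for the example), a windowed-sinc interpolator for samples that sit on
a uniform spacing T looks something like this:

import numpy as np

def sinc_interp(sample_values, T, out_times, half_width=10):
    # Textbook band-limited reconstruction from uniformly spaced samples:
    #     x(t) = sum over n of x[n] * sinc((t - n*T) / T)
    # truncated to +/- half_width samples around t and smoothed with a
    # Hamming window so the truncation doesn't ring as badly.
    x = np.asarray(sample_values, dtype=float)
    n = np.arange(len(x))
    out = np.zeros(len(out_times))
    for i, t in enumerate(out_times):
        u = t / T - n                   # offset to each sample, in sample periods
        keep = np.abs(u) <= half_width  # the finite number of filter points
        w = 0.54 + 0.46 * np.cos(np.pi * u[keep] / half_width)  # Hamming window
        out[i] = np.sum(x[keep] * np.sinc(u[keep]) * w)
    return out

# Known samples every 1 second; position estimates wanted every .025 seconds:
x = [1.00, 1.20, 0.85, 1.10, 0.95, 1.05]
t_out = np.arange(0.0, 5.0, 0.025)
estimates = sinc_interp(x, T=1.0, out_times=t_out)

The half_width argument is the "finite number of filter points"
question from above: more points give a cleaner reconstruction but
cost more CPU per output sample, and where the knee is depends on your
data.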
One other approach you may wish to think about instead of filtering is
curve fitting.  It can be done quite quickly and is pretty good at
interpolation if your data can be made to fit a smoothly varying curve.
Curve fitting is somewhat of an art, and there are many tricks to
getting good results.  I know one or two just from having watched a
few masters of the art, but it generally boils down to knowing
something about the nature of your data.  If (for example) you can
expect all of your data to fit a logarithmic curve, there are tricks
to play that will give you optimal results for that type of curve.
This works quite well with things like temperature sensors, which
exhibit a general response that fits a log curve but needs to be
corrected based on a few known sample points taken at calibration
time.  The tricks you pick all depend on your expectation of what your
data will look like.  Anyway, curve fitting to interpolate between
data points may be what you need, but I can't say for sure without
more info.

>One could dynamically adjust the implicit scale of the FIR function,
>within limits, to at least partially compensate for changes in
>instrument velocity.

You lost me on this one.  Are you saying that you are trying to take
measurements that aren't time-invariant?  That could be a big problem.
It seems to me that if your instrument passes point A at one velocity
and you take a measurement, you had better be able to pass point A at
a different velocity later and measure the same position.  If you
can't, then you will need to record the velocity (or temperature, or
direction, or whatever affects your position measurement) and remove
its effect before you ever begin.

>Am I even asking the right questions?  Should I be thinking IIR?
>Should I just hook the plotter up to a canned demo data stream? [:-)]

You should definitely test-drive any approach you come up with against
a known (and repeatable!) data set so you can compare different
approaches.  Having a way to play back the same data into different
systems and compare the output with known-good results always gives
you ideas about how to make your system better.

I'm sure I have missed a few (or more) points you think I should know
about.  Please post to comp.dsp or e-mail me directly and I'll give
you my input.

>--
>Steve Nuchia      South Coast Computing Services      (713) 964-2462

   -dave

-- 
Dave McMahan   mcmahan@netcom.com   {apple,amdahl,claris}!netcom!mcmahan