Extrapolation using fft in octave - signal-processing

Using GNU Octave, I'm computing an FFT over a piece of signal, then eliminating some frequencies, and finally reconstructing the signal. This gives me a nice approximation of the signal, but it doesn't give me a way to extrapolate the data.
Suppose basically that I have plotted three and a half periods of
f: x -> sin(x) + 0.5*sin(3*x) + 1.2*sin(5*x)
and then added a piece of low-amplitude, zero-centered random noise. With fft/ifft, I can easily remove most of the noise; but then how do I extrapolate three more periods of my signal data (other, of course, than duplicating the signal)?
The math way is easy: you have a decomposition of your function as an infinite sum of sines/cosines, and you just need to extract a partial sum and apply it anywhere. But I don't quite get the programmatic way...
Thanks!

The Discrete Fourier Transform relies on the assumption that your time domain data is periodic, so you can just repeat your time domain data ad nauseam - no explicit extrapolation is necessary. Of course this may not give you what you expect if your individual component periods are not exact sub-multiples of the DFT input window duration. This is one reason why we typically apply window functions such as the Hanning Window prior to the transform.
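For illustration, here is a minimal Octave sketch of that idea: denoise by zeroing small FFT bins, then "extrapolate" by repeating the reconstructed block. The 0.2 threshold and the signal setup are arbitrary choices for this example:

N = 1024;
x = linspace(0, 7*pi, N);                  % three and a half periods
f = sin(x) + 0.5*sin(3*x) + 1.2*sin(5*x);
s = f + 0.1*randn(1, N);                   % low-amplitude, zero-centered noise
S = fft(s);
S(abs(S) < 0.2*max(abs(S))) = 0;           % crude denoising threshold (arbitrary)
clean = real(ifft(S));
extended = repmat(clean, 1, 3);            % extend by periodic repetition

Note that because this window holds three and a half periods rather than a whole number, the repetitions will show seams at the block boundaries, which is exactly the window-duration caveat above.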

Related

How to remove sidelobes while computing frequency from fft?

I am currently operating in the VHF band and trying to detect frequencies using an FFT thresholding method.
While detecting multiple frequencies, I receive spurs (maybe not the appropriate word) in addition to the original frequencies: for incoming frequencies f1 and f2, I also receive their sum f1+f2 and difference f1-f2.
I am trying to eliminate these using the thresholding method, but I can't differentiate them from the real frequency magnitudes.
Please suggest a method or methodology to eliminate this problem.
Input frequencies: F1, F2
Expected frequencies: F1, F2
Received frequencies: F1, F2, F1-F2, F1+F2
Plot illustrating the problem: https://imgur.com/3rYYNv2
Windowing can reduce spectral leakage artifacts and distant side lobes, but makes the main lobe wider in exchange. A large reduction in both the main lobe and the near side lobes, however, normally requires using more data and a longer FFT.
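As a sketch, applying a Hann window in Octave before the transform looks like this (assuming s is a row vector holding the input block):

N = length(s);
w = 0.5 - 0.5*cos(2*pi*(0:N-1)/N);   % periodic Hann window
S = fft(s .* w);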

Time Series DFT Signals Clustering

I have a number of time series data sets, which I want to transform to DFT signals in order to reduce dimensionality. After transforming, I want to cluster the resulting DFT data sets using the k-means algorithm.
Since DFT signals are complex-valued, how can one cluster them?
You could simply treat the imaginary part as another component in your vectors. In other applications, you will want to ignore it!
But you'll be facing other, more severe challenges.
Data mining, and clustering in particular, is rarely as easy as applying function A (DFT) and function B (k-means) and then you have the result, hooray. Sorry, that is not how exploratory data mining works.
First of all, for many time series, DFT will not be helpful at all. On others, you will first have to do appropriate resampling, or segmentation, or get rid of uninteresting effects such as seasonality. Even if DFT works, it may emphasize artifacts such as the sampling frequency or some interferences.
And then you'll run into one major problem: k-means is based on the assumption that all attributes have the same importance. And DFT is based on the very opposite idea: the first components capture most of the signal, the later ones only minor deviations from it (and that is the very motivation for using this as dimensionality reduction).
So based on this intuition, you should perhaps never apply k-means to DFT coefficients at all. At the same time, data mining has repeatedly shown that approaches that are "statistical nonsense" can nevertheless provide useful results... so you can try, but verify your results with care, and avoid being too enthusiastic or optimistic.
With the help of the FFT, each dataset is converted into a DFT signal; the FFT computes the DFT for each small data set.
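As a sketch of the first suggestion above, in Octave one could keep a few low-order coefficients and hand their real and imaginary parts to k-means. Here X, the 8-coefficient cutoff, and k = 3 are all arbitrary placeholders, and kmeans comes from the statistics package:

pkg load statistics
D = fft(X, [], 2);               % row-wise DFT of an (nSeries x nSamples) matrix X
D = D(:, 1:8);                   % keep the first 8 coefficients (arbitrary cutoff)
features = [real(D), imag(D)];   % imaginary parts as extra vector components
[labels, centers] = kmeans(features, 3);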

Can GLSL perform a recursion formula calculation? Or how can I speed up this formula

I want to implement this formula in my iOS app. Is there any way to use GLSL to speed this formula up? Or can I use Metal or something to speed it up?
imageOut[0] = imageIn[0] * b;   /* no previous output exists for k = 0 */
for (k = 1; k < imageSize; k++) {
    /* first-order IIR: each output depends on the previous output */
    imageOut[k] = imageOut[k-1] * a + imageIn[k] * b;
}
OpenCL is not available.
This is a classic IIR filter, and the data dependencies cause problems when converting it to SIMD code. This means that you can't do the operation as a simple transform feedback or render-to-texture operation. In other words, the GPU is designed to work on a bunch of data in parallel, but your formula forces the output to be computed serially (you can't compute out[k] without computing out[k-1] first).
I see two ways to optimize this:
You can use SIMD on the CPU. For iOS, this means ARM NEON. See articles like Optimising IIR Filters Using ARM NEON or Parallelization of IIR Filters using SIMD Extensions.
You can redesign the filter as an FIR filter, completely eliminating data dependencies.
Unfortunately, there is no easy translation to GLSL. Maybe you could use Metal instead of NEON, I'm not sure.
What you have there, as Dietrich Epp already pointed out, is an IIR filter. Now on a computer there's no such thing as "infinite": you're always limited by number precision, memory, available computational time, etc. Even if you executed that loop ad infinitum, due to the limited precision of your typical number representation you'd lose anything meaningful to roundoff errors quite early on.
So let's be honest about it and call it an FIR filter with a very long response time. Can those be parallelized? Yes, they can, but for that we have to leave the time domain and look at it from the frequency domain.
Assume you can model the response of a system (= filter) to all the possible signals there are; then "playing back" that response based on the signal gives you the desired output. In the frequency domain that would be a "recording" of the system's response to a broadband signal covering all the frequencies. But such a signal is just a simple impulse. That's where the terms FIR and IIR get their middle I (for "impulse") from.
Applying the impulse response of the system to an arbitrary signal by means of a convolution gives you the system's response to that signal. And calculating a convolution in the time domain is the same as multiplying the Fourier transform of the signal with the Fourier transform of the impulse response and transforming the result back, i.e.
s * r = F^-1(F(s) · F(r))
And Fourier transforms are one of the things that can be parallelized well, and that GPUs are really quite good at.
There are GLSL-based Fourier transform implementations, but GPU FFTs are normally written in OpenCL or CUDA.
Anyway, here's the recipe for you:
Determine the cutoff k for which your IIR becomes indistinguishable from an FIR.
Determine the Fourier transform of the impulse response (= complex spectral response, CSR).
Fourier transform the signal (= image), multiply it with the CSR, and transform back.
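A rough Octave sketch of that recipe, assuming imageIn is a vector and a, b are the coefficients from the loop above:

K = 256;                             % cutoff: pick K so that a^K is negligible
h = b * a.^(0:K-1);                  % impulse response of out[k] = a*out[k-1] + b*in[k]
L = numel(imageIn) + K - 1;          % pad so the convolution is linear, not circular
S = fft(imageIn(:).', L) .* fft(h, L);   % multiply the signal spectrum by the CSR
imageOut = real(ifft(S));
imageOut = imageOut(1:numel(imageIn));   % trim back to the input length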

Complex interpolation on an FPGA

I have a problem in that I need to implement an algorithm on an FPGA that requires a large array of data that is too large to fit into block or distributed memory. The array contains complex fixed-point values, and it turns out that I can do a good job by reducing the total number of stored values through decimation and then linearly interpolating the interim values on demand.
Though I have DSP blocks (and so fixed-point hardware multipliers) which could be used trivially for real and imaginary part interpolation, I actually want to do the interpolation on the amplitude and angle (of the polar form of the complex number) and then convert the result to real-imaginary form. The data can be stored in polar form if it improves things.
I think my question boils down to this: how should I quickly convert between polar and real-imaginary complex numbers (and back again) on an FPGA, noting the availability of DSP hardware? The solution need not be exact, just close, but it should be optimised for speed. Alternatively, better strategies are gladly received!
edit: I know about CORDIC techniques, so this would be how I would do it in the absence of a better idea. Are there refinements specific to this problem I could invoke?
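For reference, here is a minimal floating-point sketch of rotation-mode CORDIC in Octave, just to fix notation; a hardware version would use shifts, adds and a precomputed arctangent table, and angles beyond roughly ±99° need a quadrant pre-rotation first:

function [x, y] = cordic_rotate(x, y, theta, niter)
  % rotate the vector (x, y) by theta via shift-and-add style iterations
  K = prod(1 ./ sqrt(1 + 2.^(-2*(0:niter-1))));  % CORDIC gain correction
  x = x * K;
  y = y * K;
  for i = 0:niter-1
    d = sign(theta);
    if d == 0, d = 1; end
    xn = x - d * y * 2^(-i);
    yn = y + d * x * 2^(-i);
    theta = theta - d * atan(2^(-i));
    x = xn;
    y = yn;
  end
end

Polar to rectangular is then [re, im] = cordic_rotate(r, 0, theta, 16); the reverse direction uses the same iterations in vectoring mode (driving y to zero instead of theta).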
Another edit: Following from #mbschenkel's question, and some more thinking on my part, I wanted to know if there were any known tricks specific to the problem of polar interpolation.
In my case, the dominant variation between samples is a phase rotation, with a slowly varying amplitude. Since the sampling grid is known ahead of time and is regular, one trick could be to precompute some complex interpolation factors. So, for two complex values a and b, if we wish to find (N-1) intermediate equally spaced values, we can precompute the factor
scale = (abs(b)/abs(a))**(1/N) * exp(1j*(angle(b) - angle(a))/N)
and then find each intermediate value iteratively as val[n] = scale * val[n-1] where val[0] = a.
This works well for me as I need the samples in order and I compute them all. For small variations in amplitude (i.e. abs(b)/abs(a) ~= 1) and 0 < n < N, (abs(b)/abs(a))**(n/N) is approximately linear (though linear is not necessarily better).
The above is all very good, but still results in a complex multiplication. Are there other options for approximating this? I'm interested in resource and speed constraints, not accuracy. I know I can do the rotation with CORDIC, but still need a pair of multiplications for the scaling, so I'm adding lots of complexity and resource usage for potentially limited results. I don't really have a feel for the convergence of CORDIC, so perhaps I just truncate early, or use lots of resources to converge quickly.
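For concreteness, a small Octave sketch of the precomputed-factor recurrence; the endpoints and N are arbitrary, and after N steps val lands back on b up to roundoff:

a = 2.0*exp(1j*0.3);                 % example endpoint (amplitude 2.0, angle 0.3)
b = 2.1*exp(1j*0.9);                 % example endpoint (slowly varying amplitude)
N = 8;                               % yields N-1 intermediate values
scale = (abs(b)/abs(a))^(1/N) * exp(1j*(angle(b) - angle(a))/N);
val = a;
for n = 1:N
  val = scale * val;                 % one complex multiply per interpolated sample
end
% val now equals b, up to rounding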

DWT or WP for signal filtration

I'm dealing with a tricky problem related to the wavelet transform (tricky at least for me). I have a signal, say a sinusoid (frequency f1), with another sinusoid (frequency f2) superposed. If the other signal has a higher frequency than the original one, there is no problem filtering it out. However, this is not my case, as I have to deal with two signals of similar frequencies, for example f2 = 1.2*f1. Is there any way to reconstruct the original sinusoid using the wavelet transform, preferably the DWT or wavelet packets? I would probably benefit more from the CWT, as it shows complete time-scale properties, but it is not an option.
Many thanks in advance.
You are looking at a frequency versus time uncertainty issue. You will need longer basis vectors to separate spectral content that is closer together in frequency.
For a delta F of 0.2, you might want to try using basis vectors that are in the range of 10 times longer than the period(s) of the sinusoids of interest.
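As a quick sanity check of that rule of thumb in Octave (f1 = 1 Hz is just a normalisation):

f1 = 1;
f2 = 1.2*f1;
Tmin = 1/(f2 - f1)   % minimum window length: 5 s, i.e. 5 periods of f1

So separating f1 from 1.2*f1 needs at least about 5 periods of f1 in the analysis window, and the factor-of-10 suggestion above leaves comfortable margin.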
