What should I do if the length of the input data is not even? And what if it is not a power of 2? Should I just ignore the remaining samples?
There is no such restriction on the signal length. Use some signal extension to handle the samples beyond the signal borders. A symmetric extension is a popular choice, as it guarantees the invertibility of the transform without adding any unnecessary coefficients. Such treatment is even easier if you use a lifting scheme (although this does not matter in the case of the Haar wavelet). The same applies to signal lengths that are not powers of two.
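As a minimal sketch (assuming numpy, with made-up data), one Haar analysis step with a symmetric extension for odd lengths might look like this; mirroring the last sample makes the final detail coefficient exactly zero, so no real information is added:

import numpy as np

def haar_step(x):
    # Symmetric extension: mirror the last sample when the length is odd,
    # so every pairwise Haar butterfly has a partner.
    if len(x) % 2:
        x = np.pad(x, (0, 1), mode='symmetric')
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass (scaling) coefficients
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass (wavelet) coefficients
    return approx, detail

a, d = haar_step(np.array([1.0, 2.0, 3.0, 4.0, 5.0]))  # length 5: no power of 2 needed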
I am currently operating in the VHF band and trying to detect frequencies using an FFT thresholding method.
While detecting multiple frequencies, I receive spurs (maybe not the right word) in addition to the original frequencies. For example, for incoming frequencies f1 and f2, I also receive their sum f1+f2 and difference f1-f2.
I am trying to eliminate these using a thresholding method, but I can't differentiate their magnitudes from the magnitudes of the real frequencies.
Please suggest a method or methodology to eliminate this problem.
Input frequencies: F1, F2
Expected frequencies: F1, F2
Received frequencies: F1, F2, F1-F2, F1+F2
Plot illustrating the problem: https://imgur.com/3rYYNv2
Windowing can reduce spectral-leakage artifacts and distant side lobes, but makes the main lobe wider in exchange. A large reduction in both the main lobe and the near side lobes normally requires using more data and a longer FFT.
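As a small illustration (a numpy sketch with made-up tone frequencies): a strong tone that falls between FFT bins leaks over a weak one under a rectangular window, while a Hann window trades a wider main lobe for much lower side lobes:

import numpy as np

fs, n = 1024, 1024
t = np.arange(n) / fs
# Strong off-bin tone at 100.3 Hz plus a weak tone at 300 Hz.
x = np.sin(2*np.pi*100.3*t) + 1e-4*np.sin(2*np.pi*300*t)

# Rectangular window: leakage from the strong tone buries the weak one.
spec_rect = 20*np.log10(np.abs(np.fft.rfft(x)) + 1e-12)
# Hann window: far lower side lobes, so the weak tone becomes visible.
spec_hann = 20*np.log10(np.abs(np.fft.rfft(x * np.hanning(n))) + 1e-12)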
I have a number of time-series data sets which I want to transform with the DFT in order to reduce dimensionality. After transforming, I want to cluster the resulting DFT data sets using the k-means algorithm.
Since DFT coefficients contain imaginary parts, how can one cluster them?
You could simply treat the imaginary part as another component in your vectors. In other applications, you will want to ignore it!
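A minimal sketch of that idea, assuming numpy and scikit-learn with made-up data (the series length, coefficient count m, and cluster count are placeholders):

import numpy as np
from sklearn.cluster import KMeans

series = np.random.rand(100, 256)        # 100 example time series of length 256

m = 16                                   # keep only the first m DFT coefficients
coeffs = np.fft.rfft(series, axis=1)[:, :m]

# Treat the imaginary part as extra components: stack Re and Im side by side.
features = np.hstack([coeffs.real, coeffs.imag])   # shape (100, 2*m)

labels = KMeans(n_clusters=5, n_init=10).fit_predict(features)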
But you'll be facing other, more severe challenges.
Data mining, and clustering in particular, is rarely as easy as applying function A (DFT) and function B (k-means) and then you have the result, hooray. Sorry, that is not how exploratory data mining works.
First of all, for many time series, DFT will not be helpful at all. On others, you will first have to do appropriate resampling, or segmentation, or get rid of uninteresting effects such as seasonality. Even if DFT works, it may emphasize artifacts such as the sampling frequency or some interferences.
And then you'll run into one major problem: k-means is based on the assumption that all attributes have the same importance. And DFT is based on the very opposite idea: the first components capture most of the signal, the later ones only minor deviations from it (and that is the very motivation for using this as dimensionality reduction).
So based on this intuition, you should perhaps never apply k-means to DFT coefficients at all. At the same time, data mining has repeatedly shown that approaches which are "statistical nonsense" can nevertheless provide useful results... so you can try, but verify your results with care, and avoid being too enthusiastic or optimistic.
The FFT is what converts each data set into its DFT; it computes the DFT efficiently for each small data set.
I am trying to get a smooth RSSI value from Bluetooth Low Energy beacons deployed on the ceiling of my lab. I used a weighted-mean filter and a moving-average filter but couldn't get good results. From various journal papers I learned that a Kalman filter can be used for this purpose, but I couldn't find a proper mathematical formulation to code in Objective-C. Can somebody provide any hint regarding the mathematical equations or a Kalman filter implementation? Thanks a lot.
A one-dimensional case like this means that all of the matrices are actually just scalar values. You need to know two things:
R, the measurement variance. You can measure this directly by recording a series of RSSI values (in a fixed location) exactly as you normally would, and then computing their variance. You can do this easily with Excel, or Python, or even write your own code from scratch.
Q, the process variance. This is how much you expect RSSI to actually change in the same amount of time (between measurements). You can also measure this, or you can reason about it.
If you look at the Kalman Filter equations you'll notice that P is not dependent on your actual measurements, only the two values above. As a result, since they are constant, P will converge to a fixed value. And since K (the Kalman gain) only relies on those values, it will also converge. For an application like yours, it is usually sufficient to find the steady-state K and use it all the time.
This is now just a complicated (but optimal in a least-squares sense) way of creating a simple moving average filter.
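A minimal scalar sketch (Python for brevity; the arithmetic ports directly to Objective-C). The R and Q values here are placeholders; measure your own as described above, e.g. R = np.var(stationary_samples):

def kalman_smooth(measurements, R=4.0, Q=0.05):
    x = measurements[0]      # initial state estimate
    P = 1.0                  # initial estimate variance
    out = []
    for z in measurements:
        P += Q               # predict: state assumed constant, uncertainty grows by Q
        K = P / (P + R)      # Kalman gain; with constant R and Q this converges
        x += K * (z - x)     # update the estimate with measurement z
        P *= (1.0 - K)
        out.append(x)
    return out

smoothed = kalman_smooth([-60.0, -62.0, -59.0, -61.0, -70.0, -63.0, -60.0])

Once K has converged you can hard-code it and drop the P update entirely, which is the steady-state shortcut described above.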
If you are looking for a Swift implementation of the Kalman filter, then it is worth looking at this framework. It is a generic implementation of the conventional filter algorithm, and it also provides a Matrix struct and all the necessary matrix operations used in the Kalman filter.
I want to implement this formula in my iOS app. Is there any way to use GLSL to speed this formula up? Or can I use Metal or something similar to speed it up?
imageOut[0] = imageIn[0] * b;   /* k = 0 has no previous output sample to read */
for (k = 1; k < imageSize; k++) {
    imageOut[k] = imageOut[k-1] * a + imageIn[k] * b;
}
OpenCL is not available.
This is a classic IIR filter, and the data dependencies cause problems when converting it to SIMD code. This means that you can't do the operation as a simple transform feedback or render-to-texture operation. In other words, the GPU is designed to work on a bunch of data in parallel, but your formula forces the output to be computed serially (you can't compute out[k] without computing out[k-1] first).
I see two ways to optimize this:
You can use SIMD on the CPU. For iOS, this means ARM NEON. See articles like Optimising IIR Filters Using ARM NEON or Parallelization of IIR Filters using SIMD Extensions.
You can redesign the filter as an FIR filter, completely eliminating data dependencies.
Unfortunately, there is no easy translation to GLSL. Maybe you could use Metal instead of NEON, I'm not sure.
What you have there, as Dietrich Epp already pointed out, is an IIR filter. Now, on a computer there's no such thing as "infinite": you're always limited by number precision, memory, available computation time, etc. Even if you executed that loop ad infinitum, the limited precision of your typical number representation means you'd lose anything meaningful to roundoff errors quite early on.
So let's be honest about it and call it an FIR filter with a very long impulse response. Can those be parallelized? Yes, they can, but for that we have to leave the time domain and look at the problem from the frequency domain.
Assume you can record the response of a system (= filter) to every possible signal; then "playing back" the right response gives you the desired output. In the frequency domain that would be a recording of the system's response to a broadband signal covering all frequencies. In the time domain, such a signal is just a simple impulse. That's where the terms FIR and IIR get their middle "I" (impulse) from.
Applying the impulse response of the system to an arbitrary signal by means of a convolution gives you exactly how the system would respond to that signal. And calculating a convolution in the time domain is the same as multiplying the Fourier transform of the signal with the Fourier transform of the impulse response and transforming the result back, i.e.
s * r = F^-1(F(s) · F(r))
And Fourier transforms are one of the things that parallelize well, and that GPUs are really quite good at computing.
There are GLSL-based Fourier transform codes, but normally these are written in OpenCL or CUDA to run on GPUs.
Anyway, here's the recipe for you:
1. Determine the cutoff k beyond which your IIR becomes indistinguishable from an FIR.
2. Determine the Fourier transform of the truncated impulse response (= complex spectral response, CSR).
3. Fourier-transform the signal (= image), multiply it with the CSR, and transform the result back.
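A minimal numpy sketch of that recipe for the filter in question (the coefficients a and b, the signal, and the tolerance are assumed example values):

import numpy as np

a, b = 0.95, 0.05               # example filter coefficients
x = np.random.rand(4096)        # example input signal (one image row, say)

# 1. Truncate the impulse response h[k] = b * a**k once it drops below precision.
K = int(np.ceil(np.log(1e-7) / np.log(a)))
h = b * a ** np.arange(K)

# 2./3. Convolve via the frequency domain: s * r = F^-1(F(s) . F(r)).
n = len(x) + K - 1
y = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(h, n), n)[:len(x)]

# Check against the original serial loop (zero initial state assumed).
y_ref = np.empty_like(x)
acc = 0.0
for k, xk in enumerate(x):
    acc = acc * a + xk * b
    y_ref[k] = acc
assert np.allclose(y, y_ref, atol=1e-5)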
I have a problem in that I need to implement an algorithm on an FPGA that requires a large array of data that is too large to fit into block or distributed memory. The array contains complex fixed-point values, and it turns out that I can do a good job by reducing the total number of stored values through decimation and then linearly interpolating the interim values on demand.
Though I have DSP blocks (and so fixed-point hardware multipliers) which could be used trivially for real and imaginary part interpolation, I actually want to do the interpolation on the amplitude and angle (of the polar form of the complex number) and then convert the result to real-imaginary form. The data can be stored in polar form if it improves things.
I think my question boils down to this: How should I quickly convert between polar complex numbers and real-imaginary complex numbers (and back again) on an FPGA (noting availability of DSP hardware)? The solution need not be exact, just close, but be speed optimised. Alternatively, better strategies are gladly received!
edit: I know about CORDIC techniques, so this would be how I would do it in the absence of a better idea. Are there refinements specific to this problem I could invoke?
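For reference, a sketch of the standard vectoring-mode CORDIC (floating point here for clarity; on the FPGA the multiplications by 2**-i become shifts and the whole thing is fixed-point adds). It assumes x > 0; other quadrants need a pre-rotation:

import math

def cordic_to_polar(x, y, iterations=16):
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    gain = 1.0
    for ang in angles:
        gain *= math.cos(ang)          # accumulated pseudo-rotation scaling
    angle = 0.0
    for i in range(iterations):
        d = 2.0 ** -i                  # a shift in hardware
        if y > 0:                      # rotate towards y = 0
            x, y, angle = x + y * d, y - x * d, angle + angles[i]
        else:
            x, y, angle = x - y * d, y + x * d, angle - angles[i]
    return x * gain, angle             # (magnitude, phase)

mag, ph = cordic_to_polar(3.0, 4.0)    # approximately (5.0, 0.9273)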
Another edit: Following from #mbschenkel's question, and some more thinking on my part, I wanted to know if there were any known tricks specific to the problem of polar interpolation.
In my case, the dominant variation between samples is a phase rotation, with a slowly varying amplitude. Since the sampling grid is known ahead of time and is regular, one trick could be to precompute some complex interpolation factors. So, for two complex values a and b, if we wish to find (N-1) intermediate equally spaced values, we can precompute the factor
scale = (abs(b)/abs(a))**(1/N) * exp(1j*(angle(b) - angle(a))/N)
and then find each intermediate value iteratively as val[n] = scale * val[n-1] where val[0] = a.
This works well for me as I need the samples in order and I compute them all. For small variations in amplitude (i.e. abs(b)/abs(a) ~= 1) and 0 < n < N, (abs(b)/abs(a))**(n/N) is approximately linear (though linear is not necessarily better).
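Spelled out, the scheme above looks like this (a numpy sketch; a, b, and N are example values):

import numpy as np

def polar_interp(a, b, N):
    # One precomputed complex factor: geometric magnitude steps, linear phase steps.
    scale = (abs(b) / abs(a)) ** (1.0 / N) * np.exp(1j * (np.angle(b) - np.angle(a)) / N)
    vals = [a]
    for _ in range(N):
        vals.append(vals[-1] * scale)  # one complex multiply per output sample
    return np.array(vals)

a, b, N = 1.0 + 1.0j, 2.0 - 0.5j, 8
v = polar_interp(a, b, N)
assert np.allclose(v[0], a) and np.allclose(v[-1], b)  # endpoints land exactly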
The above is all very good, but still results in a complex multiplication. Are there other options for approximating this? I'm interested in resource and speed constraints, not accuracy. I know I can do the rotation with CORDIC, but still need a pair of multiplications for the scaling, so I'm adding lots of complexity and resource usage for potentially limited results. I don't really have a feel for the convergence of CORDIC, so perhaps I just truncate early, or use lots of resources to converge quickly.