DWT or WP for signal filtration

I'm dealing with a tricky problem related to the wavelet transform (tricky at least for me :). I have a signal, say a sinusoid (frequency f1), with another sinusoid (frequency f2) superposed. If the other signal has a higher frequency than the original one, filtering it out is no problem. However, that is not my case, as I have to deal with two signals of similar frequencies, for example f2 = 1.2 f1. Is there any way to reconstruct the original sinusoid using a wavelet transform, preferably the DWT or wavelet packets? I would probably benefit more from the CWT, as it shows the complete time-scale behaviour, but it is not an option.
Many thanks in advance.

You are looking at a frequency versus time uncertainty issue. You will need longer basis vectors to separate spectral content that is closer together in frequency.
For a relative frequency difference of 0.2 (as in f2 = 1.2 f1), you might want to try basis vectors in the range of 10 times longer than the period(s) of the sinusoids of interest.
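As a quick sanity check of that rule of thumb, here is a minimal NumPy sketch (the sampling rate and f1 = 100 Hz are made-up illustration values; f2 = 1.2 f1 is taken from the question) confirming that a window of about 10 periods of f1 gives a bin spacing well below the tone separation:

    import numpy as np

    f1 = 100.0                 # assumed tone frequency, for illustration
    f2 = 1.2 * f1              # the nearby tone from the question
    fs = 8000.0                # assumed sampling rate
    n_periods = 10             # analysis window of ~10 periods of f1
    N = int(n_periods * fs / f1)

    t = np.arange(N) / fs
    x = np.sin(2*np.pi*f1*t) + np.sin(2*np.pi*f2*t)

    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(N, d=1/fs)

    # The bin spacing fs/N must be comfortably below f2 - f1 to resolve the tones.
    print("bin spacing: %.1f Hz, separation: %.1f Hz" % (fs/N, f2 - f1))
    print("two strongest bins:", np.sort(freqs[np.argsort(spec)[-2:]]))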

Related

How to remove sidelobes while computing frequency from fft?

I am currently operating in the VHF band and trying to detect frequencies using an FFT thresholding method.
When detecting multiple frequencies, I receive spurs (perhaps not the right word) in addition to the original frequencies: for incoming frequencies f1 and f2, I also receive their sum f1+f2 and difference f1-f2.
I am trying to eliminate these by thresholding, but I can't differentiate them from the genuine frequency magnitudes.
Please suggest a method or methodology to eliminate this problem.
Input frequencies: F1, F2
Expected frequencies: F1, F2
Received frequencies: F1, F2, F1-F2, F1+F2
https://imgur.com/3rYYNv2 (plot illustrating the problem)
Windowing can reduce spectral leakage artifacts and distant side lobes, but it makes the main lobe wider in exchange. A large reduction in both the main lobe and the near side lobes normally requires using more data and a longer FFT.
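To make the trade-off concrete, here is a small NumPy sketch (the sampling rate, FFT length, and tone frequency are arbitrary illustration values) comparing the highest far side-lobe level of a rectangular window against a Hann window for a tone placed between bin centres:

    import numpy as np

    fs, N = 1000.0, 1024       # assumed sampling rate and FFT length
    f0 = 123.4                 # a tone deliberately placed between bin centres
    t = np.arange(N) / fs
    x = np.sin(2*np.pi*f0*t)

    for name, w in (("rectangular", np.ones(N)), ("hann", np.hanning(N))):
        spec_db = 20*np.log10(np.abs(np.fft.rfft(x*w)) + 1e-12)
        spec_db -= spec_db.max()
        k = spec_db.argmax()
        far = np.abs(np.arange(spec_db.size) - k) > 10   # skip the main lobe
        print("%-11s highest far side lobe: %6.1f dB" % (name, spec_db[far].max()))

The Hann window trades a wider main lobe for side lobes tens of dB lower, which is the exchange described above.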

Time Series DFT Signals Clustering

I have a number of time series data sets that I want to transform with the DFT in order to reduce dimensionality. After the transform, I want to cluster the resulting DFT coefficient sets using the k-means algorithm.
Since DFT coefficients are complex numbers, how can one cluster them?
You could simply treat the imaginary part as another component of your vectors. In other applications, you may want to ignore it!
But you'll be facing other, more severe challenges.
Data mining, and clustering in particular, is rarely as easy as applying function A (DFT) and function B (k-means) and then you have the result, hooray. Sorry, that is not how exploratory data mining works.
First of all, for many time series, the DFT will not be helpful at all. For others, you will first have to do appropriate resampling, or segmentation, or get rid of uninteresting effects such as seasonality. Even if the DFT works, it may emphasize artifacts such as the sampling frequency or some interference.
And then you'll run into one major problem: k-means is based on the assumption that all attributes have the same importance, while the DFT is based on the very opposite idea: the first coefficients capture most of the signal, and the later ones only minor deviations from it (which is the very motivation for using it as dimensionality reduction).
So based on this intuition, you should perhaps never apply k-means to DFT coefficients at all. At the same time, data mining has repeatedly shown that approaches which are "statistical nonsense" can nevertheless provide useful results, so you can try it, but verify your results with care, and avoid being too enthusiastic or optimistic.
With the help of the FFT, each data set is converted into its DFT coefficients; the FFT just calculates the DFT efficiently for each small data set.
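If you decide to try it despite the caveats above, a minimal sketch (Python with NumPy and scikit-learn; the number of series, their length, and the number of retained coefficients are made up for illustration) of k-means on the leading DFT coefficients could look like this:

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    series = rng.standard_normal((100, 256))   # 100 toy time series of length 256

    k_coeffs = 8                                # keep only the leading coefficients
    coeffs = np.fft.rfft(series, axis=1)[:, :k_coeffs]

    # k-means needs real-valued vectors: stack real and imaginary parts.
    features = np.hstack([coeffs.real, coeffs.imag])

    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
    print(labels[:10])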

How can I compare an image to identify a person in the image?

Through a webcam I'm capturing an image of the person in front of it. Then I display a video. After that, I have to find out whether it's the same person standing in front of the camera. How can I do this? The approaches I found on the internet require many images to train an SVM, but I only have one photo of the person to be recognized. How can I achieve this? Please provide a code sample if possible, as I'm new to this. I have already implemented the webcam logic; it's just the image recognition I need.
Okay, let's give this a try.
A fairly typical method is to use something called eigenfaces. OpenCV has a whole section on facial recognition using EigenFaces and similar approaches found at http://docs.opencv.org/modules/contrib/doc/facerec/facerec_tutorial.html
This assumes you have a database of images to use, but it's not a bad place to start looking.
Method using facial/image features:
Another method is to compare how similar your source and target faces are, which can be done by comparing a set of features. This approach isn't necessarily all that efficient, since that depends on how cheaply you can compute the features, and its accuracy depends on how well you define your features and their weighting factors. But it's a method that doesn't necessarily require any machine learning (although machine learning would certainly help!).
The first thing you'll want to do is decide if the target image contains a face.
See
http://docs.opencv.org/master/d7/d8b/tutorial_py_face_detection.html
or you can implement the Viola-Jones algorithm yourself.
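For the detection step, a minimal OpenCV sketch might look like the following (the cascade path via cv2.data.haarcascades is available in recent opencv-python packages; "face.jpg" is a placeholder file name):

    import cv2

    # Load OpenCV's pre-trained frontal face cascade (a Viola-Jones style detector).
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    img = cv2.imread("face.jpg")                  # placeholder file name
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
    print("found %d face(s)" % len(faces))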
Now, if your face detection algorithm doesn't already do this, you'll want to find the orientation, scale, and position of your target face (this will be useful for locating particular features of a person's face).
Now you'll need to calculate your target face's features.
You can use image feature detectors and descriptors like FAST, SIFT, and ORB to compute and compare image features;
see http://docs.opencv.org/master/dc/dc3/tutorial_py_matcher.html
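As a rough sketch of descriptor-based comparison (ORB with brute-force Hamming matching; the image file names are placeholders, and the "mean distance of best matches" score is just one ad hoc way to summarize similarity):

    import cv2

    img1 = cv2.imread("source_face.jpg", cv2.IMREAD_GRAYSCALE)  # placeholders
    img2 = cv2.imread("target_face.jpg", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Hamming distance is the right metric for ORB's binary descriptors.
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)

    # Crude similarity score: mean distance of the best matches (lower = closer).
    best = matches[:30]
    print("mean distance of %d best matches: %.1f"
          % (len(best), sum(m.distance for m in best) / max(len(best), 1)))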
Or, since you know you're dealing with faces, you can calculate features that help distinguish people.
Examples being:
distance between the eyes, face shape, length of the nose, height of the eyes, distance between the nose and mouth, and so on.
The tricky part is figuring out how to reliably calculate the feature metrics and then combine all of these into a single metric. Machine learning algorithms are generally employed to find weighting factors for combining each metric.
But you can use some guesswork to pick initial weights and then do some trial and error until you find a set of weights that works for you.
Once you have the weights figured out, you can combine the features by taking the squared difference between each source and target feature and adding them all together. This works best if all the sub-features are normalized first (i.e. always in the range 0 to 1) and weighted so that the overall metric also ranges from 0 to 1.
Let's say you have five features f0, f1, f2, f3, f4, all between 0 and 1.
These values are the normalized squared differences between the source and target face.
And you have five weighting factors: 0.3, 0.1, 0.15, 0.25, 0.2 (summing to 1).
Your overall metric would be
Overall Metric = 0.3 * f0 + 0.1 * f1 + 0.15 * f2 + 0.25 * f3 + 0.2 * f4
The two faces are then more similar if the value is closer to 0 and less similar if it is closer to 1. In the above example, feature f0 is the most significant and f1 the least.
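In code, the combination step is just a weighted sum of normalized squared differences (a minimal NumPy sketch; the feature values are made-up illustration numbers and the weights are those from the example above):

    import numpy as np

    # Hypothetical normalized feature values for the source and target face,
    # each entry already scaled to the range [0, 1].
    source = np.array([0.30, 0.52, 0.10, 0.77, 0.45])
    target = np.array([0.28, 0.60, 0.15, 0.70, 0.52])

    weights = np.array([0.30, 0.10, 0.15, 0.25, 0.20])   # sums to 1

    f = (source - target) ** 2          # normalized squared differences
    overall = np.dot(weights, f)        # overall metric in [0, 1]
    print("overall metric: %.4f (0 = identical, 1 = maximally different)" % overall)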

Complex interpolation on an FPGA

I have a problem in that I need to implement an algorithm on an FPGA that requires a large array of data that is too large to fit into block or distributed memory. The array contains complex fixed-point values, and it turns out that I can do a good job by reducing the total number of stored values through decimation and then linearly interpolating the interim values on demand.
Though I have DSP blocks (and so fixed-point hardware multipliers) which could be used trivially for real and imaginary part interpolation, I actually want to do the interpolation on the amplitude and angle (of the polar form of the complex number) and then convert the result to real-imaginary form. The data can be stored in polar form if it improves things.
I think my question boils down to this: How should I quickly convert between polar complex numbers and real-imaginary complex numbers (and back again) on an FPGA (noting availability of DSP hardware)? The solution need not be exact, just close, but be speed optimised. Alternatively, better strategies are gladly received!
edit: I know about CORDIC techniques, so that is how I would do it in the absence of a better idea. Are there refinements specific to this problem that I could invoke?
Another edit: Following on from @mbschenkel's question, and some more thinking on my part, I wanted to know whether there are any known tricks specific to the problem of polar interpolation.
In my case, the dominant variation between samples is a phase rotation, with a slowly varying amplitude. Since the sampling grid is known ahead of time and is regular, one trick could be to precompute some complex interpolation factors. So, for two complex values a and b, if we wish to find (N-1) intermediate equally spaced values, we can precompute the factor
scale = (abs(b)/abs(a))**(1/N) * exp(1j*(angle(b) - angle(a))/N)
and then find each intermediate value iteratively as val[n] = scale * val[n-1] where val[0] = a.
This works well for me as I need the samples in order and I compute them all. For small variations in amplitude (i.e. abs(b)/abs(a) ~= 1) and 0 < n < N, (abs(b)/abs(a))**(n/N) is approximately linear (though linear is not necessarily better).
The above is all very good, but still results in a complex multiplication. Are there other options for approximating this? I'm interested in resource and speed constraints, not accuracy. I know I can do the rotation with CORDIC, but still need a pair of multiplications for the scaling, so I'm adding lots of complexity and resource usage for potentially limited results. I don't really have a feel for the convergence of CORDIC, so perhaps I just truncate early, or use lots of resources to converge quickly.
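For reference, here is a NumPy sketch of the precomputed-factor scheme above (not FPGA code; a, b, and N are arbitrary example values) checking that the iterative complex multiply reproduces direct polar interpolation:

    import numpy as np

    a = 1.00 * np.exp(1j * 0.20)       # example endpoint samples
    b = 0.95 * np.exp(1j * 1.10)
    N = 8                              # number of sub-intervals

    # Precomputed per-step factor: the N-th root of the amplitude ratio,
    # times a rotation by 1/N of the phase difference.
    scale = (np.abs(b) / np.abs(a)) ** (1.0 / N) \
            * np.exp(1j * (np.angle(b) - np.angle(a)) / N)

    val = a
    for n in range(1, N):
        val = scale * val              # one complex multiply per sample
        # Direct polar interpolation, for comparison:
        ref = np.abs(a) ** (1 - n/N) * np.abs(b) ** (n/N) \
              * np.exp(1j * (np.angle(a) + (np.angle(b) - np.angle(a)) * n/N))
        assert np.allclose(val, ref)
    print("iterative scheme matches direct polar interpolation")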

Extrapolation using FFT in Octave

Using GNU Octave, I'm computing an FFT over a piece of signal, then eliminating some frequencies, and finally reconstructing the signal. This gives me a nice approximation of the signal, but it doesn't give me a way to extrapolate the data.
Suppose basically that I have plotted three and a half periods of
f: x -> sin(x) + 0.5*sin(3*x) + 1.2*sin(5*x)
and then added a piece of low-amplitude, zero-centered random noise. With fft/ifft, I can easily remove most of the noise; but then how do I extrapolate 3 more periods of my signal data (other, of course, than duplicating the signal)?
The math way is easy: you have a decomposition of your function as an infinite sum of sines/cosines, and you just need to extract a partial sum and evaluate it anywhere. But I don't quite get the programmatic way...
Thanks!
The Discrete Fourier Transform relies on the assumption that your time-domain data is periodic, so you can just repeat your time-domain data ad nauseam; no explicit extrapolation is necessary. Of course, this may not give you what you expect if the periods of your individual components are not exact sub-multiples of the DFT input window duration. This is one reason why we typically apply window functions such as the Hann (Hanning) window prior to the transform.
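To illustrate the "partial sum" idea from the question in code, here is a NumPy sketch rather than Octave (it samples an integer number of periods so each component lands on an exact DFT bin, sidestepping the leakage issue just discussed): take the strongest bins, turn each into an explicit sinusoid, and evaluate that sum on an extended axis.

    import numpy as np

    # The question's signal over an integer number of periods, so each component
    # lands on an exact DFT bin (with 3.5 periods you would window first, as above).
    N, periods = 512, 4
    x = np.linspace(0, 2*np.pi*periods, N, endpoint=False)
    sig = np.sin(x) + 0.5*np.sin(3*x) + 1.2*np.sin(5*x)
    sig += 0.05 * np.random.randn(N)              # low-amplitude noise

    spec = np.fft.rfft(sig)
    keep = np.argsort(np.abs(spec))[-3:]          # the 3 strongest bins

    def model(xq):
        """Partial sum of the retained sinusoids, valid at any x."""
        out = np.zeros_like(xq)
        for k in keep:
            amp, phase = 2*np.abs(spec[k])/N, np.angle(spec[k])
            out += amp * np.cos(k/periods * xq + phase)
        return out

    x_ext = np.linspace(0, 2*np.pi*(periods + 3), 2*N, endpoint=False)
    extrapolated = model(x_ext)                   # extends 3 periods past the data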
