This may be a naive question, but I didn't find exact details in searching.
In FFT with overlapping windows, after we've applied a window function to overlapping segments of the data set and computed the FFT of each segment, how do we combine the FFT results of the overlapping segments?
Do we just add them together, treating those frequency domain results as non-overlapping parts?
Are the magnitudes of these complex-valued results the frequency magnitudes?
Thank you.
For each FFT you typically calculate the magnitude of each complex output bin - this gives you a spectrum (magnitude versus frequency) for one window. The sequence of magnitude spectra for all time windows is effectively a 3D data set - magnitude versus frequency versus time - which is typically plotted as a spectrogram, waterfall, or time-varying 2D spectrum.
In the specific case where the data is statistically stationary and you just want to reduce the variance you can average the successive magnitude spectra - this is called ensemble averaging. Normally though for time-varying signals such as speech or music you would not want to do this.
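For illustration, here is a minimal numpy sketch of that ensemble averaging; the signal, sample rate, FFT length and overlap are all placeholder assumptions:

import numpy as np

fs = 8000                           # assumed sample rate
x = np.random.randn(fs)             # stand-in for your stationary signal
nfft, hop = 512, 256                # 50% overlap
win = np.hanning(nfft)

spectra = []
for start in range(0, len(x) - nfft + 1, hop):
    seg = x[start:start + nfft] * win
    spectra.append(np.abs(np.fft.rfft(seg)))

avg_spectrum = np.mean(spectra, axis=0)    # reduced-variance magnitude estimate
freqs = np.fft.rfftfreq(nfft, d=1.0 / fs)  # bin frequencies in Hz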
Related
I have time series data that I want to decompose into its intrinsic frequencies. After running FFT on the series, I am left with a periodogram with the main frequencies.
Is there any way to extract a specific frequency and convert it back into the original time-domain?
For example, if I find a 7-day frequency, I can understand that this is a weekly cycle, however, is there any way to understand on what day the peaks occur?
Use a complex FFT instead of a scalar periodogram. The magnitude of a complex FFT shows the peaks, but the complex inverse (complex IFFT) of a single complex FFT peak (which might be more than one bin wide) will retain the phase information and result in the proper time relationship of a particular cyclic component to the original time series.
Note: The input to the complex IFFT needs to retain conjugate symmetry for the result to be real instead of complex (except for rounding/quantization noise).
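As a hedged illustration, here is a small numpy sketch of that idea on synthetic data with a weekly cycle; the series length, noise level and single-bin peak selection are assumptions (a real peak may span several bins):

import numpy as np

n = 364
t = np.arange(n)
x = np.sin(2 * np.pi * t / 7 + 1.0) + 0.5 * np.random.randn(n)  # weekly cycle

X = np.fft.fft(x)
k = np.argmax(np.abs(X[1:n // 2])) + 1   # dominant positive-frequency bin

Y = np.zeros_like(X)
Y[k] = X[k]
Y[-k] = X[-k]                            # conjugate partner keeps the result real
component = np.fft.ifft(Y).real          # weekly component with correct phase
peak_day = np.argmax(component[:7])      # day within the week where it peaks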
I'm currently working on a program in C++ in which I am computing the time varying FFT of a wav file. I have a question regarding plotting the results of an FFT.
Say for example I have a 70 Hz signal that is produced by some instrument with certain harmonics. Even though I say this signal is 70 Hz, it's a real signal, and I assume there will be some randomness in how that 70 Hz frequency varies. Say I sample it for 1 second at a sample rate of 20 kHz. I realize the sample period probably doesn't need to be 1 second, but bear with me.
Because I now have 20000 samples, when I compute the FFT I will have 20000 (or 19999) frequency bins. Let's also assume that my sample rate in conjunction with some windowing technique minimizes spectral leakage.
My question then: Will the FFT still produce a relatively ideal impulse at 70 Hz? Or will there 'appear to be' spectral leakage caused by the randomness of the original signal? In other words, what does the FFT of a sinusoid whose frequency is a random variable look like?
Some of the more common modulation schemes will add sidebands that carry the information in the modulation. Depending on the amount and type of modulation with respect to the length of the FFT, the sidebands can either appear separate from the FFT peak, or just "fatten" a single peak.
Your spectrum will appear broadened, and this happens in the real world. Look, for example, at the Voigt profile, which is a Lorentzian (the result of an ideal exponential decay) convolved with a Gaussian of a certain width, the width being determined by stochastic fluctuations, e.g. the Doppler effect on molecules in a gas that is being probed by a narrow-band laser.
You will not get an 'ideal' frequency peak either way. The limit for the resolution of the FFT is one frequency bin (the frequency resolution being the inverse of the time-vector length), but even that (as @xvan pointed out) is in general broadened by the window function. If your window is nonexistent, i.e. it is in fact a rectangular window of the length of the time vector, then your spectral peaks are convolved with a sinc function and thus broadened.
The best way to visualize this is to make a long vector and plot a spectrogram (often shown for audio signals) with enough resolution that you can see the individual variation. The FFT of the overall signal is then the projection of the moving peaks onto the vertical axis of the spectrogram. The FFT of a given time vector has no time resolution; it sums up all frequencies that occur during the time you transform. So the spectrogram (often people simply use the STFT, short-time Fourier transform) has the 'full' resolution at any given time, i.e. the narrow lineshape you expect, while the FFT of the full time vector shows the algebraic sum of all your lineshapes and therefore appears broadened.
To sum it up there are two separate effects:
a) broadening from the window function (as the commenters 1 and 2 pointed out)
b) broadening from the effect of frequency fluctuation that you are trying to simulate and that happens in real life (e.g. you sitting on a swing while receiving a radio signal).
Finally, note the significance of @xvan's comment: phi = phi(t). If the phase angle is time-dependent then it has a derivative that is not zero. dphi/dt is a frequency shift, so your instantaneous frequency becomes f0 + dphi/dt.
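To see both effects at once, here is a small, self-contained numpy sketch (all parameters are assumptions, including the strength of the frequency wander): a 70 Hz tone with a randomly drifting frequency, sampled at 20 kHz for 1 s. The full-length FFT shows a broadened peak, while the spectrogram resolves the wander over time.

import numpy as np
import matplotlib.pyplot as plt

fs, f0 = 20000, 70.0
t = np.arange(fs) / fs
f_inst = f0 + np.cumsum(np.random.randn(fs)) * 0.05  # slowly wandering frequency
phase = 2 * np.pi * np.cumsum(f_inst) / fs
x = np.sin(phase)

# Full-length FFT: the 70 Hz peak appears broadened by the fluctuation.
X = np.abs(np.fft.rfft(x * np.hanning(len(x))))
f = np.fft.rfftfreq(len(x), d=1.0 / fs)
plt.semilogy(f[:200], X[:200])

# A spectrogram resolves the wandering peak over time instead.
plt.figure()
plt.specgram(x, NFFT=2048, Fs=fs, noverlap=1024)
plt.ylim(0, 200)
plt.show()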
I am using OpenCV to implement a finger tracking system, and I use
calcOpticalFlowPyrLK(pGmask,nGmask,fingers,track,status,err);
to perform LK tracking.
The concept I am not clear about: after I implement the LK tracker, how should I detect the movement of the fingers? Also, the tracker takes the last frame and the current frame; how do I detect a series of actions or a continuous gesture, e.g. within 5 frames?
The 4th parameter of calcOpticalFlowPyrLK (here track) will contain the calculated new positions of input features in the second image (here nGmask).
In the simple case, you can estimate the centroids of fingers and track separately, from which you can infer the movement, as the sketch below shows. A decision can be made from the direction and magnitude of the vector pointing from the centroid of fingers to the centroid of track.
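Here is a sketch of that idea using the Python cv2 bindings of the same call; the frames and points are synthetic placeholders, and in the real program they would come from your camera and finger detector:

import numpy as np
import cv2

# Synthetic pair of frames: a bright blob that moves right and down.
prev_gray = np.zeros((240, 320), np.uint8)
next_gray = np.zeros((240, 320), np.uint8)
cv2.circle(prev_gray, (100, 100), 10, 255, -1)
cv2.circle(next_gray, (110, 105), 10, 255, -1)
fingers = np.array([[[100.0, 100.0]]], np.float32)   # tracked feature points

track, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, fingers, None)

good_old = fingers[status.ravel() == 1].reshape(-1, 2)
good_new = track[status.ravel() == 1].reshape(-1, 2)

v = good_new.mean(axis=0) - good_old.mean(axis=0)    # centroid displacement
direction = np.arctan2(v[1], v[0])                   # radians
magnitude = np.hypot(v[0], v[1])                     # pixels per frame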
Furthermore, complex movements can be considered as time series, because a movement consists of successive measurements made over a time interval. These measurements could be the direction and magnitude of the vector mentioned above. So any movement can be represented as below:
("label of movement", time_series), where
time_series = {(d1, m1), (d2, m2), ..., (dn, mn)}, where
di is direction and mi is magnitude of the ith vector (i=1..n)
So the time series consists of n * 2 measurements (sampling n times); the only remaining question is how to recognize movements.
If you have prior information about the movement, i.e. you know how to perform a circular movement, write a letter, etc., then the question can be reduced to: how to align time series to one another?
Here comes the well-known Dynamic Time Warping (DTW). It can also be considered a generative model, but it is used between pairs of sequences. DTW is an algorithm for measuring similarity between two temporal sequences which may vary in time or speed (as in our case).
In general, DTW calculates an optimal match between two given time series with certain restrictions. The sequences are warped non-linearly in the time dimension to determine a measure of their similarity independent of certain non-linear variations in the time dimension.
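As a minimal illustration, here is a textbook DTW distance in Python over sequences of (direction, magnitude) pairs; the gesture data is made up:

import numpy as np

def dtw_distance(a, b):
    """a, b: arrays of shape (n, 2) and (m, 2) of (direction, magnitude) pairs."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            # Warping step: match, or stretch either sequence by one sample.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

template = np.array([(0.0, 1.0), (0.5, 1.2), (1.0, 1.1)])    # known gesture
observed = np.array([(0.1, 0.9), (0.4, 1.3), (0.6, 1.2), (1.1, 1.0)])
print(dtw_distance(template, observed))   # small value => similar gesture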
I have some geographical trajectories sampled to analyze, and I calculated the histogram of the data in the spatial and temporal dimensions, which yielded a time-domain feature for each spatial element. I want to perform a discrete FFT to transform the time-domain feature into a frequency-domain feature (which I think may be more robust), and then run some classification or clustering algorithms.
But I'm not sure which descriptor to use as the frequency-domain feature, since there are the amplitude spectrum, power spectrum and phase spectrum of a signal, and I've read some references but am still confused about their significance. And what distance (similarity) function should be used as the measure when running learning algorithms on the frequency-domain feature vectors (Euclidean distance? Cosine distance? A Gaussian function? A chi-square kernel or something else?)
Hope someone can give me a clue or some material that I can refer to, thanks~
Edit
Thanks to @DrKoch, I chose the spatial element with the largest L1 norm and plotted its log power spectrum in Python, and it did show some prominent peaks. Below are my code and the figure:
import numpy as np
import matplotlib.pyplot as plt
# signal: 1-D numpy array holding the hourly histogram of one spatial element
sp = np.fft.fft(signal)
freq = np.fft.fftfreq(signal.shape[-1], d=1.)  # time slot of histogram is 1 hour
plt.plot(freq, np.log10(np.abs(sp) ** 2))
plt.show()
And I have several trivial questions to ask to make sure I totally understand your suggestion:
In your second suggestion, you said "ignore all these values."
Do you mean the horizontal line represents the threshold and all values below it should be set to zero?
"you may search for the two, three largest peaks and use their location and probably widths as 'Features' for further classification."
I'm a little bit confused about the meaning of "location" and "width": does "location" refer to the log value of the power spectrum (y-axis) and "width" to the frequency (x-axis)? If so, how do I combine them into a feature vector, and how do I compare two feature vectors with "a similar frequency and similar widths"?
Edit
I replaced np.fft.fft with np.fft.rfft to calculate only the positive-frequency half and plotted both the power spectrum and the log power spectrum.
code:
f, axarr = plt.subplots(2, sharex=True)  # plt.subplots, not plt.subplot
# with np.fft.rfft, freq must come from np.fft.rfftfreq to match sp's length
axarr[0].plot(freq, np.abs(sp) ** 2)
axarr[1].plot(freq, np.log10(np.abs(sp) ** 2))
plt.show()
figure:
Please correct me if I'm wrong:
I think I should keep the last four peaks in the first figure with power = np.abs(sp) ** 2 and power[power < threshold] = 0, because the log power spectrum reduces the difference among the components, and then use the log spectrum of the new power as the feature vector to feed the classifiers.
I also saw some references suggest applying a window function (e.g. a Hamming window) before doing the FFT to avoid spectral leakage. My raw data is sampled every 5 ~ 15 seconds and I've applied a histogram on the sampling times; is that method equivalent to applying a window function, or do I still need to apply one to the histogram data?
Generally you should extract just a small number of "Features" out of the complete FFT spectrum.
First: Use the log power spec.
Complex numbers and phase are useless in these circumstances, because they depend on where you start/stop your data acquisition (among many other things).
Second: you will see a "Noise Level" e.g. most values are below a certain threshold, ignore all these values.
Third: If you are lucky, e.g. your data has some harmonic content (cycles, repetitions) you will see a few prominent Peaks.
If there are clear peaks, it is even easier to detect the noise: Everything between the peaks should be considered noise.
Now you may search for the two, three largest peaks and use their location and probably widths as "Features" for further classification.
Location is the x-value of the peak, i.e. the 'frequency'. It says something about how 'fast' your cycles are in the input data.
If your cycles don't have a constant frequency during the measuring interval (or you use a window before calculating the FFT), the peak will be broader than one bin. So this width of the peak says something about the 'stability' of your cycles.
Based on this: two patterns are similar if the biggest peaks of both have a similar frequency and similar widths, and so on.
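A hedged sketch of this feature extraction in Python, using scipy.signal.find_peaks; the synthetic data, the noise-level rule and the choice of three peaks are assumptions:

import numpy as np
from scipy.signal import find_peaks

t = np.arange(512)
signal = (np.sin(2 * np.pi * 20 / 512 * t)            # f0 and its harmonic
          + 0.5 * np.sin(2 * np.pi * 40 / 512 * t)
          + 0.3 * np.random.randn(512))

power = np.abs(np.fft.rfft(signal)) ** 2
log_power = np.log10(power + 1e-12)
freq = np.fft.rfftfreq(len(signal), d=1.0)             # 1-hour time slots

noise_level = np.median(log_power)                     # crude noise estimate
peaks, props = find_peaks(log_power, height=noise_level + 1.0, width=1)

order = np.argsort(props["peak_heights"])[::-1][:3]    # three largest peaks
features = np.column_stack([freq[peaks[order]],        # locations (x-values)
                            props["widths"][order]])   # widths in bins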
EDIT
Very interesting to see a logarithmic power spectrum of one of your examples.
Now it's clear that your input contains a single harmonic (periodic, oscillating) component with a frequency (repetition rate, cycle duration) of about f0 = 0.04.
(This is relative frequency, proportional to your sampling frequency, the inverse of the time between individual measurement points.)
It is not a pure sine wave, but some "interesting" waveform. Such waveforms produce peaks at 1*f0, 2*f0, 3*f0 and so on.
(So using an FFT for further analysis turns out to be a very good idea.)
At this point you should produce spectra of several measurements and see what makes measurements similar and how different measurements differ. What are the "important" features that distinguish your measurements? Things to look out for:
Absolute amplitude: Height of the prominent (leftmost, highest) peaks.
Pitch (main cycle rate, speed of changes): this is the position of the first peak, or equivalently the distance between consecutive peaks.
Exact Waveform: Relative amplitude of the first few peaks.
If your most important feature is absolute amplitude, you're better off calculating the RMS (root mean square) level of your input signal.
If pitch is important, you're better off calculating the ACF (auto-correlation function) of your input signal.
Don't focus on the leftmost peaks of the ACF: these come from the high-frequency components in your input and tend to vary as much as the noise floor.
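For concreteness, a quick sketch of both alternatives on placeholder data:

import numpy as np

x = np.random.randn(1024)                 # stand-in for your input signal

rms = np.sqrt(np.mean(x ** 2))            # absolute-amplitude feature

acf = np.correlate(x, x, mode="full")     # auto-correlation function
acf = acf[len(acf) // 2:]                 # keep non-negative lags only
acf /= acf[0]                             # normalize so lag 0 == 1
# The first strong peak after lag 0 estimates the main cycle length;
# ignore the small-lag region, which reflects high-frequency content.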
Windows
For a high-quality analysis it is important to apply a window to the input data before applying the FFT. This reduces the influence of the "jump" between the end of your input vector and the beginning of your input vector, because the FFT treats the input as a single cycle.
There are several popular windows which mark different choices of an unavoidable trade-off: Precision of a single peak vs. level of sidelobes:
You chose a "rectangular window" (equivalent to no window at all; just start/stop your measurement). This gives excellent precision for your peaks, which now have a width of just one sample. Your sidelobes (the small peaks left and right of your main peaks) are at -21 dB, very tolerable given your input data. In your case this is an excellent choice.
A Hanning window is a single raised-cosine cycle. It makes your peaks slightly broader but reduces side-lobe levels.
The Hamming window (a cosine wave slightly raised above 0.0) produces even broader peaks, but suppresses side-lobes to -42 dB. This is a good choice if you expect further weak (but important) components between your main peaks, or generally if you have complicated signals like speech, music and so on.
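A small numpy sketch comparing the three windows on an off-bin tone (all parameters assumed), which makes the peak-width vs. side-lobe trade-off visible:

import numpy as np
import matplotlib.pyplot as plt

n = 256
t = np.arange(n)
x = np.sin(2 * np.pi * 0.1234 * t)        # off-bin frequency => leakage

for name, w in [("rectangular", np.ones(n)),
                ("Hanning", np.hanning(n)),
                ("Hamming", np.hamming(n))]:
    spec = 20 * np.log10(np.abs(np.fft.rfft(x * w)) + 1e-12)
    plt.plot(np.fft.rfftfreq(n), spec - spec.max(), label=name)

plt.legend(); plt.xlabel("relative frequency"); plt.ylabel("dB")
plt.show()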
Edit: Scaling
Correct scaling of a spectrum is a complicated thing, because the values of the FFT bins depend on many things like sampling rate, length of the FFT, the window, and even implementation details of the FFT algorithm (several different conventions are in accepted use).
After all, the FFT should respect the underlying conservation of energy: the RMS of the input signal should be the same as the RMS (energy) of the spectrum (this is Parseval's theorem).
On the other hand: if used for classification it is enough to maintain relative amplitudes. As long as the parameters mentioned above do not change, the result can be used for classification without further scaling.
Assume a sequence of numbers (wave-like data) on which I perform the DFT (or FFT). The next step is to find the frequencies that correspond to the real frequencies present in the data. As we know, the DFT output has real and imaginary parts a[i] and b[i]. If we look at the spectrum, sqrt(a[i]^2 + b[i]^2), then a maximum in it corresponds to a frequency that is present in the data. The question is how to find all such frequencies from the DFT. The problem arises when there are many other peaks that can be falsely selected.
I had a similar problem when doing spectral analysis processing of data when I was writing my honours thesis.
You are right: To find dominant frequencies you generally only need to look at the magnitude of the complex value in the DFT.
Unfortunately, you pretty much have to write some sort of intelligent algorithm which will identify the peaks (frequencies). The way the algorithm works is highly dependent on what the DFT looks like for your application. My DFTs all had similar characteristics, so it wasn't too difficult to put together a heuristic algorithm. If your DFT can take on any form, then you will probably get a lot of false positives and/or false negatives.
The way I did it was to identify regions in the DFT with high magnitude (peaks) which were surrounded by low magnitude (troughs). You can define the minimum difference between peaks and troughs (the sensitivity) as a constant times the standard deviation of the data. Additionally, you can say that any peaks that fall below a certain magnitude (threshold) are ignored altogether, as they are just noise.
Of course, the above technique will only really work if you have relatively well defined frequencies in your data. If your DFT is highly random, then you will need to take extra care to set the sensitivity and threshold carefully.
Don't forget that the magnitude spectrum of real data is symmetric, so you only need to look at half of it.
Once you have identified the frequencies in your DFT, don't forget to convert them into the units you want. If you have n samples taken with time discretisation dt, then bin k (counting the first data point as bin 0) corresponds to a frequency of k/(n*dt) cycles per time unit, so a peak at data point 5 (bin 4) sits at 4/(n*dt); multiply by 2*pi if you want radians per time unit.
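Here is a small Python sketch of the heuristic described above; the 3-sigma sensitivity rule and the test signal are assumptions:

import numpy as np

n, dt = 1000, 0.01                        # samples and time step
t = np.arange(n) * dt
x = np.sin(2 * np.pi * 5.0 * t) + 0.3 * np.random.randn(n)

mag = np.abs(np.fft.rfft(x))              # rfft already drops the symmetric half
threshold = mag.mean() + 3 * mag.std()    # sensitivity: 3 standard deviations
peak_bins = np.where(mag > threshold)[0]

freqs = peak_bins / (n * dt)              # bin k -> k/(n*dt) cycles per time unit
print(freqs)                              # should report approximately 5 Hz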