Optimal sampling frequency for guitar note detection - signal-processing

I am running an FFT to detect which musical note is being played on a guitar.
The frequencies I am interested in lie in the range 65.41 Hz (C2) to 1864.7 Hz (A#6).
If I set the sampling frequency of the input to 16 kHz, an N-point FFT yields N bins spaced linearly from 0 Hz to 16 kHz. Everything I am interested in falls within roughly the first N/8 bins; the other 7N/8 bins are of no use to me, and they actually decrease my resolution.
From the Nyquist sampling theorem (https://en.wikipedia.org/wiki/Nyquist_frequency), the sampling frequency needed is just twice the maximum frequency of interest, which in my case would be about 4 kHz.
Is 4 kHz really the ideal sampling frequency for a guitar tuning app?
Intuitively, one would expect a higher sampling frequency to give more accurate results, yet in this case it seems that a lower sampling frequency is better for the resolution. Regards.

You are confusing the pitch of a guitar note with spectral frequency. A guitar generates lots of overtones and harmonics at much higher frequencies than the pitch of the played note. Those higher harmonics and overtones, more than the possibly weak fundamental in some cases, are what the human ear hears and interprets as the lower perceived pitch.
Any overtones and harmonics around or above 2 kHz that are not completely low-pass filtered out before sampling at 4 kHz will alias, corrupting your sampled data and its spectrum.
If you want to create an accurate tuner, use a pitch estimation algorithm, not an FFT peak-bin frequency estimator. And depending on which pitch estimation method you choose, a higher density of samples per unit time may give finer accuracy, greater reliability in background noise, or faster response.
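As a rough illustration of the difference, here is a minimal autocorrelation-based pitch estimate in Python/NumPy. This is just a sketch of one simple pitch estimation approach, not necessarily the method the answer has in mind; the frame length, sample rate, and search range are example values I am assuming:

    import numpy as np

    def estimate_pitch_autocorr(frame, fs, fmin=65.0, fmax=1900.0):
        """Crude pitch estimate: the lag of the autocorrelation peak in the search range."""
        frame = frame - np.mean(frame)
        # Full autocorrelation, keeping non-negative lags only.
        ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
        lag_min = int(fs / fmax)                 # shortest period of interest
        lag_max = int(fs / fmin)                 # longest period of interest
        lag = lag_min + np.argmax(ac[lag_min:lag_max])
        return fs / lag

    # Example: a 110 Hz "note" whose overtones are stronger than its fundamental.
    fs = 44100
    t = np.arange(0, 0.1, 1 / fs)
    x = (0.2 * np.sin(2 * np.pi * 110 * t)
         + 1.0 * np.sin(2 * np.pi * 220 * t)
         + 0.8 * np.sin(2 * np.pi * 330 * t))
    print(estimate_pitch_autocorr(x, fs))        # ~110 Hz despite the weak fundamental

A naive FFT peak-bin picker would report 220 Hz for this signal, which is exactly the kind of octave error the answer is warning about.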

Is 4 kHz really the ideal sampling frequency for a guitar tuning app?
You've been misreading Nyquist's theorem if you ask it like that.
It states that any sampling frequency above twice your maximum signal frequency allows you to perfectly reconstruct the original signal. So there's no "ideal" frequency, just a set of frequencies that are sufficient. What is ideal therefore depends on a lot of other things: mainly what your digitizer actually supports (hint: most sound cards can do 44.1 kHz, but not 4 kHz), what margin you want to leave for filters and the like to work with, and how much processing power you can spend (hint: modern smartphones, PCs, and even pocket calculators have no real trouble processing a couple hundred kHz in real time).
Also note that @hotpaw2 is right: the harmonics matter, and they are multiples of the fundamental frequency.
However, in this case, it seems that a lower sampling frequency is better for the resolution.
No. Wherever that idea comes from, it's wrong. A basic result of information theory is that you cannot make worse estimates from more information. An oversampled signal is simply more information about the same signal.
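A quick numerical way to see this (a sketch, assuming NumPy; the half-second observation window is an arbitrary example): for a fixed observation time T, the FFT bin spacing is 1/T no matter what the sample rate is, so lowering the rate does not improve resolution, it only discards bandwidth.

    import numpy as np

    T = 0.5                                # observation time in seconds (example value)
    for fs in (4000, 16000, 44100):
        N = int(T * fs)                    # samples captured in that time
        bin_spacing = fs / N               # always equals 1/T = 2 Hz here
        usable_bins = N // 2               # bins below the Nyquist frequency fs/2
        print(fs, N, bin_spacing, usable_bins)

A higher rate just adds more bins above the band of interest; the bins covering the guitar range stay exactly as fine.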

Yes, if all you are interested in is frequencies up to 2 kHz, then you only need a sampling frequency of 4 kHz. This should include an anti-aliasing filter in front of the ADC (or any downconverter) to prevent higher-frequency components from aliasing into lower frequencies.
If all you are interested in is one or two specific frequencies, then you may want to look at the Goertzel algorithm, which is more efficient than an FFT for a single frequency. Also, the chirp-Z transform can be used to get what is effectively a zoomed FFT (higher resolution over a smaller bandwidth without the computational cost of a full FFT at the same resolution). You may want to check out this CZT tutorial.
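For reference, here is what the single-bin Goertzel recurrence looks like in Python. This is a generic sketch of the standard algorithm, not code taken from the CZT tutorial; the 8 kHz rate and the test tone are assumptions for the example:

    import numpy as np

    def goertzel_power(x, fs, f_target):
        """Power of x at (approximately) f_target, via the Goertzel recurrence."""
        n = len(x)
        k = int(0.5 + n * f_target / fs)       # nearest DFT bin to the target frequency
        coeff = 2.0 * np.cos(2.0 * np.pi * k / n)
        s_prev, s_prev2 = 0.0, 0.0
        for sample in x:
            s = sample + coeff * s_prev - s_prev2
            s_prev2, s_prev = s_prev, s
        return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

    # Example: a 440 Hz tone shows a large value at 440 Hz and almost nothing at 523 Hz.
    fs = 8000
    t = np.arange(0, 0.1, 1 / fs)
    x = np.sin(2 * np.pi * 440 * t)
    print(goertzel_power(x, fs, 440.0), goertzel_power(x, fs, 523.25))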

Related

Sinusoids with frequencies that are random variables - What does the FFT impulse look like?

I'm currently working on a program in C++ in which I am computing the time-varying FFT of a WAV file. I have a question regarding plotting the results of an FFT.
Say, for example, I have a 70 Hz signal produced by some instrument with certain harmonics. Even though I call this a 70 Hz signal, it's a real signal, and I assume there will be some randomness in how that 70 Hz frequency varies. Say I sample it for 1 second at a sample rate of 20 kHz. I realize the sample period probably doesn't need to be 1 second, but bear with me.
Because I now have 20000 samples, when I compute the FFT I will have 20000 (or 19999) frequency bins. Let's also assume that my sample rate, in conjunction with some windowing technique, minimizes spectral leakage.
My question, then: will the FFT still produce a relatively ideal impulse at 70 Hz? Or will there "appear to be" spectral leakage caused by the randomness of the original signal? In other words, what does the FFT of a sinusoid whose frequency is a random variable look like?
Some of the more common modulation schemes will add sidebands that carry the information in the modulation. Depending on the amount and type of modulation with respect to the length of the FFT, the sidebands can either appear separate from the FFT peak, or just "fatten" a single peak.
Your spectrum will appear broadened, and this happens in the real world. Look, for example, at the Voigt profile, which is a Lorentzian (the result of an ideal exponential decay) convolved with a Gaussian of a certain width, the width being determined by stochastic fluctuations, e.g. the Doppler effect on molecules in a gas probed by a narrow-band laser.
You will not get an 'ideal' frequency peak either way. The limit for the resolution of the FFT is one frequency bin (the frequency resolution being the inverse of the length of the time vector), but even that, as @xvan pointed out, is in general broadened by the window function. If your window is "nonexistent", i.e. it is in fact a rectangular window the length of the time vector, then you'll get spectral peaks convolved with a sinc function, and thus broadened.
The best way to visualize this is to make a long vector and plot a spectrogram (often shown for audio signals) with enough resolution so you can see the individual variation. The FFT of the overall signal is then the projection of the moving peaks onto the vertical axis of the spectrogram. The FFT of a given time vector does not have any time resolution, but sums up all frequencies that happen during the time you FFT. So the spectrogram (often people simply use the STFT, short time fourier transform) has at any given time the 'full' resolution, i.e. narrow lineshape that you expect. The FFT of the full time vector shows the algebraic sum of all your lineshapes and therefore appears broadened.
To sum up, there are two separate effects:
a) broadening from the window function (as commenters 1 and 2 pointed out), and
b) broadening from the frequency fluctuations you are trying to simulate, which also happen in real life (e.g. you sitting on a swing while receiving a radio signal).
Finally, note the significance of @xvan's comment: φ = φ(t). If the phase angle is time-dependent, then it has a non-zero derivative. dφ/dt acts as a frequency shift, so your instantaneous frequency becomes f0 + (1/2π)·dφ/dt (with φ in radians).
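To see effect (b) numerically, one can compare the spectrum of a perfectly stable 70 Hz tone with one whose frequency wanders slowly. This is just a sketch (Python/NumPy); the random-walk jitter model and the 10% "linewidth" threshold are arbitrary choices for illustration:

    import numpy as np

    fs, T, f0 = 20000, 1.0, 70.0
    t = np.arange(0, T, 1 / fs)

    rng = np.random.default_rng(0)
    drift = np.cumsum(rng.normal(0.0, 0.05, t.size))   # slow random walk, a few Hz wide
    phase = 2 * np.pi * np.cumsum((f0 + drift) / fs)   # integrate frequency to get phase
    pure = np.sin(2 * np.pi * f0 * t)
    jittered = np.sin(phase)

    window = np.hanning(t.size)
    def width_in_bins(x):
        mag = np.abs(np.fft.rfft(x * window))
        return np.count_nonzero(mag > 0.1 * mag.max())  # crude "linewidth" in 1 Hz bins

    print(width_in_bins(pure), width_in_bins(jittered)) # the jittered tone is broader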

Analyzing Pulse Train

I am trying to post-process pulse train data. It is a 0 to 5 V square wave whose pulse frequency corresponds to a physical measurement. During a measurement I may see anywhere from 100 to 10,000 pulses per second, and the duty cycle changes.
I wrote a pulse counter function to analyze the pulse data in the time domain, but the result was very noisy. I suspect that an FFT may be appropriate, though I have never really done anything like this before.
Has anybody done anything similar? What would be the broad methodology for analyzing the pulse train in the frequency domain? Would it be best to take an FFT at specific time intervals (for instance, every second's worth of data)?
An FFT might be useful if your pulse train were stationary over some interval (the length of an FFT) and embedded in noise. Otherwise, why not just use the reciprocal of the time between rising edges?
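A minimal version of that time-domain approach might look like the following sketch (Python/NumPy). It assumes a reasonably clean logic-level signal; the 2.5 V threshold and the test pulse train are assumptions for the example:

    import numpy as np

    def pulse_rates_from_edges(x, fs, threshold=2.5):
        """One pulse-rate estimate per interval between consecutive rising edges."""
        logic = x > threshold
        rising = np.flatnonzero(~logic[:-1] & logic[1:]) + 1   # 0 -> 1 transitions
        periods = np.diff(rising) / fs                         # seconds between edges
        return 1.0 / periods

    # Example: a 1 kHz, 30% duty-cycle pulse train sampled at 100 kHz.
    fs = 100000
    t = np.arange(0, 0.01, 1 / fs)
    x = 5.0 * ((t * 1000.0) % 1.0 < 0.3)
    print(np.median(pulse_rates_from_edges(x, fs)))            # 1000.0

The per-edge rates can then be median-filtered or averaged over whatever time constant suits the measurement, which may be less noisy than counting pulses in fixed windows.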

Effect of sampling rate of a signal on its Fourier Transform

I am running some experiments in MATLAB, and I have noticed that, keeping the period fixed, increasing the sampling rate of a sine signal makes the shifted copies of the waveform in the Fourier transform more distinct. They move further apart, which I think makes sense: as the sampling rate increases, the gap between the Nyquist rate and the sampling rate increases too, which is the opposite of what happens with aliasing. I have also noticed that the amplitudes of the peaks of the transform increase as the sampling rate increases. Even the DC component (frequency = 0) changes: it shows as 0 at one sampling rate, but when I increase the sampling rate it is no longer 0.
All the sampling rates are above the Nyquist rate. It seems odd to me that the Fourier transform changes shape, since according to the sampling theorem the original signal can be recovered whenever the sampling rate is above the Nyquist rate, no matter whether it is 2 times the Nyquist rate or 20 times. Wouldn't a different Fourier waveform mean a different recovered signal?
I am wondering, formally, what the impact of the sampling rate is.
Thank you.
You're conflating conversion between time-discrete and time-continuous forms of a signal with the reversibility of a transform.
The only guarantee is this: for a given transform of some discrete signal, its inverse transform will yield the "same" discrete signal back. The discrete signal is abstracted from any notion of frequency. All the transform does is take some vector of complex values and give back a dimensionally matching vector of complex values. You can then take this vector, run an inverse transform on it, and get the "original" vector back. I use quotes because there may be some numerical errors, depending on the implementation. As you can see, the word frequency appears nowhere, because it is irrelevant here.
So, your real question is then, how to get an FFT with values that are useful for something besides getting the original discrete signal back through an inverse transform. Say, how to get an FFT that will tell a human something nice about the frequency content of a signal. A transform "tweaked" for human usefulness, or for use in further signal processing such as automated music transcription, can't reproduce the original signal anymore after inversion. We're trading off veracity for usefulness. Detailed discussion of this can't really fit into one answer, and is off topic here anyway.
Another of your real questions is how to go between a continuous signal and a discrete signal - how to sample the continuous signal, and how to reconstruct it from its discrete representation. The reconstruction means a function (or process) that will yield the values the signal had at points in time between the samples. Again, this is a big topic.
You are seeing several things when you increase the sample rate:
- Most (forward) FFT implementations have an implicit scaling factor of N (sometimes sqrt(N)). If you are increasing your FFT size as you increase the sample rate (i.e. keeping the time window constant), then the apparent magnitude of the peaks in the FFT will increase. When calculating absolute magnitude values you normally need to take this scaling factor into account.
- I'm guessing that you are not currently applying a window function prior to the FFT. This results in "smearing" of the spectrum due to spectral leakage, and its exact nature depends strongly on the relationship between the sample rate and the frequencies of the various components in your signal. Apply a window function and the spectrum should look much more consistent as you vary the sample rate. Both effects are easy to reproduce, as shown in the sketch below.
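Here is a small sketch of both points (Python/NumPy rather than MATLAB; the 1 kHz test tone and half-second window are example values): the raw peak magnitude grows with N, while a Hann-windowed, gain-normalized magnitude stays put as the sample rate changes.

    import numpy as np

    f0, T = 1000.0, 0.5
    for fs in (8000, 16000, 48000):
        t = np.arange(0, T, 1 / fs)
        x = np.sin(2 * np.pi * f0 * t)

        raw = np.abs(np.fft.rfft(x)).max()              # grows with N (= fs * T)
        win = np.hanning(t.size)
        # Dividing by the window's coherent gain (its sum) makes the on-bin peak
        # ~0.5 (half the sine's amplitude) regardless of the sample rate.
        scaled = (np.abs(np.fft.rfft(x * win)) / win.sum()).max()
        print(fs, round(raw, 1), round(scaled, 3))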

Is a "rolling" FFT possible and could it be of use?

Lately I have been experimenting with audio and FFTs, specifically the Minim library in Processing (basically Java, not that it's particularly important for this question). What I have come to understand is that with a buffer/sample size N and sample rate K, after performing a forward FFT I will get N frequency bins (only N/2 of which carry usable data, and in fact Minim only returns N/2 bins), linearly spaced, representing the spectrum from 0 to K/2 Hz.
With Minim (as with other typical FFT implementations) you wait to gather N samples, perform the forward transform, then wait for N more samples, and so on. To get a reasonable frame rate (for audio visualizations, beat detection, etc.), I must use a small sample size relative to the sampling frequency.
The problem with this, though, is that a small sample size results in very low resolution at the low end of the spectrum when I compute logarithmically spaced averages (since a bass octave is much narrower than a high-pitched octave).
I was wondering whether a possible way to squeeze out more apparent resolution would be to perform FFTs more often than every N samples, on a slightly larger sample size than I am currently using (i.e. with an input buffer of size 2048, every 100 samples add those samples to the buffer, remove the oldest 100 samples, and perform an FFT). It seems like this would create a rolling-average type of effect (which I can live with), but I'm not too sure.
What would be the pros and cons of this approach? Are there any other ways I could increase my apparent resolution while still being able to do real-time visualization and analysis?
That approach goes by the name short-time Fourier transform (STFT). You can find answers to your question on Wikipedia: https://en.wikipedia.org/wiki/Short-time_Fourier_transform
It works great in practice, and you can even get better resolution out of it than you would expect from a rolling window by using the phase difference between successive FFTs.
Here is an article that does pitch shifting of audio signals; how to get higher frequency resolution is well explained there: http://www.dspdimension.com/admin/pitch-shifting-using-the-ft/
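A bare-bones version of the sliding-buffer idea from the question, expressed as an STFT, could look like this sketch (Python/NumPy here rather than Processing/Minim; the 2048-sample buffer and 100-sample hop simply mirror the numbers in the question):

    import numpy as np

    def rolling_fft(x, frame_size=2048, hop=100):
        """Magnitude spectra of overlapping, Hann-windowed frames (an STFT)."""
        window = np.hanning(frame_size)
        spectra = []
        for start in range(0, len(x) - frame_size + 1, hop):
            frame = x[start:start + frame_size] * window
            spectra.append(np.abs(np.fft.rfft(frame)))
        return np.array(spectra)           # shape: (num_frames, frame_size // 2 + 1)

    # Example: one new spectrum every 100 samples instead of one every 2048.
    fs = 44100
    t = np.arange(0, 1.0, 1 / fs)
    print(rolling_fft(np.sin(2 * np.pi * 440 * t)).shape)

Note that the frequency resolution is still set by the 2048-sample frame; the overlap only increases how often you get a spectrum.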
We use the approach you describe, which we call overlapping, to make sure all the rows of a spectral waterfall are filled in. Overlap can be used to provide spectra that are spaced as closely as a single sample interval.
The primary disadvantage is the extra processing to produce all those spectra.
On the positive side, while the time resolution of each spectrum is still constrained by the FFT size, looking at closely spaced adjacent spectra seems to provide a kind of visual interpolation that, I think, lets you see the data with higher precision.
One common way this is done is to use multiple lengths of windowed FFTs on the same data, short FFTs for good time resolution, much longer FFTs for better frequency resolution of lower frequencies. Then the problem for visualization becomes picking the best FFT result out of several possible at each plot point (such as the highest contrast sub-block, etc.) and blending them attractively.
Most modern processors (in PCs and mobile phones, etc.) can easily do multiple lengths (dozens) of FFTs still in real-time for audio.

Pitch detection using FFT for trumpet

How do I get the frequency using an FFT? What is the right procedure, and what would the code look like?
Pitch detection typically involves measuring the interval between harmonics in the power spectrum. The power spectrum is obtained from the FFT by taking the squared magnitude (re^2 + im^2) of the first N/2 bins. However, there are more sophisticated techniques for pitch detection, such as cepstral analysis, in which we take the FFT of the log of the power spectrum in order to identify periodicity in the spectral peaks.
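As a concrete illustration of the cepstral idea, here is a rough sketch in Python/NumPy. The Hann window, the small epsilon inside the log, and the 100 to 1200 Hz search range are assumptions I am adding for the example, not part of the answer above:

    import numpy as np

    def cepstral_pitch(x, fs, fmin=100.0, fmax=1200.0):
        """Pitch from the peak of the real cepstrum (inverse FFT of the log power spectrum)."""
        spectrum = np.fft.rfft(x * np.hanning(len(x)))
        log_power = np.log(np.abs(spectrum) ** 2 + 1e-12)     # log of re^2 + im^2
        cepstrum = np.abs(np.fft.irfft(log_power))
        # Harmonics spaced f0 apart in frequency produce a cepstral peak at the
        # pitch period (in samples), called the quefrency.
        qmin, qmax = int(fs / fmax), int(fs / fmin)
        period = qmin + np.argmax(cepstrum[qmin:qmax])
        return fs / period

    # Example: a 440 Hz tone with several harmonics.
    fs = 44100
    t = np.arange(0, 0.05, 1 / fs)
    amps = (0.3, 1.0, 0.8, 0.6, 0.4)
    x = sum(a * np.sin(2 * np.pi * 440 * k * t) for k, a in enumerate(amps, 1))
    print(cepstral_pitch(x, fs))                              # ~440 Hz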
A sustained note of a musical instrument is a periodic signal, and our friend Fourier (the second "F" in "FFT") tells us that any periodic signal can be constructed by adding a set of sine waves (generally with different amplitudes, frequencies, and phases). The fundamental is the lowest frequency component and it corresponds to pitch; the remaining components are overtones and are multiples of the fundamental's frequency. It is the relative mixture of fundamental and overtones that determines timbre, or the character of an instrument. A clarinet and a trumpet playing in unison sound "in tune" because they share the same fundamental frequency, however, they are individually identifiable because of their differing timbre (overtone mixture).
For your problem, you could sample the trumpet over a time window, calculate the FFT (which decomposes the sequence of samples into its constituent digital frequencies), and then assert that the pitch is the frequency of the bin with the greatest magnitude. If you desire, this could then be trivially quantized to the nearest musical half step, like E flat. (Look up the FFT on Wikipedia if you don't understand the relationship between the sampling frequency and the resultant frequency bins, or the drawback of having too low a sampling frequency.) This will probably meet your needs because the fundamental component usually has greater energy than any other component. The longer the window, the greater the pitch accuracy, because the bin centers become more closely spaced in frequency. However, if the window is so long that the trumpet changes its pitch appreciably over the duration of the window, the technique's effectiveness will break down considerably.
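A bare-bones sketch of that strongest-bin approach, with quantization to the nearest half step (Python/NumPy; the Hann window, the MIDI-based note-naming helper, and the test tone are my own additions for illustration):

    import numpy as np

    NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

    def strongest_bin_pitch(x, fs):
        """Frequency of the largest-magnitude FFT bin, plus the nearest half step."""
        mag = np.abs(np.fft.rfft(x * np.hanning(len(x))))
        k = 1 + np.argmax(mag[1:])                   # skip the DC bin
        freq = k * fs / len(x)
        midi = int(round(69 + 12 * np.log2(freq / 440.0)))   # nearest MIDI note number
        return freq, NOTE_NAMES[midi % 12] + str(midi // 12 - 1)

    # Example: an E flat 4 (~311.13 Hz) tone analyzed over a one-second window.
    fs = 8000
    t = np.arange(0, 1.0, 1 / fs)
    x = np.sin(2 * np.pi * 311.13 * t)
    print(strongest_bin_pitch(x, fs))                # (311.0, 'D#4'), i.e. E flat 4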
DansTuner is my open source project to solve this problem. I am in fact a trumpet player. It has pitch detection code lifted from Audacity.
I added the org.apache.commons.math.transform.FastFourierTransformer class (Apache Commons Math) to the project and it works perfectly.
Here is a short blog article on non-parametric techniques for estimating the PSD (power spectral density), along with some more detailed links. It might get you started on estimating the PSD and then finding the pitch.
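If you end up going the PSD route, a common non-parametric starting point is Welch's method, e.g. via scipy.signal.welch. A small sketch (the sample rate, segment length, and noisy A#4 test tone are assumptions for the example):

    import numpy as np
    from scipy.signal import welch

    fs = 8000
    t = np.arange(0, 2.0, 1 / fs)
    rng = np.random.default_rng(0)
    x = np.sin(2 * np.pi * 466.16 * t) + 0.5 * rng.standard_normal(t.size)  # A#4 in noise

    # Welch averages windowed, overlapping segments: a smoother PSD estimate
    # at the cost of coarser frequency resolution (fs / nperseg per bin).
    freqs, psd = welch(x, fs=fs, nperseg=2048)
    print(freqs[np.argmax(psd)])    # close to 466 Hz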
