According to what I have read on the internet, the normal range of the fundamental frequency of the female voice is 165 to 255 Hz.
I am using Praat and also a Python library called Parselmouth to get the fundamental frequency values of a female voice in an audio file (.wav). However, I got some values that are over 255 Hz (e.g. 400+ Hz, 500 Hz).
Is it normal to get big values like this?
It is possible, but unlikely, if you are trying to capture the fundamental frequency (F0) of a speaking voice. It sounds more likely that you are capturing a more easily resonating overtone (e.g. F1 or F2) instead.
My experiments with Praat give me the impression that with good parameters it will reliably extract F0.
What you'll want to do is to verify that by comparing the pitch curve with a spectrogram. Here's an example of a fitting made by Praat (female speaker):
You can see from the image that:
- the most prominent frequency seems to be F2,
- around 200 Hz is the likely F0, since there is only noise below that (compared to before/after the segment), and
- Praat has calculated a good estimate of F0 for the voiced speech segments.
If, after a visual inspection, it seems that you are getting wrong results, you can try to tweak the parameters. Window length greatly affects the frequency resolution.
If you can't capture frequencies this low, you should try increasing the window length - the intuition is that it gives the algorithm a better chance at finding slowly changing periodic features in the data.
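For reference, here is a minimal Parselmouth sketch (the file name is hypothetical) that constrains the pitch search range; in Praat the analysis window length is derived from the pitch floor, so lowering the floor effectively lengthens the window:

import parselmouth
snd = parselmouth.Sound("speaker.wav")   # hypothetical input file
# Reject candidates in the formant region by limiting the search range;
# the floor also sets the analysis window length (three periods of it).
pitch = snd.to_pitch(pitch_floor=75.0, pitch_ceiling=300.0)
f0 = pitch.selected_array['frequency']   # in Hz; 0.0 marks unvoiced frames
voiced = f0[f0 > 0]
print(voiced.min(), voiced.max())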
I've been given some digitized sound recordings and asked to plot the sound pressure level per Hz.
The signal is sampled at 40 kHz and the units for the y axis are simply volts.
I've been asked to produce a graph of the SPL as dB/Hz vs Hz.
EDIT: The input units are voltage vs time.
Does this make sense? I thought SPL was a time-domain measure?
If it does make sense how would I go about producing this graph? Apply the dB formula (20 * log10(x) IIRC) and do an FFT on that or...?
What you're describing is a Power Spectral Density. Matlab, for example, has a pwelch function that does literally what you're asking for. To scale to dB SPL/Hz, simply apply 10*log10(psd), where psd is the output of pwelch. Let me know if you need help with the function inputs.
If you're working with a different framework, let me know which one; I'm 100% sure it will have a version of this function, possibly with a different output format, in which case the scaling might be different.
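For instance, if you happen to be in Python, a rough SciPy equivalent looks like this (the 1 kHz test tone is just a stand-in for your voltage recording, and an absolute dB SPL scale additionally needs the microphone's volts-to-pascals calibration):

import numpy as np
from scipy.signal import welch
fs = 40_000                                                  # sample rate from the question
t = np.arange(fs) / fs
x = 0.1 * np.sin(2 * np.pi * 1000 * t) + 1e-3 * np.random.randn(fs)  # stand-in for the recorded voltage
f, psd = welch(x, fs=fs, nperseg=4096)                       # PSD in V**2/Hz
psd_db = 10 * np.log10(psd)                                  # dB re 1 V**2/Hz until calibrated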
I'm doing some data augmentation on a speech dataset, and I want to stretch/squeeze each audio file in the time domain.
I found the following three ways to do that, but I'm not sure which one is the best or most efficient way:
dimension = int(len(signal) * speed)
res = librosa.effects.time_stretch(signal, speed)
res = cv2.resize(signal, (1, dimension)).squeeze()
res = skimage.transform.resize(signal, (dimension, 1)).squeeze()
However, I found that librosa.effects.time_stretch adds unwanted echo (or something like that) to the signal.
So, my question is: What are the main differences between these three ways? And is there any better way to do that?
librosa.effects.time_stretch(signal, speed) (docs)
In essence, this approach transforms the signal using stft (short time Fourier transform), stretches it using a phase vocoder and uses the inverse stft to reconstruct the time domain signal. Typically, when doing it this way, one introduces a little bit of "phasiness", i.e. a metallic clang, because the phase cannot be reconstructed 100%. That's probably what you've identified as "echo."
Note that while this approach effectively stretches audio in the time domain (i.e., the input is in the time domain as well as the output), the work is actually being done in the frequency domain.
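To make that concrete, this is roughly what happens under the hood (a sketch only; argument names differ slightly between librosa versions):

import librosa
y, sr = librosa.load(librosa.ex('trumpet'))  # any mono signal will do
D = librosa.stft(y)                          # time domain -> frequency domain
D_fast = librosa.phase_vocoder(D, rate=1.5)  # stretch/compress the frame grid
y_fast = librosa.istft(D_fast)               # back to the time domain, 1.5x faster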
cv2.resize(signal, (1, dimension)).squeeze() (docs)
All this approach does is interpolate the given signal using bilinear interpolation. This approach is suitable for images, but strikes me as unsuitable for audio signals. Have you listened to the result? Does it sound at all like the original signal, only faster/slower? I would assume not only the tempo changes, but also the frequency, and perhaps there are other effects.
skimage.transform.resize(signal, (dimension, 1)).squeeze() (docs)
Again, this is meant for images, not sound. In addition to the interpolation (spline interpolation with order 1 by default), this function also performs anti-aliasing for images. Note that this has nothing to do with avoiding audio aliasing effects (Nyquist/aliasing), therefore you should probably turn it off by passing anti_aliasing=False. Again, I would assume that the results may not be exactly what you want (changed frequencies, other artifacts).
What to do?
IMO, you have several options.
If what you feed into your ML algorithms ends up being something like a Mel spectrogram, you could simply treat it as image and stretch it using the skimage or opencv approach. Frequency ranges would be preserved. I have successfully used this kind of approach in this music tempo estimation paper.
Use a better time_stretch library, e.g. rubberband. librosa is great, but its current time scale modification (TSM) algorithm is not state of the art. For a review of TSM algorithms, see for example this article.
Ignore the fact that the frequency changes and simply add zero samples at regular intervals to the signal, or drop samples at regular intervals from the signal (much like your image interpolation does). If you don't stretch too far, it may still work for data augmentation purposes. After all, the word content is not changed even if the audio content ends up with slightly higher or lower frequencies.
Resample the signal to another sampling frequency, e.g. 44100 Hz -> 43000 Hz or 44100 Hz -> 46000 Hz, using a library like resampy, and then pretend that it's still 44100 Hz. This still changes the frequencies, but at least you get the benefit that resampy properly filters the result, so you avoid the aforementioned aliasing that would otherwise occur.
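As an illustration of that last option, a minimal sketch with resampy could look like this (the random signal is only a placeholder for your audio):

import numpy as np
import resampy
sr = 44_100
signal = np.random.randn(sr)                      # placeholder: one second of audio
stretched = resampy.resample(signal, sr, 46_000)  # anti-aliased resampling to 46 kHz
# Keep treating `stretched` as if it were sampled at 44100 Hz: it now plays
# about 4% slower (and slightly lower in pitch) than the original.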
First time posting, thanks for the great community!
I am using AudioKit and trying to add frequency weighting filters to the microphone input and so I am trying to understand the values that are coming out of the AudioKit AKFFTTap.
Currently I am trying to just print the FFT buffer converted into dB values
for i in 0..<self.bufferSize {
    let db = 20 * log10((self.fft?.fftData[Int(i)])!)
    print(db)
}
I was expecting values ranging from about -128 to 0, but I am getting strange values of nearly -200 dB, and when I blow on the microphone to peg out the readings it only reaches about -60. Am I not approaching this correctly? I was assuming that the values being output from the EZAudioFFT engine would be plain amplitude values and that the normal dB conversion math would work. Does anyone have any ideas?
Thanks in advance for any discussion about this issue!
You need to add all of the values from self.fft?.fftData (consider changing negative values to positive before adding) and then convert that sum to decibels.
The values in the array correspond to the values of the bins in the FFT. Having a single bin contain a magnitude value close to 1 would mean that a great amount of energy is in that narrow frequency band e.g. a very loud sinusoid (a signal with a single frequency).
Normal sounds, such as the one caused by you blowing on the mic, spread their energy across the entire spectrum, that is, in many bins instead of just one. For this reason, usually the magnitudes get lower as the FFT size increases.
A magnitude of -40 dB in a single bin is quite loud. If you try playing a pure tone, you should see a clear peak in one of the bins.
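To illustrate the difference between a single bin's level and the overall level in plain Python/NumPy (not AudioKit-specific; the 440 Hz tone below is just a stand-in for a loud input):

import numpy as np
fs, n = 44_100, 1024
t = np.arange(n) / fs
x = 0.5 * np.sin(2 * np.pi * 440 * t) + 0.01 * np.random.randn(n)
mags = np.abs(np.fft.rfft(x)) / n              # per-bin magnitudes, roughly like fftData
per_bin_db = 20 * np.log10(mags + 1e-12)       # what the original loop prints
overall_db = 10 * np.log10(np.sum(mags ** 2))  # energy summed across all bins
print(per_bin_db.max(), overall_db)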
I have some very short signals from an oscilloscope (50k-200k samples), registered over a length of about 2 ms. They are acoustic recordings of an ESD (electrostatic discharge) spark.
I'd like to get some frequency data of that signal, in near-acoustic frequency range (up to about 30kHz) with as high time resolution as possible.
I have tried plotting a spectrogram (specgram in Octave) to view the signal, but the output is not really useful. Using specgram( x, N, fs );, where x is my signal with sampling rate fs, I get a plot starting at very high frequencies of about 500 MHz for low values of N; I get better frequency resolution for big N values (like 2^12-13), but then the window is so wide that I receive only 2 spectrum values over the whole signal length.
I understand that it may be the limitation of Fourier transform which is probably used by the specgram function (actually, I don't know much about signal analysis).
Is there any other way to get some frequency (as a function of time) information of that kind of signal? I've read something about wavelets, but when I tried using dwt function of signal package, I received this error:
error: 'wfilters' undefined near line 51 column 14
error: called from
dwt at line 51 column 12
Even if this would work, I am not so sure if I'd know how to actually use the output of those wavelet functions ...
To get audio-frequency information from such a high sample rate, you will need to obtain a sample vector long enough to contain at least a few whole cycles at audio frequencies, e.g. many tens of milliseconds of contiguous samples, which may or may not be more than your scope can gather. To reasonably process this amount of data, you might low-pass filter the sample data down to just the audio frequencies, and then resample it to a lower sample rate that is still above twice the filter's cut-off frequency. Then you will end up with a much shorter sample vector to feed an FFT for your audio spectrum analysis.
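A rough sketch of that pipeline (shown here with SciPy; Octave's signal package has the same building blocks, and the white noise is only a placeholder for your scope trace):

import numpy as np
from scipy.signal import decimate, spectrogram
fs = 25_000_000                        # e.g. 50k samples over 2 ms
x = np.random.randn(int(0.05 * fs))    # assume ~50 ms of contiguous samples
y = x
for q in (10, 5, 5):                   # decimate in stages, 250x overall
    y = decimate(y, q, ftype='fir')    # each stage low-pass filters first
fs_new = fs / 250                      # 100 kHz, comfortably above 2 * 30 kHz
f, t, Sxx = spectrogram(y, fs_new, nperseg=512)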
Does anyone know of anywhere I can find actual code examples of Software Phase Locked Loops (SPLLs) ?
I need an SPLL that can track a PSK modulated signal that is somewhere between 1.1 kHz and 1.3 kHz. A Google search brings up plenty of academic papers and patents but nothing usable. Even a trip to the university library, which contains a shelf full of books on hardware PLLs, turned up only a single chapter in one book on SPLLs, and that was more theoretical than practical.
Thanks for your time.
Ian
I suppose this is probably too late to help you (what did you end up doing?) but it may help the next guy.
Here's a golfed example of a software phase-locked loop I just wrote in one line of C, which will sing along with you:
main(a,b){for(;;)a+=((b+=16+a/1024)&256?1:-1)*getchar()-a/512,putchar(b);}
I present this tiny golfed version first in order to convince you that software phase-locked loops are actually fairly simple, as software goes, although they can be tricky.
If you feed it 8-bit linear samples on stdin, it will produce 8-bit samples of a sawtooth wave attempting to track one octave higher on stdout. At 8000 samples per second, it tracks frequencies in the neighborhood of 250Hz, just above B below middle C. On Linux you can do this by typing arecord | ./pll | aplay. The low 9 bits of b are the oscillator (what might be a VCO in a hardware implementation), which generates a square wave (the 1 or -1) which gets multiplied by the input waveform (getchar()) to produce the output of the phase detector. That output is then low-pass filtered into a to produce the smoothed phase error signal which is used to adjust the oscillation frequency of b to push a toward 0. The natural frequency of the square wave, when a == 0, is for b to increment by 16 every sample, which increments it by 512 (a full cycle) every 32 samples. 32 samples at 8000 samples per second are 1/250 of a second, which is why the natural frequency is 250Hz.
Then putchar() takes the low 8 bits of b, which make up a sawtooth wave at 500Hz or so, and spews them out as the output audio stream.
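If C golf isn't your thing, here is a rough line-by-line transliteration into Python (a sketch of the same loop, not the readable C version mentioned below; Python's floor division differs marginally from C's truncating division for negative values):

def pll(samples, natural_increment=16):
    a = 0                                   # low-pass-filtered phase error
    b = 0                                   # oscillator phase; low 9 bits = one cycle
    out = []
    for s in samples:                       # s: one 8-bit linear input sample
        b += natural_increment + a // 1024  # advance the oscillator
        square = 1 if b & 256 else -1       # square wave derived from the phase
        a += square * s - a // 512          # phase detector, low-pass filtered into a
        out.append(b & 0xFF)                # low 8 bits: sawtooth one octave up
    return out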
There are several things missing from this simple example:
It has no good way to detect lock. If you have silence, noise, or a strong pure 250Hz input tone, a will be roughly zero and b will be oscillating at its default frequency. Depending on your application, you might want to know whether you've found a signal or not! Camenzind's suggestion in chapter 12 of Designing Analog Chips is to feed a second "phase detector" 90° out of phase from the real phase detector; its smoothed output gives you the amplitude of the signal you've theoretically locked onto.
The natural frequency of the oscillator is fixed and does not sweep. The capture range of a PLL, the interval of frequencies within which it will notice an oscillation if it's not currently locked onto one, is pretty narrow; its lock range, over which it will range in order to follow the signal once it's locked on, is much larger. Because of this, it's common to sweep the PLL's frequency all over the range where you expect to find a signal until you get a lock, and then stop sweeping.
The golfed version above is reduced from a much more readable example of a software phase-locked loop in C that I wrote today, which does do lock detection but does not sweep. It needs about 100 CPU cycles per input sample per PLL on the Atom CPU in my netbook.
I think that if I were in your situation, I would do the following (aside from obvious things like looking for someone who knows more about signal processing than I do, and generating test data). I probably wouldn't filter and downconvert the signal in a front end, since it's at such a low frequency already. Downconverting to a 200Hz-400Hz band hardly seems necessary. I suspect that PSK will bring up some new problems, since if the signal suddenly shifts phase by 90° or more, you lose the phase lock; but I suspect those problems will be easy to resolve, and it's hardly untrodden territory.
This is an interactive design package for designing digital (i.e. software) phase locked loops (PLLs). Fill in the form and press the "Submit" button, and a PLL will be designed for you.
Interactive Digital Phase Locked Loop Design
This will get you started, but you really need to understand the fundamentals of PLL design well enough to build it yourself in order to troubleshoot it later - This is the realm of digital signal processing, and while not black magic it will certainly give you a run for your money during debugging.
-Adam
Have Matlab with Simulink? There are PLL demo files available at Matlab Central here. Matlab's code generation capabilities might get you from there to a PLL written in C.