I'm developing a bandpass filter for iOS, and the filter needs two parameters (center frequency and bandwidth). The basic problem is that the bandwidth is in cents (range 100-1200) instead of Hz. I've tried to find a way to convert from cents to Hz, but apparently there is no way. I also tried this link, but the range of bandwidth that I'm using doesn't fit.
So, does anyone know something about this? Is there another way to implement a bandpass filter using audio units?
Thanks for the help. Any explanation would be really helpful!
The explanation can be found in AudioUnitProperties.h:
if f is the frequency in hertz, then absoluteCents = 1200 * log2(f / 440) + 6900
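Inverting that formula gives you Hz back, and a bandwidth in cents simply spans half of it on each side of the center frequency. A small Swift sketch of the math (the function names are mine, not from any Apple API):

import Foundation

// Hz -> absolute cents, exactly as in AudioUnitProperties.h
func absoluteCents(fromHz f: Double) -> Double {
    return 1200 * log2(f / 440) + 6900
}

// Absolute cents -> Hz (the inverse of the formula above)
func hz(fromAbsoluteCents cents: Double) -> Double {
    return 440 * pow(2, (cents - 6900) / 1200)
}

// A bandwidth of `bandwidthCents` around `centerHz` spans half the cents
// on each side, so the band edges (and the Hz width) follow directly.
func bandwidthHz(centerHz: Double, bandwidthCents: Double) -> Double {
    let low  = centerHz * pow(2, -bandwidthCents / 2400)
    let high = centerHz * pow(2,  bandwidthCents / 2400)
    return high - low
}

print(bandwidthHz(centerHz: 1000, bandwidthCents: 1200))  // one octave around 1 kHz ≈ 707 Hz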
I've been given some digitized sound recordings and asked to plot the sound pressure level per Hz.
The signal is sampled at 40 kHz and the units for the y axis are simply volts.
I've been asked to produce a graph of the SPL as dB/Hz vs Hz.
EDIT: The input units are voltage vs time.
Does this make sense? I thought SPL was a time-domain measure.
If it does make sense how would I go about producing this graph? Apply the dB formula (20 * log10(x) IIRC) and do an FFT on that or...?
What you're describing is a power spectral density (PSD). Matlab, for example, has a pwelch function that does literally what you're asking for. To scale to dB SPL/Hz, simply apply 10*log10(psd), where psd is the output of pwelch. Let me know if you need help with the function inputs.
If you're working with a different framework, let me know which one; I'm 100% sure it will have a version of this function, possibly with a different output format, in which case the scaling might be different.
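If you end up having to roll it yourself (say, on iOS with Apple's Accelerate framework rather than Matlab), a single-segment periodogram is a minimal sketch of the same idea. The vDSP calls below are real, but the Hann window and the |X|^2 / (fs * N) scaling are just one common convention, and absolute dB SPL would additionally need a microphone calibration:

import Accelerate
import Foundation

// Single-segment periodogram: returns 10*log10 of power per Hz for each bin,
// assuming `samples` holds the recorded voltages and `fs` the 40 kHz rate.
func psdDB(samples: [Float], fs: Float) -> [Float] {
    let n = samples.count                      // assumed to be a power of two
    let log2n = vDSP_Length(log2(Float(n)))

    // Window the data to reduce spectral leakage.
    var window = [Float](repeating: 0, count: n)
    vDSP_hann_window(&window, vDSP_Length(n), Int32(vDSP_HANN_NORM))
    var windowed = [Float](repeating: 0, count: n)
    vDSP_vmul(samples, 1, window, 1, &windowed, 1, vDSP_Length(n))

    // Real FFT in vDSP's packed split-complex format.
    var real = [Float](repeating: 0, count: n / 2)
    var imag = [Float](repeating: 0, count: n / 2)
    var psd = [Float](repeating: 0, count: n / 2)
    real.withUnsafeMutableBufferPointer { re in
        imag.withUnsafeMutableBufferPointer { im in
            var split = DSPSplitComplex(realp: re.baseAddress!, imagp: im.baseAddress!)
            windowed.withUnsafeBufferPointer { buf in
                buf.baseAddress!.withMemoryRebound(to: DSPComplex.self, capacity: n / 2) {
                    vDSP_ctoz($0, 2, &split, 1, vDSP_Length(n / 2))
                }
            }
            let setup = vDSP_create_fftsetup(log2n, FFTRadix(kFFTRadix2))!
            vDSP_fft_zrip(setup, &split, 1, log2n, FFTDirection(kFFTDirection_Forward))
            vDSP_destroy_fftsetup(setup)
            vDSP_zvmags(&split, 1, &psd, 1, vDSP_Length(n / 2))  // |X|^2 per bin
        }
    }
    // Scale to power per Hz and convert to dB (re 1 V^2/Hz, since input is volts).
    return psd.map { 10 * log10f($0 / (fs * Float(n)) + Float.leastNormalMagnitude) }
}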
I would like to know if it's possible to resample an already written AVAudioFile.
All the references that I found don't work on this particular problem, since:
They propose resampling while the user is recording an AVAudioFile, while installTap is running. In this approach, the AVAudioConverter works on each buffer chunk given by the inputNode and appends it to the AVAudioFile. [1] [2]
The point is that I would like to resample my audio file regardless of the recording process.
The harder approach would be to upsample the signal by a factor of L and then decimate by a factor of M, using vDSP:
Audio on Compact Disc has a sampling rate of 44.1 kHz; to transfer it to a digital medium that uses 48 kHz, method 1 above can be used with L = 160, M = 147 (since 48000/44100 = 160/147). For the reverse conversion, the values of L and M are swapped. Per above, in both cases, the low-pass filter should be set to 22.05 kHz. [3]
That last one obviously seems like too hard-coded a way to solve it. I hope there's a way to resample the file with AVAudioConverter, but it lacks documentation :(
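From the header comments, I imagine something along these lines should work; this is just a sketch I haven't verified, with hypothetical inputURL/outputURL values and a 48 kHz target (AVAudioConverter would handle the L/M rate conversion internally):

import AVFoundation

func resampleFile(inputURL: URL, outputURL: URL, targetRate: Double = 48_000) throws {
    let inFile = try AVAudioFile(forReading: inputURL)
    let srcFormat = inFile.processingFormat
    guard let dstFormat = AVAudioFormat(standardFormatWithSampleRate: targetRate,
                                        channels: srcFormat.channelCount),
          let converter = AVAudioConverter(from: srcFormat, to: dstFormat) else { return }
    let outFile = try AVAudioFile(forWriting: outputURL, settings: dstFormat.settings)

    var done = false
    while !done {
        let outBuf = AVAudioPCMBuffer(pcmFormat: dstFormat, frameCapacity: 4096)!
        // The converter pulls source chunks on demand through this block.
        let status = converter.convert(to: outBuf, error: nil) { _, inputStatus in
            let inBuf = AVAudioPCMBuffer(pcmFormat: srcFormat, frameCapacity: 4096)!
            guard (try? inFile.read(into: inBuf)) != nil, inBuf.frameLength > 0 else {
                inputStatus.pointee = .endOfStream
                return nil
            }
            inputStatus.pointee = .haveData
            return inBuf
        }
        if outBuf.frameLength > 0 { try outFile.write(from: outBuf) }
        done = (status == .endOfStream || status == .error)
    }
}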
According to what I have read on the internet, the normal range of the fundamental frequency of the female voice is 165 to 255 Hz.
I am using Praat, and also a Python library called Parselmouth, to get the fundamental frequency values of a female voice in an audio file (.wav). However, I got some values that are over 255 Hz (e.g. 400+ Hz, 500 Hz).
Is it normal to get big values like this?
It is possible, but unlikely, if you are trying to capture the fundamental frequency (F0) of a speaking voice. It seems more likely that you are capturing a more easily resonating overtone (e.g. F1 or F2) instead.
My experiments with Praat give me the impression that, with good parameters, it will reliably extract F0.
What you'll want to do is verify that by comparing the pitch curve with a spectrogram. Here's an example of a fit made by Praat (female speaker; the pitch curve is overlaid on the spectrogram):
You can see from the image that
The most prominent frequency seems to be F2
Around 200 Hz seems likely to be F0, since there's only noise below that (compared to before/after the segment)
Praat has calculated a good estimate of F0 for the voiced speech segments
If, after a visual inspection, it seems that you are getting wrong results, you can try to tweak the parameters. Window length greatly affects the frequency resolution.
If you can't capture frequencies this low, you should try increasing the window length - the intuition is that it gives the algorithm a better chance at finding slowly changing periodic features in the data.
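As a worked example of that intuition (based on my reading of the Praat manual, so double-check it there): the autocorrelation method uses an analysis window of three periods of the pitch floor, so the default 75 Hz floor gives a 3/75 = 40 ms window, while raising the floor to 165 Hz shrinks it to about 18 ms. A lower floor (longer window) gives the algorithm more periods to latch onto when F0 is low, at the cost of tracking fast pitch changes less well.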
First time posting, thanks for the great community!
I am using AudioKit and trying to add frequency weighting filters to the microphone input and so I am trying to understand the values that are coming out of the AudioKit AKFFTTap.
Currently I am trying to just print the FFT buffer converted into dB values:
for i in 0..<self.bufferSize {
    // fftData is expected to hold one linear magnitude per FFT bin
    let db = 20 * log10(self.fft!.fftData[Int(i)])
    print(db)
}
I was expecting values ranging in the range of about -128 to 0, but I am getting strange values of nearly -200dB and when I blow on the microphone to peg out the readings it only reaches about -60. Am I not approaching this correctly? I was assuming that the values being output from the EZAudioFFT engine would be plain amplitude values and that the normal dB conversion math would work. Anyone have any ideas?
Thanks in advance for any discussion about this issue!
You need to add up all of the values from self.fft?.fftData (consider changing negative values to positive before adding) and then convert that sum to decibels.
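In code, that summing approach might look like the following sketch (the names follow the question's AKFFTTap snippet; whether the result lands in a familiar dBFS-like range still depends on AudioKit's internal FFT scaling):

let magnitudes = (self.fft?.fftData ?? []).map { abs($0) }  // fold negatives to positive
let total = magnitudes.reduce(0, +)                         // sum over all bins
print(20 * log10(total))                                    // convert the sum to dB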
The values in the array correspond to the values of the bins in the FFT. Having a single bin contain a magnitude value close to 1 would mean that a great amount of energy is in that narrow frequency band e.g. a very loud sinusoid (a signal with a single frequency).
Normal sounds, such as the one caused by you blowing on the mic, spread their energy across the entire spectrum, that is, in many bins instead of just one. For this reason, usually the magnitudes get lower as the FFT size increases.
Magnitude of -40dB on a single bin is quite loud. If you try to play a tone, you should see a clear peak in one of the bins.
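To check that, you could locate the loudest bin and its frequency, something like this sketch (assuming fftData holds linear magnitudes for the first half of the FFT of a 44.1 kHz input; adjust the sample rate to whatever your session uses):

let sampleRate = 44_100.0
if let data = self.fft?.fftData, let peak = data.max(), let bin = data.firstIndex(of: peak) {
    // bin center frequency = bin * sampleRate / fftSize, with fftSize = 2 * bin count
    let freq = Double(bin) * sampleRate / Double(2 * data.count)
    print("peak of \(20 * log10(peak)) dB at \(freq) Hz")
}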
I'm making a guitar tuner for iOS with Objective-C. Since I'm a beginner, I'm struggling a bit to gather all the resources and information about it. I know the theory goes like this (correct me if I'm wrong):
First I need to get the input from the microphone.
Then I need to apply an FFT algorithm to get the frequency.
Then I compare the frequency with the fundamental frequencies of the notes, as sketched below.
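For example, step 3 on its own could look like this (a sketch with made-up names, in Swift since that's what the examples I found use; the detected frequency is assumed to come from the FFT step):

import Foundation

// Open-string fundamentals (same values as the table below), low E to high E.
let strings: [(name: String, freq: Double)] = [
    ("E2", 82.41), ("A2", 110.00), ("D3", 146.83),
    ("G3", 196.00), ("B3", 246.94), ("E4", 329.63)
]

// Compare in the log domain (cents) so sharp and flat offsets are symmetric.
func nearestString(to detected: Double) -> (name: String, cents: Double) {
    let best = strings.min {
        abs(log2(detected / $0.freq)) < abs(log2(detected / $1.freq))
    }!
    return (best.name, 1200 * log2(detected / best.freq))
}

let tuning = nearestString(to: 84.0)  // e.g. a slightly sharp low E
print("\(tuning.name): \(tuning.cents > 0 ? "sharp" : "flat") by \(abs(tuning.cents)) cents")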
While searching on Stack Overflow and Google, I found people talking a lot about the AurioTouch example, so I took a look at it; however, I'm not able to figure out what exactly is going on under the hood, because the code is mostly written in C++ with pointers that I can't understand, at least for now. I also found the EZAudio example, and that was a great one that seems to work fine, but according to my research the fundamental frequencies of the guitar strings are measured as:
String Frequency
1 (E) 329.63 Hz
2 (B) 246.94 Hz
3 (G) 196.00 Hz
4 (D) 146.83 Hz
5 (A) 110.00 Hz
6 (E) 82.41 Hz
and from the EZAudioFFT example I'm getting these frequencies:
String Frequency
1 (E) 333.02 Hz
2 (B) 247.60 Hz
3 (G) 398-193 Hz (398 Hz at the start, 193 Hz at the end)
4 (D) 290-150 Hz (290 Hz at the start, 150 Hz at the end)
5 (A) 333-215 Hz (333 Hz at the start, 215 Hz at the end)
6 (E) 247-161 Hz (247 Hz at the start, 161 Hz at the end)
One thing to note: the EZAudioFFT example was showing the max frequency.
So what I'm asking here is: has anybody implemented this before who can give me direction and some detailed information about the topic, so that in the future nobody has to research this much to gather all the resources? I need help with the following topics:
What is the best possible way to get the accurate frequency of a guitar string, and which libraries should I use?
Which kind of frequency is needed to tune a guitar (like the max frequency), and is there a role for amplitude or magnitude? (I have little information about this topic.)
What should I do after getting the right frequency?
Any help would be truly appreciated, and my apologies in advance if I'm asking something stupid. It would be great if someone could provide the information using Objective-C as the primary language. I also found this beautiful example, but I was unable to implement it in Objective-C, and I was unable to compile the amazing AudioKit framework, maybe due to Xcode 8.0 and some Swift issues.