Guitar tuner frequency - iOS

I'm making a guitar tuner for iOS with Objective-C. Since I'm a beginner, I'm struggling a bit to gather all the resources and information about it. As I understand the theory (correct me if I'm wrong):
First I need to get the input from the microphone.
Then I need to apply an FFT to get the frequency.
Then I compare that frequency with the fundamental frequencies of the notes.
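Those three steps can be sketched as follows (in Python with NumPy purely for illustration; on iOS the equivalent FFT lives in the Accelerate/vDSP framework). Note that picking the single strongest bin is exactly the naive approach that breaks down on a guitar, where an overtone can be louder than the fundamental:

```python
import numpy as np

def dominant_frequency(samples, sample_rate):
    """Return the frequency (Hz) of the strongest FFT bin in a mono buffer."""
    windowed = samples * np.hanning(len(samples))   # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed))
    peak_bin = np.argmax(spectrum[1:]) + 1          # skip the DC bin
    return peak_bin * sample_rate / len(samples)

# Synthetic low-E string at 82.41 Hz; a long buffer is needed because the
# bin resolution is sample_rate / N (here 44100 / 65536, about 0.67 Hz).
sr = 44100
t = np.arange(65536) / sr
tone = np.sin(2 * np.pi * 82.41 * t)
print(dominant_frequency(tone, sr))
```

For a real tuner, an autocorrelation-based or harmonic-product-spectrum pitch estimator tends to track the fundamental better than the raw peak bin does.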
While searching Stack Overflow and Google I found people talking a lot about the aurioTouch example, so I took a look at it, but I wasn't able to figure out what exactly is going on under the hood, because the code is mostly written in C++ with pointers that I can't understand, at least for now. I also found the EZAudio example, which was a great one and seems to work fine. But according to my research, the fundamental frequencies of the guitar strings are:
String Frequency
1 (E) 329.63 Hz
2 (B) 246.94 Hz
3 (G) 196.00 Hz
4 (D) 146.83 Hz
5 (A) 110.00 Hz
6 (E) 82.41 Hz
and from the EZAudioFFT example I'm getting these frequencies:
String Frequency
1 (E) 333.02 Hz
2 (B) 247.60 Hz
3 (G) 398-193 Hz (398 Hz at the start, 193 Hz at the end)
4 (D) 290-150 Hz (290 Hz at the start, 150 Hz at the end)
5 (A) 333-215 Hz (333 Hz at the start, 215 Hz at the end)
6 (E) 247-161 Hz (247 Hz at the start, 161 Hz at the end)
One thing to note: the EZAudioFFT example was showing the maximum frequency.
So what I'm asking here: has anybody implemented this before who can give me some direction and detailed information about the topic, so that in the future nobody has to research over and over to gather all the resources? I need help with the following topics:
What is the best way to get an accurate frequency for a guitar string, and which libraries should I use?
Which frequency is needed to tune a guitar - the maximum frequency? Is there a role for amplitude or magnitude (I have little information about this topic)?
What do I do after getting the right frequency?
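On that last point: once a stable frequency estimate is available, a tuner usually snaps it to the nearest target string and reports the deviation in cents (hundredths of a semitone). A rough sketch, using the string table from the question (the function name and structure here are mine, purely for illustration):

```python
import math

# Standard-tuning fundamentals from the question, in Hz
STRINGS = {"E2": 82.41, "A2": 110.00, "D3": 146.83,
           "G3": 196.00, "B3": 246.94, "E4": 329.63}

def nearest_string(freq_hz):
    """Return (string name, deviation in cents) for the closest string."""
    name, target = min(STRINGS.items(),
                       key=lambda kv: abs(math.log2(freq_hz / kv[1])))
    cents = 1200 * math.log2(freq_hz / target)   # + means sharp, - means flat
    return name, cents

name, cents = nearest_string(84.0)   # a slightly sharp low E
print(name, round(cents, 1))
```

The UI then just needs to display the string name and move a needle according to the cents value.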
Any help would be truly appreciated, and my apologies in advance if I'm asking something stupid. It would be great if someone could provide information using Objective-C as the primary language. I also found this beautiful example, but I was unable to implement it in Objective-C, and I was unable to compile the amazing AudioKit framework, maybe due to Xcode 8.0 and some Swift issues.

Related

How would I break down a signal's sound pressure level by frequency

I've been given some digitized sound recordings and asked to plot the sound pressure level per Hz.
The signal is sampled at 40 kHz and the units for the y axis are simply volts.
I've been asked to produce a graph of the SPL as dB/Hz vs Hz.
EDIT: The input units are voltage vs time.
Does this make sense? I thought SPL was a time-domain measure?
If it does make sense, how would I go about producing this graph? Apply the dB formula (20 * log10(x), IIRC) and do an FFT on that, or...?
What you're describing is a Power Spectral Density. Matlab, for example, has a pwelch function that does literally what you're asking for. To scale to dB SPL/Hz, simply apply 10*log10(psd), where psd is the output of pwelch. Let me know if you need help with the function inputs.
If you're working with a different framework, let me know which; I'm quite sure it will have a version of this function, possibly with a different output format, in which case the scaling might be different.
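In Python, for instance, the same thing is available as `scipy.signal.welch` (a sketch; note that without the microphone's voltage-to-pascal calibration this gives dB relative to 1 V²/Hz, not absolute dB SPL):

```python
import numpy as np
from scipy.signal import welch

fs = 40_000                                   # sample rate from the question
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 1000 * t)        # 1 kHz test "voltage" signal

f, psd = welch(x, fs=fs, nperseg=4096)        # power spectral density, V^2/Hz
psd_db = 10 * np.log10(psd)                   # scale to dB, as described above
print(f[np.argmax(psd)])                      # frequency of the spectral peak
```

Plotting `psd_db` against `f` gives the dB/Hz-vs-Hz graph the question asks for, up to the calibration constant.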

Fundamental frequency of female voice

According to what I have read on the internet, the normal range of the fundamental frequency of the female voice is 165 to 255 Hz.
I am using Praat, and also a Python library called Parselmouth, to get the fundamental frequency values of a female voice in an audio file (.wav). However, I got some values that are over 255 Hz (e.g. 400+ Hz, 500 Hz).
Is it normal to get big values like this?
It is possible, but unlikely, if you are trying to capture the fundamental frequency (F0) of a speaking voice. It seems likely that you are instead capturing a more easily resonating overtone (e.g. F1 or F2).
My experiments with Praat give me the impression that with good parameters it will reliably extract F0.
What you'll want to do is to verify that by comparing the pitch curve with a spectrogram. Here's an example of a fitting made by Praat (female speaker):
You can see from the image that
The most prominent frequency seems to be F2
Around 200 Hz seems likely to be F0, since there's only noise below that (compared to before/after the segment)
Praat has calculated a good estimate of F0 for the voiced speech segments
If, after a visual inspection, it seems that you are getting wrong results, you can try to tweak the parameters. Window length greatly affects the frequency resolution.
If you can't capture frequencies this low, you should try increasing the window length - the intuition is that it gives the algorithm a better chance at finding slowly changing periodic features in the data.
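To make that concrete, here is a toy autocorrelation-based F0 estimator in Python (this is not Praat's algorithm, which is far more robust; it is just a sketch of the underlying idea). The window must be long enough to contain several periods of the lowest pitch candidate:

```python
import numpy as np

def f0_autocorr(frame, fs, fmin=75.0, fmax=500.0):
    """Estimate F0 as the strongest autocorrelation lag in [fs/fmax, fs/fmin]."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)   # candidate period range, in samples
    lag = lo + np.argmax(ac[lo:hi])
    return fs / lag

fs = 16000
t = np.arange(int(0.04 * fs)) / fs            # 40 ms window: 8 periods of 200 Hz
tone = np.sin(2 * np.pi * 200 * t)            # stand-in for a voiced frame
print(f0_autocorr(tone, fs))
```

Shrinking the window below a couple of periods of `fmin` makes the lag search unreliable, which is the window-length effect described above.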

Audio Unit Bandpass Filter iOS

I'm developing a bandpass filter for iOS, and the filter needs two parameters (center frequency and bandwidth). The problem is that the bandwidth is in cents (range 100-1200) instead of Hz. I've tried to find a way to convert from cents to Hz, but apparently there is no direct way. I also tried this link, but the bandwidth range I'm using doesn't fit.
So, does anyone know something about this? Is there another way to implement a bandpass filter using audio units?
Thanks for the help. Any explanation would be really helpful!
The explanation can be found in AudioUnitProperties.h:
if f is freq in hertz then absoluteCents = 1200 * log2(f / 440) + 6900
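Inverting that relationship gives the conversion the question asks for: a bandwidth in cents corresponds to a frequency ratio of 2^(cents/1200), so the width in Hz depends on the center frequency. A sketch of the math (not Apple's code):

```python
import math

def bandwidth_cents_to_hz(center_hz, cents):
    """Width in Hz of a band 'cents' wide, centered geometrically on center_hz."""
    ratio = 2 ** (cents / 1200)               # total width as a frequency ratio
    f_low = center_hz / math.sqrt(ratio)      # lower band edge
    f_high = center_hz * math.sqrt(ratio)     # upper band edge
    return f_high - f_low

print(round(bandwidth_cents_to_hz(1000, 1200), 1))   # one octave around 1 kHz -> 707.1
```

This is why the same 1200-cent bandwidth is a much narrower band in Hz at a low center frequency than at a high one.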

MATLAB - Trouble converting training data to a spectrogram

I am a student, and was new to signal processing just a few months ago. I picked "A Novel Fuzzy Approach to Speech Recognition" for my project (you can google for the downloadable version).
I am a little stuck in converting the training data into a spectrogram which has been passed through a mel-filter.
I use this for my mel-filterbank, with a little modification of course.
Then I wrote this simple code to make the spectrogram of my training data:
p = 25;
fl = 0.0;
fh = 0.5;
w = 'hty';
[a, fs] = wavread('a.wav'); % you can simply record a sound and name it a.wav; the other parameters will follow
n = length(a) + 1;
fa = rfft(a);
xa = melbank_me(p, n, fs);  % the mel-filterbank function
za = log(xa * abs(fa).^2);
ca = dct(za);
spectrogram(ca(:,1))
All I got is just like this, which is not what the paper shows:
Please let me know whether my code or the spectrogram I have is right. If so, what do I have to do to make my spectrogram look like the paper's? And if not, please tell me where I went wrong.
And another question: is it OK to have an FFT length that large?
Because when I try to lower it, my code gives errors.
You shouldn't be doing an FFT of the entire file - that will include too much time-varying information. You should pick a window size over which the sound is relatively stationary, e.g. 10 ms @ 44.1 kHz = 441 samples, so perhaps N = 512 might be a good starting point. You can then generate your spectrogram over successive windows if needed, in order to display the time-varying frequency content.
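The windowing described above, sketched in Python/NumPy for illustration (Matlab's `spectrogram` does equivalent framing internally when you pass it a window length):

```python
import numpy as np

def stft_frames(x, n_fft=512, hop=256):
    """Split a signal into overlapping windows and take the FFT of each."""
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win
              for i in range(0, len(x) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1))   # one magnitude spectrum per frame

fs = 44100
t = np.arange(fs) / fs
chirp = np.sin(2 * np.pi * (200 + 400 * t) * t)  # frequency rises over one second
S = stft_frames(chirp)                           # shape: (frames, n_fft//2 + 1)
print(S.shape)
```

Each row of `S` is one short-time spectrum; plotting the rows side by side over time is exactly the spectrogram.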

Software Phase Locked Loop example code needed

Does anyone know of anywhere I can find actual code examples of Software Phase Locked Loops (SPLLs) ?
I need an SPLL that can track a PSK-modulated signal somewhere between 1.1 kHz and 1.3 kHz. A Google search brings up plenty of academic papers and patents but nothing usable. Even a trip to the university library, which has a shelf full of books on hardware PLLs, turned up only a single chapter in one book on SPLLs, and that was more theoretical than practical.
Thanks for your time.
Ian
I suppose this is probably too late to help you (what did you end up doing?) but it may help the next guy.
Here's a golfed example of a software phase-locked loop I just wrote in one line of C, which will sing along with you:
main(a,b){for(;;)a+=((b+=16+a/1024)&256?1:-1)*getchar()-a/512,putchar(b);}
I present this tiny golfed version first in order to convince you that software phase-locked loops are actually fairly simple, as software goes, although they can be tricky.
If you feed it 8-bit linear samples on stdin, it will produce 8-bit samples of a sawtooth wave attempting to track one octave higher on stdout. At 8000 samples per second, it tracks frequencies in the neighborhood of 250 Hz, just above the B below middle C. On Linux you can do this by typing arecord | ./pll | aplay.

The low 9 bits of b are the oscillator (what might be a VCO in a hardware implementation), which generates a square wave (the 1 or -1) that gets multiplied by the input waveform (getchar()) to produce the output of the phase detector. That output is then low-pass filtered into a to produce the smoothed phase-error signal, which is used to adjust the oscillation frequency of b to push a toward 0.

The natural frequency of the square wave, when a == 0, is for b to increment by 16 every sample, which advances it by 512 (a full cycle) every 32 samples. 32 samples at 8000 samples per second is 1/250 of a second, which is why the natural frequency is 250 Hz.
Then putchar() takes the low 8 bits of b, which make up a sawtooth wave at 500Hz or so, and spews them out as the output audio stream.
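Unrolled into a readable (still toy) Python version, the same loop looks roughly like this. This is my interpretation of the golfed C above, not the author's longer example, and Python's floor division rounds differently from C's truncating division for negative values, so it is not bit-exact:

```python
import math

def pll(samples):
    """Toy software PLL: square-wave oscillator, multiplying phase detector,
    leaky-integrator loop filter. Natural frequency is fs/32 (250 Hz at 8 kHz)."""
    a = 0                            # loop-filter state: smoothed phase error
    b = 0                            # oscillator phase, 512 counts per cycle
    out = []
    for x in samples:
        b += 16 + a // 1024          # phase error steers the frequency
        ref = 1 if b & 256 else -1   # square-wave reference (the "VCO" output)
        a += ref * x - a // 512      # phase detector plus low-pass filter
        out.append(b & 255)          # low 8 bits: sawtooth one octave up
    return out

fs = 8000
inp = [int(100 * math.sin(2 * math.pi * 250 * n / fs)) for n in range(fs)]
out = pll(inp)                       # one second of tracking a 250 Hz tone
```

Each line maps onto one subexpression of the C one-liner, which makes the golfed version easier to pick apart.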
There are several things missing from this simple example:
It has no good way to detect lock. If you have silence, noise, or a strong pure 250Hz input tone, a will be roughly zero and b will be oscillating at its default frequency. Depending on your application, you might want to know whether you've found a signal or not! Camenzind's suggestion in chapter 12 of Designing Analog Chips is to feed a second "phase detector" 90° out of phase from the real phase detector; its smoothed output gives you the amplitude of the signal you've theoretically locked onto.
The natural frequency of the oscillator is fixed and does not sweep. The capture range of a PLL, the interval of frequencies within which it will notice an oscillation if it's not currently locked onto one, is pretty narrow; its lock range, over which it will range in order to follow the signal once it's locked on, is much larger. Because of this, it's common to sweep the PLL's frequency all over the range where you expect to find a signal until you get a lock, and then stop sweeping.
The golfed version above is reduced from a much more readable example of a software phase-locked loop in C that I wrote today, which does do lock detection but does not sweep. It needs about 100 CPU cycles per input sample per PLL on the Atom CPU in my netbook.
I think that if I were in your situation, I would do the following (aside from obvious things like looking for someone who knows more about signal processing than I do, and generating test data). I probably wouldn't filter and downconvert the signal in a front end, since it's at such a low frequency already. Downconverting to a 200Hz-400Hz band hardly seems necessary. I suspect that PSK will bring up some new problems, since if the signal suddenly shifts phase by 90° or more, you lose the phase lock; but I suspect those problems will be easy to resolve, and it's hardly untrodden territory.
This is an interactive design package for designing digital (i.e. software) phase locked loops (PLLs). Fill in the form and press the "Submit" button, and a PLL will be designed for you.
Interactive Digital Phase Locked Loop Design
This will get you started, but you really need to understand the fundamentals of PLL design well enough to build one yourself in order to troubleshoot it later. This is the realm of digital signal processing, and while not black magic, it will certainly give you a run for your money during debugging.
-Adam
Have Matlab with Simulink? There are PLL demo files available at Matlab Central here. Matlab's code generation capabilities might get you from there to a PLL written in C.
