I am using the GitHub project "The Amazing Audio Engine" to capture audio from the microphone, via this method:
id<AEAudioReceiver> receiver = [AEBlockAudioReceiver audioReceiverWithBlock: ^(void *source, const AudioTimeStamp *time, UInt32 frames, AudioBufferList *audio) {
// Do something with 'audio'
}];
This block fires roughly every 23 ms (presumably 1024 frames at 44.1 kHz), delivering a buffer containing the sample amplitudes of the sound wave over that 23 ms interval.
Here is the catch: the audio I am dealing with is an FM signal composed of two frequencies, one at 1000 Hz and one at twice that frequency, representing the zeros and ones of a digital stream.
This is my problem: at that point I have an array of audio amplitudes covering 23 ms.
So I thought I could do an FFT to convert the signal into frequency levels. I used this code:
// Setup the length
vDSP_Length log2n = log2f(numFrames);
// Calculate the weights array. This is a one-off operation.
FFTSetup fftSetup = vDSP_create_fftsetup(log2n, FFT_RADIX2);
// For an FFT, numFrames must be a power of 2 (hence always even)
int nOver2 = numFrames/2;
// Populate *window with the values for a hamming window function
float *window = (float *)malloc(sizeof(float) * numFrames);
vDSP_hamm_window(window, numFrames, 0);
// Window the samples
vDSP_vmul(data, 1, window, 1, data, 1, numFrames);
// Define complex buffer
COMPLEX_SPLIT A;
A.realp = (float *) malloc(nOver2*sizeof(float));
A.imagp = (float *) malloc(nOver2*sizeof(float));
// Pack samples:
// C(re) -> A[n], C(im) -> A[n+1]
vDSP_ctoz((COMPLEX*)data, 2, &A, 1, numFrames/2);
// RUN THE FFT
//Perform a forward FFT using fftSetup and A
//Results are returned in A
vDSP_fft_zrip(fftSetup, &A, 1, log2n, FFT_FORWARD);
Because each interval is 172 Hz and I want to isolate 1000Hz, I think the 6th "bucket" of the FFT result would be the one, so I have this code:
//Convert COMPLEX_SPLIT A result to magnitudes
float amp[nOver2]; // only nOver2 output bins exist in A
amp[0] = A.realp[0]/(numFrames*2); // DC term (packed separately by vDSP)
for(int i=1; i<nOver2; i++) {
amp[i] = A.realp[i]*A.realp[i] + A.imagp[i]*A.imagp[i]; // squared magnitude
}
// I need the 6th and the 12th buckets, i.e. amp[5] and amp[11]
But then I started to think that the FFT is not what I want, because amp[5] and amp[11] will give me the amplitudes of ~1000 Hz and ~2000 Hz averaged over the whole 23 ms, when what I actually need is how the 1000 Hz and 2000 Hz components vary across those 23 ms. In other words, I need to obtain arrays, not single values.
In broad terms, what should I do to obtain the amplitudes over time of the two frequencies, 1000 Hz and 2000 Hz?
If you know what time resolution you want, two Goertzel filters slid along at that interval would let you measure the amplitudes of your two frequencies with much less overhead than using FFTs. The length of the filter or FFT need not (and usually should not) be the same as the number of frames from each audio callback; you can use a circular buffer or FIFO to decouple the lengths. (On iOS, numFrames can differ between device models and may suddenly change depending on factors outside the app's control.)
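For illustration, here is a minimal Goertzel sketch in plain C (the function name and the 44.1 kHz assumption are mine, not from the question):
#include <math.h>
// Measures the magnitude of one target frequency over a block of n samples.
// Run one instance at 1000 Hz and one at 2000 Hz over successive blocks to
// get amplitude-vs-time arrays with a resolution of n/sampleRate seconds.
float goertzelMagnitude(const float *samples, int n, float targetHz, float sampleRate)
{
    float w = 2.0f * M_PI * targetHz / sampleRate;
    float coeff = 2.0f * cosf(w);
    float sPrev = 0.0f, sPrev2 = 0.0f;
    for (int i = 0; i < n; i++) {
        float s = samples[i] + coeff * sPrev - sPrev2;
        sPrev2 = sPrev;
        sPrev = s;
    }
    // Squared magnitude of the target bin; sqrtf gives the amplitude.
    float power = sPrev * sPrev + sPrev2 * sPrev2 - coeff * sPrev * sPrev2;
    return sqrtf(fmaxf(power, 0.0f));
}
For example, stepping this over 256-sample blocks (about 5.8 ms at 44.1 kHz) yields one amplitude point per block for each of the two tones.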
I'm building an iOS app using EZAudio. Its delegate returns a float** buffer containing float sample values for the audio detected. The delegate is called continuously, and its work is done on a separate thread.
What I am trying to do is to take the float value from EZAudio and convert it into decibels.
EZAudioDelegate
Here's my simplified EZAudio Delegate for getting Microphone Data:
- (void)microphone:(EZMicrophone *)microphone hasAudioReceived:(float **)buffer withBufferSize:(UInt32)bufferSize withNumberOfChannels:(UInt32)numberOfChannels {
/*
* Returns a float array called buffer that contains the stereo signal data
* buffer[0] is the left audio channel
* buffer[1] is the right audio channel
*/
// Dispatch to the main queue so the background audio thread isn't blocked by UI work
dispatch_async(dispatch_get_main_queue(), ^{
float decibels = [self getDecibelsFromVolume:buffer withBufferSize:bufferSize];
NSLog(@"Decibels: %f", decibels);
});
}
The Problem
The problem is that, after implementing the solutions from the links below, I still do not understand how the code works. If someone could explain how it converts the sample values to decibels, I would be very grateful.
How to convert audio input to DB? #85
How to change the buffer size to increase FFT window? #50
Changing Buffer Size #84
The Code
The solution uses the following methods from the Accelerate Framework to convert the volume into decibels:
vDSP_vsq
vDSP_meanv
vDSP_vdbcon
Below is the method getDecibelsFromVolume that is called from the EZAudio Delegate. It is passed the float** buffer and bufferSize from the delegate.
- (float)getDecibelsFromVolume:(float**)buffer withBufferSize:(UInt32)bufferSize {
// Decibel Calculation.
float one = 1.0;
float meanVal = 0.0;
float tiny = 0.1;
float lastdbValue = 0.0; // NOTE: as a local, this resets on every call; it must persist (e.g. an ivar) for the smoothing below to take effect
vDSP_vsq(buffer[0], 1, buffer[0], 1, bufferSize);
vDSP_meanv(buffer[0], 1, &meanVal, bufferSize);
vDSP_vdbcon(&meanVal, 1, &one, &meanVal, 1, 1, 0);
// Exponential moving average of the dB level, to respond only to continuous sounds.
float currentdb = 1.0 - (fabs(meanVal) / 100);
if (lastdbValue == INFINITY || lastdbValue == -INFINITY || isnan(lastdbValue)) {
lastdbValue = 0.0;
}
float dbValue = ((1.0 - tiny) * lastdbValue) + tiny * currentdb;
lastdbValue = dbValue;
return dbValue;
}
I'll explain how one would compute a dB value for a signal using code and then show how that relates to the vDSP example.
First, compute the RMS of a chunk of data
double sumSquared = 0;
for (int i = 0 ; i < numSamples ; i++)
{
sumSquared += samples[i]*samples[i];
}
double rms = sqrt(sumSquared/numSamples); // square root of the mean of the squares
For more information, look up the definition of root mean square (RMS).
Next convert the RMS value to dB
double dBvalue = 20*log10(rms);
How this relates to the example code
vDSP_vsq(buffer[0], 1, buffer[0], 1, bufferSize);
This line loops over the buffer and squares each element in place. If buffer contained the values [1,2,3,4] before the call, then after the call it would contain the values [1,4,9,16].
vDSP_meanv(buffer[0], 1, &meanVal, bufferSize);
This line loops over the buffer, summing the values, and returns the sum divided by the number of elements. So for the input buffer [1,4,9,16] it computes the sum 30, divides by 4, and returns the result 7.5.
vDSP_vdbcon(&meanVal, 1, &one, &meanVal, 1, 1, 0);
This line converts the meanVal to decibels. There is really no point in calling a vectorized function here since it is only operating on a single element. What it is doing however is plugging the parameters into the following formula:
meanVal = n*log10(meanVal/one)
where n is either 10 or 20 depending on the last parameter. In this case it is 10. 10 is used for power measurements and 20 is used for amplitudes. I think 20 would make more sense for you to use.
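As an aside, the same measurement can be written more compactly with vDSP_rmsqv, which computes the RMS directly; a minimal sketch (the function name is mine):
#include <Accelerate/Accelerate.h>
#include <math.h>
float bufferToDecibels(const float *samples, vDSP_Length count)
{
    float rms = 0.0f;
    vDSP_rmsqv(samples, 1, &rms, count); // square root of the mean of the squares
    if (rms <= 0.0f) return -INFINITY;   // guard against log10f(0)
    return 20.0f * log10f(rms);          // amplitude relative to full scale
}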
The last little bit of code looks to be doing some simple smoothing of the result to make the meter a little less bouncy.
I'm fairly new to signal processing, so please bear with me. I'm trying to implement a bandpass filter to apply to an audio recording obtained from an iPad. The recording has been converted to a Float32 pointer using the ExtAudioFile functions and an AudioBufferList. The sampling rate is 44100 Hz. The recording is about 9 seconds long (roughly 396900 samples) and contains a 2-6 kHz chirp plus some ambient noise. I need to bandpass filter the recording around the 2-6 kHz range in order to find at what point in time the chirp occurs. I have referred to the following resources to create a bandpass filter:
https://github.com/bartolsthoorn/NVDSP/blob/master/NVDSP.mm
https://github.com/bartolsthoorn/NVDSP/blob/master/Filters/NVBandpassFilter.m
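For reference, usage along these lines is what the NVDSP README suggests (a sketch only; verify the property and method names against the actual headers before relying on it):
#import "NVBandpassFilter.h"
NVBandpassFilter *BPF = [[NVBandpassFilter alloc] initWithSamplingRate:44100.0f];
BPF.centerFrequency = 4000.0f; // center of the 2-6 kHz band
BPF.Q = 1.0f;                  // roughly centerFrequency / bandwidth = 4000 / 4000
// Filters the raw time-domain samples in place -- no FFT required first.
[BPF filterData:recordingSamples numFrames:numFrames numChannels:1];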
My question is: can I simply pass the array of float values for the recording into the bandpass filter above? I have tried this, but I'm not sure it's working, since it seems to simply decrease every value in the array. What should I expect to see after passing the recording through the filter?
However, I've seen some resources say that I first need to convert the values from time-domain to frequency-domain using a FFT. I've tried the following code to do this using some vDSP functions:
- (Float32 *)calculateFFT
{
// Acquired from http://batmobile.blogs.ilrt.org/fourier-transforms-on-an-iphone/
int numSamples = _recordingLength; //~9 seconds * 44100Hz ~= 396900 samples
// Setup the length
vDSP_Length log2n = log2f(numSamples); // NOTE: 396900 is not a power of 2; log2f truncates to 18, so only the first 2^18 = 262144 samples are transformed
// Calculate the weights array. This is a one-off operation.
FFTSetup fftSetup = vDSP_create_fftsetup(log2n, FFT_RADIX2);
// For an FFT, numSamples must be a power of 2 (hence always even)
int nOver2 = numSamples/2;
// Populate *window with the values for a hamming window function
float *window = (float *)malloc(sizeof(float) * numSamples);
vDSP_hamm_window(window, numSamples, 0);
// Window the samples
vDSP_vmul(_recordingSamples, 1, window, 1, _recordingSamples, 1, numSamples);
// Define complex buffer
COMPLEX_SPLIT A;
A.realp = (float *) malloc(nOver2*sizeof(float));
A.imagp = (float *) malloc(nOver2*sizeof(float));
// Pack samples:
// C(re) -> A[n], C(im) -> A[n+1]
vDSP_ctoz((COMPLEX*)_recordingSamples, 2, &A, 1, numSamples/2);
//Perform a forward FFT using fftSetup and A
//Results are returned in A
vDSP_fft_zrip(fftSetup, &A, 1, log2n, FFT_FORWARD);
//Convert COMPLEX_SPLIT A result to magnitudes
Float32 *amp = new Float32[nOver2]; // only nOver2 output bins exist in A
amp[0] = A.realp[0]/(numSamples*2); // DC term (packed separately by vDSP)
for(int i=1; i<nOver2; i++) {
amp[i] = A.realp[i]*A.realp[i] + A.imagp[i]*A.imagp[i]; // squared magnitude
//printf("%f ",amp[i]);
}
return amp;
}
But I don't understand what is being returned from this function. If I do need to apply an FFT before passing the recording to the filter, what should be returned from calculateFFT and then passed to the filter?
Thanks in advance.
I'm currently using FFT code from here:
https://github.com/syedhali/EZAudio/tree/master/EZAudioExamples/iOS/EZAudioFFTExample
Here's the code from the 2 relevant methods:
-(void)createFFTWithBufferSize:(float)bufferSize withAudioData:(float*)data {
// Setup the length
_log2n = log2f(bufferSize);
// Calculate the weights array. This is a one-off operation.
_FFTSetup = vDSP_create_fftsetup(_log2n, FFT_RADIX2);
// For an FFT, numSamples must be a power of 2 (hence always even)
int nOver2 = bufferSize/2;
// Populate *window with the values for a hamming window function
float *window = (float *)malloc(sizeof(float)*bufferSize);
vDSP_hamm_window(window, bufferSize, 0);
// Window the samples
vDSP_vmul(data, 1, window, 1, data, 1, bufferSize);
free(window);
// Define complex buffer
_A.realp = (float *) malloc(nOver2*sizeof(float));
_A.imagp = (float *) malloc(nOver2*sizeof(float));
}
-(void)updateFFTWithBufferSize:(float)bufferSize withAudioData:(float*)data {
// For an FFT, numSamples must be a power of 2 (hence always even)
int nOver2 = bufferSize/2;
// Pack samples:
// C(re) -> A[n], C(im) -> A[n+1]
vDSP_ctoz((COMPLEX*)data, 2, &_A, 1, nOver2);
// Perform a forward FFT using fftSetup and A
// Results are returned in A
vDSP_fft_zrip(_FFTSetup, &_A, 1, _log2n, FFT_FORWARD);
// Convert COMPLEX_SPLIT A result to magnitudes
float amp[nOver2];
float maxMag = 0;
for(int i=0; i<nOver2; i++) {
// Calculate the magnitude
float mag = _A.realp[i]*_A.realp[i]+_A.imagp[i]*_A.imagp[i];
maxMag = mag > maxMag ? mag : maxMag;
}
for(int i=0; i<nOver2; i++) {
// Calculate the magnitude
float mag = _A.realp[i]*_A.realp[i]+_A.imagp[i]*_A.imagp[i];
// Bind the value to be less than 1.0 to fit in the graph
amp[i] = [EZAudio MAP:mag leftMin:0.0 leftMax:maxMag rightMin:0.0 rightMax:1.0];
}
I've modified the updateFFTWithBufferSize method above so that I could get the frequency in Hz like this:
for(int i=0; i<nOver2; i++) {
// Calculate the magnitude
float mag = _A.realp[i]*_A.realp[i]+_A.imagp[i]*_A.imagp[i];
if(maxMag < mag) {
_i_max = i;
}
maxMag = mag > maxMag ? mag : maxMag;
}
float frequency = _i_max / bufferSize * 44100;
NSLog(@"FREQUENCY: %f", frequency);
I've generated a few pure sine tones with Audacity at different frequencies to test with. The issue I'm seeing is that the code is returning the same frequency for two different sine tones that are relatively close in value.
For example:
A sine tone generated at 19255 Hz shows up from the FFT as 19293.750000 Hz, and so does a sine tone generated at 19330 Hz. Something must be off in the calculations.
Any assistance in how I can modify the above code to get a more precise FFT frequency reading for pure sine tones is greatly appreciated.
Thank you!
You can get a rough frequency estimate by fitting a parabolic curve to the 3 FFT bin magnitudes around the peak magnitude bin, and then finding the extrema of that parabola.
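A minimal sketch of that parabolic fit in C (the function name is mine; pass the magnitudes of the bins just below, at, and just above the peak):
// Returns a fractional bin offset in [-0.5, 0.5]; the refined frequency
// estimate is then (peakIndex + delta) * sampleRate / fftLength.
float parabolicPeakOffset(float magLeft, float magPeak, float magRight)
{
    float denom = magLeft - 2.0f * magPeak + magRight;
    if (denom == 0.0f) return 0.0f; // flat top: nothing to refine
    return 0.5f * (magLeft - magRight) / denom;
}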
A better estimate can be created by using the transform of your FFT window as an interpolation kernel, and doing successive approximation to refine an estimate of the maxima of the interpolated points. (Zero padding and using a much longer FFT will give you a similar type of interpolated estimate.)
The easy way for a stationary signal is, if possible, to just use a longer FFT with more samples that span a longer time interval.
You've got a number of problems going on here:
1) Your frequency bin spacing is Fs/N, about 86 Hz here (44100/512), so you're not going to get a resolution much better than that.
2) Your signal is very close to the Nyquist frequency (i.e., 20 kHz / 44.1 kHz is almost 0.5), and when you're this close to the Nyquist limit you need to be very careful if you want accurate results. (At ~20 kHz, you're only recording about two data points per full oscillation cycle.)
3) Since 20 kHz is at the edge of human hearing (and above the limit for most people), many microphones don't really try to capture it. Here's a measurement for the iPhone.
Perhaps your sampling frequency isn't high enough?
The FFT is a very good method to get a spectrum if you don't know anything about the input. If you know that the input is a pure sine wave, you can do much better. Start off by calculating the FFT to get a rough idea where the sine is. Get the minimum and maximum to estimate the amplitude [or get that from the FFT: square all inputs, add them, take the square root], then get the phase at the beginning and the end given the estimated frequency and amplitude.
In general, you'll find that the phase does not match. That's because the phase at the end is off by 2π·Δf·T, where Δf is the error in your frequency estimate and T = N/Fs is the duration of the buffer; f − Δf is then a better estimate of the frequency. Keep in mind that such a method is super noise-sensitive. The method works because the input is a pure sine wave, and noise is everything but that. Using this method iteratively blows up quickly; you even hit rounding errors (not sinusoidal either).
Another similar trick is subtracting the estimated wave. The difference between two sines is the product of two sinusoids, one at the average of the two frequencies (here around 19.3 kHz) and one at half their difference (Δf/2, well under 100 Hz). See also heterodyne detection.
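For the amplitude-estimation step mentioned above, a minimal sketch (samples and numSamples are assumed names; for a pure sine, amplitude = sqrt(2) * RMS):
#include <Accelerate/Accelerate.h>
#include <math.h>
// For a sine of amplitude A, the mean of the squares is A^2 / 2.
float meanSquare = 0.0f;
vDSP_measqv(samples, 1, &meanSquare, numSamples); // mean of the squares
float amplitude = sqrtf(2.0f * meanSquare);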
I am trying to port an existing FFT based low-pass filter to iOS using the Accelerate vDSP framework.
It seems like the FFT works as expected for about the first 1/4 of the sample, but after that the results seem wrong and, odder still, mirrored (the last half of the signal mirrors most of the first half).
You can see the results from a test application below. First is plotted the original sampled data, then an example of the expected filtered results (filtering out signal higher than 15Hz), then finally the results of my current FFT code (note that the desired results and example FFT result are at a different scale than the original data):
The actual code for my low-pass filter is as follows:
double *lowpassFilterVector(double *accell, uint32_t sampleCount, double lowPassFreq, double sampleRate )
{
double stride = 1;
int ln = log2f(sampleCount);
int n = 1 << ln;
// So that we get an FFT of the whole data set, we pad out the array to the next highest power of 2.
int fullPadN = n * 2;
double *padAccell = malloc(sizeof(double) * fullPadN);
memset(padAccell, 0, sizeof(double) * fullPadN);
memcpy(padAccell, accell, sizeof(double) * sampleCount);
ln = log2f(fullPadN);
n = 1 << ln;
int nOver2 = n/2;
DSPDoubleSplitComplex A;
A.realp = (double *)malloc(sizeof(double) * nOver2);
A.imagp = (double *)malloc(sizeof(double) * nOver2);
// This can be reused, just including it here for simplicity.
FFTSetupD setupReal = vDSP_create_fftsetupD(ln, FFT_RADIX2);
vDSP_ctozD((DSPDoubleComplex*)padAccell,2,&A,1,nOver2);
// Use the FFT to get frequency counts
vDSP_fft_zripD(setupReal, &A, stride, ln, FFT_FORWARD);
const double factor = 0.5f;
vDSP_vsmulD(A.realp, 1, &factor, A.realp, 1, nOver2);
vDSP_vsmulD(A.imagp, 1, &factor, A.imagp, 1, nOver2);
// NOTE: realp and imagp were allocated with nOver2 elements, so
// realp[nOver2] writes one past the end of the buffer; they would need
// nOver2+1 elements for this unpacking of the Nyquist term.
A.realp[nOver2] = A.imagp[0];
A.imagp[0] = 0.0f;
A.imagp[nOver2] = 0.0f;
// Set frequencies above target to 0.
// This tells us which bin the frequencies over the minimum desired correspond to
NSInteger binLocation = (lowPassFreq * n) / sampleRate;
// We add 2 because bin 0 holds special FFT meta data, so bins really start at "1" - and we want to filter out anything OVER the target frequency
for ( NSInteger i = binLocation+2; i < nOver2; i++ )
{
A.realp[i] = 0;
}
// Clear out all imaginary parts
bzero(A.imagp, (nOver2) * sizeof(double));
//A.imagp[0] = A.realp[nOver2];
// Now shift back all of the values
vDSP_fft_zripD(setupReal, &A, stride, ln, FFT_INVERSE);
double *filteredAccell = (double *)malloc(sizeof(double) * fullPadN);
// Converts complex vector back into 2D array
vDSP_ztocD(&A, stride, (DSPDoubleComplex*)filteredAccell, 2, nOver2);
// Have to scale results to account for Apple's FFT library algorithm, see:
// http://developer.apple.com/library/ios/#documentation/Performance/Conceptual/vDSP_Programming_Guide/UsingFourierTransforms/UsingFourierTransforms.html#//apple_ref/doc/uid/TP40005147-CH202-15952
double scale = (float)1.0f / fullPadN;//(2.0f * (float)n);
vDSP_vsmulD(filteredAccell, 1, &scale, filteredAccell, 1, fullPadN);
// Tracks results of conversion
printf("\nInput & output:\n");
for (int k = 0; k < sampleCount; k++)
{
printf("%3d\t%6.2f\t%6.2f\t%6.2f\n", k, accell[k], padAccell[k], filteredAccell[k]);
}
// Acceleration data will be replaced in-place.
return filteredAccell;
}
In the original code, the library handled input data whose size was not a power of two; in my Accelerate code I pad the input out to the next power of two. In the sample test below, the original data is 1000 samples, padded to 1024. I don't think that affects the results, but I mention it for the sake of completeness.
If you want to experiment with a solution, you can download the sample project that generates the graphs here (in the FFTTest folder):
FFT Example Project code
Thanks for any insight, I've not worked with FFT's before so I feel like I am missing something critical.
If you want a strictly real (not complex) result, then the data before the IFFT must be conjugate symmetric. If you don't want the result to be mirror symmetric, then don't zero the imaginary component before the IFFT. Merely zeroing bins before the IFFT creates a filter with a huge amount of ripple in the passband.
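Concretely, in the code above that means replacing the bzero of A.imagp with something like this sketch (variable names taken from the question; it assumes the packed vDSP_fft_zripD layout, where the Nyquist term sits in A.imagp[0], and skips the question's unpacking step):
// Zero BOTH components of each bin above the cutoff; the bins that are
// kept retain their imaginary parts, so the spectrum stays conjugate
// symmetric and the inverse FFT returns a real, un-mirrored signal.
for (NSInteger i = binLocation + 2; i < nOver2; i++)
{
    A.realp[i] = 0.0;
    A.imagp[i] = 0.0;
}
A.imagp[0] = 0.0; // packed Nyquist bin, also above a 15 Hz cutoff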
The Accelerate framework also supports more FFT lengths than just powers of 2.
I'm trying to get the frequency from the iPhone/iPod music library for a spectrum app on the iPod library, helping myself with reading-audio-samples-via-avassetreader to get audio samples, and then with using-the-apple-fft-and-accelerate-framework and the Apple vDSP samples, but somehow I'm going wrong somewhere and am unable to calculate the frequency.
So step by step:
read audio samples
apply a Hanning window
calculate the FFT
Is this the correct way to get frequencies from an iPod mp3 library?
Here is my code:
static COMPLEX_SPLIT A;
static FFTSetup setupReal;
static uint32_t log2n, n, nOver2;
static int32_t stride;
static float *obtainedReal;
static float scale;
+ (void)initialize
{
log2n = 10;
n = 1 << log2n;
stride = 1;
nOver2 = n / 2;
A.realp = (float *) malloc(nOver2 * sizeof(float));
A.imagp = (float *) malloc(nOver2 * sizeof(float));
obtainedReal = (float *) malloc(n * sizeof(float));
setupReal = vDSP_create_fftsetup(log2n, FFT_RADIX2);
}
- (float) performAcceleratedFastFourierTransForAudioBuffer:(AudioBufferList)ioData
{
NSUInteger * sampleIn = (NSUInteger *)ioData.mBuffers[0].mData;
for (int i = 0; i < nOver2; i++) {
double multiplier = 0.5 * (1 - cos(2*M_PI*i/(nOver2-1))); // Hann window
A.realp[i] = multiplier * sampleIn[i];
A.imagp[i] = 0;
}
memset(ioData.mBuffers[0].mData, 0, ioData.mBuffers[0].mDataByteSize);
vDSP_fft_zrip(setupReal, &A, stride, log2n, FFT_FORWARD);
vDSP_zvmags(&A, 1, A.realp, 1, nOver2);
scale = (float) 1.0 / (2 * n);
vDSP_vsmul(A.realp, 1, &scale, A.realp, 1, nOver2);
vDSP_vsmul(A.imagp, 1, &scale, A.imagp, 1, nOver2);
vDSP_ztoc(&A, 1, (COMPLEX *)obtainedReal, 2, nOver2);
int peakIndex = 0;
for (size_t i=1; i < nOver2-1; ++i) {
if ((obtainedReal[i] > obtainedReal[i-1]) && (obtainedReal[i] > obtainedReal[i+1]))
{
peakIndex = i;
break;
}
}
//here I don't know how to calculate frequency with my data
float frequency = obtainedReal[peakIndex-1] / 44100 / n;
vDSP_destroy_fftsetup(setupReal);
free(obtainedReal);
free(A.realp);
free(A.imagp);
return frequency;
}
I got 1.485757 and 1.332233 as my first frequencies
It looks to me like there is a problem in the conversion to complex input for the FFT. vDSP_ctoz() splits a buffer in which real and imaginary components are interleaved into two buffers, one real and one imaginary. Your input to that function appears to be just real data that has been cast to COMPLEX. This means your input buffer to vDSP_ctoz() is only half as long as it needs to be, and some garbage data beyond the buffer end gets converted.
You either need to create sampleOut to be 2*n in length and set every other value (the real parts) or better yet, you can bypass the vDSP_ctoz() and directly copy your input data into A.realp and set A.imagp to zeros. vDSP_ctoz() should only be needed when interfacing to a source that produces interleaved complex data.
Edit
Ok, I think I was wrong in my first suggestion, since the vDSP documentation says that the real input of the real-to-complex in-place FFT should be packed into split-complex format such that realp contains the even-indexed samples and imagp contains the odd-indexed samples. I have not actually used the vDSP library, but I am familiar with a lot of other FFT libraries and I missed that detail.
You should be able to find the peaks using A.realp after the call to vDSP_zvmags(&A, 1, A.realp, 1, nOver2); at that point, A.realp contains the squared magnitude of the FFT output, which is a real value per bin. If you are going to apply the scaling, it should be done before the magnitude-squared operation, but it may not be needed if you are only looking for peaks.
To get the real frequencies represented by the FFT output, use this formula:
F = (i * Fs) / N, i=0,1,...,N/2
where
i is the index of the FFT output buffer
Fs is the audio sampling rate
N is the FFT length
so your calculation might look like this:
float frequency = (peakIndex * 44100.0f) / n;
Keep in mind that vDSP only returns the first half of the input spectrum for real input since the second half is redundant. So the FFT output represents frequencies from 0 to Fs/2.
One other note is that I don't know if your peak finding algorithm will work very well since FFT output will not be smooth and there will often be a lot of oscillation. You are simply taking the first sample where the two adjacent samples are lower. If you just want to find a single peak, it would be better just to find the max magnitude across the entire output. If you want to find multiple peaks, you will have to do something more sophisticated.
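For the single-peak case, a minimal sketch using vDSP_maxvi, which returns the maximum and its index in one call (names taken from the code above; it assumes A.realp holds the squared magnitudes from vDSP_zvmags):
float maxMag = 0.0f;
vDSP_Length peakIndex = 0;
vDSP_maxvi(A.realp, 1, &maxMag, &peakIndex, nOver2);
// Convert the bin index to Hz: F = (i * Fs) / N
float frequency = (peakIndex * 44100.0f) / n;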