I am trying to make a volume meter for my app, which will be shown while recording a video. I have found a lot of support for such meters for iOS, but mostly for AVAudioPlayer, which is not an option for me. I am using AVCaptureSession to record, and end up in the delegate method shown below:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CMFormatDescriptionRef formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer);
    CFRetain(sampleBuffer);
    CFRetain(formatDescription);

    if (connection == audioConnection)
    {
        CMBlockBufferRef blockBuffer;
        AudioBufferList audioBufferList;
        CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer,
            NULL, &audioBufferList, sizeof(AudioBufferList), NULL, NULL,
            kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
            &blockBuffer);

        SInt16 *data = audioBufferList.mBuffers[0].mData;
    }
    // Releases etc..
}
(Only showing relevant code)
From what I understand, I receive a 'sample buffer' containing either audio or video. Once I've verified that the connection is indeed audio, I 'extract' the audioBufferList from the buffer and am left with a list of one (or more?) audioBuffers. The actual data is, as I understand it, represented as SInt16, or '16-bit signed integer', which should have a range from -32,768 to 32,767. However, if I simply print out the received values, I get a lot of bouncing numbers. In "silence" I get values bouncing rapidly between -200 and 200, and when there's noise I get values from -4,000 to 13,000, seemingly at random.
As I've understood from reading, the value 0 represents silence. However, I do not understand the difference between negative and positive values, and I do not know whether they are able to reach all the way down/up to -32,768 and 32,767.
I believe I need a percentage of how 'loud' it is, but have been unable to find anything.
I have read a couple of tutorials and references on the matter, but nothing makes sense to me. I followed one guide by doing this (appended to the code above, inside the if):
float accumulator = 0;
for (int i = 0; i < audioBufferList.mBuffers[0].mDataByteSize; i++)
    accumulator += data[i] * data[i];
float power = accumulator / audioBufferList.mBuffers[0].mDataByteSize;
float decibels = log10f(power);
NSLog(@"%f", decibels);
Apparently, this code was supposed to produce values between -1 and +1, but that did not happen. I am now getting values around 6.194681 in silence, and 7.773492 for some noise. This feels like the correct 'range', but in the 'wrong place'. I can't simply subtract 7 from the number and assume I'm between -1 and +1. There should be some logic and science behind how this should work, but I do not know enough about how digital audio works.
Does anyone know the logic behind this? Is 0 always silence while -32,768 and 32,767 are loud noises? Can I simply multiply all negative values by -1 to always get positive values, and then find out what percentage they are at (between 0 and 32,767)? Somehow, I don't believe this will work, as I guess there is a reason for the negative values. I'm not completely sure what to try.
The code in your question is wrong in several ways. It tries to copy the code from the article below, but you haven't handled the conversion from the article's float-based code to 16-bit integer math. You're also looping over the wrong number of values (the maximum of i) and will end up pulling in garbage data. So this is all kinds of wrong.
https://www.mikeash.com/pyblog/friday-qa-2012-10-12-obtaining-and-interpreting-audio-data.html
The code in the article is correct. Here it is, expanded a bit. This only looks at the first buffer in a 32-bit float buffer list.
float accumulator = 0;
AudioBuffer buffer = bufferList->mBuffers[0];
float *data = (float *)buffer.mData;
UInt32 numSamples = buffer.mDataByteSize / sizeof(float);
for (UInt32 i = 0; i < numSamples; i++) {
    accumulator += data[i] * data[i];
}
float power = accumulator / (float)numSamples;
float decibels = 10 * log10f(power);
As the article says, the result here is decibels relative to a 0 dB reference, i.e. 0.0 is the maximum value. This is the same thing that AVAudioPlayer's averagePowerForChannel: returns, for example.
To use this in your 16-bit integer context, you'd need to a) loop appropriately through each 16-bit sample, and b) convert each data[i] value from a 16-bit integer to a floating-point value in the [-1.0, 1.0] range before squaring and adding it to the accumulator. (The negative and positive values are just the waveform oscillating around zero; loudness corresponds to the magnitude of that oscillation, which is why each sample is squared.)
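A minimal sketch of that adaptation, reusing the names from your capture callback (and assuming mBuffers[0] holds mono or interleaved SInt16 samples):

SInt16 *samples = (SInt16 *)audioBufferList.mBuffers[0].mData;
UInt32 numSamples = audioBufferList.mBuffers[0].mDataByteSize / sizeof(SInt16);

float accumulator = 0;
for (UInt32 i = 0; i < numSamples; i++) {
    // Scale each 16-bit sample into [-1.0, 1.0] before squaring.
    float sample = samples[i] / 32768.0f;
    accumulator += sample * sample;
}
float power = accumulator / (float)numSamples;
float decibels = 10 * log10f(power); // 0.0 at full scale, large and negative in silence

For a 0..1 'percentage' you can then map a dB range such as [-60, 0] linearly onto [0, 1], which is roughly what audio meter UIs do.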
I know this question has been asked quite a bit, but none of the solutions really fit my case. I am looking to add a second type of object to the canvas with the code shown below. I know I didn't provide much, but it's a quick start; just ask for more if you think you have a hunch. The code below is in my render function.
So far I have checked that
I have enough vertices in my points array
I have enough normal vectors in my normals array
I have enough texture coordinates in my texCoords array
There are no mismatches between the vectors added when creating my terrain and my propeller.
The terrain renders just fine with the texture, lighting and all, but I am unable to get the propeller to render; I get the error I listed above. I have added multiple objects to canvases before and never run into an error like this.
//----------------------------------------- Draw Terrain ------------------------------------
var i = 0;
for (var row = 0 - dimension; row < dimension; row += 3) {
    for (var col = 0 - dimension; col < dimension; col += 3, i++) {
        var mv = mult(viewer, mult(translate(row, -1, col), mult(scale[i], rot[i])));
        gl.uniformMatrix4fv(modelViewLoc, false, flatten(mv));
        gl.uniformMatrix3fv(normalLoc, false, flatten(normalMatrix(mv, true)));
        gl.drawArrays(gl.TRIANGLES, 0, index);
    }
}
//----------------------------------------- Draw Propeller ------------------------------------
mv = mult(viewer, mult( translate(-2.1, -2.9, -.2), scalem(4,5,5)));
gl.uniformMatrix4fv(modelViewLoc, false, flatten(mv));
gl.uniformMatrix3fv(normalLoc, false, flatten(normalMatrix(mv, true)));
gl.drawArrays( gl.TRIANGLES, propellerStart, points.length);
Is there any way I can use the "Attribute 2" in the error message to track down the variable giving me this issue?
Appreciate the help!
What part don't you understand? The error is clear: whatever buffer you have attached to attribute 2 is not big enough to handle the propellerStart, points.length draw request.
So the first thing is to figure out which attribute is attribute 2. Do this by printing out your attribute locations. Is it your points, normals, or texcoords?
You should already be looking them up somewhere with gl.getAttribLocation, so print out those values and find out which one is #2.
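For example, something like this (the attribute names here are hypothetical; use whatever names your shaders actually declare):

// Hypothetical attribute names: substitute the ones from your vertex shader.
var posLoc  = gl.getAttribLocation(program, "vPosition");
var normLoc = gl.getAttribLocation(program, "vNormal");
var texLoc  = gl.getAttribLocation(program, "vTexCoord");
console.log("position:", posLoc, "normal:", normLoc, "texCoord:", texLoc);

Whichever of these prints 2 is the attribute from the error message.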
Then go look at the size of the buffer you attached to that attribute. To do that, somewhere you would have called:
gl.bindBuffer(gl.ARRAY_BUFFER, someBuffer);
gl.vertexAttribPointer(locationForAttribute2, size, type, normalize, stride, offset);
So we know it's someBuffer from the above code. We also need to know size, type, stride, and offset.
Somewhere else you filled that buffer with data using
gl.bindBuffer(gl.ARRAY_BUFFER, someBuffer);
gl.bufferData(gl.ARRAY_BUFFER, someData, ...);
So you need to find the size of someData.
sizeOfBuffer = someData.length * someData.BYTES_PER_ELEMENT
Let's say it's a 1000-element Float32Array, so someData.length is 1000 and someData.BYTES_PER_ELEMENT is 4, therefore sizeOfBuffer is 4000.
Using all of that you can now check if your buffer is too small. (Note: we already know it's too small since the browser told us so, but this is how to compute it yourself.)
Let's say size is 3, type is gl.FLOAT, stride is 64, offset is 12. (Note: I personally never use anything but stride = 0 and offset = 0.)
Let's say points.length = 50
numPoints = points.length;
bytesPerElement = size * 4; // because a gl.FLOAT is 4 bytes
realStride = stride === 0 ? bytesPerElement : stride;
bytesNeeded = realStride * (numPoints - 1) + bytesPerElement;
bytesNeeded in this case is (64 * 49) + 12 = 3148
So now we know how many bytes are needed. Does our buffer have enough data? Well, when you called draw you passed in an offset, propellerStart. Let's assume it's 900, and there's also the offset in the attribute, so:
bufferSizeNeeded = offset + propellerStart + bytesNeeded
so bufferSizeNeeded = 12 + 900 + 3148, which is 4060. Since 4060 is > sizeOfBuffer, which was 4000, you're going to get the error you got.
In any case, the point is that it's really up to you to figure out which buffer is used by attribute #2, then go look at why your buffer is too small. Is your offset to drawArrays wrong? Is your stride too big? Is your offset wrong in vertexAttribPointer (it's in bytes, not in number of units)? Did you put the wrong size (1, 2, 3, 4)? Did you miscalculate the number of points?
I'm trying to generate a spectrogram from an AVAudioPCMBuffer in Swift. I install a tap on an AVAudioMixerNode and receive a callback with the audio buffer. I'd like to convert the signal in the buffer to a [Float:Float] dictionary where the key represents the frequency and the value represents the magnitude of the audio on the corresponding frequency.
I tried using Apple's Accelerate framework but the results I get seem dubious. I'm sure it's just in the way I'm converting the signal.
I looked at this blog post amongst other things for a reference.
Here is what I have:
self.audioEngine.mainMixerNode.installTapOnBus(0, bufferSize: 1024, format: nil, block: { buffer, when in
    let bufferSize: Int = Int(buffer.frameLength)

    // Set up the transform
    let log2n = UInt(round(log2(Double(bufferSize))))
    let fftSetup = vDSP_create_fftsetup(log2n, Int32(kFFTRadix2))

    // Create the complex split value to hold the output of the transform
    var realp = [Float](count: bufferSize/2, repeatedValue: 0)
    var imagp = [Float](count: bufferSize/2, repeatedValue: 0)
    var output = DSPSplitComplex(realp: &realp, imagp: &imagp)

    // Now I need to convert the signal from the buffer to complex value, this is what I'm struggling to grasp.
    // The complexValue should be UnsafePointer<DSPComplex>. How do I generate it from the buffer's floatChannelData?
    vDSP_ctoz(complexValue, 2, &output, 1, vDSP_Length(bufferSize / 2))

    // Do the fast Fourier forward transform
    vDSP_fft_zrip(fftSetup, &output, 1, log2n, Int32(FFT_FORWARD))

    // Convert the complex output to magnitude
    var fft = [Float](count: Int(bufferSize / 2), repeatedValue: 0.0)
    vDSP_zvmags(&output, 1, &fft, 1, vDSP_Length(bufferSize / 2))

    // Release the setup
    vDSP_destroy_fftsetup(fftSetup)

    // TODO: Convert fft to [Float:Float] dictionary of frequency vs magnitude. How?
})
My questions are:
1. How do I convert buffer.floatChannelData to an UnsafePointer<DSPComplex> to pass to the vDSP_ctoz function? Is there a different/better way to do it, maybe even bypassing vDSP_ctoz?
2. Is this different if the buffer contains audio from multiple channels? How is it different when the buffer's audio channel data is or isn't interleaved?
3. How do I convert the indices in the fft array to frequencies in Hz?
4. Anything else I may be doing wrong?
Update
Thanks everyone for suggestions. I ended up filling the complex array as suggested in the accepted answer. When I plot the values and play a 440 Hz tone on a tuning fork it registers exactly where it should.
Here is the code to fill the array:
var channelSamples: [[DSPComplex]] = []
for var i = 0; i < channelCount; ++i {
    channelSamples.append([])
    let firstSample = buffer.format.interleaved ? i : i * bufferSize
    for var j = firstSample; j < bufferSize; j += buffer.stride * 2 {
        channelSamples[i].append(DSPComplex(real: buffer.floatChannelData.memory[j], imag: buffer.floatChannelData.memory[j + buffer.stride]))
    }
}
The channelSamples array then holds a separate array of samples for each channel.
To calculate the magnitude I used this:
var spectrum = [Float]()
for var i = 0; i < bufferSize / 2; ++i {
    let imag = out.imagp[i]
    let real = out.realp[i]
    let magnitude = sqrt(pow(real, 2) + pow(imag, 2))
    spectrum.append(magnitude)
}
1: Hacky way: you can just cast the float array, with the real and imaginary values going one after another.
2: It depends on whether the audio is interleaved or not. If it's interleaved (most of the cases), the left and right channels are in the array with a STRIDE of 2.
3: The lowest frequency in your case is the frequency of a period of 1024 samples. At a sample rate of 44100 Hz that's ~23 ms, so the lowest frequency of the spectrum is 1/(1024/44100) ≈ 43 Hz. Bin i of the half-spectrum then corresponds to i * 44100 / 1024 Hz (~43 Hz, ~86 Hz, ~129 Hz, and so on).
4: You have installed a callback handler on an audio bus. This is likely run frequently and with real-time thread priority. You should not do anything that has the potential for blocking (it will likely result in priority inversion and glitchy audio), such as:
Allocating memory (realp and imagp: [Float](...) is shorthand for Array<Float> and is likely allocated on the heap). Pre-allocate these.
Calling lengthy operations such as vDSP_create_fftsetup(), which also allocates memory and initialises it. Again, you can create this once outside of your callback.
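A minimal sketch of that pre-allocation, in the same old-Swift style as your code (it assumes the tap always delivers 1024-frame buffers):

// Created once, e.g. as properties, before installing the tap.
let fftBufferSize = 1024
let log2n = UInt(round(log2(Double(fftBufferSize))))
let fftSetup = vDSP_create_fftsetup(log2n, Int32(kFFTRadix2))
var realp = [Float](count: fftBufferSize / 2, repeatedValue: 0)
var imagp = [Float](count: fftBufferSize / 2, repeatedValue: 0)
var output = DSPSplitComplex(realp: &realp, imagp: &imagp)

// The tap block then only reuses fftSetup and output; call
// vDSP_destroy_fftsetup(fftSetup) once, when you are done tapping.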
I am trying to display a spectrum analyser for iOS and am stuck after two weeks. I have read pretty much every post about FFT and the Accelerate Frameworks on here and have downloaded the aurioTouch2 example from Apple.
I think I understand the mechanism of FFT (did it in Uni 20 years ago) and am a fairly experienced iOS programmer but I have hit a wall.
I am using AudioUnit to play mp3, m4a, and wav files and have that working beautifully. I have attached a render callback to the AUGraph and I can plot waveforms to the music. The waveforms follow the music nicely.
When I take the data from the render callback, which is in Float form in the range 0 .. 1, and attempt to pass it through the FFT code (either my own or aurioTouch2's FFTBufferManager.mm), I get something that's not completely wrong, but not correct either. For instance, this is a 440 Hz sine wave:
That peak value is -6.1306, followed by -24, -31, -35, and the values towards the end are around -63.
Animated gif for "Black Betty":
The format I receive from the Render callback:
AudioStreamBasicDescription outputFileFormat;
outputFileFormat.mSampleRate = 44100;
outputFileFormat.mFormatID = kAudioFormatLinearPCM;
outputFileFormat.mFormatFlags = kAudioFormatFlagsNativeFloatPacked | kAudioFormatFlagIsNonInterleaved;
outputFileFormat.mBitsPerChannel = 32;
outputFileFormat.mChannelsPerFrame = 2;
outputFileFormat.mFramesPerPacket = 1;
outputFileFormat.mBytesPerFrame = outputFileFormat.mBitsPerChannel / 8;
outputFileFormat.mBytesPerPacket = outputFileFormat.mBytesPerFrame;
Looking at the aurioTouch2 example, it looks like they receive their data in a signed int format but then run an AudioConverter to convert it to Float. Their format is hard to decipher but uses a macro:
drawFormat.SetAUCanonical(2, false);
drawFormat.mSampleRate = 44100;
XThrowIfError(AudioConverterNew(&thruFormat, &drawFormat, &audioConverter), "couldn't setup AudioConverter");
In their render callback they copy the data out of the AudioBufferList into mAudioBuffer (Float32 *) and pass it to the CalculateFFT method, which calls vDSP_ctoz:
//Generate a split complex vector from the real data
vDSP_ctoz((COMPLEX *)mAudioBuffer, 2, &mDspSplitComplex, 1, mFFTLength);
I think this is where my problem is. What format does vDSP_ctoz expect? It is cast as a (COMPLEX *), but I cannot find anything in the aurioTouch2 code that puts the mAudioBuffer data into the (COMPLEX *) format. So it must be coming from the render callback in this format?
typedef struct DSPComplex {
    float real;
    float imag;
} DSPComplex;
typedef DSPComplex COMPLEX;
If I don't have the format correct at this point (or understand the format) then there is no point in debugging the rest of it.
Any help would be greatly appreciated.
Code from AurioTouch2 that I am using:
Boolean FFTBufferManager::ComputeFFTFloat(Float32 *outFFTData)
{
    if (HasNewAudioData())
    {
        // Added after Hotpaw2 comment.
        UInt32 windowSize = mFFTLength;
        Float32 *window = (float *)malloc(windowSize * sizeof(float));
        memset(window, 0, windowSize * sizeof(float));
        vDSP_hann_window(window, windowSize, 0);
        vDSP_vmul(mAudioBuffer, 1, window, 1, mAudioBuffer, 1, mFFTLength);

        // Added after Hotpaw2 comment.
        DSPComplex *audioBufferComplex = new DSPComplex[mFFTLength];
        for (int i = 0; i < mFFTLength; i++)
        {
            audioBufferComplex[i].real = mAudioBuffer[i];
            audioBufferComplex[i].imag = 0.0f;
        }

        // Generate a split complex vector from the real data
        vDSP_ctoz((COMPLEX *)audioBufferComplex, 2, &mDspSplitComplex, 1, mFFTLength);

        // Take the fft and scale appropriately
        vDSP_fft_zrip(mSpectrumAnalysis, &mDspSplitComplex, 1, mLog2N, kFFTDirection_Forward);
        vDSP_vsmul(mDspSplitComplex.realp, 1, &mFFTNormFactor, mDspSplitComplex.realp, 1, mFFTLength);
        vDSP_vsmul(mDspSplitComplex.imagp, 1, &mFFTNormFactor, mDspSplitComplex.imagp, 1, mFFTLength);

        // Zero out the nyquist value
        mDspSplitComplex.imagp[0] = 0.0;

        // Convert the fft data to dB
        vDSP_zvmags(&mDspSplitComplex, 1, outFFTData, 1, mFFTLength);

        // In order to avoid taking log10 of zero, an adjusting factor is added in to make the minimum value equal -128dB
        vDSP_vsadd(outFFTData, 1, &mAdjust0DB, outFFTData, 1, mFFTLength);
        Float32 one = 1;
        vDSP_vdbcon(outFFTData, 1, &one, outFFTData, 1, mFFTLength, 0);

        delete[] audioBufferComplex; // allocated with new[], so delete[] rather than free()
        free(window);

        OSAtomicDecrement32Barrier(&mHasAudioData);
        OSAtomicIncrement32Barrier(&mNeedsAudioData);
        mAudioBufferCurrentIndex = 0;
        return true;
    }
    else if (mNeedsAudioData == 0)
        OSAtomicIncrement32Barrier(&mNeedsAudioData);

    return false;
}
After reading the answer below I tried adding this to the top of the method:
DSPComplex *audioBufferComplex = new DSPComplex[mFFTLength];
for (int i = 0; i < mFFTLength; i++)
{
    audioBufferComplex[i].real = mAudioBuffer[i];
    audioBufferComplex[i].imag = 0.0f;
}

// Generate a split complex vector from the real data
vDSP_ctoz((COMPLEX *)audioBufferComplex, 2, &mDspSplitComplex, 1, mFFTLength);
And the result I got was this:
I am now rendering the 5 last results, they are the faded ones behind.
After adding the Hann window:
It now looks a lot better after applying the Hann window (thanks hotpaw2). I'm not worried about the mirror image.
My main problem now is that with a real song it doesn't look like other spectrum analysers. Everything is always pushed high on the left, no matter what music I push through it. After applying the window it does seem to follow the beat a lot better, though.
The AU render callback only returns the real part of the complex input required. To use a complex FFT, you need to fill an equal number of imaginary components with zeros yourself, and copy over the elements of the real part, if needed.
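In split-complex terms, that amounts to something like this (a sketch reusing the names from the code above; note it is sized for a full complex FFT such as vDSP_fft_zip, not the packed real-input vDSP_fft_zrip):

// mDspSplitComplex.realp and .imagp must each hold mFFTLength floats here.
for (int i = 0; i < mFFTLength; i++)
{
    mDspSplitComplex.realp[i] = mAudioBuffer[i]; // copy the real samples
    mDspSplitComplex.imagp[i] = 0.0f;            // zero the imaginary components
}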
I have an FFMPEG AVFrame in YUVJ420P and I want to convert it to a CVPixelBufferRef with CVPixelBufferCreateWithBytes. The reason I want to do this is to use AVFoundation to show/encode the frames.
I selected kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange and tried converting it, since the AVFrame has the data in three planes (Y480, Cb240, Cr240), and according to what I've researched this matches the selected kCVPixelFormatType. Being biplanar, I need to convert it into a buffer that contains a Y480 plane and an interleaved CbCr480 plane.
I tried to create a buffer with 2 planes:
frame->data[0] on the first plane,
frame->data[1] and frame->data[2] interleaved on the second plane.
However, I'm getting return error -6661 (kCVReturnInvalidArgument) from CVPixelBufferCreateWithBytes:
"Invalid function parameter. For example, out of range or the wrong type."
I don't have any expertise in image processing, so any pointers to documentation that can get me started in the right approach to this problem are appreciated. My C skills aren't top of the line either, so maybe I'm making a basic mistake here.
uint8_t **buffer = malloc(2 * sizeof(int *));
buffer[0] = frame->data[0];
buffer[1] = malloc(frame->linesize[0] * sizeof(int));
for (int i = 0; i < frame->linesize[0]; i++) {
    if (i % 2) {
        buffer[1][i] = frame->data[1][i/2];
    } else {
        buffer[1][i] = frame->data[2][i/2];
    }
}
int ret = CVPixelBufferCreateWithBytes(NULL, frame->width, frame->height, kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange, buffer, frame->linesize[0], NULL, 0, NULL, cvPixelBufferSample);
The frame is the AVFrame with the rawData from FFMPEG Decoding.
"My C skills aren't top of the line either so maybe I'm making a basic mistake here."
You're making several:
You should be using CVPixelBufferCreateWithPlanarBytes(). I do not know if CVPixelBufferCreateWithBytes() can be used to create a planar video frame; if so, it will require a pointer to a "plane descriptor block" (I can't seem to find the struct in the docs).
frame->linesize[0] is the bytes per row, not the size of the whole image. The docs are unclear, but the usage is fairly unambiguous.
frame->linesize[0] refers to the Y plane; you care about the UV planes.
Where is sizeof(int) from?
You're passing in cvPixelBufferSample; you might mean &cvPixelBufferSample.
You're not passing in a release callback. The documentation does not say that you can pass NULL.
Try something like this:
// For 4:2:0, each source chroma plane has height/2 rows of linesize[1] bytes.
size_t srcPlaneSize = frame->linesize[1] * frame->height / 2;
size_t dstPlaneSize = srcPlaneSize * 2;
uint8_t *dstPlane = malloc(dstPlaneSize);
void *planeBaseAddress[2] = { frame->data[0], dstPlane };

// This loop is very naive and assumes that the line sizes are the same.
// It also copies padding bytes.
assert(frame->linesize[1] == frame->linesize[2]);
for (size_t i = 0; i < srcPlaneSize; i++) {
    // Biplanar 4:2:0 CbCr data is interleaved Cb first, then Cr.
    dstPlane[2*i  ] = frame->data[1][i]; // Cb
    dstPlane[2*i+1] = frame->data[2][i]; // Cr
}

// This assumes the width and height are even (it's 420 after all).
assert(frame->width % 2 == 0 && frame->height % 2 == 0);
size_t planeWidth[2] = { frame->width, frame->width / 2 };
size_t planeHeight[2] = { frame->height, frame->height / 2 };

// I'm not sure where you'd get this.
size_t planeBytesPerRow[2] = { frame->linesize[0], frame->linesize[1] * 2 };

int ret = CVPixelBufferCreateWithPlanarBytes(
        NULL,
        frame->width,
        frame->height,
        kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
        NULL,
        0,
        2,
        planeBaseAddress,
        planeWidth,
        planeHeight,
        planeBytesPerRow,
        YOUR_RELEASE_CALLBACK,
        YOUR_RELEASE_CALLBACK_CONTEXT,
        NULL,
        &cvPixelBufferSample);
Memory management is left as an exercise to the reader, but for test code you might get away with passing in NULL instead of a release callback.
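If you do want a real release callback, a sketch of one possibility (the name is hypothetical; it assumes dstPlane was malloc'd as above and is passed as the release context):

// Frees the interleaved CbCr plane malloc'd above. The Y plane still
// belongs to the AVFrame, so it must not be freed here.
static void releaseInterleavedPlane(void *releaseRefCon, const void *dataPtr,
                                    size_t dataSize, size_t numberOfPlanes,
                                    const void *planeAddresses[])
{
    free(releaseRefCon); // dstPlane, passed as the context argument
}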
I'm working on an app that should do some audio signal processing. I need to measure the audio level in each one of the buffers I get (through the callback function). I've been searching the web for some time, and found that there is a built-in property called current level metering:
AudioQueueGetProperty(recordState->queue,kAudioQueueProperty_CurrentLevelMeter,meters,&dlen);
This property gets me the average or peak audio level, but it's not synchronised with the current buffer.
I figured out that I need to calculate the audio level from the buffer data myself, so I have this:
double calcAudioRMS (SInt16 *audioData, int numOfSamples)
{
    double RMS, adPercent;
    RMS = 0;
    for (int i = 0; i < numOfSamples; i++)
    {
        adPercent = audioData[i] / 32768.0f;
        RMS += adPercent * adPercent;
    }
    RMS = sqrt(RMS / numOfSamples);
    return RMS;
}
This function gets the audio data (cast to SInt16) and the number of samples in the current buffer. The numbers I get are indeed between 0 and 1, but they seem rather random and low compared to the numbers I got from the built-in audio level metering.
The recording audio format is:
format->mSampleRate = 8000.0;
format->mFormatID = kAudioFormatLinearPCM;
format->mFramesPerPacket = 1;
format->mChannelsPerFrame = 1;
format->mBytesPerFrame = 2;
format->mBytesPerPacket = 2;
format->mBitsPerChannel = 16;
format->mReserved = 0;
format->mFormatFlags = kLinearPCMFormatFlagIsSignedInteger |kLinearPCMFormatFlagIsPacked;
My question is how to get the right values from the buffer. Is there a built-in function or property for this? Or should I calculate the audio level myself, and if so, how?
Thanks in advance.
Your calculation of RMS power is correct. I'd be inclined to say that you have fewer samples than Apple does, or something similar, and that would explain the difference. You can check by inputting a loud sine wave and verifying that both you and Apple calculate an RMS power of 1/sqrt(2).
Unless there's a good reason not to, I would use Apple's power calculations. I've used them, and they seem good to me. Additionally, you generally don't want RMS power but RMS power as decibels; use the kAudioQueueProperty_CurrentLevelMeterDB constant. (This depends on whether you're trying to build an audio meter or truly display the audio power.)
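Since your RMS is normalised to [0, 1], the decibel conversion is just 20 * log10(RMS), with 0 dB at full scale. If you go the built-in route, a sketch of reading the dB meter (note: as far as I know, metering must first be enabled on the queue):

// Enable metering once, e.g. right after creating the queue.
UInt32 meteringOn = 1;
AudioQueueSetProperty(recordState->queue, kAudioQueueProperty_EnableLevelMetering,
                      &meteringOn, sizeof(meteringOn));

// Then, per buffer, read the current meter state (one element per channel; mono here).
AudioQueueLevelMeterState meter[1];
UInt32 dlen = sizeof(meter);
AudioQueueGetProperty(recordState->queue, kAudioQueueProperty_CurrentLevelMeterDB,
                      meter, &dlen);
// meter[0].mAveragePower and meter[0].mPeakPower are in decibels, 0 dB = full scale.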