Can't get AudioStreamBasicDescription from AudioUnit inside MTAudioProcessingTap - iOS

Since there's very little (more like none, really) documentation on MTAudioProcessingTap, I'm using Apple's demo app from WWDC 2012.
I am trying to run an audio graph inside the MTAudioProcessingTap, so I need to set different stream formats for the units that require specific ones. But every time I try to use AudioUnitGetProperty to get the AudioUnit's ASBD, I get an EXC_BAD_ACCESS error.
Here's the relevant code, which crashes with EXC_BAD_ACCESS. You can try it yourself by downloading Apple's app and adding this to tap_PrepareCallback:
OSStatus status = noErr;
AudioStreamBasicDescription testStream;

// Set audio unit input/output stream format to processing format.
if (noErr == status)
{
    status = AudioUnitGetProperty(audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &testStream, sizeof(AudioStreamBasicDescription));
}

AudioUnitGetProperty takes a pointer to a UInt32 for its size argument; in your sample code you passed a value instead. Here is the declaration from the header:
OSStatus AudioUnitGetProperty(AudioUnit           inUnit,
                              AudioUnitPropertyID inID,
                              AudioUnitScope      inScope,
                              AudioUnitElement    inElement,
                              void                *outData,
                              UInt32              *ioDataSize);
You should be getting it like this:
AudioStreamBasicDescription testStream = {0};
UInt32 sizeTestStream = sizeof(AudioStreamBasicDescription);
OSStatus status = AudioUnitGetProperty(audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &testStream, &sizeTestStream);
if (status) {
    // handle error
}
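As a follow-up sketch (not part of the original answer): if you then need to push that format into another unit in your tap's graph, note that AudioUnitSetProperty takes the size by value, while AudioUnitGetProperty needs the pointer. otherUnit below is a hypothetical second unit.
if (noErr == status)
{
    // Sketch: apply the format just read to another (hypothetical) unit in the graph.
    status = AudioUnitSetProperty(otherUnit,
                                  kAudioUnitProperty_StreamFormat,
                                  kAudioUnitScope_Input,
                                  0,
                                  &testStream,
                                  sizeof(testStream)); // size passed by value here
}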

Related

Set timestamp in CMSampleBuffer using AVAssetWriter not working

Hello, I'm working on an app that records video + audio. The video source is the camera, and the audio comes from streaming. My problem happens when the connection to the stream is closed for some reason; in that case I switch the audio source to the built-in mic. The problem is that the audio is not synchronised at all. I would like to leave a gap in the audio and then set the timestamps in real time according to the current video timestamp. It seems AVAssetWriter appends the frames from the built-in mic consecutively and ignores their timestamps.
Do you know why AVAssetWriter is ignoring the timestamps?
EDIT:
This is the code that gets the latest video timestamp:
- (void)renderVideoSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVPixelBufferRef renderedPixelBuffer = NULL;
    CMTime timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
    self.lastVideoTimestamp = timestamp;
    // ... rest of the method omitted
}
and this is the code I use to synchronise the audio coming from the built-in mic when the stream is disconnected:
CMSampleBufferRef original = sampleBuffer;
sampleBuffer = [self adjustTime:original by:self.lastVideoTimestamp];
CFRelease(original);
// Adjust CMSampleBuffer function
- (CMSampleBufferRef)adjustTime:(CMSampleBufferRef)sample by:(CMTime)offset
{
    CMItemCount count;
    CMSampleBufferGetSampleTimingInfoArray(sample, 0, nil, &count);
    CMSampleTimingInfo *pInfo = malloc(sizeof(CMSampleTimingInfo) * count);
    CMSampleBufferGetSampleTimingInfoArray(sample, count, pInfo, &count);
    for (CMItemCount i = 0; i < count; i++)
    {
        pInfo[i].decodeTimeStamp = kCMTimeInvalid; //CMTimeSubtract(pInfo[i].decodeTimeStamp, offset);
        pInfo[i].presentationTimeStamp = CMTimeSubtract(pInfo[i].presentationTimeStamp, offset);
    }
    CMSampleBufferRef sout;
    CMSampleBufferCreateCopyWithNewTiming(nil, sample, count, pInfo, &sout);
    free(pInfo);
    return sout;
}
That is what I want to do:

Video
--------------------------------------------------------------------
Stream audio                       (stream disconnects)  Built-in mic
-----------------------------------                      -----------------

As you can see, there is a gap with no audio, because the stream was disconnected and part of its audio may never have been received.

What it is currently doing:

Video
--------------------------------------------------------------------
Stream audio                       (stream disconnects)  Built-in mic
--------------------------------------------------------------------
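One possible way to get that gap (a sketch, not an answer from this thread; firstMicBuffer, micBuffer and micTimeOffset are hypothetical names): compute the offset once, from the first mic buffer's presentation timestamp and the last video timestamp, then apply it to every mic buffer with adjustTime:by:.
// Sketch (untested): rebase the built-in-mic audio onto the video timeline.
CMTime firstMicPTS   = CMSampleBufferGetPresentationTimeStamp(firstMicBuffer);
CMTime micTimeOffset = CMTimeSubtract(firstMicPTS, self.lastVideoTimestamp);

// Subtracting micTimeOffset from each mic buffer makes the mic audio start at
// lastVideoTimestamp, leaving the gap where the stream dropped out.
CMSampleBufferRef rebased = [self adjustTime:micBuffer by:micTimeOffset];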

Getting or setting the audio format that AUGraphAddRenderNotify receives

Is it possible to set the audio format for an AUGraphAddRenderNotify callback? If not, is it possible just to see what the format is at init time?
I have a very simple AUGraph which plays audio from a kAudioUnitSubType_AudioFilePlayer into a kAudioUnitSubType_RemoteIO. I'm doing some live processing on the audio, so I've added an AUGraphAddRenderNotify callback to the graph to do it there. This all works fine, but when I initialise the graph I need to set up a couple of buffers and some other data for my processing, and for that I need to know what format will be supplied to the callback. (On some devices it's interleaved, on others it's not; that's fine, I just need to know which.)
Here's the setup:
NewAUGraph(&audioUnitGraph);
AUNode playerNode;
AUNode outputNode;
AudioComponentDescription playerDescription = {
    .componentType = kAudioUnitType_Generator,
    .componentSubType = kAudioUnitSubType_AudioFilePlayer,
    .componentManufacturer = kAudioUnitManufacturer_Apple
};
AudioComponentDescription outputDescription = {
    .componentType = kAudioUnitType_Output,
    .componentSubType = kAudioUnitSubType_RemoteIO,
    .componentManufacturer = kAudioUnitManufacturer_Apple
};
AUGraphAddNode(audioUnitGraph, &playerDescription, &playerNode);
AUGraphAddNode(audioUnitGraph, &outputDescription, &outputNode);
AUGraphOpen(audioUnitGraph);
AUGraphNodeInfo(audioUnitGraph, playerNode, NULL, &playerAudioUnit);
AUGraphNodeInfo(audioUnitGraph, outputNode, NULL, &outputAudioUnit);
// Tried adding all manner of AudioUnitSetProperty() calls here to set the AU formats
AUGraphConnectNodeInput(audioUnitGraph, playerNode, 0, outputNode, 0);
AUGraphAddRenderNotify(audioUnitGraph, render, (__bridge void *)self);
AUGraphInitialize(audioUnitGraph);
// Some time later...
// - Set up audio file in the file player
// - Start the graph with AUGraphStart()
I understand that altering the formats used by the two audio units may not have any effect on the format 'seen' at the point where the AUGraph renders into its callback (since that point is downstream of them), but surely there is a way to know at init time what that format will be?
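One approach that may work (a sketch, under the assumption that the render notify sees the format flowing into the RemoteIO unit): after AUGraphInitialize, read the output unit's input-scope stream format on element 0 and check its interleaving flag. renderFormat and interleaved are hypothetical local names.
// Sketch: query the format the render notify should see, after AUGraphInitialize().
AudioStreamBasicDescription renderFormat = {0};
UInt32 size = sizeof(renderFormat);
OSStatus err = AudioUnitGetProperty(outputAudioUnit,
                                    kAudioUnitProperty_StreamFormat,
                                    kAudioUnitScope_Input,
                                    0,
                                    &renderFormat,
                                    &size);
if (err == noErr) {
    BOOL interleaved = !(renderFormat.mFormatFlags & kAudioFormatFlagIsNonInterleaved);
    // allocate processing buffers based on renderFormat and interleaved
}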

Chapter 4 of Learning Core Audio not working due to AudioQueueNewInput failing with fmt?

I am trying to get the recording program from Chapter 4 of Learning Core Audio by Adamson and Avila to work. Both the version I typed in by hand and the unmodified version downloaded from the InformIT website fail in the same way: queue creation always fails with
Error: AudioQueueNewInput failed ('fmt?')
Has anyone else tried this sample program on Mavericks and Xcode 5? Here's the code from the download site, up to the point of failure. When I try LPCM with some hardcoded parameters it works, but I cannot get MPEG4AAC to work. Apple Lossless seems to work, though.
// Code from download
int main(int argc, const char *argv[])
{
    MyRecorder recorder = {0};
    AudioStreamBasicDescription recordFormat = {0};
    memset(&recordFormat, 0, sizeof(recordFormat));

    // Configure the output data format to be AAC
    recordFormat.mFormatID = kAudioFormatMPEG4AAC;
    recordFormat.mChannelsPerFrame = 2;

    // get the sample rate of the default input device
    // we use this to adapt the output data format to match hardware capabilities
    MyGetDefaultInputDeviceSampleRate(&recordFormat.mSampleRate);

    // ProTip: Use the AudioFormat API to trivialize ASBD creation.
    // input: at least the mFormatID, however, at this point we already have
    // mSampleRate, mFormatID, and mChannelsPerFrame
    // output: the remainder of the ASBD will be filled out as much as possible
    // given the information known about the format
    UInt32 propSize = sizeof(recordFormat);
    CheckError(AudioFormatGetProperty(kAudioFormatProperty_FormatInfo, 0, NULL,
                                      &propSize, &recordFormat),
               "AudioFormatGetProperty failed");

    // create an input (recording) queue
    AudioQueueRef queue = {0};
    CheckError(AudioQueueNewInput(&recordFormat,       // ASBD
                                  MyAQInputCallback,   // Callback
                                  &recorder,           // user data
                                  NULL,                // run loop
                                  NULL,                // run loop mode
                                  0,                   // flags (always 0)
                                  // &recorder.queue), // output: reference to AudioQueue object
                                  &queue),
               "AudioQueueNewInput failed");
I ran into the same problem. Check the sample rate: in your case it will be huge (96000 Hz, taken from the input device). Just try setting it manually to 44100.
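A sketch of where that override could go in the code above (the 48 kHz threshold is an assumption, not part of the original answer):
// get the hardware rate first, as in the book's code...
MyGetDefaultInputDeviceSampleRate(&recordFormat.mSampleRate);

// ...then clamp it to a rate the AAC encoder accepts (e.g. a 96 kHz input device)
if (recordFormat.mSampleRate > 48000.0) {
    recordFormat.mSampleRate = 44100.0;
}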

iOS AudioUnit settings to save mic input to raw PCM file

I'm currently working on a VOIP project for iOS.
I use AudioUnits to get data from the mic and play sounds.
My main app is written in C# (Xamarin) and uses a C++ library for faster audio and codec processing.
To test the input/output path I'm currently recording and playing back on the same device:
- store the mic audio data in a buffer in the recordingCallback
- play the data from the buffer in the playbackCallback
That works as expected, and the voice quality is good.
Now I need to save the incoming audio data from the mic to a raw PCM file.
I have done that, but the resulting file only contains some short "beep" signals.
So my question is:
What audio settings do I need so that I hear my voice (the real audio signal) in the resulting raw PCM file instead of short beep sounds?
Has anyone an idea what could be wrong, or what I have to do to be able to replay the resulting PCM file correctly?
My current format settings are (C# code):
int framesPerPacket = 1;
int channelsPerFrame = 1;
int bitsPerChannel = 16;
int bytesPerFrame = bitsPerChannel / 8 * channelsPerFrame;
int bytesPerPacket = bytesPerFrame * framesPerPacket;
AudioStreamBasicDescription audioFormat = new AudioStreamBasicDescription()
{
    SampleRate = 8000,
    Format = AudioFormatType.LinearPCM,
    FormatFlags = AudioFormatFlags.LinearPCMIsSignedInteger | AudioFormatFlags.LinearPCMIsPacked | AudioFormatFlags.LinearPCMIsAlignedHigh,
    BitsPerChannel = bitsPerChannel,
    ChannelsPerFrame = channelsPerFrame,
    BytesPerFrame = bytesPerFrame,
    FramesPerPacket = framesPerPacket,
    BytesPerPacket = bytesPerPacket,
    Reserved = 0
};
Additional C# settings (shown briefly, without error checking):
AVAudioSession session = AVAudioSession.SharedInstance();
NSError error = null;
session.SetCategory(AVAudioSession.CategoryPlayAndRecord, out error);
session.SetPreferredIOBufferDuration(Config.packetLength, out error);
session.SetPreferredSampleRate(Format.samplingRate,out error);
session.SetActive(true,out error);
My current recording callback, shortened to just the PCM file saving (C++ code):
OSStatus NotSoAmazingAudioEngine::recordingCallback(void *inRefCon,
                                                    AudioUnitRenderActionFlags *ioActionFlags,
                                                    const AudioTimeStamp *inTimeStamp,
                                                    UInt32 inBusNumber,
                                                    UInt32 inNumberFrames,
                                                    AudioBufferList *ioData)
{
    std::pair<BufferData*, int> bufferInfo = _sendBuffer.getNextEmptyBufferList();

    AudioBufferList* bufferList = new AudioBufferList();
    bufferList->mNumberBuffers = 1;
    bufferList->mBuffers[0].mData = NULL;

    OSStatus status = AudioUnitRender(_instance->_audioUnit, ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, bufferList);
    if(_instance->checkStatus(status))
    {
        if(fout != NULL) // fout is a "FILE*"
        {
            fwrite(bufferList->mBuffers[0].mData, sizeof(short), bufferList->mBuffers[0].mDataByteSize/sizeof(short), fout);
        }
    }
    delete bufferList;
    return noErr;
}
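For reference, a sketch based on common Core Audio usage (not something proposed in this thread): the AudioBufferList handed to AudioUnitRender is normally described fully before the call; the 16-bit mono sizing below is an assumption taken from the C# format settings above, and the other names mirror the callback parameters.
// Sketch: describe the capture buffer explicitly before calling AudioUnitRender.
AudioBufferList bufferList;
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0].mNumberChannels = 1;                  // mono, as in the ASBD above
bufferList.mBuffers[0].mDataByteSize   = inNumberFrames * 2; // 16-bit samples
bufferList.mBuffers[0].mData           = NULL;               // let the audio unit supply the data

OSStatus status = AudioUnitRender(_instance->_audioUnit, ioActionFlags, inTimeStamp,
                                  inBusNumber, inNumberFrames, &bufferList);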
Background info on why I need a raw PCM file:
To compress the audio data I'd like to use the Opus codec.
With the codec I have the problem that there is a tiny "tick" at the end of each frame: with a frame size of 60 ms I can hardly hear them, at 20 ms it's annoying, and at 10 ms my own voice can't be heard because of the ticking (for the VOIP application I'm trying to get 10 ms frames).
I don't encode and decode in the callback functions; I encode/decode the data in the functions that transfer audio data from the "micbuffer" to the "playbuffer".
And every time the playbackCallback wants to play some data, there is a frame in my buffer.
I have also ruled out my Opus encoding/decoding functions as the source of the error, because if I read PCM data from a raw PCM file, encode and decode it, and save it to a new raw PCM file, the ticking does not appear (when I play the result file with "Softe Audio Tools", the output audio is fine).
To find out what causes the ticking, I'd like to save the raw PCM data from the mic to a file and investigate from there.
I found the solution myself:
My PCM player expected 44100 Hz stereo, but my file contained only 8000 Hz mono, so the saved file was played back roughly 10x too fast.
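For reference, the mismatch works out as expected: the player consumes 44100 Hz x 2 channels = 88200 samples per second, while the file holds 8000 Hz x 1 channel = 8000 samples per second, a ratio of about 11, which matches the roughly 10x-too-fast playback.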

Encoding recording from AudioUnit for sending over NSOutputStream in iOS

I am trying to stream a recording from the mic on my iPhone in real time over NSSocket.
I have callbacks for recording and playback. I am trying to read directly from the buffer using the following code:
SInt16 *intFromBuffer;
intFromBuffer = audioBufferList->mBuffers[0].mData;
When I try:
NSLog(@"0x%X", intFromBuffer);
I get, for instance: 0xA182E00
And now the important part - playback callback:
static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData) {
    /**
     This is the reference to the object who owns the callback.
     */
    AudioProcessor *audioProcessor = (__bridge AudioProcessor*) inRefCon;

    // iterate over incoming stream and copy to output stream
    for (int i = 0; i < ioData->mNumberBuffers; i++) {
        AudioBuffer buffer = ioData->mBuffers[i];

        // find minimum size
        UInt32 size = min(buffer.mDataByteSize, [audioProcessor audioBuffer].mDataByteSize);

        intFromBuffer = (SInt16)([NSString stringWithFormat:@"0x%@", data.output]);
        memcpy(buffer.mData, intFromBuffer, size); // this works fine!
    }
    return noErr;
}
But now, how do I get this intFromBuffer across the NSStream?
When I try to send it as an NSString, receive it back as an NSString, and parse it back to SInt16, all I get is noise.
Any ideas? I'm running out of them...
Ok guys, I figured it out.
The problem was related to string encoding. My server was sending UTF-8 encoded text, but my code was trying to interpret it as ASCII, which caused the problem.
Another problem I found in my code was the wrong data type. I was sending the audio as NSMutableData, but on receipt I converted it back to NSMutableData incorrectly, so my AudioProcessor could not get at the actual data (it ended up being treated as an NSString object) and threw an exception while creating frames from the bytes:
SInt16 *frames = (SInt16 *)[data.input bytes];
The last remaining problem is that the transmission is very lossy, so when data comes back from the server the quality is very poor, but I believe that's a buffer-size problem. I think I will figure it out soon.
Thanks a lot to the StackOverflow community. I am new here, but I can see you are a big help, even with simple comments that point in the right direction.
Thanks again.
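As a closing sketch (an assumption, not code from the thread): rather than round-tripping the samples through NSString, the raw bytes can be written to the NSOutputStream directly. outputStream and buffer below are hypothetical stand-ins for an opened stream and the AudioBuffer captured in the recording callback.
// Sketch: send the PCM samples as raw bytes over the stream.
NSInteger written = [outputStream write:(const uint8_t *)buffer.mData
                              maxLength:buffer.mDataByteSize];
if (written < 0) {
    NSLog(@"stream write failed: %@", outputStream.streamError);
}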
