Getting or setting the audio format that AUGraphAddRenderNotify receives (iOS)

Is it possible to set the audio format for an AUGraphAddRenderNotify callback? If not, is it possible just to see what the format is at init time?
I have a very simple AUGraph which plays audio from a kAudioUnitSubType_AudioFilePlayer to a kAudioUnitSubType_RemoteIO. I'm doing some live processing on the audio, so I've added an AUGraphAddRenderNotify callback to the graph to do it there. This all works fine, but when I initialise the graph I need to set up a couple of buffers and some other data for my processing, and I need to know what format will be supplied in the callback. (On some devices it's interleaved, on others it's not; that's fine, I just need to know which.)
Here's the setup:
NewAUGraph(&audioUnitGraph);
AUNode playerNode;
AUNode outputNode;
AudioComponentDescription playerDescription = {
    .componentType = kAudioUnitType_Generator,
    .componentSubType = kAudioUnitSubType_AudioFilePlayer,
    .componentManufacturer = kAudioUnitManufacturer_Apple
};
AudioComponentDescription outputDescription = {
    .componentType = kAudioUnitType_Output,
    .componentSubType = kAudioUnitSubType_RemoteIO,
    .componentManufacturer = kAudioUnitManufacturer_Apple
};
AUGraphAddNode(audioUnitGraph, &playerDescription, &playerNode);
AUGraphAddNode(audioUnitGraph, &outputDescription, &outputNode);
AUGraphOpen(audioUnitGraph);
AUGraphNodeInfo(audioUnitGraph, playerNode, NULL, &playerAudioUnit);
AUGraphNodeInfo(audioUnitGraph, outputNode, NULL, &outputAudioUnit);
// Tried adding all manner of AudioUnitSetProperty() calls here to set the AU formats
AUGraphConnectNodeInput(audioUnitGraph, playerNode, 0, outputNode, 0);
AUGraphAddRenderNotify(audioUnitGraph, render, (__bridge void *)self);
AUGraphInitialize(audioUnitGraph);
// Some time later...
// - Set up audio file in the file player
// - Start the graph with AUGraphStart()
I can understand that altering the formats used by the two audio units may not have any effect on the format 'seen' at the point the AUGraph renders into its callback (as this is downstream of them), but surely there is a way to know at init time what that format will be?
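One way to check (a sketch, not from the question): after AUGraphInitialize() returns, ask the RemoteIO unit for its stream format on the input scope of bus 0. That is the client-side format the graph renders into, so it should match what the render notify callback receives; verifying on an actual device is still advisable.
AudioStreamBasicDescription renderFormat = {0};
UInt32 renderFormatSize = sizeof(renderFormat);
// Query the format the output unit expects on its input bus (the graph's render format)
AudioUnitGetProperty(outputAudioUnit,
                     kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Input,
                     0,
                     &renderFormat,
                     &renderFormatSize);
BOOL interleaved = !(renderFormat.mFormatFlags & kAudioFormatFlagIsNonInterleaved);
// Size the processing buffers for renderFormat.mChannelsPerFrame channels, interleaved or not as reported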

Related

Set timestamp in CMSampleBuffer using AVAssetWriter not working

Hello, I'm working on an app that records video + audio. The video source is the camera, and the audio comes from streaming. My problem happens when the connection to the stream is closed for some reason; in that case I switch the audio source to the built-in mic. The problem is that the audio is not synchronised at all. I would like to add a gap in my audio and then set the timestamps in real time according to the current video timestamp. It seems AVAssetWriter is appending the frames from the built-in mic consecutively and ignoring the timestamps.
Do you know why AVAssetWriter is ignoring the timestamps?
EDIT:
This is the code that gets the latest video timestamp:
- (void)renderVideoSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVPixelBufferRef renderedPixelBuffer = NULL;
    CMTime timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
    self.lastVideoTimestamp = timestamp;
    // ...
and this is the code that I use to synchronise the audio coming from the built-in mic when the stream is disconnected:
CMSampleBufferRef adjustedBuffer = [self adjustTime:sampleBuffer by:self.lastVideoTimestamp];
CFRelease(sampleBuffer);
sampleBuffer = adjustedBuffer;
// Adjust CMSampleBuffer timing
- (CMSampleBufferRef)adjustTime:(CMSampleBufferRef)sample by:(CMTime)offset
{
    CMItemCount count;
    CMSampleBufferGetSampleTimingInfoArray(sample, 0, nil, &count);
    CMSampleTimingInfo *pInfo = malloc(sizeof(CMSampleTimingInfo) * count);
    CMSampleBufferGetSampleTimingInfoArray(sample, count, pInfo, &count);
    for (CMItemCount i = 0; i < count; i++)
    {
        pInfo[i].decodeTimeStamp = kCMTimeInvalid; // CMTimeSubtract(pInfo[i].decodeTimeStamp, offset);
        pInfo[i].presentationTimeStamp = CMTimeSubtract(pInfo[i].presentationTimeStamp, offset);
    }
    CMSampleBufferRef sout;
    CMSampleBufferCreateCopyWithNewTiming(nil, sample, count, pInfo, &sout);
    free(pInfo);
    return sout;
}
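For reference, a hedged usage sketch (not from the question): the buffer returned by adjustTime:by: is a fresh copy from CMSampleBufferCreateCopyWithNewTiming, so the caller owns it and should release it once it has been appended. audioWriterInput is an illustrative name for the audio AVAssetWriterInput.
CMSampleBufferRef adjusted = [self adjustTime:sampleBuffer by:self.lastVideoTimestamp];
if (adjusted != NULL) {
    if (self.audioWriterInput.isReadyForMoreMediaData) {
        [self.audioWriterInput appendSampleBuffer:adjusted];
    }
    CFRelease(adjusted); // release the copy created by CMSampleBufferCreateCopyWithNewTiming
}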
That is what I want to do:

Video:  --------------------------------------------------------------------
Audio:  ---- stream ----    (stream disconnected, no audio)    ---- built-in mic ----

I would like to get this: as you can see, there is a gap with no audio, because the stream is disconnected and some of its audio may never have arrived.

What it is currently doing:

Video:  --------------------------------------------------------------------
Audio:  ---- stream ------------------------------------------ built-in mic ----

How to get the current captured timestamp of Camera data from CMSampleBufferRef in iOS

I developed an iOS application which saves captured camera data into a file, and I used
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
to capture the CMSampleBufferRef; this gets encoded into H264 format and the frames are saved to a file using AVAssetWriter.
I followed the sample source code to create this app.
Now I want to get the timestamps of the saved video frames to create a new movie file. For this, I have done the following:
Locate the file and create an AVAssetReader to read the file
CMSampleBufferRef sample = [asset_reader_output copyNextSampleBuffer];
CMSampleBufferRef buffer;
while ([assestReader status] == AVAssetReaderStatusReading) {
    buffer = [asset_reader_output copyNextSampleBuffer];
    // CMSampleBufferGetPresentationTimeStamp(buffer);
    CMTime presentationTimeStamp = CMSampleBufferGetPresentationTimeStamp(buffer);
    UInt32 timeStamp = (1000 * presentationTimeStamp.value) / presentationTimeStamp.timescale;
    NSLog(@"timestamp %u", (unsigned int)timeStamp);
    NSLog(@"reading");
    // CFRelease(buffer);
}
The printed value gives me the wrong timestamp; I need the frame's capture time.
Is there any way to get the frame's capture timestamp?
I've read an answer about getting the timestamp, but it doesn't really address my question above.
Update:
I read the sample timestamp before it was written to the file, and it gave me a value like 33333.23232. After I read the file back, it gave me a different value. Is there any specific reason for this?
The file timestamps are different to the capture timestamps because they are relative to the beginning of the file. This means they are the capture timestamps you want, minus the timestamp of the very first frame captured:
presentationTimeStamp = fileFramePresentationTime + firstFrameCaptureTime
So when reading from the file, this should calculate the capture timestamp you want:
CMTime firstCaptureFrameTimeStamp = // the first capture timestamp you see
CMTime presentationTimeStamp = CMTimeAdd(CMSampleBufferGetPresentationTimeStamp(buffer), firstCaptureFrameTimeStamp);
If you do this calculation between launches of your app, you'll need to serialise and deserialise the first frame capture time, which you can do with CMTimeCopyAsDictionary and CMTimeMakeFromDictionary.
You could store this in the output file, via AVAssetWriter's metadata property.
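For illustration, here is a hedged sketch of that round trip (the storage choice and the key name firstFrameCaptureTime are made up for the example, not part of the answer):
// Serialise the first-frame capture time so it survives app relaunches
CFDictionaryRef timeDict = CMTimeCopyAsDictionary(firstCaptureFrameTimeStamp, kCFAllocatorDefault);
[[NSUserDefaults standardUserDefaults] setObject:(__bridge NSDictionary *)timeDict
                                          forKey:@"firstFrameCaptureTime"];
CFRelease(timeDict);

// Later, restore it before reconstructing the capture timestamps
NSDictionary *storedDict = [[NSUserDefaults standardUserDefaults] dictionaryForKey:@"firstFrameCaptureTime"];
CMTime restoredFirstCaptureTime = CMTimeMakeFromDictionary((__bridge CFDictionaryRef)storedDict);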

Can't get AudioStreamBasicDescription from AudioUnit inside MTAudioProcessingTap

Since there's very little (more like none, really) documentation on MTAudioProcessingTap, I'm using Apple's demo app from WWDC 2012.
I am trying to have an audio graph inside the MTAudioProcessingTap, so I need to set different stream formats for the units that require specific formats. But every time I try to use AudioUnitGetProperty to get the AudioUnit's ASBD, I get an EXC_BAD_ACCESS error.
Here's the relevant code which results in EXC_BAD_ACCESS. You can try it yourself by downloading Apple's app and adding this to tap_PrepareCallback:
OSStatus status = noErr;
AudioStreamBasicDescription testStream;
// Set audio unit input/output stream format to processing format.
if (noErr == status)
{
    status = AudioUnitGetProperty(audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &testStream, sizeof(AudioStreamBasicDescription));
}
AudioUnitGetProperty takes a pointer to a UInt32 for its size argument; in your sample code you passed a value instead. Here is the declaration:
OSStatus AudioUnitGetProperty(AudioUnit inUnit,
                              AudioUnitPropertyID inID,
                              AudioUnitScope inScope,
                              AudioUnitElement inElement,
                              void *outData,
                              UInt32 *ioDataSize);
You should be getting it like this:
AudioStreamBasicDescription testStream = {0};
UInt32 sizeTestStream = sizeof(AudioStreamBasicDescription);
OSStatus status = AudioUnitGetProperty(audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &testStream, &sizeTestStream);
if (status) {
    // handle error
}
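As a hedged follow-up (not part of the answer), once the call succeeds you can inspect the retrieved ASBD to confirm it is the processing format you expect:
if (status == noErr) {
    NSLog(@"tap input format: %.0f Hz, %u channels, %u bits, flags 0x%x",
          testStream.mSampleRate,
          (unsigned)testStream.mChannelsPerFrame,
          (unsigned)testStream.mBitsPerChannel,
          (unsigned)testStream.mFormatFlags);
}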

Chapter 4 of Learning Core Audio not working due to AudioQueueNewInput failing with 'fmt?'

I am trying to get the recording program from Chapter 4 of Learning Core Audio by Adamson and Avila to work. Both the version I typed in by hand and the unmodified version downloaded from the InformIT website fail in the same way: queue creation always fails with
Error: AudioQueueNewInput failed ('fmt?')
Has anyone else tried this sample program on Mavericks and Xcode 5? Here's the code from the download site, up to the point of failure. When I try LPCM with some hard-coded parameters it works, but I cannot get MPEG4AAC to work. Apple Lossless seems to work, though.
// Code from download
int main(int argc, const char *argv[])
{
    MyRecorder recorder = {0};
    AudioStreamBasicDescription recordFormat = {0};
    memset(&recordFormat, 0, sizeof(recordFormat));

    // Configure the output data format to be AAC
    recordFormat.mFormatID = kAudioFormatMPEG4AAC;
    recordFormat.mChannelsPerFrame = 2;

    // get the sample rate of the default input device
    // we use this to adapt the output data format to match hardware capabilities
    MyGetDefaultInputDeviceSampleRate(&recordFormat.mSampleRate);

    // ProTip: Use the AudioFormat API to trivialize ASBD creation.
    // input: at least the mFormatID, however, at this point we already have
    // mSampleRate, mFormatID, and mChannelsPerFrame
    // output: the remainder of the ASBD will be filled out as much as possible
    // given the information known about the format
    UInt32 propSize = sizeof(recordFormat);
    CheckError(AudioFormatGetProperty(kAudioFormatProperty_FormatInfo, 0, NULL,
                                      &propSize, &recordFormat),
               "AudioFormatGetProperty failed");

    // create an input (recording) queue
    AudioQueueRef queue = {0};
    CheckError(AudioQueueNewInput(&recordFormat,      // ASBD
                                  MyAQInputCallback,  // Callback
                                  &recorder,          // user data
                                  NULL,               // run loop
                                  NULL,               // run loop mode
                                  0,                  // flags (always 0)
                                  // &recorder.queue), // output: reference to AudioQueue object
                                  &queue),
               "AudioQueueNewInput failed");
I ran into the same problem. Check the sample rate: in your case it will be huge (96000). Just try setting it manually to 44100.
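A minimal sketch of that workaround against the code above: override the rate reported by the hardware before AudioFormatGetProperty fills in the rest of the ASBD.
// get the sample rate of the default input device...
MyGetDefaultInputDeviceSampleRate(&recordFormat.mSampleRate);
// ...but force a rate the AAC encoder accepts, per the suggestion above
if (recordFormat.mSampleRate > 48000.0) {
    recordFormat.mSampleRate = 44100.0;
}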

How to play MIDI with BASSMIDI? (iOS)

I'm trying to play a MIDI file but it's not working. I can play an MP3, but when I change the code to MIDI and build, there is no sound (using the BASSMIDI plugin).
My code:
NSString *fileName = @"1";
NSString *fileType = @"mid";
BASS_Init(-1, 44100, 0, 0, 0); // initialize output device
NSString *respath = [[NSBundle mainBundle] pathForResource:fileName ofType:fileType]; // get path of audio file
HSTREAM stream = BASS_MIDI_StreamCreateFile(0, [respath UTF8String], 0, 0, BASS_SAMPLE_LOOP, 1);
BASS_ChannelPlay(stream, FALSE); // play the stream
For those who are interested in using the BASSMIDI player on iOS, we implemented an AUv3 Audio Unit wrapped around the BASSMIDI library.
The main advantage is that this audio unit can be inserted into a graph of audio nodes handled by AVAudioEngine (just like you would do with the AVAudioUnitSampler).
The code is available on a public repository:
https://github.com/newzik/BassMidiAudioUnit
Feel free to use it!
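For context, a general hedged sketch of how an AUv3 unit like this is typically instantiated and attached to AVAudioEngine (the component codes below are placeholders, not the ones the BassMidiAudioUnit extension actually registers):
AVAudioEngine *engine = [[AVAudioEngine alloc] init];
AudioComponentDescription desc = {
    .componentType = kAudioUnitType_MusicDevice,
    .componentSubType = 0,      // placeholder: use the extension's registered subtype
    .componentManufacturer = 0  // placeholder: use the extension's manufacturer code
};
[AVAudioUnit instantiateWithComponentDescription:desc
                                          options:kAudioComponentInstantiation_LoadOutOfProcess
                                completionHandler:^(__kindof AVAudioUnit *unit, NSError *error) {
    if (unit == nil) { return; }
    [engine attachNode:unit];
    [engine connect:unit to:engine.mainMixerNode format:nil];
    NSError *startError = nil;
    [engine startAndReturnError:&startError];
}];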
