iOS MusicSequence & MusicPlayer external MIDI clock sync

I'm using MusicPlayer to play notes in MusicSequence:
NewMusicSequence(&sequence);
MusicSequenceFileLoad(sequence, (__bridge CFURLRef) midiFileURL, 0, 0);
// Set the endpoint of the sequence to be our virtual endpoint
MusicSequenceSetMIDIEndpoint(sequence, virtualEndpoint);
// Create a new music player
MusicPlayer p;
// Initialise the music player
NewMusicPlayer(&p);
// Keep a reference to it; the rest of the code uses self.player
self.player = p;
// Load the sequence into the music player
MusicPlayerSetSequence(self.player, sequence);
// Called to do some MusicPlayer setup. This just
// reduces latency when MusicPlayerStart is called
MusicPlayerPreroll(self.player);
-(void)play {
MusicPlayerStart(self.player);
}
It's working well, very well in fact, but I do not want to use the internal clock.
How can I sync playback to an external MIDI clock?
Or maybe I can somehow move the playing cursor with a clock?

You can use MusicSequenceSetMIDIEndpoint(sequence, endpointRef); then create a MIDI clock:
CAClockRef mtcClockRef;
OSStatus err = CAClockNew(0, &mtcClockRef);
if (err != noErr) {
NSLog(@"\t\terror %d at CAClockNew()", (int)err);
}
else {
CAClockTimebase timebase = kCAClockTimebase_HostTime;
UInt32 size = sizeof(timebase);
err = CAClockSetProperty(mtcClockRef, kCAClockProperty_InternalTimebase, size, &timebase);
if (err)
NSLog(@"Error setting clock timebase");
}
Then set the sync mode:
UInt32 tSyncMode = kCAClockSyncMode_MIDIClockTransport;
err = CAClockSetProperty(mtcClockRef, kCAClockProperty_SyncMode, sizeof(tSyncMode), &tSyncMode);
Then set the clock to use the MIDI endpoint as its sync source:
err = CAClockSetProperty(mtcClockRef, kCAClockProperty_SyncSource, sizeof(endpointRef), &endpointRef);
There's some reference code (VVMIDINode) here: https://github.com/mrRay/vvopensource/blob/master/VVMIDI/FrameworkSrc/VVMIDINode.h
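To address the second part of the question (moving the playing cursor): once the clock is armed and started it chases the external transport, and you can periodically read its beat time and push the MusicPlayer's cursor to it. This is only a rough sketch, assuming the mtcClockRef and self.player from the code above; how you schedule the periodic poll (timer, display link, etc.) is up to you.
// Rough sketch: arm and start the clock, then keep the sequence's cursor
// in step with the clock's beat position. mtcClockRef and self.player are
// assumed to be set up as shown above.
CAClockArm(mtcClockRef);
CAClockStart(mtcClockRef);   // or let the external MIDI Start message start it

// ...called periodically (e.g. from a timer):
CAClockTime beatTime;
if (CAClockGetCurrentTime(mtcClockRef, kCAClockTimeFormat_Beats, &beatTime) == noErr) {
    // move the MusicPlayer's playhead to the clock's current beat
    MusicPlayerSetTime(self.player, (MusicTimeStamp)beatTime.time.beats);
}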

Related

Playing audio using ffmpeg and AVAudioPlayer

I am trying to read an audio file (that is not supported by iOS) with ffmpeg and then play it using AVAudioPlayer. It took me a while to get ffmpeg built inside an iOS project, but I finally did using kewlbear/FFmpeg-iOS-build-script.
This is the snippet I have right now, after a lot of searching on the web, including Stack Overflow. One of the best examples I found was here.
I believe this is all the relevant code. I added comments to let you know what I'm doing and where I need something clever to happen.
#import "FFmpegWrapper.h"
#import <AVFoundation/AVFoundation.h>
AVFormatContext *formatContext = NULL;
AVStream *audioStream = NULL;
av_register_all();
avformat_network_init();
avcodec_register_all();
// this is a file located on my NAS
int opened = avformat_open_input(&formatContext, "http://192.168.1.70:50002/m/NDLNA/43729.flac", NULL, NULL);
// could not open the file (avformat_open_input returns 0 on success)
if (opened != 0) {
avformat_close_input(&formatContext);
}
int streamInfoValue = avformat_find_stream_info(formatContext, NULL);
// can't open stream
if (streamInfoValue < 0)
{
avformat_close_input(&formatContext);
}
// number of streams available
int inputStreamCount = formatContext->nb_streams;
for(unsigned int i = 0; i<inputStreamCount; i++)
{
// I'm only interested in the audio stream
if(formatContext->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO)
{
// found audio stream
audioStream = formatContext->streams[i];
}
}
if(audioStream == NULL) {
// no audio stream
}
AVFrame* frame = av_frame_alloc();
AVCodecContext* codecContext = audioStream->codec;
codecContext->codec = avcodec_find_decoder(codecContext->codec_id);
if (codecContext->codec == NULL)
{
av_free(frame);
avformat_close_input(&formatContext);
// no proper codec found
}
else if (avcodec_open2(codecContext, codecContext->codec, NULL) != 0)
{
av_free(frame);
avformat_close_input(&formatContext);
// could not open the context with the decoder
}
// this is displaying: This stream has 2 channels and a sample rate of 44100Hz
// which makes sense
NSLog(#"This stream has %d channels and a sample rate of %dHz", codecContext->channels, codecContext->sample_rate);
AVPacket packet;
av_init_packet(&packet);
// this is where I try to store in the sound data
NSMutableData *soundData = [[NSMutableData alloc] init];
while (av_read_frame(formatContext, &packet) == 0)
{
if (packet.stream_index == audioStream->index)
{
// Try to decode the packet into a frame
int frameFinished = 0;
avcodec_decode_audio4(codecContext, frame, &frameFinished, &packet);
// Some frames rely on multiple packets, so we have to make sure the frame is finished before
// we can use it
if (frameFinished)
{
// this is where I think something clever needs to be done
// I need to store some bytes, but I can't figure out what exactly and what length?
// should the length be multiplied by the of the number of channels?
NSData *frameData = [[NSData alloc] initWithBytes:packet.buf->data length:packet.buf->size];
[soundData appendData: frameData];
}
}
// You *must* call av_free_packet() after each call to av_read_frame() or else you'll leak memory
av_free_packet(&packet);
}
// first try to write it to a file, see if that works
// this is indeed writing bytes, but it is unplayable
[soundData writeToFile:@"output.wav" atomically:YES];
NSError *error;
// this is my final goal, playing it with the AVAudioPlayer, but this is giving unclear errors
AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithData:soundData error:&error];
if(player == nil) {
NSLog(@"%@", error); // Domain=NSOSStatusErrorDomain Code=1954115647 "(null)"
} else {
[player prepareToPlay];
[player play];
}
// Some codecs will cause frames to be buffered up in the decoding process. If the CODEC_CAP_DELAY flag
// is set, there can be buffered up frames that need to be flushed, so we'll do that
if (codecContext->codec->capabilities & CODEC_CAP_DELAY)
{
av_init_packet(&packet);
// Decode all the remaining frames in the buffer, until the end is reached
int frameFinished = 0;
while (avcodec_decode_audio4(codecContext, frame, &frameFinished, &packet) >= 0 && frameFinished)
{
}
}
av_free(frame);
avcodec_close(codecContext);
avformat_close_input(&formatContext);
I never really found a solution to this specific problem, but I ended up using ap4y/OrigamiEngine instead.
My main reason for wanting to use FFmpeg was to play unsupported audio formats (FLAC/OGG) on iOS and tvOS, and OrigamiEngine does the job just fine.
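For reference, the part the question marks as needing "something clever" is usually just appending the decoded samples from the frame rather than the compressed packet bytes, roughly as in the untested sketch below (it assumes an interleaved sample format such as AV_SAMPLE_FMT_S16; planar formats would need interleaving first, e.g. via libswresample). Even then, AVAudioPlayer expects a real container (WAV/CAF), so headerless PCM in an NSData will still fail to load; prepending a WAV header, or pushing the PCM through Audio Queues / AVAudioEngine, is the usual way out.
// Inside the frameFinished branch: append decoded PCM, not packet bytes.
// Assumes an interleaved sample format; check codecContext->sample_fmt.
int dataSize = av_samples_get_buffer_size(NULL,
                                          codecContext->channels,
                                          frame->nb_samples,
                                          codecContext->sample_fmt,
                                          1);
if (dataSize > 0) {
    [soundData appendBytes:frame->data[0] length:dataSize];
}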

AudioSession Deprecated, how do I play my sounds without disrupting music on iOS?

OSStatus e;
// initialize the audio session. don't know if you already do this.
e = AudioSessionInitialize(NULL, NULL, NULL, NULL);
if (e)
{
NSLog(#"failed to start audiosession: %i", (int)e);
}
// ***********************************************************
// * THIS IS THE CODE THAT MAKES OTHER AUDIO BE ABLE TO PLAY *
// ***********************************************************
UInt32 category = kAudioSessionCategory_AmbientSound;
AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, sizeof(category), &category);
// ***********************************************************
// * HOORAY *
// ***********************************************************
// activate the audio session. don't know if you already do this.
e = AudioSessionSetActive(YES);
if (e)
{
NSLog(#"failed to make audiosession active: %i", (int)e);
}
This is the code I use to play sounds in my games without disrupting music in the background. However, most of this is deprecated now. What do I replace it with in iOS 7?
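For what it's worth, the AVAudioSession replacement is straightforward: the Ambient category mixes with other apps' audio just as kAudioSessionCategory_AmbientSound did. A minimal sketch, not an official migration recipe:
#import <AVFoundation/AVFoundation.h>

NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];

// AVAudioSessionCategoryAmbient corresponds to kAudioSessionCategory_AmbientSound:
// it mixes with other apps' audio and is silenced by the ring/silent switch.
if (![session setCategory:AVAudioSessionCategoryAmbient error:&error]) {
    NSLog(@"failed to set audio session category: %@", error);
}
if (![session setActive:YES error:&error]) {
    NSLog(@"failed to activate audio session: %@", error);
}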

Core Audio - Interapp Audio - How to Retrieve output audio packets from Node app inside Host App?

I am writing a HOST app that uses Core Audio's new iOS 7 Inter-App Audio technology to pull audio from a single NODE "generator" app and route it into my host app. I am using the Audio Component Services and Audio Unit Component Services C frameworks to achieve this.
What I want to achieve is to establish a connection to an external node app that can generate sound. I want that sound to be routed into my host app, and for my host app to be able to directly access the audio packet data as a stream of raw audio data.
I have written the code inside my HOST app that does the following sequentially:
Sets up and activates an audio session with the correct session category.
Refreshes a list of Inter-App Audio compatible apps that are of type kAudioUnitType_RemoteGenerator or kAudioUnitType_RemoteInstrument (I'm not interested in effects apps).
Pulls out the last object from that list and attempts to establish a connection using AudioComponentInstanceNew()
Sets the Audio Stream Basic Description that my host app needs the audio format in.
Sets up audio unit properties and callbacks as well as an audio unit render callback on the output scope (bus).
Initializes the audio unit.
So far so good: I have been able to successfully establish a connection, but my problem is that my render callback is never called. What I am having trouble understanding is how exactly to pull the audio from the node application. I have read that I need to call AudioUnitRender() in order to initiate a rendering cycle on the node app, but how exactly does this need to be set up in my situation? I have seen other examples where AudioUnitRender() is called from inside the render callback, but this isn't going to work for me because my render callback isn't currently being called. Do I need to set up my own audio processing thread and periodically call AudioUnitRender() on my 'node'?
The following is the code described above inside my HOST app.
static OSStatus MyAURenderCallback (void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData)
{
//Do something here with the audio data?
//This method is never being called.
//Do I need to put AudioUnitRender() in here?
return noErr;
}
- (void)start
{
[self configureAudioSession];
[self refreshAUList];
}
- (void)configureAudioSession
{
NSError *audioSessionError = nil;
AVAudioSession *mySession = [AVAudioSession sharedInstance];
[mySession setPreferredSampleRate: _graphSampleRate error: &audioSessionError];
[mySession setCategory: AVAudioSessionCategoryPlayAndRecord error: &audioSessionError];
[mySession setActive: YES error: &audioSessionError];
self.graphSampleRate = [mySession sampleRate];
}
- (void)refreshAUList
{
_audioUnits = @[].mutableCopy;
AudioComponentDescription searchDesc = { 0, 0, 0, 0, 0 }, foundDesc;
AudioComponent comp = NULL;
while (true) {
comp = AudioComponentFindNext(comp, &searchDesc);
if (comp == NULL) break;
if (AudioComponentGetDescription(comp, &foundDesc) != noErr) continue;
if (foundDesc.componentType == kAudioUnitType_RemoteGenerator || foundDesc.componentType == kAudioUnitType_RemoteInstrument) {
RemoteAU *rau = [[RemoteAU alloc] init];
rau->_desc = foundDesc;
rau->_comp = comp;
AudioComponentCopyName(comp, &rau->_name);
rau->_image = AudioComponentGetIcon(comp, 48);
rau->_lastActiveTime = AudioComponentGetLastActiveTime(comp);
[_audioUnits addObject:rau];
}
}
[self connect];
}
- (void)connect {
if ([_audioUnits count] <= 0) {
return;
}
RemoteAU *rau = [_audioUnits lastObject];
AudioUnit myAudioUnit;
//Node application will get launched in background
Check(AudioComponentInstanceNew(rau->_comp, &myAudioUnit));
AudioStreamBasicDescription format = {0};
format.mChannelsPerFrame = 2;
format.mSampleRate = [[AVAudioSession sharedInstance] sampleRate];
format.mFormatID = kAudioFormatMPEG4AAC;
UInt32 propSize = sizeof(format);
Check(AudioFormatGetProperty(kAudioFormatProperty_FormatInfo, 0, NULL, &propSize, &format));
//Output format from node to host
Check(AudioUnitSetProperty(myAudioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &format, sizeof(format)));
//Setup a render callback to the output scope of the audio unit representing the node app
AURenderCallbackStruct callbackStruct = {0};
callbackStruct.inputProc = MyAURenderCallback;
callbackStruct.inputProcRefCon = (__bridge void *)(self);
Check(AudioUnitSetProperty(myAudioUnit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Output, 0, &callbackStruct, sizeof(callbackStruct)));
//setup call backs
Check(AudioUnitAddPropertyListener(myAudioUnit, kAudioUnitProperty_IsInterAppConnected, IsInterappConnected, NULL));
Check(AudioUnitAddPropertyListener(myAudioUnit, kAudioOutputUnitProperty_HostTransportState, AudioUnitPropertyChangeDispatcher, NULL));
//initialize the audio unit representing the node application
Check(AudioUnitInitialize(myAudioUnit));
}
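To make the "how do I pull the audio?" part concrete: a host has to drive the node unit itself by calling AudioUnitRender(), typically from the render callback of its own RemoteIO output unit or from its own audio thread. Note also that audio units render linear PCM, so the kAudioFormatMPEG4AAC stream format above is likely part of the problem. Below is a rough sketch of a single pull, assuming the unit has been given an interleaved 16-bit stereo LPCM output format and that frameCount, sampleTime and renderBuffer are maintained by the caller (all hypothetical names, not from the question).
// Rough sketch: pull `frameCount` frames from the node's audio unit.
// Assumes `myAudioUnit` is initialized with an interleaved 16-bit stereo
// LPCM output format; `sampleTime` must advance by frameCount on each call,
// and `renderBuffer` is a caller-owned buffer of the right size.
AudioUnitRenderActionFlags flags = 0;

AudioTimeStamp timeStamp = {0};
timeStamp.mFlags = kAudioTimeStampSampleTimeValid;
timeStamp.mSampleTime = sampleTime;

AudioBufferList bufferList;
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0].mNumberChannels = 2;
bufferList.mBuffers[0].mDataByteSize = frameCount * 2 * sizeof(SInt16);
bufferList.mBuffers[0].mData = renderBuffer;

OSStatus err = AudioUnitRender(myAudioUnit, &flags, &timeStamp, 0, frameCount, &bufferList);
if (err == noErr) {
    // bufferList.mBuffers[0].mData now holds frameCount frames from the node app
}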

Audioqueue callback not being called

So, basically I want to play some audio files (mp3 and caf mostly), but the callback never gets called, except when I call it myself to prime the queue.
Here's my data struct:
struct AQPlayerState
{
CAStreamBasicDescription mDataFormat;
AudioQueueRef mQueue;
AudioQueueBufferRef mBuffers[kBufferNum];
AudioFileID mAudioFile;
UInt32 bufferByteSize;
SInt64 mCurrentPacket;
UInt32 mNumPacketsToRead;
AudioStreamPacketDescription *mPacketDescs;
bool mIsRunning;
};
Here's my callback function:
static void HandleOutputBuffer (void *aqData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer)
{
NSLog(#"HandleOutput");
AQPlayerState *pAqData = (AQPlayerState *) aqData;
if (pAqData->mIsRunning == false) return;
UInt32 numBytesReadFromFile;
UInt32 numPackets = pAqData->mNumPacketsToRead;
AudioFileReadPackets (pAqData->mAudioFile,
false,
&numBytesReadFromFile,
pAqData->mPacketDescs,
pAqData->mCurrentPacket,
&numPackets,
inBuffer->mAudioData);
if (numPackets > 0) {
inBuffer->mAudioDataByteSize = numBytesReadFromFile;
AudioQueueEnqueueBuffer (pAqData->mQueue,
inBuffer,
(pAqData->mPacketDescs ? numPackets : 0),
pAqData->mPacketDescs);
pAqData->mCurrentPacket += numPackets;
} else {
// AudioQueueStop(pAqData->mQueue, false);
// AudioQueueDispose(pAqData->mQueue, true);
// AudioFileClose (pAqData->mAudioFile);
// free(pAqData->mPacketDescs);
// free(pAqData->mFloatBuffer);
pAqData->mIsRunning = false;
}
}
And here's my method:
- (void)playFile
{
AQPlayerState aqData;
// get the source file
NSString *p = [[NSBundle mainBundle] pathForResource:@"1_Female" ofType:@"mp3"];
NSURL *url2 = [NSURL fileURLWithPath:p];
CFURLRef srcFile = (__bridge CFURLRef)url2;
OSStatus result = AudioFileOpenURL(srcFile, 0x1/*fsRdPerm*/, 0/*inFileTypeHint*/, &aqData.mAudioFile);
// note: no CFRelease here -- __bridge does not transfer ownership of url2
CheckError(result, "Error opening sound file");
UInt32 size = sizeof(aqData.mDataFormat);
CheckError(AudioFileGetProperty(aqData.mAudioFile, kAudioFilePropertyDataFormat, &size, &aqData.mDataFormat),
"Error getting file's data format");
CheckError(AudioQueueNewOutput(&aqData.mDataFormat, HandleOutputBuffer, &aqData, CFRunLoopGetCurrent(), kCFRunLoopCommonModes, 0, &aqData.mQueue),
"Error AudioQueueNewOutPut");
// we need to calculate how many packets we read at a time and how big a buffer we need
// we base this on the size of the packets in the file and an approximate duration for each buffer
{
bool isFormatVBR = (aqData.mDataFormat.mBytesPerPacket == 0 || aqData.mDataFormat.mFramesPerPacket == 0);
// first check to see what the max size of a packet is - if it is bigger
// than our allocation default size, that needs to become larger
UInt32 maxPacketSize;
size = sizeof(maxPacketSize);
CheckError(AudioFileGetProperty(aqData.mAudioFile, kAudioFilePropertyPacketSizeUpperBound, &size, &maxPacketSize),
"Error getting max packet size");
// adjust buffer size to represent about a second of audio based on this format
CalculateBytesForTime(aqData.mDataFormat, maxPacketSize, 1.0/*seconds*/, &aqData.bufferByteSize, &aqData.mNumPacketsToRead);
if (isFormatVBR) {
aqData.mPacketDescs = new AudioStreamPacketDescription [aqData.mNumPacketsToRead];
} else {
aqData.mPacketDescs = NULL; // we don't provide packet descriptions for constant bit rate formats (like linear PCM)
}
printf ("Buffer Byte Size: %d, Num Packets to Read: %d\n", (int)aqData.bufferByteSize, (int)aqData.mNumPacketsToRead);
}
// if the file has a magic cookie, we should get it and set it on the AQ
size = sizeof(UInt32);
result = AudioFileGetPropertyInfo(aqData.mAudioFile, kAudioFilePropertyMagicCookieData, &size, NULL);
if (!result && size) {
char* cookie = new char [size];
CheckError(AudioFileGetProperty(aqData.mAudioFile, kAudioFilePropertyMagicCookieData, &size, cookie),
"Error getting cookie from file");
CheckError(AudioQueueSetProperty(aqData.mQueue, kAudioQueueProperty_MagicCookie, cookie, size),
"Error setting cookie to file");
delete[] cookie;
}
aqData.mCurrentPacket = 0;
for (int i = 0; i < kBufferNum; ++i) {
CheckError(AudioQueueAllocateBuffer (aqData.mQueue,
aqData.bufferByteSize,
&aqData.mBuffers[i]),
"Error AudioQueueAllocateBuffer");
HandleOutputBuffer (&aqData,
aqData.mQueue,
aqData.mBuffers[i]);
}
// set queue's gain
Float32 gain = 1.0;
CheckError(AudioQueueSetParameter (aqData.mQueue,
kAudioQueueParam_Volume,
gain),
"Error AudioQueueSetParameter");
aqData.mIsRunning = true;
CheckError(AudioQueueStart(aqData.mQueue,
NULL),
"Error AudioQueueStart");
}
And the output when I press play:
Buffer Byte Size: 40310, Num Packets to Read: 38
HandleOutput start
HandleOutput start
HandleOutput start
I tried replacing CFRunLoopGetCurrent() with CFRunLoopGetMain() and kCFRunLoopCommonModes with kCFRunLoopDefaultMode, but nothing changed.
Shouldn't the primed buffers start playing right away when I start the queue?
When I start the queue, no callbacks are fired at all.
What am I doing wrong? Thanks for any ideas.
What you are basically trying to do here is a basic example of audio playback using Audio Queues. Without looking at your code in detail to see what's missing (that could take a while), I'd rather recommend that you follow the steps in this basic sample code, which does exactly what you're doing, without the extras that aren't really relevant (for example, why are you trying to add audio gain?).
Somewhere else you were trying to play audio using audio units. Audio units are more complex than basic audio queue playback, and I wouldn't attempt them before being very comfortable with audio queues. But you can look at this example project for a basic example of audio queues.
In general, when it comes to Core Audio programming on iOS, it's best to take your time with the basic examples and build your way up. The problem with a lot of tutorials online is that they add extra stuff and often mix it with Objective-C code, when Core Audio is purely C (i.e. the extra stuff won't add anything to the learning process). I strongly recommend you go over the book Learning Core Audio if you haven't already. All the sample code is available online, but you can also clone it from this repo for convenience. That's how I learned Core Audio. It takes time :)
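One detail worth double-checking in the question's code, separate from the advice above: aqData is a stack local inside playFile, so the &aqData pointer handed to AudioQueueNewOutput() is dangling once the method returns. Keeping the state alive for the lifetime of the queue, e.g. as an instance variable, is the usual fix; a sketch (the Player class name is hypothetical):
// Sketch: keep the player state alive for the lifetime of the queue,
// e.g. as an instance variable instead of a stack local.
@interface Player () {
    AQPlayerState _aqData;   // lives as long as the object, not just one method call
}
@end

// ...inside -playFile, use _aqData everywhere instead of a local AQPlayerState aqData:
CheckError(AudioQueueNewOutput(&_aqData.mDataFormat, HandleOutputBuffer, &_aqData,
                               CFRunLoopGetCurrent(), kCFRunLoopCommonModes, 0,
                               &_aqData.mQueue),
           "Error AudioQueueNewOutput");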

Can I use AVCaptureSession to encode an AAC stream to memory?

I'm writing an iOS app that streams video and audio over the network.
I am using AVCaptureSession to grab raw video frames using AVCaptureVideoDataOutput and encode them in software using x264. This works great.
I wanted to do the same for audio, only I don't need as much control on the audio side, so I wanted to use the built-in hardware encoder to produce an AAC stream. This meant using Audio Converter from the Audio Toolbox layer. In order to do so, I put in a handler for AVCaptureAudioDataOutput's audio frames:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection
{
// get the audio samples into a common buffer _pcmBuffer
CMBlockBufferRef blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
CMBlockBufferGetDataPointer(blockBuffer, 0, NULL, &_pcmBufferSize, &_pcmBuffer);
// use AudioConverter to encode the captured PCM into AAC
UInt32 outputPacketsCount = 1;
AudioBufferList bufferList;
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0].mNumberChannels = 1;
bufferList.mBuffers[0].mDataByteSize = sizeof(_aacBuffer);
bufferList.mBuffers[0].mData = _aacBuffer;
OSStatus st = AudioConverterFillComplexBuffer(_converter, converter_callback, (__bridge void *) self, &outputPacketsCount, &bufferList, NULL);
if (0 == st) {
// ... send bufferList.mBuffers[0].mDataByteSize bytes from _aacBuffer...
}
}
In this case the callback function for the audio converter is pretty simple (assuming packet sizes and counts are set up properly):
- (void) putPcmSamplesInBufferList:(AudioBufferList *)bufferList withCount:(UInt32 *)count
{
bufferList->mBuffers[0].mData = _pcmBuffer;
bufferList->mBuffers[0].mDataByteSize = _pcmBufferSize;
}
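The converter_callback referenced in AudioConverterFillComplexBuffer above is a plain C AudioConverterComplexInputDataProc. It presumably just forwards to the method above; a sketch, assuming self is passed as inUserData as in the question (MyAudioEncoder is a hypothetical class name):
static OSStatus converter_callback(AudioConverterRef inAudioConverter,
                                   UInt32 *ioNumberDataPackets,
                                   AudioBufferList *ioData,
                                   AudioStreamPacketDescription **outDataPacketDescription,
                                   void *inUserData)
{
    // hand the captured PCM to the converter; the method is expected to set
    // *ioNumberDataPackets to the number of packets actually provided
    MyAudioEncoder *encoder = (__bridge MyAudioEncoder *)inUserData;
    [encoder putPcmSamplesInBufferList:ioData withCount:ioNumberDataPackets];
    return noErr;
}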
And the setup for the audio converter looks like this:
{
// ...
AudioStreamBasicDescription pcmASBD = {0};
pcmASBD.mSampleRate = ((AVAudioSession *) [AVAudioSession sharedInstance]).currentHardwareSampleRate;
pcmASBD.mFormatID = kAudioFormatLinearPCM;
pcmASBD.mFormatFlags = kAudioFormatFlagsCanonical;
pcmASBD.mChannelsPerFrame = 1;
pcmASBD.mBytesPerFrame = sizeof(AudioSampleType);
pcmASBD.mFramesPerPacket = 1;
pcmASBD.mBytesPerPacket = pcmASBD.mBytesPerFrame * pcmASBD.mFramesPerPacket;
pcmASBD.mBitsPerChannel = 8 * pcmASBD.mBytesPerFrame;
AudioStreamBasicDescription aacASBD = {0};
aacASBD.mFormatID = kAudioFormatMPEG4AAC;
aacASBD.mSampleRate = pcmASBD.mSampleRate;
aacASBD.mChannelsPerFrame = pcmASBD.mChannelsPerFrame;
size = sizeof(aacASBD);
AudioFormatGetProperty(kAudioFormatProperty_FormatInfo, 0, NULL, &size, &aacASBD);
AudioConverterNew(&pcmASBD, &aacASBD, &_converter);
// ...
}
This seems pretty straightforward, only IT DOES NOT WORK. Once the AVCaptureSession is running, the audio converter (specifically AudioConverterFillComplexBuffer) returns an 'hwiu' (hardware in use) error. Conversion works fine if the session is stopped, but then I can't capture anything...
I was wondering if there was a way to get an AAC stream out of AVCaptureSession. The options I'm considering are:
Somehow using AVAssetWriterInput to encode audio samples into AAC and then get the encoded packets somehow (not through AVAssetWriter, which would only write to a file).
Reorganizing my app so that it uses AVCaptureSession only on the video side and uses Audio Queues on the audio side. This will make flow control (starting and stopping recording, responding to interruptions) more complicated and I'm afraid that it might cause synching problems between the audio and video. Also, it just doesn't seem like a good design.
Does anyone know if getting the AAC out of AVCaptureSession is possible? Do I have to use Audio Queues here? Could this get me into synching or control problems?
I ended up asking Apple for advice (it turns out you can do that if you have a paid developer account).
It seems that AVCaptureSession grabs a hold of the AAC hardware encoder but only lets you use it to write directly to file.
You can use the software encoder but you have to ask for it specifically instead of using AudioConverterNew:
AudioClassDescription *description = [self
getAudioClassDescriptionWithType:kAudioFormatMPEG4AAC
fromManufacturer:kAppleSoftwareAudioCodecManufacturer];
if (!description) {
return false;
}
// see the question for how pcmASBD and aacASBD are set up
OSStatus st = AudioConverterNewSpecific(&pcmASBD, &aacASBD, 1, description, &_converter);
if (st) {
NSLog(#"error creating audio converter: %s", OSSTATUS(st));
return false;
}
with
- (AudioClassDescription *)getAudioClassDescriptionWithType:(UInt32)type
fromManufacturer:(UInt32)manufacturer
{
static AudioClassDescription desc;
UInt32 encoderSpecifier = type;
OSStatus st;
UInt32 size;
st = AudioFormatGetPropertyInfo(kAudioFormatProperty_Encoders,
sizeof(encoderSpecifier),
&encoderSpecifier,
&size);
if (st) {
NSLog(#"error getting audio format propery info: %s", OSSTATUS(st));
return nil;
}
unsigned int count = size / sizeof(AudioClassDescription);
AudioClassDescription descriptions[count];
st = AudioFormatGetProperty(kAudioFormatProperty_Encoders,
sizeof(encoderSpecifier),
&encoderSpecifier,
&size,
descriptions);
if (st) {
NSLog(#"error getting audio format propery: %s", OSSTATUS(st));
return nil;
}
for (unsigned int i = 0; i < count; i++) {
if ((type == descriptions[i].mSubType) &&
(manufacturer == descriptions[i].mManufacturer)) {
memcpy(&desc, &(descriptions[i]), sizeof(desc));
return &desc;
}
}
return nil;
}
The software encoder will take up CPU resources, of course, but will get the job done.
