I've been working to migrate an older Core MIDI sending implementation to send MIDI 1.0 messages using Apple's newer UMP-aware MIDI Event List API methods.
I've figured out code that runs and should output MIDI clock messages, but when I send them with MIDISendEventList(...) nothing comes out of my MIDI interface, and the method returns no error to indicate what the problem is.
Here is the code I'm using:
const ByteCount clockMessageSize = 1;
const UInt32 clockMessage[clockMessageSize] = { (UInt32)0xF8 }; // MIDI clock tick
const MIDITimeStamp timeStamp = mach_absolute_time();
MIDIEventList clockMessageEventList = {};
MIDIEventPacket* clockMessageEventListEndPacket = nullptr;
clockMessageEventListEndPacket = MIDIEventListInit(&clockMessageEventList, kMIDIProtocol_1_0);
clockMessageEventListEndPacket = MIDIEventListAdd(&clockMessageEventList, sizeof(MIDIEventList::packet), clockMessageEventListEndPacket, timeStamp, clockMessageSize, clockMessage);
for (NSUInteger endpointRefIndex = 0; endpointRefIndex < endPointRefsCount; ++endpointRefIndex) {
MIDIObjectRef destinationEndpoint = endPointRefs[endpointRefIndex];
OSStatus midiSendError = MIDISendEventList(outputPortRef, destinationEndpoint, &clockMessageEventList);
if (midiSendError != noErr) {
printf("MIDISendEventList error: %i", (int)midiSendError);
}
}
Inspecting clockMessageEventList.packet after it has been configured but before it is sent shows:
(248, 0, 0, [... all zeros to index 63])
Does anyone know where I'm going wrong?
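For comparison, here is a minimal sketch (not from the original post) of how a MIDI 1.0 Timing Clock message is usually packed for the event-list API: MIDIEventListAdd takes full 32-bit Universal MIDI Packet words rather than bare status bytes (for a System Real Time message the word is 0x10F80000: message type 0x1, group 0, status 0xF8), and its size argument describes the whole MIDIEventList buffer. The port and destination variables are assumed to be the same ones used in the question.
#import <CoreMIDI/CoreMIDI.h>
#include <mach/mach_time.h>
// Sketch only: one MIDI 1.0 clock tick as a single UMP word.
const UInt32 clockWords[] = { 0x10F80000 }; // MT=1 (System), group 0, status 0xF8, no data bytes
const ByteCount clockWordCount = sizeof(clockWords) / sizeof(clockWords[0]);
MIDIEventList eventList = {};
MIDIEventPacket *packet = MIDIEventListInit(&eventList, kMIDIProtocol_1_0);
// Pass the size of the whole event list, and the number of 32-bit words (not bytes).
packet = MIDIEventListAdd(&eventList, sizeof(eventList), packet,
                          mach_absolute_time(), clockWordCount, clockWords);
// ...then MIDISendEventList(outputPortRef, destinationEndpoint, &eventList) as in the question.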
I'm fairly new to iOS programming and Objective-C. I have an embedded system running a program written in C that sends UDP packets to the iPhone app I am working on.
I am able to read the packet data (NSData) when it only contains a string, but I cannot when the data is a structure with additional fields.
Here is the C code that sends the packet.
typedef struct s_msg_temp_report {
uint8_t id0;
uint8_t id1;
uint8_t name[9];
uint8_t led;
uint32_t temp;
} t_msg_temp_report;
static t_msg_temp_report msg_temp_report =
{
.id0 = 0,
.id1 = 2,
.name = DEMO_PRODUCT_NAME,
.led = 0,
.temp = 0,
};
/* Send client report. */
msg_temp_report.temp = (uint32_t)(at30tse_read_temperature() * 100);
msg_temp_report.led = !port_pin_get_output_level(LED_0_PIN);
ret = sendto(tx_socket, &msg_temp_report, sizeof(t_msg_temp_report),
0,(struct sockaddr *)&addr, sizeof(addr));
if (ret == M2M_SUCCESS) {
puts("Assignment 3.3: sensor report sent");
} else {
puts("Assignment 3.3: failed to send status report !");
}
What is the best way to process the (NSData) object into a usable object for string conversion?
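One approach, sketched below rather than taken from the original thread, is to mirror the sender's struct on the iOS side and copy the raw bytes out of the NSData, then convert the name field to an NSString. It assumes the packet uses the layout shown above with little-endian byte order; the handler method name is illustrative only.
#import <Foundation/Foundation.h>
// Mirror of the sender's struct; packed so the layout doesn't depend on compiler padding.
typedef struct __attribute__((packed)) {
    uint8_t  id0;
    uint8_t  id1;
    uint8_t  name[9];
    uint8_t  led;
    uint32_t temp;
} t_msg_temp_report;
- (void)handleReportData:(NSData *)data
{
    if (data.length < sizeof(t_msg_temp_report)) {
        return; // not a complete report
    }
    t_msg_temp_report report;
    [data getBytes:&report length:sizeof(report)];
    // name[] may not be NUL-terminated, so copy it into a terminated buffer first.
    char name[sizeof(report.name) + 1] = {0};
    memcpy(name, report.name, sizeof(report.name));
    NSString *nameString = [NSString stringWithUTF8String:name];
    uint32_t temp = CFSwapInt32LittleToHost(report.temp); // assuming a little-endian sender
    NSLog(@"id=%u/%u name=%@ led=%u temp=%.2f",
          report.id0, report.id1, nameString, report.led, temp / 100.0);
}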
I was trying to implement a function that stretches a sound's playback speed without changing its pitch.
My approach is to set the channel's frequency to slow down or speed up playback.
Then I use FMOD_DSP_PITCHSHIFT to correct the pitch back to how it sounds by default.
I was using a WAV sound file while building and testing the function.
Now I'm trying to integrate the product's resources, whose sound files are encoded as MP3.
The PITCHSHIFT DSP doesn't work on an MP3 sound channel, yet the console log looks fine, with no exceptions or errors.
The same project and settings work fine in the iOS Simulator.
After some research and experiments, the results indicate that even M4A works fine on iOS devices.
I wonder: is this some kind of bug, or have I missed something in the configuration?
The sample code is based on the FMOD sample project "Play Stream".
/*==============================================================================
Play Stream Example
Copyright (c), Firelight Technologies Pty, Ltd 2004-2015.
This example shows how to simply play a stream such as an MP3 or WAV. The stream
behaviour is achieved by specifying FMOD_CREATESTREAM in the call to
System::createSound. This makes FMOD decode the file in realtime as it plays,
instead of loading it all at once which uses far less memory in exchange for a
small runtime CPU hit.
==============================================================================*/
#include "fmod.hpp"
#include "common.h"
int FMOD_Main()
{
FMOD::System *system;
FMOD::Sound *sound, *sound_to_play;
FMOD::Channel *channel = 0;
FMOD_RESULT result;
FMOD::DSP * pitch_shift;
unsigned int version;
void *extradriverdata = 0;
int numsubsounds;
Common_Init(&extradriverdata);
/*
Create a System object and initialize.
*/
result = FMOD::System_Create(&system);
ERRCHECK(result);
result = system->getVersion(&version);
ERRCHECK(result);
if (version < FMOD_VERSION)
{
Common_Fatal("FMOD lib version %08x doesn't match header version %08x", version, FMOD_VERSION);
}
result = system->init(32, FMOD_INIT_NORMAL, extradriverdata);
ERRCHECK(result);
result = system->createDSPByType(FMOD_DSP_TYPE_PITCHSHIFT, &pitch_shift);
ERRCHECK(result);
/*
This example uses an FSB file, which is a preferred pack format for fmod containing multiple sounds.
This could just as easily be exchanged with a wav/mp3/ogg file for example, but in this case you wouldnt need to call getSubSound.
Because getNumSubSounds is called here the example would work with both types of sound file (packed vs single).
*/
result = system->createSound(Common_MediaPath("aaa.m4a"), FMOD_LOOP_NORMAL | FMOD_2D, 0, &sound);
ERRCHECK(result);
result = sound->getNumSubSounds(&numsubsounds);
ERRCHECK(result);
if (numsubsounds)
{
result = sound->getSubSound(0, &sound_to_play);
ERRCHECK(result);
}
else
{
sound_to_play = sound;
}
/*
Play the sound.
*/
result = system->playSound(sound_to_play, 0, false, &channel);
ERRCHECK(result);
result = channel->addDSP(0, pitch_shift);
ERRCHECK(result);
float pitch = 1.f;
result = pitch_shift->setParameterFloat(FMOD_DSP_PITCHSHIFT_PITCH, pitch);
ERRCHECK(result);
result = pitch_shift->setActive(true);
ERRCHECK(result);
float defaultFrequency;
result = channel->getFrequency(&defaultFrequency);
ERRCHECK(result);
/*
Main loop.
*/
do
{
Common_Update();
if (Common_BtnPress(BTN_ACTION1))
{
bool paused;
result = channel->getPaused(&paused);
ERRCHECK(result);
result = channel->setPaused(!paused);
ERRCHECK(result);
}
if (Common_BtnPress(BTN_DOWN)) {
pitch_shift->getParameterFloat(FMOD_DSP_PITCHSHIFT_PITCH, &pitch, 0, 0); // no value string needed
pitch+=0.1f;
pitch = pitch>2.0f?2.0f:pitch;
pitch_shift->setParameterFloat(FMOD_DSP_PITCHSHIFT_PITCH, pitch);
channel->setFrequency(defaultFrequency/pitch);
}
if (Common_BtnPress(BTN_UP)) {
pitch_shift->getParameterFloat(FMOD_DSP_PITCHSHIFT_PITCH, &pitch, 0, 0); // no value string needed
pitch-=0.1f;
pitch = pitch<0.5f?0.5f:pitch;
pitch_shift->setParameterFloat(FMOD_DSP_PITCHSHIFT_PITCH, pitch);
channel->setFrequency(defaultFrequency/pitch);
}
result = system->update();
ERRCHECK(result);
{
unsigned int ms = 0;
unsigned int lenms = 0;
bool playing = false;
bool paused = false;
if (channel)
{
result = channel->isPlaying(&playing);
if ((result != FMOD_OK) && (result != FMOD_ERR_INVALID_HANDLE))
{
ERRCHECK(result);
}
result = channel->getPaused(&paused);
if ((result != FMOD_OK) && (result != FMOD_ERR_INVALID_HANDLE))
{
ERRCHECK(result);
}
result = channel->getPosition(&ms, FMOD_TIMEUNIT_MS);
if ((result != FMOD_OK) && (result != FMOD_ERR_INVALID_HANDLE))
{
ERRCHECK(result);
}
result = sound_to_play->getLength(&lenms, FMOD_TIMEUNIT_MS);
if ((result != FMOD_OK) && (result != FMOD_ERR_INVALID_HANDLE))
{
ERRCHECK(result);
}
}
Common_Draw("==================================================");
Common_Draw("Play Stream Example.");
Common_Draw("Copyright (c) Firelight Technologies 2004-2015.");
Common_Draw("==================================================");
Common_Draw("");
Common_Draw("Press %s to toggle pause", Common_BtnStr(BTN_ACTION1));
Common_Draw("Press %s to quit", Common_BtnStr(BTN_QUIT));
Common_Draw("");
Common_Draw("Time %02d:%02d:%02d/%02d:%02d:%02d : %s", ms / 1000 / 60, ms / 1000 % 60, ms / 10 % 100, lenms / 1000 / 60, lenms / 1000 % 60, lenms / 10 % 100, paused ? "Paused " : playing ? "Playing" : "Stopped");
Common_Draw("Pitch %02f",pitch);
}
Common_Sleep(50);
} while (!Common_BtnPress(BTN_QUIT));
/*
Shut down
*/
result = sound->release(); /* Release the parent, not the sound that was retrieved with getSubSound. */
ERRCHECK(result);
result = system->close();
ERRCHECK(result);
result = system->release();
ERRCHECK(result);
Common_Close();
return 0;
}
After some more experiments, I can reach my goal by switching the sound file format to M4A on an iOS device.
MP3 still does not work.
My app does huge data processing on audio coming from the mic input.
In order to get a "demo mode", I want to do the same thing based on a local .caf audio file.
I managed to get the audio file.
Now I am trying to use ExtAudioFileRead to read the .caf file and then do the data processing.
void readFile()
{
OSStatus err = noErr;
// Audio file
NSURL *path = [[NSBundle mainBundle] URLForResource:@"output" withExtension:@"caf"];
ExtAudioFileOpenURL((__bridge CFURLRef)path, &audio->audiofile);
assert(audio->audiofile);
// File's format.
AudioStreamBasicDescription fileFormat;
UInt32 size = sizeof(fileFormat);
err = ExtAudioFileGetProperty(audio->audiofile, kExtAudioFileProperty_FileDataFormat, &size, &fileFormat);
// tell the ExtAudioFile API what format we want samples back in
//bzero(&audio->clientFormat, sizeof(audio->clientFormat));
audio->clientFormat.mSampleRate = SampleRate;
audio->clientFormat.mFormatID = kAudioFormatLinearPCM;
audio->clientFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
audio->clientFormat.mFramesPerPacket = 1;
audio->clientFormat.mChannelsPerFrame = 1;
audio->clientFormat.mBitsPerChannel = 16;//sizeof(AudioSampleType) * 8;
audio->clientFormat.mBytesPerPacket = 2 * audio->clientFormat.mChannelsPerFrame;
audio->clientFormat.mBytesPerFrame = 2 * audio->clientFormat.mChannelsPerFrame;
err = ExtAudioFileSetProperty(audio->audiofile, kExtAudioFileProperty_ClientDataFormat, sizeof(audio->clientFormat), &audio->clientFormat);
// find out how many frames we need to read
SInt64 numFrames = 0;
size = sizeof(numFrames);
err = ExtAudioFileGetProperty(audio->audiofile, kExtAudioFileProperty_FileLengthFrames, &size, &numFrames);
// create the buffers for reading in data
AudioBufferList *bufferList = malloc(sizeof(AudioBufferList) + sizeof(AudioBuffer) * (audio->clientFormat.mChannelsPerFrame - 1));
bufferList->mNumberBuffers = audio->clientFormat.mChannelsPerFrame;
for (int ii=0; ii < bufferList->mNumberBuffers; ++ii)
{
bufferList->mBuffers[ii].mDataByteSize = sizeof(float) * (int)numFrames;
bufferList->mBuffers[ii].mNumberChannels = 1;
bufferList->mBuffers[ii].mData = malloc(bufferList->mBuffers[ii].mDataByteSize);
}
UInt32 maxReadFrames = 1024;
UInt32 rFrames = (UInt32)numFrames;
while(rFrames > 0)
{
UInt32 framesToRead = (maxReadFrames > rFrames) ? rFrames : maxReadFrames;
err = ExtAudioFileRead(audio->audiofile, &framesToRead, bufferList);
[audio processAudio:bufferList];
if (rFrames % SampleRate == 0)
[audio realtimeUpdate:nil];
rFrames = rFrames - maxReadFrames;
}
// Close the file
ExtAudioFileDispose(audio->audiofile);
// destroy the buffers
for (int ii=0; ii < bufferList->mNumberBuffers; ++ii)
{
free(bufferList->mBuffers[ii].mData);
}
free(bufferList);
bufferList = NULL;
}
There is clearly something that i did not understand or that I am doing wrong with ExtAudioFileRead because this code does not work at all. I have two main problems :
The file is processed almost instantaneously. I mean that 44,100 samples clearly do not correspond to 1 second: my 3-minute audio file is processed in a few seconds...
During the processing I need to update the UI, so I have a few dispatch_sync calls in processAudio and realtimeUpdate. ExtAudioFileRead really doesn't seem to appreciate this, and it freezes.
Thanks for your help.
The code you wrote is just reading samples from the file and then calling processAudio. This is done as fast as possible: as soon as processAudio finishes, the next batch of samples is read and processAudio is called again. You shouldn't assume that reading from an audio file (which is a low-level, non-blocking OS call) takes the same amount of time that the audio you read would take to play.
If you want to process the audio in the file according to the sample rate, you should probably use an AUFilePlayer audio unit. This can play back the file at the right speed, and you can use a callback to process the samples in real audio time instead of "as fast as possible".
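A rough sketch of that AUFilePlayer setup (not part of the original answer), assuming a minimal AUGraph feeding RemoteIO; fileURL and MyRenderNotify are placeholders, and error checking is omitted for brevity:
#import <AudioToolbox/AudioToolbox.h>
// Build a minimal graph: AUFilePlayer -> RemoteIO output.
AUGraph graph;
AUNode playerNode, outputNode;
AudioComponentDescription playerDesc = { kAudioUnitType_Generator, kAudioUnitSubType_AudioFilePlayer, kAudioUnitManufacturer_Apple, 0, 0 };
AudioComponentDescription outputDesc = { kAudioUnitType_Output, kAudioUnitSubType_RemoteIO, kAudioUnitManufacturer_Apple, 0, 0 };
NewAUGraph(&graph);
AUGraphAddNode(graph, &playerDesc, &playerNode);
AUGraphAddNode(graph, &outputDesc, &outputNode);
AUGraphOpen(graph);
AUGraphConnectNodeInput(graph, playerNode, 0, outputNode, 0);
AudioUnit playerUnit;
AUGraphNodeInfo(graph, playerNode, NULL, &playerUnit);
AUGraphInitialize(graph);
// Schedule the whole file on the player.
AudioFileID audioFile;
AudioFileOpenURL(fileURL, kAudioFileReadPermission, 0, &audioFile);
AudioUnitSetProperty(playerUnit, kAudioUnitProperty_ScheduledFileIDs, kAudioUnitScope_Global, 0, &audioFile, sizeof(audioFile));
AudioStreamBasicDescription fileFormat;
UInt32 propSize = sizeof(fileFormat);
AudioFileGetProperty(audioFile, kAudioFilePropertyDataFormat, &propSize, &fileFormat);
UInt64 packetCount = 0;
propSize = sizeof(packetCount);
AudioFileGetProperty(audioFile, kAudioFilePropertyAudioDataPacketCount, &propSize, &packetCount);
ScheduledAudioFileRegion region = {0};
region.mTimeStamp.mFlags = kAudioTimeStampSampleTimeValid;
region.mTimeStamp.mSampleTime = 0;
region.mAudioFile = audioFile;
region.mFramesToPlay = (UInt32)(packetCount * fileFormat.mFramesPerPacket);
AudioUnitSetProperty(playerUnit, kAudioUnitProperty_ScheduledFileRegion, kAudioUnitScope_Global, 0, &region, sizeof(region));
UInt32 primeFrames = 0; // 0 = default priming
AudioUnitSetProperty(playerUnit, kAudioUnitProperty_ScheduledFilePrime, kAudioUnitScope_Global, 0, &primeFrames, sizeof(primeFrames));
AudioTimeStamp startTime = {0};
startTime.mFlags = kAudioTimeStampSampleTimeValid;
startTime.mSampleTime = -1; // start on the next render cycle
AudioUnitSetProperty(playerUnit, kAudioUnitProperty_ScheduleStartTimeStamp, kAudioUnitScope_Global, 0, &startTime, sizeof(startTime));
// Tap the samples as they are rendered in real audio time.
AudioUnitAddRenderNotify(playerUnit, MyRenderNotify, NULL);
AUGraphStart(graph);
The render-notify callback (an AURenderCallback you provide) then receives buffers at the audio rate, which is where the processAudio-style work from the question would go.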
So, basically I want to play some audio files (MP3 and CAF, mostly). But the callback never gets called, except when I call it myself to prime the queue.
Here's my data struct:
struct AQPlayerState
{
CAStreamBasicDescription mDataFormat;
AudioQueueRef mQueue;
AudioQueueBufferRef mBuffers[kBufferNum];
AudioFileID mAudioFile;
UInt32 bufferByteSize;
SInt64 mCurrentPacket;
UInt32 mNumPacketsToRead;
AudioStreamPacketDescription *mPacketDescs;
bool mIsRunning;
};
Here's my callback function:
static void HandleOutputBuffer (void *aqData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer)
{
NSLog(@"HandleOutput");
AQPlayerState *pAqData = (AQPlayerState *) aqData;
if (pAqData->mIsRunning == false) return;
UInt32 numBytesReadFromFile;
UInt32 numPackets = pAqData->mNumPacketsToRead;
AudioFileReadPackets (pAqData->mAudioFile,
false,
&numBytesReadFromFile,
pAqData->mPacketDescs,
pAqData->mCurrentPacket,
&numPackets,
inBuffer->mAudioData);
if (numPackets > 0) {
inBuffer->mAudioDataByteSize = numBytesReadFromFile;
AudioQueueEnqueueBuffer (pAqData->mQueue,
inBuffer,
(pAqData->mPacketDescs ? numPackets : 0),
pAqData->mPacketDescs);
pAqData->mCurrentPacket += numPackets;
} else {
// AudioQueueStop(pAqData->mQueue, false);
// AudioQueueDispose(pAqData->mQueue, true);
// AudioFileClose (pAqData->mAudioFile);
// free(pAqData->mPacketDescs);
// free(pAqData->mFloatBuffer);
pAqData->mIsRunning = false;
}
}
And here's my method:
- (void)playFile
{
AQPlayerState aqData;
// get the source file
NSString *p = [[NSBundle mainBundle] pathForResource:@"1_Female" ofType:@"mp3"];
NSURL *url2 = [NSURL fileURLWithPath:p];
CFURLRef srcFile = (__bridge CFURLRef)url2;
OSStatus result = AudioFileOpenURL(srcFile, 0x1/*fsRdPerm*/, 0/*inFileTypeHint*/, &aqData.mAudioFile);
CFRelease (srcFile);
CheckError(result, "Error opinning sound file");
UInt32 size = sizeof(aqData.mDataFormat);
CheckError(AudioFileGetProperty(aqData.mAudioFile, kAudioFilePropertyDataFormat, &size, &aqData.mDataFormat),
"Error getting file's data format");
CheckError(AudioQueueNewOutput(&aqData.mDataFormat, HandleOutputBuffer, &aqData, CFRunLoopGetCurrent(), kCFRunLoopCommonModes, 0, &aqData.mQueue),
"Error AudioQueueNewOutPut");
// we need to calculate how many packets we read at a time and how big a buffer we need
// we base this on the size of the packets in the file and an approximate duration for each buffer
{
bool isFormatVBR = (aqData.mDataFormat.mBytesPerPacket == 0 || aqData.mDataFormat.mFramesPerPacket == 0);
// first check to see what the max size of a packet is - if it is bigger
// than our allocation default size, that needs to become larger
UInt32 maxPacketSize;
size = sizeof(maxPacketSize);
CheckError(AudioFileGetProperty(aqData.mAudioFile, kAudioFilePropertyPacketSizeUpperBound, &size, &maxPacketSize),
"Error getting max packet size");
// adjust buffer size to represent about a second of audio based on this format
CalculateBytesForTime(aqData.mDataFormat, maxPacketSize, 1.0/*seconds*/, &aqData.bufferByteSize, &aqData.mNumPacketsToRead);
if (isFormatVBR) {
aqData.mPacketDescs = new AudioStreamPacketDescription [aqData.mNumPacketsToRead];
} else {
aqData.mPacketDescs = NULL; // we don't provide packet descriptions for constant bit rate formats (like linear PCM)
}
printf ("Buffer Byte Size: %d, Num Packets to Read: %d\n", (int)aqData.bufferByteSize, (int)aqData.mNumPacketsToRead);
}
// if the file has a magic cookie, we should get it and set it on the AQ
size = sizeof(UInt32);
result = AudioFileGetPropertyInfo(aqData.mAudioFile, kAudioFilePropertyMagicCookieData, &size, NULL);
if (!result && size) {
char* cookie = new char [size];
CheckError(AudioFileGetProperty(aqData.mAudioFile, kAudioFilePropertyMagicCookieData, &size, cookie),
"Error getting cookie from file");
CheckError(AudioQueueSetProperty(aqData.mQueue, kAudioQueueProperty_MagicCookie, cookie, size),
"Error setting cookie to file");
delete[] cookie;
}
aqData.mCurrentPacket = 0;
for (int i = 0; i < kBufferNum; ++i) {
CheckError(AudioQueueAllocateBuffer (aqData.mQueue,
aqData.bufferByteSize,
&aqData.mBuffers[i]),
"Error AudioQueueAllocateBuffer");
HandleOutputBuffer (&aqData,
aqData.mQueue,
aqData.mBuffers[i]);
}
// set queue's gain
Float32 gain = 1.0;
CheckError(AudioQueueSetParameter (aqData.mQueue,
kAudioQueueParam_Volume,
gain),
"Error AudioQueueSetParameter");
aqData.mIsRunning = true;
CheckError(AudioQueueStart(aqData.mQueue,
NULL),
"Error AudioQueueStart");
}
And the output when I press play:
Buffer Byte Size: 40310, Num Packets to Read: 38
HandleOutput start
HandleOutput start
HandleOutput start
I tried replacing CFRunLoopGetCurrent() with CFRunLoopGetMain(), and kCFRunLoopCommonModes with kCFRunLoopDefaultMode, but nothing changed.
Shouldn't the primed buffers start playing right away when I start the queue?
When I start the queue, no callbacks are fired.
What am I doing wrong? Thanks for any ideas.
What you are trying to do here is a basic example of audio playback using Audio Queues. Without looking at your code in detail to see what's missing (that could take a while), I'd rather recommend that you follow the steps in this basic sample code, which does exactly what you're doing, without the extras that aren't really relevant (for example, why are you trying to set the audio gain?).
Somewhere else you were trying to play audio using audio units. Audio units are more complex than basic audio queue playback, and I wouldn't attempt them before being very comfortable with audio queues. But you can look at this example project for a basic example of audio queues.
In general, when it comes to Core Audio programming on iOS, it's best to take your time with the basic examples and build your way up. The problem with a lot of online tutorials is that they add extra stuff and often mix it with Objective-C code, when Core Audio is purely C code (i.e. the extra stuff won't add anything to the learning process). I strongly recommend you go over the book Learning Core Audio if you haven't already. All of its sample code is available online, but you can also clone it from this repo for convenience. That's how I learned Core Audio. It takes time :)
I'm writing an iOS app that streams video and audio over the network.
I am using AVCaptureSession to grab raw video frames using AVCaptureVideoDataOutput and encode them in software using x264. This works great.
I wanted to do the same for audio, only I don't need that much control on the audio side, so I wanted to use the built-in hardware encoder to produce an AAC stream. This meant using Audio Converter from the Audio Toolbox layer. In order to do so, I put in a handler for AVCaptureAudioDataOutput's audio frames:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection
{
// get the audio samples into a common buffer _pcmBuffer
CMBlockBufferRef blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
CMBlockBufferGetDataPointer(blockBuffer, 0, NULL, &_pcmBufferSize, &_pcmBuffer);
// use AudioConverter to convert the PCM samples to AAC
UInt32 ouputPacketsCount = 1;
AudioBufferList bufferList;
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0].mNumberChannels = 1;
bufferList.mBuffers[0].mDataByteSize = sizeof(_aacBuffer);
bufferList.mBuffers[0].mData = _aacBuffer;
OSStatus st = AudioConverterFillComplexBuffer(_converter, converter_callback, (__bridge void *) self, &ouputPacketsCount, &bufferList, NULL);
if (0 == st) {
// ... send bufferList.mBuffers[0].mDataByteSize bytes from _aacBuffer...
}
}
In this case the callback function for the audio converter is pretty simple (assuming packet sizes and counts are set up properly):
- (void) putPcmSamplesInBufferList:(AudioBufferList *)bufferList withCount:(UInt32 *)count
{
bufferList->mBuffers[0].mData = _pcmBuffer;
bufferList->mBuffers[0].mDataByteSize = _pcmBufferSize;
}
And the setup for the audio converter looks like this:
{
// ...
AudioStreamBasicDescription pcmASBD = {0};
pcmASBD.mSampleRate = ((AVAudioSession *) [AVAudioSession sharedInstance]).currentHardwareSampleRate;
pcmASBD.mFormatID = kAudioFormatLinearPCM;
pcmASBD.mFormatFlags = kAudioFormatFlagsCanonical;
pcmASBD.mChannelsPerFrame = 1;
pcmASBD.mBytesPerFrame = sizeof(AudioSampleType);
pcmASBD.mFramesPerPacket = 1;
pcmASBD.mBytesPerPacket = pcmASBD.mBytesPerFrame * pcmASBD.mFramesPerPacket;
pcmASBD.mBitsPerChannel = 8 * pcmASBD.mBytesPerFrame;
AudioStreamBasicDescription aacASBD = {0};
aacASBD.mFormatID = kAudioFormatMPEG4AAC;
aacASBD.mSampleRate = pcmASBD.mSampleRate;
aacASBD.mChannelsPerFrame = pcmASBD.mChannelsPerFrame;
size = sizeof(aacASBD);
AudioFormatGetProperty(kAudioFormatProperty_FormatInfo, 0, NULL, &size, &aacASBD);
AudioConverterNew(&pcmASBD, &aacASBD, &_converter);
// ...
}
This seems pretty straightforward, only IT DOES NOT WORK. Once the AVCaptureSession is running, the audio converter (specifically AudioConverterFillComplexBuffer) returns an 'hwiu' (hardware in use) error. Conversion works fine if the session is stopped, but then I can't capture anything...
I was wondering if there was a way to get an AAC stream out of AVCaptureSession. The options I'm considering are:
Somehow using AVAssetWriterInput to encode audio samples into AAC and then get the encoded packets somehow (not through AVAssetWriter, which would only write to a file).
Reorganizing my app so that it uses AVCaptureSession only on the video side and uses Audio Queues on the audio side. This will make flow control (starting and stopping recording, responding to interruptions) more complicated and I'm afraid that it might cause synching problems between the audio and video. Also, it just doesn't seem like a good design.
Does anyone know if getting the AAC out of AVCaptureSession is possible? Do I have to use Audio Queues here? Could this get me into synching or control problems?
I ended up asking Apple for advice (it turns out you can do that if you have a paid developer account).
It seems that AVCaptureSession grabs hold of the AAC hardware encoder, but only lets you use it to write directly to a file.
You can use the software encoder but you have to ask for it specifically instead of using AudioConverterNew:
AudioClassDescription *description = [self
getAudioClassDescriptionWithType:kAudioFormatMPEG4AAC
fromManufacturer:kAppleSoftwareAudioCodecManufacturer];
if (!description) {
return false;
}
// see the question for how pcmASBD and aacASBD are set up
OSStatus st = AudioConverterNewSpecific(&pcmASBD, &aacASBD, 1, description, &_converter);
if (st) {
NSLog(#"error creating audio converter: %s", OSSTATUS(st));
return false;
}
with
- (AudioClassDescription *)getAudioClassDescriptionWithType:(UInt32)type
fromManufacturer:(UInt32)manufacturer
{
static AudioClassDescription desc;
UInt32 encoderSpecifier = type;
OSStatus st;
UInt32 size;
st = AudioFormatGetPropertyInfo(kAudioFormatProperty_Encoders,
sizeof(encoderSpecifier),
&encoderSpecifier,
&size);
if (st) {
NSLog(#"error getting audio format propery info: %s", OSSTATUS(st));
return nil;
}
unsigned int count = size / sizeof(AudioClassDescription);
AudioClassDescription descriptions[count];
st = AudioFormatGetProperty(kAudioFormatProperty_Encoders,
sizeof(encoderSpecifier),
&encoderSpecifier,
&size,
descriptions);
if (st) {
NSLog(#"error getting audio format propery: %s", OSSTATUS(st));
return nil;
}
for (unsigned int i = 0; i < count; i++) {
if ((type == descriptions[i].mSubType) &&
(manufacturer == descriptions[i].mManufacturer)) {
memcpy(&desc, &(descriptions[i]), sizeof(desc));
return &desc;
}
}
return nil;
}
The software encoder will take up CPU resources, of course, but will get the job done.