H264 streaming to iOS not working

I am trying to get a video stream from a camera and show it in an iOS app. The camera sends the stream over a UDP port. That part is simple: I just open a GCDAsyncUdpSocket and start listening in the delegate. My problem starts when the data arrives.
The delegate hands me an NSData object and, since the stream is H264, I follow the approach from WWDC 2014 and split the NSData into smaller pieces, using 0x000001 as the separator. Those pieces should be the NALUs, and for each one I read the type:
- (int)getNALUType:(NSData *)NALU {
    uint8_t *bytes = (uint8_t *)NALU.bytes;
    return bytes[0] & 0x1F;
}
If the type is 7 or 8, I keep that NALU as the SPS or PPS, and once I have both of them I create the format description with CMVideoFormatDescriptionCreateFromH264ParameterSets.
Finally, having the format description, I start feeding the NALUs of type 1 and 5 (non-IDR and IDR pictures) into the block buffers for the AVSampleBufferDisplayLayer.
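(For reference, this is roughly how I build the format description once both parameter sets have arrived; _spsData, _ppsData and _formatDescription are my own properties and this is only a sketch, not the exact code.)

// Rough sketch: _spsData and _ppsData hold the raw SPS/PPS NALU payloads
// (without start codes); _formatDescription is a CMVideoFormatDescriptionRef ivar.
- (void)createFormatDescriptionIfReady {
    if (!_spsData || !_ppsData) return;

    const uint8_t *parameterSets[2]   = { (const uint8_t *)_spsData.bytes, (const uint8_t *)_ppsData.bytes };
    const size_t parameterSetSizes[2] = { _spsData.length, _ppsData.length };

    CMVideoFormatDescriptionRef formatDescription = NULL;
    OSStatus status = CMVideoFormatDescriptionCreateFromH264ParameterSets(
        kCFAllocatorDefault,
        2,                  // parameter set count (SPS + PPS)
        parameterSets,
        parameterSetSizes,
        4,                  // NAL unit length-header size used in the sample buffers
        &formatDescription);
    if (status == noErr) {
        if (_formatDescription) CFRelease(_formatDescription);
        _formatDescription = formatDescription;
    }
}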
I am using the code from here to process the NALUs:
https://github.com/niswegmann/H264Streamer/blob/master/H264Streamer/ViewController.m
and this is how I process the data that arrives from the stream:
- (void)udpSocket:(GCDAsyncUdpSocket *)sock didReceiveData:(NSData *)data fromAddress:(NSData *)address withFilterContext:(id)filterContext {
    UInt8 TxDataBytes[10];
    int TxDataIndex = 0;
    TxDataBytes[TxDataIndex++] = 0x00;
    TxDataBytes[TxDataIndex++] = 0x00;
    TxDataBytes[TxDataIndex++] = 0x01;
    NSData *NALUStartData = [NSData dataWithBytes:&TxDataBytes length:TxDataIndex];
    NSArray *NALUArray = [data componentsSplitByDataBytes:NALUStartData];
    for (NSData *NALU in NALUArray) {
        [self parseNALU:NALU];
    }
}
componentsSplitByDataBytes: is from this repo:
https://github.com/watr/NSData-SplitByData
I am pretty sure that my problem is how I get the NALUs from the NSData, but I don't know what else to try.
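(For completeness, the alternative I have been considering is a manual scan over the raw bytes that treats both 00 00 01 and 00 00 00 01 as start codes; the sketch below is only an illustration of that idea, with parseNALU: receiving the payload without the start code.)

// Sketch of a manual scan: walk the raw bytes and treat both 3-byte and
// 4-byte start codes as NALU boundaries.
- (void)scanNALUsInData:(NSData *)data {
    const uint8_t *bytes = data.bytes;
    NSUInteger length = data.length;
    NSInteger naluStart = -1;               // index of the first byte after a start code

    for (NSUInteger i = 0; i + 3 <= length; i++) {
        BOOL isStartCode = (bytes[i] == 0 && bytes[i + 1] == 0 &&
                            (bytes[i + 2] == 1 ||
                             (i + 4 <= length && bytes[i + 2] == 0 && bytes[i + 3] == 1)));
        if (!isStartCode) continue;

        NSUInteger startCodeLength = (bytes[i + 2] == 1) ? 3 : 4;
        if (naluStart >= 0) {
            // Everything between the previous start code and this one is one NALU.
            NSData *nalu = [data subdataWithRange:NSMakeRange(naluStart, i - naluStart)];
            [self parseNALU:nalu];
        }
        naluStart = i + startCodeLength;
        i += startCodeLength - 1;           // the for-loop increment adds the final +1
    }
    if (naluStart >= 0 && (NSUInteger)naluStart < length) {
        [self parseNALU:[data subdataWithRange:NSMakeRange(naluStart, length - naluStart)]];
    }
}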

Related

Video streaming via NSInputStream and NSOutputStream

Right now I'm investigating the possibility of implementing video streaming through the MultipeerConnectivity framework. For that purpose I'm using NSInputStream and NSOutputStream.
The problem is that I can't receive any picture so far. Right now I'm trying to pass a simple picture and show it on the receiver. Here's a little snippet of my code:
Sending picture via NSOutputStream:
- (void)sendMessageToStream
{
    NSData *imgData = UIImagePNGRepresentation(_testImage);
    int img_length = (int)[imgData length];
    NSMutableData *msgData = [[NSMutableData alloc] initWithBytes:&img_length length:sizeof(img_length)];
    [msgData appendData:imgData];
    int msg_length = (int)[msgData length];
    uint8_t *readBytes = (uint8_t *)[msgData bytes];
    uint8_t buf[msg_length];
    (void)memcpy(buf, readBytes, msg_length);
    int stream_len = [_stream writeData:(uint8_t *)buf maxLength:msg_length];
    //int stream_len = [_stream writeData:(uint8_t *)buf maxLength:data_length];
    //NSLog(@"stream_len = %d", stream_len);
    _tmpCounter++;
    dispatch_async(dispatch_get_main_queue(), ^{
        _lblOperationsCounter.text = [NSString stringWithFormat:@"Sent: %ld", (long)_tmpCounter];
    });
}
The code above works totally fine. The stream_len value after writing equals 29627 bytes, which is the expected value, because the image's size is around 25-26 kB.
Receiving the picture via NSInputStream:
- (void)readDataFromStream
{
    UInt32 length;
    if (_currentFrameSize == 0) {
        uint8_t frameSize[4];
        length = [_stream readData:frameSize maxLength:sizeof(int)];
        unsigned int b = frameSize[3];
        b <<= 8;
        b |= frameSize[2];
        b <<= 8;
        b |= frameSize[1];
        b <<= 8;
        b |= frameSize[0];
        _currentFrameSize = b;
    }
    uint8_t bytes[1024];
    length = [_stream readData:bytes maxLength:1024];
    [_frameData appendBytes:bytes length:length];
    if ([_frameData length] >= _currentFrameSize) {
        UIImage *img = [UIImage imageWithData:_frameData];
        NSLog(@"SETUP IMAGE!");
        _imgView.image = img;
        _currentFrameSize = 0;
        [_frameData setLength:0];
    }
    _tmpCounter++;
    dispatch_async(dispatch_get_main_queue(), ^{
        _lblOperationsCounter.text = [NSString stringWithFormat:@"Received: %ld", (long)_tmpCounter];
    });
}
As you can see, I'm trying to receive the picture in several steps, and here's why: when I try to read data from the stream, it reads at most 1095 bytes no matter what number I put in the maxLength: parameter. But when I send the picture in the first snippet of code, it sends absolutely fine (29627 bytes; by the way, the image's size is around 29 kB).
That's where my question comes up: why is that? Why does sending 29 kB via NSOutputStream work totally fine while receiving causes problems? And is there a solid way to make video streaming work through NSInputStream and NSOutputStream? I just didn't find much information about this technology; all I found were some simple things I already knew.
Here's an app I wrote that shows you how:
https://app.box.com/s/94dcm9qjk8giuar08305qspdbe0pc784
Build the project with Xcode 9 and run the app on two iOS 11 devices.
To stream live video, touch the Camera icon on one of the two devices.
If you don't have two devices, you can run one app in the Simulator; however, you can only use the camera on the real device (the Simulator will display the broadcast video).
Just so you know: this is not the ideal way to stream real-time video between devices (it should probably be your last choice). Data packets (versus streaming) are way more efficient and faster.
Regardless, I'm really confused by your NSInputStream-related code. Here's something that makes a little more sense, I think:
case NSStreamEventHasBytesAvailable: {
    // len is a global variable set to a non-zero value;
    // mdata is an NSMutableData object that is reset when a new input
    // stream is created.
    // displayImage is a block that accepts the image data and a reference
    // to the layer on which the image will be rendered
    uint8_t buf[len];
    len = [aStream read:buf maxLength:len];
    if (len > 0) {
        [mdata appendBytes:(const void *)buf length:len];
    } else {
        displayImage(mdata, wLayer);
    }
    break;
}
The output stream code should look something like this:
// data is an NSData object that contains the image data from the video
// camera;
// len is a global variable set to a non-zero value
// byteIndex is a global variable set to zero each time a new output
// stream is created
if (data.length > 0 && len >= 0 && (byteIndex <= data.length)) {
    len = (data.length - byteIndex) < DATA_LENGTH ? (data.length - byteIndex) : DATA_LENGTH;
    uint8_t bytes[len];
    [data getBytes:bytes range:NSMakeRange(byteIndex, len)];
    byteIndex += [oStream write:(const uint8_t *)bytes maxLength:len];
}
There's a lot more to streaming video than setting up the NSStream classes correctly—a lot more. You'll notice in my app, I created a cache for the input and output streams. This solved a myriad of issues that you would likely encounter if you don't do the same.
I have never seen anyone successfully use NSStreams for video streaming...ever. For one thing, it's highly complex.
There are many different (and better) ways to stream video; I wouldn't go this route. I just took it on because no one else has been able to do it successfully.
I think the problem is in your assumption that all the data will be available in the NSInputStream the whole time you are reading it. An NSInputStream created from an NSURL has an asynchronous nature and should be accessed accordingly, using an NSStreamDelegate. You can look at the example in the README of POSInputStreamLibrary.
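A minimal sketch of that pattern (class and ivar names are assumed): the stream is scheduled on a run loop and only read when the delegate reports that bytes are available.

// Minimal sketch: schedule the stream and only read when the delegate fires.
- (void)startReading:(NSInputStream *)stream {
    _receivedData = [NSMutableData data];
    stream.delegate = self;
    [stream scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
    [stream open];
}

- (void)stream:(NSStream *)aStream handleEvent:(NSStreamEvent)eventCode {
    switch (eventCode) {
        case NSStreamEventHasBytesAvailable: {
            uint8_t buf[1024];
            NSInteger len = [(NSInputStream *)aStream read:buf maxLength:sizeof(buf)];
            if (len > 0) {
                [_receivedData appendBytes:buf length:len];
            }
            break;
        }
        case NSStreamEventEndEncountered:
            [aStream close];
            [aStream removeFromRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
            // _receivedData now holds everything that was sent
            break;
        default:
            break;
    }
}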

NSMutableData encryption in place using NSInputStream

I am trying to use CommonCrypto to encrypt an NSMutableData object in place (copying the resulting bytes over itself, without duplicating it). Previously, I was using the CCCrypt() "one-shot" function, mainly because it seemed simple. I noticed that my data object got duplicated in memory.
To avoid this, I tried using an NSInputStream object with a buffer size of 2048 bytes. I read my NSMutableData object and repeatedly call CCCryptorUpdate() to handle the encryption. The problem is that it still seems to be duplicated. Here's my current code (please note that it's a category on NSMutableData, mainly for historical reasons, thus the "self" references):
- (BOOL)encryptWithKey:(NSString *)key
{
    // Key creation - not relevant to the described problem
    char *keyPtr = calloc(1, kCCKeySizeAES256 + 1);
    [key getCString:keyPtr maxLength:sizeof(keyPtr) encoding:NSUTF8StringEncoding];

    // Create cryptographic context for encryption
    CCCryptorRef cryptor;
    CCCryptorStatus status = CCCryptorCreate(kCCEncrypt, kCCAlgorithmAES128, kCCOptionECBMode, keyPtr, kCCKeySizeAES256, NULL, &cryptor);
    if (status != kCCSuccess)
    {
        MCLog(@"Failed to create a cryptographic context (%d CCCryptorStatus status).", status);
    }

    // Initialize the input stream
    NSInputStream *inStream = [[NSInputStream alloc] initWithData:self];
    [inStream open];

    NSInteger result;
    // BUFFER_LEN is a define 2048
    uint8_t buffer[BUFFER_LEN];
    size_t bytesWritten;

    while ([inStream hasBytesAvailable])
    {
        result = [inStream read:buffer maxLength:BUFFER_LEN];
        if (result > 0)
        {
            // Encryption goes here
            status = CCCryptorUpdate(
                cryptor,             // Previously created cryptographic context
                &result,             // Input data
                BUFFER_LEN,          // Length of the input data
                [self mutableBytes], // Result is written here
                [self length],       // Size of result
                &bytesWritten        // Number of bytes written
            );
            if (status != kCCSuccess)
            {
                MCLog(@"Error during data encryption (%d CCCryptorStatus status)", status);
            }
        }
        else
        {
            // Error
        }
    }

    // Cleanup
    [inStream close];
    CCCryptorRelease(cryptor);
    free(keyPtr);

    return (status == kCCSuccess);
}
I am definitely missing something obvious here; encryption, and even using input streams, is a bit new to me.
As long as you only call CCCryptorUpdate() one time, you can encrypt into the same buffer you read from without using a stream. See RNCryptManager.m for an example. Study applyOperation:fromStream:toStream:password:error:. I did use streams there, but there's no requirement that you do that if you already have an NSData.
You must ensure that CCCryptorUpdate() is only called one time, however. If you call it multiple times it will corrupt its own buffer. This is an open bug in CommonCryptor (radar://9930555).
As a side note: your key generation is extremely insecure, and use of ECB mode for this kind of data barely qualifies as encryption. It leaves patterns in the ciphertext which can be used to decrypt the data, in some cases just by looking at it. I do not recommend this approach if you actually intend to secure this data. If you want to study how to use these tools well, see Properly Encrypting With AES With CommonCrypto. If you want a prepackaged solution, see RNCryptor. (RNCryptor does not currently have a convenient method for encrypting in-place, however.)
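A minimal sketch of that idea, assuming data is the NSMutableData from the question and its length is already a multiple of the AES block size (the question uses ECB without padding); CCCryptorFinal and error handling are omitted:

// Single CCCryptorUpdate over the whole buffer, encrypting in place.
// 'data' is the NSMutableData; 'cryptor' is created as in the question.
size_t bytesWritten = 0;
CCCryptorStatus status = CCCryptorUpdate(cryptor,
                                         data.mutableBytes, data.length,  // input: the data itself
                                         data.mutableBytes, data.length,  // output: the same buffer
                                         &bytesWritten);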
In the line:
result = [inStream read:buffer maxLength:BUFFER_LEN];
the data is read into buffer, and result is set to the number of bytes actually read (or a negative value on error).
In the line:
status = CCCryptorUpdate(cryptor, &result, ...
you should be using buffer for the input data, not the read result:
status = CCCryptorUpdate(cryptor, buffer, ...
Using better names would help eliminate this kind of simple error. If the variable had been named readStatus instead of result, the error would most likely not have occurred. Likewise, if the data variable had been named streamData instead of buffer, things would also have been clearer. Poor naming really can cause errors.
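Putting it together, the corrected call inside the loop would look roughly like this. Note that the output side also needs a running write offset into the receiver (outOffset is an assumed local variable), otherwise each iteration would overwrite the previous block:

// Corrected update: 'buffer' is the input, 'result' is the number of bytes
// actually read from the stream, and 'outOffset' (size_t, starting at 0)
// tracks where in the receiver the ciphertext is written.
status = CCCryptorUpdate(cryptor,
                         buffer,                                      // input data
                         (size_t)result,                              // bytes actually read
                         (uint8_t *)[self mutableBytes] + outOffset,  // output position
                         [self length] - outOffset,                   // remaining output space
                         &bytesWritten);
outOffset += bytesWritten;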

Audioqueue callback not being called

So, basically I want to play some audio files (mp3 and caf mostly), but the callback never gets called, except when I call it myself to prime the queue.
Here's my data struct:
struct AQPlayerState
{
    CAStreamBasicDescription      mDataFormat;
    AudioQueueRef                 mQueue;
    AudioQueueBufferRef           mBuffers[kBufferNum];
    AudioFileID                   mAudioFile;
    UInt32                        bufferByteSize;
    SInt64                        mCurrentPacket;
    UInt32                        mNumPacketsToRead;
    AudioStreamPacketDescription *mPacketDescs;
    bool                          mIsRunning;
};
Here's my callback function:
static void HandleOutputBuffer(void *aqData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer)
{
    NSLog(@"HandleOutput");
    AQPlayerState *pAqData = (AQPlayerState *)aqData;
    if (pAqData->mIsRunning == false) return;

    UInt32 numBytesReadFromFile;
    UInt32 numPackets = pAqData->mNumPacketsToRead;
    AudioFileReadPackets(pAqData->mAudioFile,
                         false,
                         &numBytesReadFromFile,
                         pAqData->mPacketDescs,
                         pAqData->mCurrentPacket,
                         &numPackets,
                         inBuffer->mAudioData);
    if (numPackets > 0) {
        inBuffer->mAudioDataByteSize = numBytesReadFromFile;
        AudioQueueEnqueueBuffer(pAqData->mQueue,
                                inBuffer,
                                (pAqData->mPacketDescs ? numPackets : 0),
                                pAqData->mPacketDescs);
        pAqData->mCurrentPacket += numPackets;
    } else {
        //    AudioQueueStop(pAqData->mQueue, false);
        //    AudioQueueDispose(pAqData->mQueue, true);
        //    AudioFileClose(pAqData->mAudioFile);
        //    free(pAqData->mPacketDescs);
        //    free(pAqData->mFloatBuffer);
        pAqData->mIsRunning = false;
    }
}
And here's my method:
- (void)playFile
{
    AQPlayerState aqData;

    // get the source file
    NSString *p = [[NSBundle mainBundle] pathForResource:@"1_Female" ofType:@"mp3"];
    NSURL *url2 = [NSURL fileURLWithPath:p];
    CFURLRef srcFile = (__bridge CFURLRef)url2;

    OSStatus result = AudioFileOpenURL(srcFile, 0x1/*fsRdPerm*/, 0/*inFileTypeHint*/, &aqData.mAudioFile);
    CFRelease(srcFile);
    CheckError(result, "Error opening sound file");

    UInt32 size = sizeof(aqData.mDataFormat);
    CheckError(AudioFileGetProperty(aqData.mAudioFile, kAudioFilePropertyDataFormat, &size, &aqData.mDataFormat),
               "Error getting file's data format");

    CheckError(AudioQueueNewOutput(&aqData.mDataFormat, HandleOutputBuffer, &aqData, CFRunLoopGetCurrent(), kCFRunLoopCommonModes, 0, &aqData.mQueue),
               "Error AudioQueueNewOutPut");

    // we need to calculate how many packets we read at a time and how big a buffer we need
    // we base this on the size of the packets in the file and an approximate duration for each buffer
    {
        bool isFormatVBR = (aqData.mDataFormat.mBytesPerPacket == 0 || aqData.mDataFormat.mFramesPerPacket == 0);

        // first check to see what the max size of a packet is - if it is bigger
        // than our allocation default size, that needs to become larger
        UInt32 maxPacketSize;
        size = sizeof(maxPacketSize);
        CheckError(AudioFileGetProperty(aqData.mAudioFile, kAudioFilePropertyPacketSizeUpperBound, &size, &maxPacketSize),
                   "Error getting max packet size");

        // adjust buffer size to represent about a second of audio based on this format
        CalculateBytesForTime(aqData.mDataFormat, maxPacketSize, 1.0/*seconds*/, &aqData.bufferByteSize, &aqData.mNumPacketsToRead);

        if (isFormatVBR) {
            aqData.mPacketDescs = new AudioStreamPacketDescription[aqData.mNumPacketsToRead];
        } else {
            aqData.mPacketDescs = NULL; // we don't provide packet descriptions for constant bit rate formats (like linear PCM)
        }

        printf("Buffer Byte Size: %d, Num Packets to Read: %d\n", (int)aqData.bufferByteSize, (int)aqData.mNumPacketsToRead);
    }

    // if the file has a magic cookie, we should get it and set it on the AQ
    size = sizeof(UInt32);
    result = AudioFileGetPropertyInfo(aqData.mAudioFile, kAudioFilePropertyMagicCookieData, &size, NULL);
    if (!result && size) {
        char *cookie = new char[size];
        CheckError(AudioFileGetProperty(aqData.mAudioFile, kAudioFilePropertyMagicCookieData, &size, cookie),
                   "Error getting cookie from file");
        CheckError(AudioQueueSetProperty(aqData.mQueue, kAudioQueueProperty_MagicCookie, cookie, size),
                   "Error setting cookie to file");
        delete[] cookie;
    }

    aqData.mCurrentPacket = 0;
    for (int i = 0; i < kBufferNum; ++i) {
        CheckError(AudioQueueAllocateBuffer(aqData.mQueue,
                                            aqData.bufferByteSize,
                                            &aqData.mBuffers[i]),
                   "Error AudioQueueAllocateBuffer");
        HandleOutputBuffer(&aqData,
                           aqData.mQueue,
                           aqData.mBuffers[i]);
    }

    // set queue's gain
    Float32 gain = 1.0;
    CheckError(AudioQueueSetParameter(aqData.mQueue,
                                      kAudioQueueParam_Volume,
                                      gain),
               "Error AudioQueueSetParameter");

    aqData.mIsRunning = true;
    CheckError(AudioQueueStart(aqData.mQueue,
                               NULL),
               "Error AudioQueueStart");
}
And the output when I press play:
Buffer Byte Size: 40310, Num Packets to Read: 38
HandleOutput start
HandleOutput start
HandleOutput start
I tried replacing CFRunLoopGetCurrent() with CFRunLoopGetMain(), and kCFRunLoopCommonModes with kCFRunLoopDefaultMode, but nothing changed.
Shouldn't the primed buffers start playing as soon as I start the queue?
When I start the queue, no callbacks are being fired.
What am I doing wrong? Thanks for any ideas.
What you are basically trying to do here is a basic example of audio playback using Audio Queues. Without looking at your code in detail to see what's missing (that could take a while), I'd rather recommend that you follow the steps in this basic sample code, which does exactly what you're doing, without the extras that aren't really relevant (for example, why are you trying to set the audio gain?).
Somewhere else you were trying to play audio using audio units. Audio units are more complex than basic audio queue playback, and I wouldn't attempt them before being very comfortable with audio queues. But you can look at this example project for a basic example of audio queues.
In general, when it comes to Core Audio programming on iOS, it's best to take your time with the basic examples and build your way up. The problem with a lot of tutorials online is that they add extra stuff and often mix it with Obj-C code, when Core Audio is purely C code (i.e. the extra stuff won't add anything to the learning process). I strongly recommend you go over the book Learning Core Audio if you haven't already. All the sample code is available online, but you can also clone it from this repo for convenience. That's how I learned Core Audio. It takes time :)

RTMP Streaming from ios to server?

Right now, I am using a static librtmp.a library to open an RTMP connection between my iPhone and a server. When the record button is pressed, the camera starts taking input and, on capture output, uses AVAssetWriters on different threads to encode the video to H264/AAC. The videos are then saved to a specific URL. I'm trying to take these processed frames and send them over my RTMP client using librtmp.
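For context, the connection itself is opened roughly like the sketch below (the URL is a placeholder and error handling is trimmed). The method that writes each finished segment follows:

// Rough sketch of how the RTMP connection is opened (URL is a placeholder).
- (BOOL)openConnection {
    _rtmp = RTMP_Alloc();
    RTMP_Init(_rtmp);

    char url[] = "rtmp://example.com/live/stream";  // placeholder URL
    if (!RTMP_SetupURL(_rtmp, url)) return NO;

    RTMP_EnableWrite(_rtmp);                         // publishing, not playing
    if (!RTMP_Connect(_rtmp, NULL)) return NO;
    if (!RTMP_ConnectStream(_rtmp, 0)) return NO;
    return YES;
}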
- (void)writeURL:(NSURL *)segmentURL {
    NSLog(@"Utilities are writing...");
    NSData *segmentData = [NSData dataWithContentsOfURL:segmentURL];
    const char *body = [segmentData bytes];
    NSLog(@"%i", [segmentData length]);
    NSLog(@"%s", body);

    RTMPPacket packet = _rtmp->m_write;
    RTMPPacket_Alloc(&packet, [segmentData length]);
    packet.m_headerType = RTMP_PACKET_SIZE_MEDIUM;
    packet.m_packetType = RTMP_PACKET_TYPE_VIDEO;
    packet.m_body = (char *)body;

    RTMPPacket_Dump(&packet);
    RTMP_SendPacket(_rtmp, &packet, TRUE);
    //RTMP_Write(_rtmp, packet.m_body, packet.m_nBodySize);
}
It crashes every time on the RTMPPacket_Alloc call and I'm unsure what to do. Is this the right way to approach sending the data over the network?
EDIT:
sample output
2013-03-31 22:53:16.163 videoAppPrototype[2567:907] Switching encoders
2013-03-31 22:53:16.179 videoAppPrototype[2567:1703] Encoder switch finished
2013-03-31 22:53:16.220 videoAppPrototype[2567:1703] Upload public.mpeg-4
2013-03-31 22:53:16.223 videoAppPrototype[2567:1703] Utilities are writing...
2013-03-31 22:53:16.230 videoAppPrototype[2567:1703] 339171
DEBUG: RTMP PACKET: packet type: 0x09. channel: 0x00. info 1: 0 info 2: 0. Body size: 0. body: 0x00
EDIT 2:
I changed my code to use RTMP_Write() instead of RTMP_SendPacket().
New Method:
- (void)writeURL:(NSURL *)segmentURL {
    NSLog(@"Utilities are writing...");
    NSData *segmentData = [NSData dataWithContentsOfURL:segmentURL];
    NSUInteger len = [segmentData length] / sizeof(unsigned char);
    Byte *byteData = (Byte *)malloc(len);
    memcpy(byteData, [segmentData bytes], len);
    free(byteData);
    NSLog(@"%i", [segmentData length]);
    NSLog(@"First write attempt...");
    RTMP_Write(_rtmp, (char *)byteData, len);
    NSLog(@"Successful?");
}
This now crashes at RTMP_Write.
If anyone has any ideas or needs any more information, let me know!

Can I use AVCaptureSession to encode an AAC stream to memory?

I'm writing an iOS app that streams video and audio over the network.
I am using AVCaptureSession to grab raw video frames using AVCaptureVideoDataOutput and encode them in software using x264. This works great.
I wanted to do the same for audio, only I don't need that much control on the audio side, so I wanted to use the built-in hardware encoder to produce an AAC stream. This meant using Audio Converter from the Audio Toolbox layer. In order to do so I put in a handler for AVCaptureAudioDataOutput's audio frames:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // get the audio samples into a common buffer _pcmBuffer
    CMBlockBufferRef blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
    CMBlockBufferGetDataPointer(blockBuffer, 0, NULL, &_pcmBufferSize, &_pcmBuffer);

    // use AudioConverter to
    UInt32 ouputPacketsCount = 1;
    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0].mNumberChannels = 1;
    bufferList.mBuffers[0].mDataByteSize = sizeof(_aacBuffer);
    bufferList.mBuffers[0].mData = _aacBuffer;

    OSStatus st = AudioConverterFillComplexBuffer(_converter, converter_callback, (__bridge void *)self, &ouputPacketsCount, &bufferList, NULL);
    if (0 == st) {
        // ... send bufferList.mBuffers[0].mDataByteSize bytes from _aacBuffer...
    }
}
In this case the callback function for the audio converter is pretty simple (assuming packet sizes and counts are set up properly):
- (void)putPcmSamplesInBufferList:(AudioBufferList *)bufferList withCount:(UInt32 *)count
{
    bufferList->mBuffers[0].mData = _pcmBuffer;
    bufferList->mBuffers[0].mDataByteSize = _pcmBufferSize;
}
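(The actual C callback handed to AudioConverterFillComplexBuffer is just a thin trampoline into that method; roughly like this, with MyAudioEncoder standing in for my class name.)

// Sketch of the C trampoline passed to AudioConverterFillComplexBuffer;
// it simply forwards to the Objective-C method above.
static OSStatus converter_callback(AudioConverterRef inConverter,
                                   UInt32 *ioNumberDataPackets,
                                   AudioBufferList *ioData,
                                   AudioStreamPacketDescription **outDataPacketDescription,
                                   void *inUserData)
{
    MyAudioEncoder *encoder = (__bridge MyAudioEncoder *)inUserData;
    [encoder putPcmSamplesInBufferList:ioData withCount:ioNumberDataPackets];
    return noErr;
}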
And the setup for the audio converter looks like this:
{
    // ...
    AudioStreamBasicDescription pcmASBD = {0};
    pcmASBD.mSampleRate = ((AVAudioSession *)[AVAudioSession sharedInstance]).currentHardwareSampleRate;
    pcmASBD.mFormatID = kAudioFormatLinearPCM;
    pcmASBD.mFormatFlags = kAudioFormatFlagsCanonical;
    pcmASBD.mChannelsPerFrame = 1;
    pcmASBD.mBytesPerFrame = sizeof(AudioSampleType);
    pcmASBD.mFramesPerPacket = 1;
    pcmASBD.mBytesPerPacket = pcmASBD.mBytesPerFrame * pcmASBD.mFramesPerPacket;
    pcmASBD.mBitsPerChannel = 8 * pcmASBD.mBytesPerFrame;

    AudioStreamBasicDescription aacASBD = {0};
    aacASBD.mFormatID = kAudioFormatMPEG4AAC;
    aacASBD.mSampleRate = pcmASBD.mSampleRate;
    aacASBD.mChannelsPerFrame = pcmASBD.mChannelsPerFrame;
    size = sizeof(aacASBD);
    AudioFormatGetProperty(kAudioFormatProperty_FormatInfo, 0, NULL, &size, &aacASBD);

    AudioConverterNew(&pcmASBD, &aacASBD, &_converter);
    // ...
}
This seems pretty straightforward, only IT DOES NOT WORK. Once the AVCaptureSession is running, the audio converter (specifically AudioConverterFillComplexBuffer) returns an 'hwiu' (hardware in use) error. Conversion works fine if the session is stopped, but then I can't capture anything...
I was wondering if there was a way to get an AAC stream out of AVCaptureSession. The options I'm considering are:
Somehow using AVAssetWriterInput to encode audio samples into AAC and then get the encoded packets somehow (not through AVAssetWriter, which would only write to a file).
Reorganizing my app so that it uses AVCaptureSession only on the video side and uses Audio Queues on the audio side. This will make flow control (starting and stopping recording, responding to interruptions) more complicated and I'm afraid that it might cause synching problems between the audio and video. Also, it just doesn't seem like a good design.
Does anyone know if getting the AAC out of AVCaptureSession is possible? Do I have to use Audio Queues here? Could this get me into synching or control problems?
I ended up asking Apple for advice (it turns out you can do that if you have a paid developer account).
It seems that AVCaptureSession grabs a hold of the AAC hardware encoder but only lets you use it to write directly to file.
You can use the software encoder but you have to ask for it specifically instead of using AudioConverterNew:
AudioClassDescription *description = [self
    getAudioClassDescriptionWithType:kAudioFormatMPEG4AAC
                    fromManufacturer:kAppleSoftwareAudioCodecManufacturer];
if (!description) {
    return false;
}
// see the question for setting up pcmASBD and aacASBD
OSStatus st = AudioConverterNewSpecific(&pcmASBD, &aacASBD, 1, description, &_converter);
if (st) {
    NSLog(@"error creating audio converter: %s", OSSTATUS(st));
    return false;
}
with
- (AudioClassDescription *)getAudioClassDescriptionWithType:(UInt32)type
                                            fromManufacturer:(UInt32)manufacturer
{
    static AudioClassDescription desc;

    UInt32 encoderSpecifier = type;
    OSStatus st;
    UInt32 size;

    st = AudioFormatGetPropertyInfo(kAudioFormatProperty_Encoders,
                                    sizeof(encoderSpecifier),
                                    &encoderSpecifier,
                                    &size);
    if (st) {
        NSLog(@"error getting audio format property info: %s", OSSTATUS(st));
        return nil;
    }

    unsigned int count = size / sizeof(AudioClassDescription);
    AudioClassDescription descriptions[count];
    st = AudioFormatGetProperty(kAudioFormatProperty_Encoders,
                                sizeof(encoderSpecifier),
                                &encoderSpecifier,
                                &size,
                                descriptions);
    if (st) {
        NSLog(@"error getting audio format property: %s", OSSTATUS(st));
        return nil;
    }

    for (unsigned int i = 0; i < count; i++) {
        if ((type == descriptions[i].mSubType) &&
            (manufacturer == descriptions[i].mManufacturer)) {
            memcpy(&desc, &(descriptions[i]), sizeof(desc));
            return &desc;
        }
    }

    return nil;
}
The software encoder will take up CPU resources, of course, but will get the job done.
