I am trying to read an audio file (that is not supported by iOS) with ffmpeg and then play it using AVAudioPlayer. It took me a while to get ffmpeg built inside an iOS project, but I finally did using kewlbear/FFmpeg-iOS-build-script.
This is the snippet I have right now, after a lot of searching on the web, including Stack Overflow. One of the best examples I found was here.
I believe this is all the relevant code. I added comments to let you know what I'm doing and where I need something clever to happen.
#import "FFmpegWrapper.h"
#import <AVFoundation/AVFoundation.h>
AVFormatContext *formatContext = NULL;
AVStream *audioStream = NULL;
av_register_all();
avformat_network_init();
avcodec_register_all();
// this is a file located on my NAS
int opened = avformat_open_input(&formatContext, "http://192.168.1.70:50002/m/NDLNA/43729.flac", NULL, NULL);
// can't open file: avformat_open_input returns 0 on success, a negative AVERROR on failure
if (opened != 0) {
avformat_close_input(&formatContext);
}
int streamInfoValue = avformat_find_stream_info(formatContext, NULL);
// can't open stream
if (streamInfoValue < 0)
{
avformat_close_input(&formatContext);
}
// number of streams available
int inputStreamCount = formatContext->nb_streams;
for(unsigned int i = 0; i<inputStreamCount; i++)
{
// I'm only interested in the audio stream
if(formatContext->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO)
{
// found audio stream
audioStream = formatContext->streams[i];
}
}
if(audioStream == NULL) {
// no audio stream
}
AVFrame* frame = av_frame_alloc();
AVCodecContext* codecContext = audioStream->codec;
codecContext->codec = avcodec_find_decoder(codecContext->codec_id);
if (codecContext->codec == NULL)
{
av_free(frame);
avformat_close_input(&formatContext);
// no proper codec found
}
else if (avcodec_open2(codecContext, codecContext->codec, NULL) != 0)
{
av_free(frame);
avformat_close_input(&formatContext);
// could not open the context with the decoder
}
// this is displaying: This stream has 2 channels and a sample rate of 44100Hz
// which makes sense
NSLog(#"This stream has %d channels and a sample rate of %dHz", codecContext->channels, codecContext->sample_rate);
AVPacket packet;
av_init_packet(&packet);
// this is where I try to store in the sound data
NSMutableData *soundData = [[NSMutableData alloc] init];
while (av_read_frame(formatContext, &packet) == 0)
{
if (packet.stream_index == audioStream->index)
{
// Try to decode the packet into a frame
int frameFinished = 0;
avcodec_decode_audio4(codecContext, frame, &frameFinished, &packet);
// Some frames rely on multiple packets, so we have to make sure the frame is finished before
// we can use it
if (frameFinished)
{
// this is where I think something clever needs to be done
// I need to store some bytes, but I can't figure out what exactly and what length?
// should the length be multiplied by the number of channels?
NSData *frameData = [[NSData alloc] initWithBytes:packet.buf->data length:packet.buf->size];
[soundData appendData: frameData];
}
}
// You *must* call av_free_packet() after each call to av_read_frame() or else you'll leak memory
av_free_packet(&packet);
}
// first try to write it to a file, see if that works
// this is indeed writing bytes, but it is unplayable
[soundData writeToFile:@"output.wav" atomically:YES];
NSError *error;
// this is my final goal, playing it with the AVAudioPlayer, but this is giving unclear errors
AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithData:soundData error:&error];
if(player == nil) {
NSLog(@"%@", error); // Domain=NSOSStatusErrorDomain Code=1954115647 "(null)"
} else {
[player prepareToPlay];
[player play];
}
// Some codecs will cause frames to be buffered up in the decoding process. If the CODEC_CAP_DELAY flag
// is set, there can be buffered up frames that need to be flushed, so we'll do that
if (codecContext->codec->capabilities & CODEC_CAP_DELAY)
{
av_init_packet(&packet);
// a NULL data pointer with zero size signals the decoder to flush
packet.data = NULL;
packet.size = 0;
// Decode all the remaining frames in the buffer, until the end is reached
int frameFinished = 0;
while (avcodec_decode_audio4(codecContext, frame, &frameFinished, &packet) >= 0 && frameFinished)
{
}
}
av_free(frame);
avcodec_close(codecContext);
avformat_close_input(&formatContext);
I never really found a solution to this specific problem, and ended up using ap4y/OrigamiEngine instead.
My main reason for wanting to use FFmpeg was to play unsupported audio files (FLAC/OGG) on iOS and tvOS, and OrigamiEngine does the job just fine.
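For anyone who wants to finish the FFmpeg route instead: the bytes to store are the decoded samples in frame->data, not the compressed bytes in packet.buf. A rough sketch of the missing step for the avcodec_decode_audio4-era API used above (interleaved sample formats only; planar formats would need interleaving or libswresample first):
// Inside the frameFinished branch: copy the *decoded* samples.
// av_samples_get_buffer_size() already accounts for the channel count and
// sample format, so no manual multiplication by channels is needed.
int dataSize = av_samples_get_buffer_size(NULL,
                                          codecContext->channels,
                                          frame->nb_samples,
                                          codecContext->sample_fmt,
                                          1);
if (dataSize > 0 && !av_sample_fmt_is_planar(codecContext->sample_fmt)) {
    [soundData appendBytes:frame->data[0] length:dataSize];
}
Even with the right bytes, the result is headerless PCM, which AVAudioPlayer cannot identify; that is exactly what the OSStatus 1954115647 says (it is the four-char code 'typ?', kAudioFileUnsupportedFileTypeError). Prepending a proper WAV header, or feeding the samples to a lower-level API such as Audio Queues, would still be required.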
Related
I've been getting a libyuv crash recently.
I've tried a lot, but nothing helps.
Please help or give me some ideas on how to fix this. Thanks!
I have an iOS project (Objective-C). One of its functions encodes the video stream.
My idea is:
Step 1: Start a timer (20 FPS)
Step 2: Copy and get the bitmap data
Step 3: Convert the bitmap data to YUV I420 (libyuv)
Step 4: Encode to the h264 format (OpenH264)
Step 5: Send the h264 data with RTSP
All of this runs in the foreground.
It works well for 3-4 hours.
But it always crashes after 4+ hours.
I've checked the CPU (39%) and memory (140 MB); both are stable (no memory leak, no CPU spikes, etc.).
I've tried a lot of things, with no luck (including adding try-catch in my project and validating the data size before the conversion line runs).
I found it runs longer if I decrease the frame rate (20 FPS -> 15 FPS).
Do I need to add something after encoding each frame?
Could someone help me or give me some ideas about this? Thanks!
// This function runs in a GCD timer
- (void)processSDLFrame:(NSData *)_frameData {
if (mH264EncoderPtr == NULL) {
[self initEncoder];
return;
}
int argbSize = mMapWidth * mMapHeight * 4;
NSData *frameData = [[NSData alloc] initWithData:_frameData];
if ([frameData length] == 0 || [frameData length] != argbSize) {
NSLog(#"Incorrect frame with size : %ld\n", [frameData length]);
return;
}
SFrameBSInfo info;
memset(&info, 0, sizeof (SFrameBSInfo));
SSourcePicture pic;
memset(&pic, 0, sizeof (SSourcePicture));
pic.iPicWidth = mMapWidth;
pic.iPicHeight = mMapHeight;
pic.uiTimeStamp = [[NSDate date] timeIntervalSince1970];
@try {
libyuv::ConvertToI420(
static_cast<const uint8 *>([frameData bytes]), // sample
argbSize, // sample_size
mDstY, // dst_y
mStrideY, // dst_stride_y
mDstU, // dst_u
mStrideU, // dst_stride_u
mDstV, // dst_v
mStrideV, // dst_stride_v
0, // crop_x
0, // crop_y
mMapWidth, // src_width
mMapHeight, // src_height
mMapWidth, // crop_width
mMapHeight, // crop_height
libyuv::kRotateNone, // rotation
libyuv::FOURCC_ARGB); // fourcc
} @catch (NSException *exception) {
NSLog(@"libyuv::ConvertToI420 - exception:%@", exception.reason);
return;
}
pic.iColorFormat = videoFormatI420;
pic.iStride[0] = mStrideY;
pic.iStride[1] = mStrideU;
pic.iStride[2] = mStrideV;
pic.pData[0] = mDstY;
pic.pData[1] = mDstU;
pic.pData[2] = mDstV;
if (mH264EncoderPtr == NULL) {
NSLog(#"OpenH264Manager - encoder not initialized");
return;
}
int rv = -1;
@try {
rv = mH264EncoderPtr->EncodeFrame(&pic, &info);
} @catch (NSException *exception) {
NSLog( @"NSException caught - mH264EncoderPtr->EncodeFrame" );
NSLog( @"Name: %@", exception.name);
NSLog( @"Reason: %@", exception.reason );
[self deinitEncoder];
return;
}
if (rv != cmResultSuccess) {
NSLog(#"OpenH264Manager - encode failed : %d", rv);
[self deinitEncoder];
return;
}
if (info.eFrameType == videoFrameTypeSkip) {
NSLog(#"OpenH264Manager - drop skipped frame");
return;
}
// handle buffer data
int size = 0;
int layerSize[MAX_LAYER_NUM_OF_FRAME] = { 0 };
for (int layer = 0; layer < info.iLayerNum; layer++) {
for (int i = 0; i < info.sLayerInfo[layer].iNalCount; i++) {
layerSize[layer] += info.sLayerInfo[layer].pNalLengthInByte[i];
}
size += layerSize[layer];
}
uint8 *output = (uint8 *)malloc(size);
size = 0;
for (int layer = 0; layer < info.iLayerNum; layer++) {
memcpy(output + size, info.sLayerInfo[layer].pBsBuf, layerSize[layer]);
size += layerSize[layer];
}
// alloc new buffer for streaming
NSData *newData = [NSData dataWithBytes:output length:size];
// Send the data with RTSP
sendData( newData );
// free output buffer data
free(output);
}
[Jan/08/2020 Update]
I reported this on the Google issue tracker:
https://bugs.chromium.org/p/libyuv/issues/detail?id=853
A Googler gave me feedback:
ARGBToI420 does no allocations. It's similar to a memcpy with a source and destination and a number of pixels to convert.
The most common issues with it are:
1. The destination buffer has been deallocated. Try adding validation that the YUV buffer is valid. Write to the first and last byte of each plane.
This often occurs on shutdown when threads don't shut down in the order you were hoping. A mutex to guard the memory could help.
2. The destination is an odd size and the allocator did not allocate enough memory. When allocating the UV planes, use (width + 1) / 2 for width/stride and (height + 1) / 2 for the height of UV. Allocate stride * height bytes. You could also use an allocator that verifies there are no overreads or overwrites, or a sanitizer like asan/msan.
When screen casting, windows are usually a multiple of 2 pixels on Windows and Linux, but I have seen macOS use odd pixel counts.
As a test you could wrap the function with temporary buffers. Copy the ARGB to a temporary ARGB buffer.
Call ARGBToI420 to a temporary I420 buffer.
Copy the I420 result to the final I420 buffer.
That should give you a clue which buffer/function is failing.
I will try them.
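For illustration, here is a minimal sketch of the allocation rule from suggestion 2 above, reusing mMapWidth/mMapHeight from the code earlier (an assumption about how the destination planes might be allocated, not the poster's actual allocator):
#include <stdlib.h>
// Allocate I420 planes safely even for odd dimensions.
// The chroma planes are subsampled 2x2, so round up with (n + 1) / 2.
int strideY = mMapWidth;
int strideUV = (mMapWidth + 1) / 2;
int chromaHeight = (mMapHeight + 1) / 2;
uint8_t *dstY = (uint8_t *)malloc((size_t)strideY * mMapHeight);
uint8_t *dstU = (uint8_t *)malloc((size_t)strideUV * chromaHeight);
uint8_t *dstV = (uint8_t *)malloc((size_t)strideUV * chromaHeight);
// Suggestion 1: touch the first and last byte of each plane up front to
// validate the buffers before handing them to libyuv.
dstY[0] = dstY[(size_t)strideY * mMapHeight - 1] = 0;
dstU[0] = dstU[(size_t)strideUV * chromaHeight - 1] = 0;
dstV[0] = dstV[(size_t)strideUV * chromaHeight - 1] = 0;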
I'm trying to perform a deep copy of a CMSampleBufferRef from an audio/video connection. I need to use this buffer for delayed processing. Can somebody help here by pointing to sample code?
Thanks
I solved this problem.
I needed access to the sample data for a long period of time.
I tried many ways:
CVPixelBufferRetain -----> program broken
CVPixelBufferPool -----> program broken
CVPixelBufferCreateWithBytes ----> it can solve this problem, but it will reduce performance; Apple does not recommend doing this
CMSampleBufferCreateCopy ---> it is OK, and Apple recommends it.
From the documentation: To maintain optimal performance, some sample buffers directly reference pools of memory that may need to be reused by the device system and other capture inputs. This is frequently the case for uncompressed device native capture where memory blocks are copied as little as possible. If multiple sample buffers reference such pools of memory for too long, inputs will no longer be able to copy new samples into memory and those samples will be dropped. If your application is causing samples to be dropped by retaining the provided CMSampleBuffer objects for too long, but it needs access to the sample data for a long period of time, consider copying the data into a new buffer and then calling CFRelease on the sample buffer (if it was previously retained) so that the memory it references can be reused.
Ref: https://developer.apple.com/reference/avfoundation/avcapturefileoutputdelegate/1390096-captureoutput
This might be what you need:
#pragma mark - captureOutput
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection{
if (connection == m_videoConnection) {
/* if you did not read m_sampleBuffer, you must CFRelease it here,
otherwise samples will be dropped */
if (m_sampleBuffer) {
CFRelease(m_sampleBuffer);
m_sampleBuffer = nil;
}
OSStatus status = CMSampleBufferCreateCopy(kCFAllocatorDefault, sampleBuffer, &m_sampleBuffer);
if (noErr != status) {
m_sampleBuffer = nil;
}
NSLog(#"m_sampleBuffer = %p sampleBuffer= %p",m_sampleBuffer,sampleBuffer);
}
}
#pragma mark - get CVPixelBufferRef to use for a long period of time
- (ACResult) readVideoFrame: (CVPixelBufferRef *)pixelBuffer{
while (1) {
dispatch_sync(m_readVideoData, ^{
if (!m_sampleBuffer) {
_readDataSuccess = NO;
return;
}
CMSampleBufferRef sampleBufferCopy = nil;
OSStatus status = CMSampleBufferCreateCopy(kCFAllocatorDefault, m_sampleBuffer, &sampleBufferCopy);
if ( noErr == status)
{
CVPixelBufferRef buffer = CMSampleBufferGetImageBuffer(sampleBufferCopy);
*pixelBuffer = buffer;
_readDataSuccess = YES;
NSLog(#"m_sampleBuffer = %p ",m_sampleBuffer);
CFRelease(m_sampleBuffer);
m_sampleBuffer = nil;
}
else{
_readDataSuccess = NO;
CFRelease(m_sampleBuffer);
m_sampleBuffer = nil;
}
});
if (_readDataSuccess) {
_readDataSuccess = NO;
return ACResultNoErr;
}
else{
usleep(15*1000);
continue;
}
}
}
Then you can use it like this:
-(void)getCaptureVideoDataToEncode{
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^(){
while (1) {
CVPixelBufferRef buffer = NULL;
ACResult result= [videoCapture readVideoFrame:&buffer];
if (ACResultNoErr == result) {
ACResult error = [videoEncode encoder:buffer outputPacket:&streamPacket];
if (buffer) {
CVPixelBufferRelease(buffer);
buffer = NULL;
}
if (ACResultNoErr == error) {
NSLog(#"encode success");
}
}
}
});
}
I did this. CMSampleBufferCreateCopy can indeed deep copy,
but a new problem appeared:
the captureOutput delegate stops being called. This is most likely because CMSampleBufferCreateCopy is a shallow copy as far as the pixel data goes (the copy retains the same image buffer), so holding copies still starves the capture pool, exactly as the documentation quoted above warns.
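If even the copies starve the pool, the remaining option is a truly deep copy of the pixel data, so nothing references the capture pool at all. A sketch using standard CoreVideo calls (error handling trimmed; the helper name is mine), with the same performance caveat the poster mentions for CVPixelBufferCreateWithBytes:
// Deep-copies the pixel data of a CVPixelBufferRef into a freshly allocated
// buffer. The caller releases the result with CVPixelBufferRelease.
static CVPixelBufferRef DeepCopyPixelBuffer(CVPixelBufferRef src) {
    CVPixelBufferRef dst = NULL;
    CVPixelBufferCreate(kCFAllocatorDefault,
                        CVPixelBufferGetWidth(src),
                        CVPixelBufferGetHeight(src),
                        CVPixelBufferGetPixelFormatType(src),
                        NULL, &dst);
    if (dst == NULL) return NULL;
    CVPixelBufferLockBaseAddress(src, kCVPixelBufferLock_ReadOnly);
    CVPixelBufferLockBaseAddress(dst, 0);
    if (CVPixelBufferIsPlanar(src)) {
        // Copy each plane row by row; bytes-per-row can differ between buffers
        for (size_t p = 0; p < CVPixelBufferGetPlaneCount(src); p++) {
            uint8_t *s = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(src, p);
            uint8_t *d = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(dst, p);
            size_t srcBPR = CVPixelBufferGetBytesPerRowOfPlane(src, p);
            size_t dstBPR = CVPixelBufferGetBytesPerRowOfPlane(dst, p);
            size_t rows = CVPixelBufferGetHeightOfPlane(src, p);
            for (size_t r = 0; r < rows; r++)
                memcpy(d + r * dstBPR, s + r * srcBPR, MIN(srcBPR, dstBPR));
        }
    } else {
        uint8_t *s = (uint8_t *)CVPixelBufferGetBaseAddress(src);
        uint8_t *d = (uint8_t *)CVPixelBufferGetBaseAddress(dst);
        size_t srcBPR = CVPixelBufferGetBytesPerRow(src);
        size_t dstBPR = CVPixelBufferGetBytesPerRow(dst);
        size_t rows = CVPixelBufferGetHeight(src);
        for (size_t r = 0; r < rows; r++)
            memcpy(d + r * dstBPR, s + r * srcBPR, MIN(srcBPR, dstBPR));
    }
    CVPixelBufferUnlockBaseAddress(dst, 0);
    CVPixelBufferUnlockBaseAddress(src, kCVPixelBufferLock_ReadOnly);
    return dst;
}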
I have a project where I need to decode h264 video from a live network stream and eventually end up with a texture I can display in another framework (Unity3D) on iOS devices. I can successfully decode the video using VTDecompressionSession and then grab the texture with CVMetalTextureCacheCreateTextureFromImage (or the OpenGL variant). It works great when I use a low-latency encoder and the image buffers come out in display order; however, when I use the regular encoder the image buffers do not come out in display order, and reordering the image buffers is apparently far more difficult than I expected.
The first attempt was to set the VTDecodeFrameFlags with kVTDecodeFrame_EnableAsynchronousDecompression and kVTDecodeFrame_EnableTemporalProcessing... However, it turns out that VTDecompressionSession can choose to ignore the flag and do whatever it wants... and in my case, it chooses to ignore the flag and still outputs the buffer in encoder order (not display order). Essentially useless.
The next attempt was to associate the image buffers with the presentation time stamp and then throw them into a vector which would allow me to grab the image buffer I needed when I create the texture. The problem seems to be that the image buffer that goes into the VTDecompressionSession, which is associated with a time stamp, is no longer the same buffer that comes out, essentially making the time stamp useless.
For example, going into the decoder...
VTDecodeFrameFlags flags = kVTDecodeFrame_EnableAsynchronousDecompression;
VTDecodeInfoFlags flagOut;
// Presentation time stamp to be passed with the buffer
NSNumber *nsPts = [NSNumber numberWithDouble:pts];
VTDecompressionSessionDecodeFrame(_decompressionSession, sampleBuffer, flags,
(void*)CFBridgingRetain(nsPts), &flagOut);
On the callback side...
void decompressionSessionDecodeFrameCallback(void *decompressionOutputRefCon, void *sourceFrameRefCon, OSStatus status, VTDecodeInfoFlags infoFlags, CVImageBufferRef imageBuffer, CMTime presentationTimeStamp, CMTime presentationDuration)
{
// The presentation time stamp...
// No longer seems to be associated with the buffer that it went in with!
NSNumber* pts = CFBridgingRelease(sourceFrameRefCon);
}
When ordered, the time stamps on the callback side increase monotonically at the expected rate, but the buffers are not in the right order. Does anyone see where I am making an error here? Or know how to determine the order of the buffers on the callback side? At this point I have tried just about everything I can think of.
In my case, the problem wasn't with VTDecompressionSession, it was a problem with the demuxer getting the wrong PTS. While I couldn't get VTDecompressionSession to put out the frames in temporal (display) order with the kVTDecodeFrame_EnableAsynchronousDecompression and kVTDecodeFrame_EnableTemporalProcessing flags, I could sort the frames myself based on PTS with a small vector.
First, make sure you associate all of your timing information with your CMSampleBuffer along with the block buffer so you receive it in the VTDecompressionSession callback.
// Wrap our CMBlockBuffer in a CMSampleBuffer...
CMSampleBufferRef sampleBuffer;
CMTime duration = ...;
CMTime presentationTimeStamp = ...;
CMTime decompressTimeStamp = ...;
CMSampleTimingInfo timingInfo{duration, presentationTimeStamp, decompressTimeStamp};
_sampleTimingArray[0] = timingInfo;
_sampleSizeArray[0] = nalLength;
// Wrap the CMBlockBuffer...
status = CMSampleBufferCreate(kCFAllocatorDefault, blockBuffer, true, NULL, NULL, _formatDescription, 1, 1, _sampleTimingArray, 1, _sampleSizeArray, &sampleBuffer);
Then, decode the frame... It is worth trying to get the frames out in display order with the flags.
VTDecodeFrameFlags flags = kVTDecodeFrame_EnableAsynchronousDecompression | kVTDecodeFrame_EnableTemporalProcessing;
VTDecodeInfoFlags flagOut;
VTDecompressionSessionDecodeFrame(_decompressionSession, sampleBuffer, flags,
(void*)CFBridgingRetain(NULL), &flagOut);
On the callback side of things, we need a way of sorting the CVImageBufferRefs we receive. I use a struct that contains the CVImageBufferRef and the PTS. Then a vector with a size of two that will do the actual sorting.
struct Buffer
{
CVImageBufferRef imageBuffer = NULL;
double pts = 0;
};
std::vector <Buffer> _buffer;
We also need a way to sort the Buffers. Always writing to and reading from the index with the lowest PTS works well.
-(int) getMinIndex
{
if(_buffer[0].pts > _buffer[1].pts)
{
return 1;
}
return 0;
}
In the callback, we need to fill the vector with Buffers...
void decompressionSessionDecodeFrameCallback(void *decompressionOutputRefCon, void *sourceFrameRefCon, OSStatus status, VTDecodeInfoFlags infoFlags, CVImageBufferRef imageBuffer, CMTime presentationTimeStamp, CMTime presentationDuration)
{
StreamManager *streamManager = (__bridge StreamManager *)decompressionOutputRefCon;
@synchronized(streamManager)
{
if (status != noErr)
{
NSError *error = [NSError errorWithDomain:NSOSStatusErrorDomain code:status userInfo:nil];
NSLog(#"Decompressed error: %#", error);
}
else
{
// Get the PTS
double pts = CMTimeGetSeconds(presentationTimeStamp);
// Fill our buffer initially
if(!streamManager->_bufferReady)
{
Buffer buffer;
buffer.pts = pts;
buffer.imageBuffer = imageBuffer;
CVBufferRetain(buffer.imageBuffer);
streamManager->_buffer[streamManager->_bufferIndex++] = buffer;
}
else
{
// Push new buffers to the index with the lowest PTS
int index = [streamManager getMinIndex];
// Release the old CVImageBufferRef
CVBufferRelease(streamManager->_buffer[index].imageBuffer);
Buffer buffer;
buffer.pts = pts;
buffer.imageBuffer = imageBuffer;
// Retain the new CVImageBufferRef
CVBufferRetain(buffer.imageBuffer);
streamManager->_buffer[index] = buffer;
}
// Wrap around the buffer when initialized
// _bufferWindow = 2
if(streamManager->_bufferIndex == streamManager->_bufferWindow)
{
streamManager->_bufferReady = YES;
streamManager->_bufferIndex = 0;
}
}
}
}
Finally we need to drain the Buffers in temporal (display) order...
- (void)drainBuffer
{
@synchronized(self)
{
if(_bufferReady)
{
// Drain buffers from the index with the lowest PTS
int index = [self getMinIndex];
Buffer buffer = _buffer[index];
// Do something useful with the buffer now in display order
}
}
}
I would like to improve upon that answer a bit. While the outlined solution works, it requires knowledge of the number of frames needed to produce an output frame. The example uses a buffer size of 2, but in my case I needed a buffer size of 3.
To avoid having to specify this in advance, one can make use of the fact that frames (in display order) align exactly in terms of pts/duration, i.e. the end of one frame is exactly the beginning of the next. Thus one can simply accumulate frames until there is no "gap" at the beginning, then pop the first frame, and so on. One can also take the pts of the first frame (which is always an I-frame) as the initial "head" (as it does not have to be zero...).
Here is some code that does this:
#include <CoreMedia/CMTime.h>
#include <CoreVideo/CVImageBuffer.h>
#include <boost/container/flat_set.hpp>
#include <boost/optional.hpp>
inline bool operator<(const CMTime& left, const CMTime& right)
{
return CMTimeCompare(left, right) < 0;
}
inline bool operator==(const CMTime& left, const CMTime& right)
{
return CMTimeCompare(left, right) == 0;
}
inline CMTime operator+(const CMTime& left, const CMTime& right)
{
return CMTimeAdd(left, right);
}
class reorder_buffer_t
{
public:
struct entry_t
{
CFGuard<CVImageBufferRef> image;
CMTime pts;
CMTime duration;
bool operator<(const entry_t& other) const
{
return pts < other.pts;
}
};
private:
typedef boost::container::flat_set<entry_t> buffer_t;
public:
reorder_buffer_t()
{
}
void push(entry_t entry)
{
if (!_head)
_head = entry.pts;
_buffer.insert(std::move(entry));
}
bool empty() const
{
return _buffer.empty();
}
bool ready() const
{
return !empty() && _buffer.begin()->pts == _head;
}
entry_t pop()
{
assert(ready());
auto entry = *_buffer.begin();
_buffer.erase(_buffer.begin());
_head = entry.pts + entry.duration;
return entry;
}
void clear()
{
_buffer.clear();
_head = boost::none;
}
private:
boost::optional<CMTime> _head;
buffer_t _buffer;
};
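A usage sketch for the callback side (assuming CFGuard is an RAII retain/release wrapper, as the class above implies, and that the stream manager owns a reorder_buffer_t member called _reorderBuffer):
// Push every decoded frame, then pop whatever is now contiguous.
reorder_buffer_t::entry_t entry;
entry.image = CFGuard<CVImageBufferRef>(imageBuffer); // assumed to retain
entry.pts = presentationTimeStamp;
entry.duration = presentationDuration;
_reorderBuffer.push(std::move(entry));
// Frames come out strictly in display order, with no fixed window size.
while (_reorderBuffer.ready())
{
    reorder_buffer_t::entry_t next = _reorderBuffer.pop();
    // ...display or enqueue next.image...
}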
Here's a solution that works with any required buffer size, and also does not need any 3rd party libraries. My C++ code might not be the best, but it works.
We create a Buffer struct to identify the buffers by pts:
struct Buffer
{
CVImageBufferRef imageBuffer = NULL;
uint64_t pts = 0;
};
In our decoder, we need to keep track of the buffers, and what pts we want to release next:
@property (nonatomic) std::vector <Buffer> buffers;
@property (nonatomic, assign) uint64_t nextExpectedPts;
Now we are ready to handle the buffers coming in. In my case the buffers were provided asynchronously. Make sure you provide the correct duration and presentation timestamp values to the decompression session to be able to sort the buffers properly:
-(void)handleImageBuffer:(CVImageBufferRef)imageBuffer pts:(CMTime)presentationTimeStamp duration:(uint64_t)duration {
//Situation 1, we can directly pass over this buffer
if (self.nextExpectedPts == presentationTimeStamp.value || duration == 0) {
[self sendImageBuffer:imageBuffer duration:duration];
return;
}
//Situation 2, we got this buffer too fast. We will store it, but first we check if we have already stored the expected buffer
Buffer futureBuffer = [self bufferWithImageBuffer:imageBuffer pts:presentationTimeStamp.value];
int smallestPtsInBufferIndex = [self getSmallestPtsBufferIndex];
if (smallestPtsInBufferIndex >= 0 && self.nextExpectedPts == self.buffers[smallestPtsInBufferIndex].pts) {
//We found the next buffer, lets store the current buffer and return this one
Buffer bufferWithSmallestPts = self.buffers[smallestPtsInBufferIndex];
[self sendImageBuffer:bufferWithSmallestPts.imageBuffer duration:duration];
CVBufferRelease(bufferWithSmallestPts.imageBuffer);
[self setBuffer:futureBuffer atIndex:smallestPtsInBufferIndex];
} else {
//We don't have the next buffer yet, let's store this one in a new slot
[self setBuffer:futureBuffer atIndex:self.buffers.size()];
}
}
-(Buffer)bufferWithImageBuffer:(CVImageBufferRef)imageBuffer pts:(uint64_t)pts {
Buffer futureBuffer = Buffer();
futureBuffer.pts = pts;
futureBuffer.imageBuffer = imageBuffer;
CVBufferRetain(futureBuffer.imageBuffer);
return futureBuffer;
}
- (void)sendImageBuffer:(CVImageBufferRef)imageBuffer duration:(uint64_t)duration {
//Send your buffer to wherever you need it here
self.nextExpectedPts += duration;
}
-(int) getSmallestPtsBufferIndex
{
int minIndex = -1;
uint64_t minPts = 0;
for(int i=0;i<_buffers.size();i++) {
if (_buffers[i].pts < minPts || minPts == 0) {
minPts = _buffers[i].pts;
minIndex = i;
}
}
return minIndex;
}
- (void)setBuffer:(Buffer)buffer atIndex:(int)index {
if (_buffers.size() <= index) {
_buffers.push_back(buffer);
} else {
_buffers[index] = buffer;
}
}
Do not forget to release all the buffers in the vector when deallocating your decoder, and if you're working with a looping file for example, keep track of when the file has fully looped to reset the nextExpectedPts and such.
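A sketch of that cleanup (a hypothetical dealloc; where it belongs depends on how the decoder is torn down):
- (void)dealloc {
    // Release every CVImageBufferRef still parked in the reorder vector
    for (size_t i = 0; i < _buffers.size(); i++) {
        CVBufferRelease(_buffers[i].imageBuffer);
    }
    _buffers.clear();
}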
So, basically I want to play some audio files (mp3 and caf mostly), but the callback never gets called, except when I call it myself to prime the queue.
Here's my data struct:
struct AQPlayerState
{
CAStreamBasicDescription mDataFormat;
AudioQueueRef mQueue;
AudioQueueBufferRef mBuffers[kBufferNum];
AudioFileID mAudioFile;
UInt32 bufferByteSize;
SInt64 mCurrentPacket;
UInt32 mNumPacketsToRead;
AudioStreamPacketDescription *mPacketDescs;
bool mIsRunning;
};
Here's my callback function:
static void HandleOutputBuffer (void *aqData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer)
{
NSLog(#"HandleOutput");
AQPlayerState *pAqData = (AQPlayerState *) aqData;
if (pAqData->mIsRunning == false) return;
UInt32 numBytesReadFromFile;
UInt32 numPackets = pAqData->mNumPacketsToRead;
AudioFileReadPackets (pAqData->mAudioFile,
false,
&numBytesReadFromFile,
pAqData->mPacketDescs,
pAqData->mCurrentPacket,
&numPackets,
inBuffer->mAudioData);
if (numPackets > 0) {
inBuffer->mAudioDataByteSize = numBytesReadFromFile;
AudioQueueEnqueueBuffer (pAqData->mQueue,
inBuffer,
(pAqData->mPacketDescs ? numPackets : 0),
pAqData->mPacketDescs);
pAqData->mCurrentPacket += numPackets;
} else {
// AudioQueueStop(pAqData->mQueue, false);
// AudioQueueDispose(pAqData->mQueue, true);
// AudioFileClose (pAqData->mAudioFile);
// free(pAqData->mPacketDescs);
// free(pAqData->mFloatBuffer);
pAqData->mIsRunning = false;
}
}
And here's my method:
- (void)playFile
{
AQPlayerState aqData;
// get the source file
NSString *p = [[NSBundle mainBundle] pathForResource:@"1_Female" ofType:@"mp3"];
NSURL *url2 = [NSURL fileURLWithPath:p];
CFURLRef srcFile = (__bridge CFURLRef)url2;
OSStatus result = AudioFileOpenURL(srcFile, 0x1/*fsRdPerm*/, 0/*inFileTypeHint*/, &aqData.mAudioFile);
CFRelease (srcFile);
CheckError(result, "Error opinning sound file");
UInt32 size = sizeof(aqData.mDataFormat);
CheckError(AudioFileGetProperty(aqData.mAudioFile, kAudioFilePropertyDataFormat, &size, &aqData.mDataFormat),
"Error getting file's data format");
CheckError(AudioQueueNewOutput(&aqData.mDataFormat, HandleOutputBuffer, &aqData, CFRunLoopGetCurrent(), kCFRunLoopCommonModes, 0, &aqData.mQueue),
"Error AudioQueueNewOutPut");
// we need to calculate how many packets we read at a time and how big a buffer we need
// we base this on the size of the packets in the file and an approximate duration for each buffer
{
bool isFormatVBR = (aqData.mDataFormat.mBytesPerPacket == 0 || aqData.mDataFormat.mFramesPerPacket == 0);
// first check to see what the max size of a packet is - if it is bigger
// than our allocation default size, that needs to become larger
UInt32 maxPacketSize;
size = sizeof(maxPacketSize);
CheckError(AudioFileGetProperty(aqData.mAudioFile, kAudioFilePropertyPacketSizeUpperBound, &size, &maxPacketSize),
"Error getting max packet size");
// adjust buffer size to represent about a second of audio based on this format
CalculateBytesForTime(aqData.mDataFormat, maxPacketSize, 1.0/*seconds*/, &aqData.bufferByteSize, &aqData.mNumPacketsToRead);
if (isFormatVBR) {
aqData.mPacketDescs = new AudioStreamPacketDescription [aqData.mNumPacketsToRead];
} else {
aqData.mPacketDescs = NULL; // we don't provide packet descriptions for constant bit rate formats (like linear PCM)
}
printf ("Buffer Byte Size: %d, Num Packets to Read: %d\n", (int)aqData.bufferByteSize, (int)aqData.mNumPacketsToRead);
}
// if the file has a magic cookie, we should get it and set it on the AQ
size = sizeof(UInt32);
result = AudioFileGetPropertyInfo(aqData.mAudioFile, kAudioFilePropertyMagicCookieData, &size, NULL);
if (!result && size) {
char* cookie = new char [size];
CheckError(AudioFileGetProperty(aqData.mAudioFile, kAudioFilePropertyMagicCookieData, &size, cookie),
"Error getting cookie from file");
CheckError(AudioQueueSetProperty(aqData.mQueue, kAudioQueueProperty_MagicCookie, cookie, size),
"Error setting cookie to file");
delete[] cookie;
}
aqData.mCurrentPacket = 0;
for (int i = 0; i < kBufferNum; ++i) {
CheckError(AudioQueueAllocateBuffer (aqData.mQueue,
aqData.bufferByteSize,
&aqData.mBuffers[i]),
"Error AudioQueueAllocateBuffer");
HandleOutputBuffer (&aqData,
aqData.mQueue,
aqData.mBuffers[i]);
}
// set queue's gain
Float32 gain = 1.0;
CheckError(AudioQueueSetParameter (aqData.mQueue,
kAudioQueueParam_Volume,
gain),
"Error AudioQueueSetParameter");
aqData.mIsRunning = true;
CheckError(AudioQueueStart(aqData.mQueue,
NULL),
"Error AudioQueueStart");
}
And the output when I press play:
Buffer Byte Size: 40310, Num Packets to Read: 38
HandleOutput start
HandleOutput start
HandleOutput start
I tried replacing CFRunLoopGetCurrent() with CFRunLoopGetMain() and kCFRunLoopCommonModes with kCFRunLoopDefaultMode, but nothing changed.
Shouldn't the primed buffers start playing right away when I start the queue?
When I start the queue, no callbacks are fired.
What am I doing wrong? Thanks for any ideas.
What you are basically trying to do here is a basic example of audio playback using Audio Queues. Without looking at your code in detail to see what's missing (that could take a while), I'd rather recommend that you follow the steps in this basic sample code that does exactly what you're doing, without the extras that aren't really relevant (for example, why are you trying to add audio gain?).
Somewhere else you were trying to play audio using audio units. Audio units are more complex than basic audio queue playback, and I wouldn't attempt them before being very comfortable with audio queues. But you can look at this example project for a basic example of audio queues.
In general, when it comes to Core Audio programming on iOS, it's best to take your time with the basic examples and build your way up. The problem with a lot of tutorials online is that they add extra stuff and often mix it with Obj-C code, when Core Audio is purely C code (i.e. the extra stuff won't add anything to the learning process). I strongly recommend you go over the book Learning Core Audio if you haven't already. All the sample code is available online, but you can also clone it from this repo for convenience. That's how I learned Core Audio. It takes time :)
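One concrete thing worth checking in the code above (an observation, not a guaranteed diagnosis): aqData is a stack variable, so it is destroyed as soon as playFile returns, which leaves the queue's user-data pointer dangling before the run loop ever gets a chance to fire the callback. Keeping the state alive, for example as an instance variable, avoids that whole class of problem; a sketch (the Player class name is hypothetical):
@interface Player : NSObject {
    AQPlayerState _aqData; // lives as long as the Player object does
}
@end
// ...then inside playFile use &_aqData everywhere aqData was used, e.g.:
// AudioQueueNewOutput(&_aqData.mDataFormat, HandleOutputBuffer, &_aqData,
//                     CFRunLoopGetCurrent(), kCFRunLoopCommonModes, 0, &_aqData.mQueue);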
I'm writing an iOS app that streams video and audio over the network.
I am using AVCaptureSession to grab raw video frames using AVCaptureVideoDataOutput and encode them in software using x264. This works great.
I wanted to do the same for audio, only I don't need that much control on the audio side, so I wanted to use the built-in hardware encoder to produce an AAC stream. This meant using an Audio Converter from the Audio Toolbox layer. In order to do so I put in a handler for AVCaptureAudioDataOutput's audio frames:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection
{
// get the audio samples into a common buffer _pcmBuffer
CMBlockBufferRef blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
CMBlockBufferGetDataPointer(blockBuffer, 0, NULL, &_pcmBufferSize, &_pcmBuffer);
// use AudioConverter to convert the PCM samples to AAC
UInt32 outputPacketsCount = 1;
AudioBufferList bufferList;
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0].mNumberChannels = 1;
bufferList.mBuffers[0].mDataByteSize = sizeof(_aacBuffer);
bufferList.mBuffers[0].mData = _aacBuffer;
OSStatus st = AudioConverterFillComplexBuffer(_converter, converter_callback, (__bridge void *) self, &outputPacketsCount, &bufferList, NULL);
if (0 == st) {
// ... send bufferList.mBuffers[0].mDataByteSize bytes from _aacBuffer...
}
}
In this case the callback function for the audio converter is pretty simple (assuming packet sizes and counts are set up properly):
- (void) putPcmSamplesInBufferList:(AudioBufferList *)bufferList withCount:(UInt32 *)count
{
bufferList->mBuffers[0].mData = _pcmBuffer;
bufferList->mBuffers[0].mDataByteSize = _pcmBufferSize;
}
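The converter_callback passed to AudioConverterFillComplexBuffer above is not shown in the question; a sketch of what it presumably looks like, given the helper method just shown (the class name and the single-shot assumption are mine):
// AudioConverterComplexInputDataProc: hands the captured PCM to the converter.
static OSStatus converter_callback(AudioConverterRef inConverter,
                                   UInt32 *ioNumberDataPackets,
                                   AudioBufferList *ioData,
                                   AudioStreamPacketDescription **outDataPacketDescription,
                                   void *inUserData)
{
    MyAudioEncoder *encoder = (__bridge MyAudioEncoder *)inUserData; // hypothetical class
    [encoder putPcmSamplesInBufferList:ioData withCount:ioNumberDataPackets];
    // A complete callback must also set *ioNumberDataPackets to the number of
    // PCM packets actually supplied (bytes / mBytesPerPacket for LPCM), and
    // return an error once no more input is available, or the converter will
    // keep calling back for more data.
    return noErr;
}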
And the setup for the audio converter looks like this:
{
// ...
AudioStreamBasicDescription pcmASBD = {0};
pcmASBD.mSampleRate = ((AVAudioSession *) [AVAudioSession sharedInstance]).currentHardwareSampleRate;
pcmASBD.mFormatID = kAudioFormatLinearPCM;
pcmASBD.mFormatFlags = kAudioFormatFlagsCanonical;
pcmASBD.mChannelsPerFrame = 1;
pcmASBD.mBytesPerFrame = sizeof(AudioSampleType);
pcmASBD.mFramesPerPacket = 1;
pcmASBD.mBytesPerPacket = pcmASBD.mBytesPerFrame * pcmASBD.mFramesPerPacket;
pcmASBD.mBitsPerChannel = 8 * pcmASBD.mBytesPerFrame;
AudioStreamBasicDescription aacASBD = {0};
aacASBD.mFormatID = kAudioFormatMPEG4AAC;
aacASBD.mSampleRate = pcmASBD.mSampleRate;
aacASBD.mChannelsPerFrame = pcmASBD.mChannelsPerFrame;
size = sizeof(aacASBD);
AudioFormatGetProperty(kAudioFormatProperty_FormatInfo, 0, NULL, &size, &aacASBD);
AudioConverterNew(&pcmASBD, &aacASBD, &_converter);
// ...
}
This seems pretty straightforward, only IT DOES NOT WORK. Once the AVCaptureSession is running, the audio converter (specifically AudioConverterFillComplexBuffer) returns an 'hwiu' (hardware in use) error. Conversion works fine if the session is stopped, but then I can't capture anything...
I was wondering if there was a way to get an AAC stream out of AVCaptureSession. The options I'm considering are:
Somehow using AVAssetWriterInput to encode audio samples into AAC and then get the encoded packets somehow (not through AVAssetWriter, which would only write to a file).
Reorganizing my app so that it uses AVCaptureSession only on the video side and uses Audio Queues on the audio side. This will make flow control (starting and stopping recording, responding to interruptions) more complicated and I'm afraid that it might cause synching problems between the audio and video. Also, it just doesn't seem like a good design.
Does anyone know if getting the AAC out of AVCaptureSession is possible? Do I have to use Audio Queues here? Could this get me into synching or control problems?
I ended up asking Apple for advice (it turns out you can do that if you have a paid developer account).
It seems that AVCaptureSession grabs a hold of the AAC hardware encoder but only lets you use it to write directly to file.
You can use the software encoder but you have to ask for it specifically instead of using AudioConverterNew:
AudioClassDescription *description = [self
getAudioClassDescriptionWithType:kAudioFormatMPEG4AAC
fromManufacturer:kAppleSoftwareAudioCodecManufacturer];
if (!description) {
return false;
}
// see the question for how pcmASBD and aacASBD are set up
OSStatus st = AudioConverterNewSpecific(&pcmASBD, &aacASBD, 1, description, &_converter);
if (st) {
NSLog(#"error creating audio converter: %s", OSSTATUS(st));
return false;
}
with
- (AudioClassDescription *)getAudioClassDescriptionWithType:(UInt32)type
fromManufacturer:(UInt32)manufacturer
{
static AudioClassDescription desc;
UInt32 encoderSpecifier = type;
OSStatus st;
UInt32 size;
st = AudioFormatGetPropertyInfo(kAudioFormatProperty_Encoders,
sizeof(encoderSpecifier),
&encoderSpecifier,
&size);
if (st) {
NSLog(#"error getting audio format propery info: %s", OSSTATUS(st));
return nil;
}
unsigned int count = size / sizeof(AudioClassDescription);
AudioClassDescription descriptions[count];
st = AudioFormatGetProperty(kAudioFormatProperty_Encoders,
sizeof(encoderSpecifier),
&encoderSpecifier,
&size,
descriptions);
if (st) {
NSLog(#"error getting audio format propery: %s", OSSTATUS(st));
return nil;
}
for (unsigned int i = 0; i < count; i++) {
if ((type == descriptions[i].mSubType) &&
(manufacturer == descriptions[i].mManufacturer)) {
memcpy(&desc, &(descriptions[i]), sizeof(desc));
return &desc;
}
}
return nil;
}
The software encoder will take up CPU resources, of course, but will get the job done.
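OSSTATUS here is presumably the poster's helper macro for printing an OSStatus as a readable string; it is not shown in the answer. A minimal sketch of such a helper (the name and fallback behavior are my assumptions):
#include <ctype.h>
#include <stdio.h>
#include <string.h>
#include <CoreFoundation/CoreFoundation.h>
// Render an OSStatus as its four-character code (e.g. 'hwiu'), falling back
// to a decimal string when the bytes are not printable.
static const char *StatusString(OSStatus status, char outBuf[16]) {
    uint32_t code = CFSwapInt32HostToBig((uint32_t)status);
    memcpy(outBuf, &code, 4);
    outBuf[4] = '\0';
    for (int i = 0; i < 4; i++) {
        if (!isprint((unsigned char)outBuf[i])) {
            snprintf(outBuf, 16, "%d", (int)status);
            break;
        }
    }
    return outBuf;
}
Usage: char buf[16]; NSLog(@"error: %s", StatusString(st, buf));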