Okay, here's the scenario: I have a real-time recording app using ExtAudioFileWriteAsync targeted for iOS 4.3. The first time I record with the app, it works perfectly. If I press stop, then record again, better than half the time I will get an EXC_BAD_ACCESS in AudioRingBuffer::GetTimeBounds right when recording starts.
That is to say that ExtAudioFileWriteAsync fails on GetTimeBounds when starting the second recording. Here is the bit of code that is fired when recording starts, which creates the ExtAudioFile reference:
- (void)setActive:(NSString *)file
{
    if (mExtAFRef) {
        ExtAudioFileDispose(mExtAFRef);
        mExtAFRef = nil;
        NSLog(@"mExtAFRef Disposed.");
    }
    if (mOutputAudioFile) {
        ExtAudioFileDispose(mOutputAudioFile);
        mOutputAudioFile = nil;
        NSLog(@"mOutputAudioFile Disposed.");
    }

    NSURL *outUrl = [NSURL fileURLWithPath:file];

    OSStatus setupErr = ExtAudioFileCreateWithURL((CFURLRef)outUrl, kAudioFileWAVEType, &mOutputFormat, NULL, kAudioFileFlags_EraseFile, &mOutputAudioFile);
    NSAssert(setupErr == noErr, @"Couldn't create file for writing");

    setupErr = ExtAudioFileSetProperty(mOutputAudioFile, kExtAudioFileProperty_ClientDataFormat, sizeof(AudioStreamBasicDescription), &audioFormat);
    NSAssert(setupErr == noErr, @"Couldn't set the client data format on the file");

    setupErr = ExtAudioFileWriteAsync(mOutputAudioFile, 0, NULL);
    NSAssert(setupErr == noErr, @"Couldn't initialize write buffers for audio file");

    isActive = TRUE;
}
Does anyone have any thoughts whatsoever on what may be causing this? I assume, given EXC_BAD_ACCESS, that it is a memory leak or something's ref count getting knocked to zero, but I can't for the life of me figure out what it might be, and the Googles are drawing a complete blank. I posted this same thing on the Apple dev forum for CoreAudio, but not a soul took pity on me, even to make a pithy comment. HALP!
EDIT: Found the problem. The error was happening when ExtAudioFileWriteAsync was trying to write a new file before the old file was "optimized." A little mutex love solved the problem.
I'm having almost the same issue in a recording app. Can anyone please explain how to solve it with "a little mutex love"?
EDIT
Thanks to Chris Randall I managed to solve my problems. This is how I implemented the mutex:
#include <pthread.h>
static pthread_mutex_t outputAudioFileLock;
then in my init:
pthread_mutex_init(&outputAudioFileLock,NULL);
and in the callback:
if (THIS.mIsRecording) {
    if (0 == pthread_mutex_trylock(&outputAudioFileLock)) {
        OSStatus err = ExtAudioFileWriteAsync(THIS.mRecordFile, inNumberFrames, THIS.recordingBufferList);
        if (noErr != err) {
            NSLog(@"ExtAudioFileWriteAsync Failed: %ld!!!", (long)err);
        }
        pthread_mutex_unlock(&outputAudioFileLock);
    }
}
Finally, in the stopRecord method:
if (mRecordFile) {
    pthread_mutex_lock(&outputAudioFileLock);
    OSStatus setupErr = ExtAudioFileDispose(mRecordFile);
    mRecordFile = NULL;
    pthread_mutex_unlock(&outputAudioFileLock);
    NSAssert(setupErr == noErr, @"Couldn't dispose audio file");
    NSLog(@"Stopping Record");
    mIsRecording = NO;
}
Thanks again for the help; I hope this saves someone some time.
Include pthread.h, define a pthread_mutex_t outputAudioFileLock, and initialize it with pthread_mutex_init in your constructor. Then, in your audio callback, when you want to write, do something like this (adjusting the variables according to what you're using):
if (0 == pthread_mutex_trylock(&outputAudioFileLock)) {
    OSStatus err = ExtAudioFileWriteAsync(mOutputAudioFile, frames, bufferList);
    if (noErr != err) {
        NSLog(@"ExtAudioFileWriteAsync Failed: %ld!!!", (long)err);
    }
    pthread_mutex_unlock(&outputAudioFileLock);
}
The pthread_mutex_trylock checks whether the mutex is already locked (and thus the file is being "optimized"). If it is not, it allows the write. I then wrap both the audio file setup (as seen above) and the audio file cleanup like so, so that the mutex is held whenever the file system is doing anything that could cause the AudioRingBuffer BAD_ACCESS error:
pthread_mutex_lock(&outputAudioFileLock);
OSStatus setupErr = ExtAudioFileDispose(mOutputAudioFile);
mOutputAudioFile = NULL;
pthread_mutex_unlock(&outputAudioFileLock);
NSAssert(setupErr == noErr, @"Couldn't dispose audio file");
This holds the lock around the setup and cleanup code so that the callback can't write to a file that is being "optimized," which is the source of the error. Hope this helps!
EDIT: I do my audio callback in the Obj-C part of the audio controller; if you're doing it in the C++ part, this would be structured quite a bit differently; perhaps someone else can answer that?
I have implemented an audio call using pjsip and it works properly, but video calls do not work.
I applied the following changes:
// SIP init
pj_status_t sip_startup(app_config_t *app_config)
{
    pj_status_t status;

    pjsua_config cfg;
    pjsua_config_default(&cfg);
    cfg.cb.on_incoming_call = &on_incoming_call;
    cfg.cb.on_call_media_state = &on_call_media_state;
    cfg.cb.on_call_state = &on_call_state;
    cfg.cb.on_reg_state2 = &on_reg_state2;
    cfg.cb.on_call_media_event = &on_call_media_event;

    // Init the logging config structure
    pjsua_logging_config log_cfg;
    pjsua_logging_config_default(&log_cfg);
    log_cfg.console_level = 4;

    // Init the media config
    pjsua_media_config me_cfg;
    pjsua_media_config_default(&me_cfg);

    // Init pjsua
    status = pjsua_init(&cfg, &log_cfg, &me_cfg);
    if (status != PJ_SUCCESS) error_exit("Error in pjsua_init()", status);

    return status;
}
// The following code is added when the SIP connection is set up
pjsua_call_setting _call_setting;
pjsua_call_setting_default(&_call_setting);
_call_setting.aud_cnt = 1;
_call_setting.vid_cnt = 1;
// When the call button is pressed in the app, this function is called to place a video call.
pj_status_t sip_dial(pjsua_acc_id acc_id, const char *number,
                     pjsua_call_id *call_id)
{
    pj_status_t status;
    pj_str_t uri = pj_str(destUri);

    status = pjsua_call_make_call(_acc_id, &uri, &(_call_setting),
                                  NULL, NULL, NULL);
    if (status != PJ_SUCCESS)
        error_exit("Error making call", status);

    return status;
}
//Apply changes related to video code
static void on_call_media_state(pjsua_call_id call_id)
{
    pjsua_call_info ci;
    unsigned mi;

    pjsua_call_get_info(call_id, &ci);
    sip_ring_stop([SharedAppDelegate.aVoipManager pjsipConfig]);

    if (ci.media_status == PJMEDIA_TYPE_VIDEO)
    {
        NSLog(@"windows id : %d", ci.media[mi].stream.vid.win_in);
        NSLog(@"media id : %d", mi);
        if (ci.media_status != PJSUA_CALL_MEDIA_ACTIVE)
            return;
        [[XCPjsua sharedXCPjsua]
            displayWindow:ci.media[mi].stream.vid.win_in];
    }
}
I applied the above code, but I still cannot place a video call using pjsip.
If anyone has an idea, or knows the steps needed for a video call, please help me.
Thank you.
This subject is too large; I think you need to refine your question into a smaller, more specific one if you wish to get a good answer.
Make sure you have read and understood the pjsip video support:
PJSip Video_Users_Guide
PJSIP IOS Video Support
I would look at what other people have done (even if it's on another platform, e.g. Android, Windows, etc.) and then look into the pjsip pjsua sample, which I believe has video support, though I'm not sure whether it supports iOS video.
Get a known-good example of pjsip video calls going, so you know what it looks like and what the logs look like when it works.
Then test your iOS code against the known-good example clients to see where they differ. If you can't figure it out, you should at least have enough information to ask a more specific question about the specific situation that is not working for you.
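As a starting point for that comparison: beyond vid_cnt in the call setting, most working examples also configure the account-level video flags. This is only a hedged sketch assuming PJSIP 2.x built with video support (PJMEDIA_HAS_VIDEO=1); verify the field names against the Video Users Guide linked above:
// Sketch only: typical account-level video settings for pjsua (PJSIP 2.x).
// The question's code sets vid_cnt = 1 on the call, but the account config
// controls whether video is shown/transmitted automatically.
pjsua_acc_config acc_cfg;
pjsua_acc_config_default(&acc_cfg);
acc_cfg.vid_in_auto_show = PJ_TRUE;        // render incoming video automatically
acc_cfg.vid_out_auto_transmit = PJ_TRUE;   // start sending our video automatically
acc_cfg.vid_cap_dev = PJMEDIA_VID_DEFAULT_CAPTURE_DEV;
acc_cfg.vid_rend_dev = PJMEDIA_VID_DEFAULT_RENDER_DEV;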
I am trying to read an audio file (that is not supported by iOS) with ffmpeg and then play it using AVAudioPlayer. It took me a while to get ffmpeg built inside an iOS project, but I finally did using kewlbear/FFmpeg-iOS-build-script.
This is the snippet I have right now, after a lot of searching on the web, including stackoverflow. One of the best examples I found was here.
I believe this is all the relevant code. I added comments to let you know what I'm doing and where I need something clever to happen.
#import "FFmpegWrapper.h"
#import <AVFoundation/AVFoundation.h>
AVFormatContext *formatContext = NULL;
AVStream *audioStream = NULL;
av_register_all();
avformat_network_init();
avcodec_register_all();
// this is a file located on my NAS
int opened = avformat_open_input(&formatContext, "http://192.168.1.70:50002/m/NDLNA/43729.flac", NULL, NULL);
// couldn't open file (avformat_open_input returns 0 on success, negative on failure)
if (opened != 0) {
    avformat_close_input(&formatContext);
}
int streamInfoValue = avformat_find_stream_info(formatContext, NULL);
// can't open stream
if (streamInfoValue < 0)
{
avformat_close_input(&formatContext);
}
// number of streams available
int inputStreamCount = formatContext->nb_streams;
for(unsigned int i = 0; i<inputStreamCount; i++)
{
// I'm only interested in the audio stream
if(formatContext->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO)
{
// found audio stream
audioStream = formatContext->streams[i];
}
}
if(audioStream == NULL) {
// no audio stream
}
AVFrame* frame = av_frame_alloc();
AVCodecContext* codecContext = audioStream->codec;
codecContext->codec = avcodec_find_decoder(codecContext->codec_id);
if (codecContext->codec == NULL)
{
av_free(frame);
avformat_close_input(&formatContext);
// no proper codec found
}
else if (avcodec_open2(codecContext, codecContext->codec, NULL) != 0)
{
av_free(frame);
avformat_close_input(&formatContext);
// could not open the context with the decoder
}
// this is displaying: This stream has 2 channels and a sample rate of 44100Hz
// which makes sense
NSLog(#"This stream has %d channels and a sample rate of %dHz", codecContext->channels, codecContext->sample_rate);
AVPacket packet;
av_init_packet(&packet);
// this is where I try to store in the sound data
NSMutableData *soundData = [[NSMutableData alloc] init];
while (av_read_frame(formatContext, &packet) == 0)
{
if (packet.stream_index == audioStream->index)
{
// Try to decode the packet into a frame
int frameFinished = 0;
avcodec_decode_audio4(codecContext, frame, &frameFinished, &packet);
// Some frames rely on multiple packets, so we have to make sure the frame is finished before
// we can use it
if (frameFinished)
{
// this is where I think something clever needs to be done
// I need to store some bytes, but I can't figure out what exactly and what length?
// should the length be multiplied by the number of channels?
NSData *frameData = [[NSData alloc] initWithBytes:packet.buf->data length:packet.buf->size];
[soundData appendData: frameData];
}
}
// You *must* call av_free_packet() after each call to av_read_frame() or else you'll leak memory
av_free_packet(&packet);
}
// first try to write it to a file, see if that works
// this is indeed writing bytes, but it is unplayable
[soundData writeToFile:#"output.wav" atomically:YES];
NSError *error;
// this is my final goal, playing it with the AVAudioPlayer, but this is giving unclear errors
AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithData:soundData error:&error];
if(player == nil) {
NSLog(@"%@", error.description); // Domain=NSOSStatusErrorDomain Code=1954115647 "(null)"
} else {
[player prepareToPlay];
[player play];
}
// Some codecs will cause frames to be buffered up in the decoding process. If the CODEC_CAP_DELAY flag
// is set, there can be buffered up frames that need to be flushed, so we'll do that
if (codecContext->codec->capabilities & CODEC_CAP_DELAY)
{
av_init_packet(&packet);
// Decode all the remaining frames in the buffer, until the end is reached
int frameFinished = 0;
while (avcodec_decode_audio4(codecContext, frame, &frameFinished, &packet) >= 0 && frameFinished)
{
}
}
av_free(frame);
avcodec_close(codecContext);
avformat_close_input(&formatContext);
I never really found a solution to this specific problem, but I ended up using ap4y/OrigamiEngine instead.
My main reason for wanting to use FFmpeg was to play unsupported audio files (FLAC/OGG) on iOS and tvOS, and OrigamiEngine does the job just fine.
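For anyone who still wants the FFmpeg route: the "something clever" step in the question appends the compressed packet bytes, where it should append the decoded PCM from the frame. A hedged sketch of that step, assuming the same avcodec_decode_audio4-era API as the question (note that many FLAC/AAC decoders output planar formats, which would still need interleaving via libswresample, and that AVAudioPlayer ultimately expects a real container such as WAV, not raw PCM bytes):
if (frameFinished)
{
    // Size of the decoded audio in this frame, in bytes.
    int dataSize = av_samples_get_buffer_size(NULL,
                                              codecContext->channels,
                                              frame->nb_samples,
                                              codecContext->sample_fmt,
                                              1);
    if (dataSize > 0 && !av_sample_fmt_is_planar(codecContext->sample_fmt)) {
        // Packed formats: samples for all channels are interleaved in data[0].
        [soundData appendBytes:frame->data[0] length:dataSize];
    }
    // Planar formats (e.g. AV_SAMPLE_FMT_S16P/FLTP) keep one plane per channel
    // in frame->extended_data[ch] and would need interleaving/conversion here.
}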
I've got some code that plays a MIDI file using the AudioToolbox framework's MusicPlayer, MusicSequence, and AUGraph.
Some time after playback is complete, the code below is used to tidy up. This code runs without issues on iOS 6 through 8.
However, in iOS 9, the call to DisposeAUGraph fails, returning the error code kAUGraphErr_CannotDoInCurrentContext.
The documentation for DisposeAUGraph is almost non-existent, but the documentation for the return code itself states:
To avoid spinning or waiting in the render thread (a bad idea!), many of the calls to AUGraph can return: kAUGraphErr_CannotDoInCurrentContext. This result is only generated when you call an AUGraph API from its render callback. It means that the lock that it required was held at that time, by another thread. If you see this result code, you can generally attempt the action again - typically the NEXT render cycle (so in the mean time the lock can be cleared), or you can delegate that call to another thread in your app. You should not spin or put-to-sleep the render thread.
The code below is not being called from the AUGraph's render callback; indeed, no such callback exists. The code is (currently, in my debug code) manually initiated by the user.
What is causing this error, and is there any way I can avoid it?
OSStatus result = MusicPlayerStop(g_player);
if (result != noErr)
DebugLog("Error calling MusicPlayerStop");
UInt32 trackCount;
result = MusicSequenceGetTrackCount(g_sequence, &trackCount);
if (result != noErr)
DebugLog("Error calling MusicSequenceGetTrackCount.");
while(trackCount > 0)
{
MusicTrack track;
result = MusicSequenceGetIndTrack (g_sequence, 0, &track);
if (result != noErr)
DebugLog("Error calling MusicSequenceGetIndTrack.");
result = MusicSequenceDisposeTrack(g_sequence, track);
if (result != noErr)
DebugLog("Error calling MusicSequenceDisposeTrack.");
result = MusicSequenceGetTrackCount(g_sequence, &trackCount);
if (result != noErr)
DebugLog("Error calling MusicSequenceGetTrackCount.");
}
result = DisposeMusicPlayer(g_player);
if (result != noErr)
DebugLog("Error calling DisposeMusicPlayer.");
result = DisposeMusicSequence(g_sequence);
if (result != noErr)
DebugLog("Error calling DisposeMusicSequence.");
result = DisposeAUGraph(g_processingGraph);
if (result != noErr)
DebugLog("Error calling DisposeAUGraph.");
Worked around this problem by rewriting our playback code to use the newer AVMIDIPlayer* instead of MusicPlayer et al, when running on iOS 9.
* available as of iOS 8
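For anyone taking the same route, a minimal sketch of AVMIDIPlayer usage (the resource names here are placeholders, not from the original project):
#import <AVFoundation/AVFoundation.h>

// Load a MIDI file plus a SoundFont to render it with (required on iOS).
NSURL *midiURL = [[NSBundle mainBundle] URLForResource:@"song" withExtension:@"mid"];
NSURL *bankURL = [[NSBundle mainBundle] URLForResource:@"soundfont" withExtension:@"sf2"];
NSError *error = nil;

AVMIDIPlayer *midiPlayer = [[AVMIDIPlayer alloc] initWithContentsOfURL:midiURL
                                                          soundBankURL:bankURL
                                                                 error:&error];
if (midiPlayer == nil) {
    NSLog(@"Error creating AVMIDIPlayer: %@", error);
} else {
    [midiPlayer prepareToPlay];
    [midiPlayer play:^{
        NSLog(@"MIDI playback finished");
    }];
}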
According to documentation here https://developer.apple.com/library/mac/documentation/MusicAudio/Reference/AudioQueueReference/#//apple_ref/c/func/AudioQueueDispose
err = AudioQueueDispose(queue, true);
I pass true so that disposal of the AudioQueue happens immediately. It does sometimes dispose of the queue immediately, but other times there is a delay of 3-4 seconds, and up to 13 seconds, on the device. err = AudioQueueStop(queue, true) has the same problem as well.
My understanding is that both functions try to flush and release the buffers that are already enqueued and about to be enqueued...
so I even have my callback function flush the buffers if AudioQueueDispose is going to be called.
static void MyAQOutputCallBack(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inCompleteAQBuffer)
{
if (player.shouldDispose) {
printf("player shouldDispose !!!!!!!!!!!\n\n\n\n\n\n");
OSStatus dispose = AudioQueueFlush (inAQ);
return;
}
}
Since I am going to record something using AudioQueues after playing a track, I need these functions to return without delays. A couple hundred milliseconds is okay, but 3-4 seconds? That is unacceptable.
The other AudioQueue functions are also being called on the same thread, and they seem to work fine.
I have also tried calling this on the main thread to see whether it would make any difference:
[self performSelectorOnMainThread:@selector(tryOnMain) withObject:nil waitUntilDone:NO];
or
dispatch_sync(dispatch_get_main_queue(), ^{ ... });
Neither made any difference.
Any idea what might be happening?
I successfully stop my audio playback immediately by:
- (void)stopAudio
{
    @synchronized(audioLock) {
        audioLock = [NSNumber numberWithBool:false];
        OSStatus err;
        err = AudioQueueReset(_audioQueue);
        if (err != noErr)
        {
            NSLog(@"AudioQueueReset() error: %d", (int)err);
        }
        err = AudioQueueStop(_audioQueue, YES);
        if (err != noErr)
        {
            NSLog(@"AudioQueueStop() error: %d", (int)err);
        }
        err = AudioQueueDispose(_audioQueue, YES);
        if (err != noErr)
        {
            NSLog(@"AudioQueueDispose() error: %d", (int)err);
        }
    }
}
And in my:
void audioCallback(void *custom_data, AudioQueueRef queue, AudioQueueBufferRef buffer)
I only put more stuff in my queue if:
myObject *weakSelf = (__bridge myObject *)custom_data;
@synchronized(weakSelf->audioLock) {
    if ([weakSelf->audioLock boolValue]) {
        Put_more_stuff_on_queue
    }
}
In my particular case I playback AAC-LC audio.
So, basically I want to play some audio files (mp3 and caf mostly), but the callback never gets called, except when I call it myself to prime the queue.
Here's my data struct:
struct AQPlayerState
{
CAStreamBasicDescription mDataFormat;
AudioQueueRef mQueue;
AudioQueueBufferRef mBuffers[kBufferNum];
AudioFileID mAudioFile;
UInt32 bufferByteSize;
SInt64 mCurrentPacket;
UInt32 mNumPacketsToRead;
AudioStreamPacketDescription *mPacketDescs;
bool mIsRunning;
};
Here's my callback function:
static void HandleOutputBuffer (void *aqData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer)
{
NSLog(#"HandleOutput");
AQPlayerState *pAqData = (AQPlayerState *) aqData;
if (pAqData->mIsRunning == false) return;
UInt32 numBytesReadFromFile;
UInt32 numPackets = pAqData->mNumPacketsToRead;
AudioFileReadPackets (pAqData->mAudioFile,
false,
&numBytesReadFromFile,
pAqData->mPacketDescs,
pAqData->mCurrentPacket,
&numPackets,
inBuffer->mAudioData);
if (numPackets > 0) {
inBuffer->mAudioDataByteSize = numBytesReadFromFile;
AudioQueueEnqueueBuffer (pAqData->mQueue,
inBuffer,
(pAqData->mPacketDescs ? numPackets : 0),
pAqData->mPacketDescs);
pAqData->mCurrentPacket += numPackets;
} else {
// AudioQueueStop(pAqData->mQueue, false);
// AudioQueueDispose(pAqData->mQueue, true);
// AudioFileClose (pAqData->mAudioFile);
// free(pAqData->mPacketDescs);
// free(pAqData->mFloatBuffer);
pAqData->mIsRunning = false;
}
}
And here's my method:
- (void)playFile
{
AQPlayerState aqData;
// get the source file
NSString *p = [[NSBundle mainBundle] pathForResource:@"1_Female" ofType:@"mp3"];
NSURL *url2 = [NSURL fileURLWithPath:p];
CFURLRef srcFile = (__bridge CFURLRef)url2;
OSStatus result = AudioFileOpenURL(srcFile, 0x1/*fsRdPerm*/, 0/*inFileTypeHint*/, &aqData.mAudioFile);
CFRelease (srcFile);
CheckError(result, "Error opinning sound file");
UInt32 size = sizeof(aqData.mDataFormat);
CheckError(AudioFileGetProperty(aqData.mAudioFile, kAudioFilePropertyDataFormat, &size, &aqData.mDataFormat),
"Error getting file's data format");
CheckError(AudioQueueNewOutput(&aqData.mDataFormat, HandleOutputBuffer, &aqData, CFRunLoopGetCurrent(), kCFRunLoopCommonModes, 0, &aqData.mQueue),
"Error AudioQueueNewOutPut");
// we need to calculate how many packets we read at a time and how big a buffer we need
// we base this on the size of the packets in the file and an approximate duration for each buffer
{
bool isFormatVBR = (aqData.mDataFormat.mBytesPerPacket == 0 || aqData.mDataFormat.mFramesPerPacket == 0);
// first check to see what the max size of a packet is - if it is bigger
// than our allocation default size, that needs to become larger
UInt32 maxPacketSize;
size = sizeof(maxPacketSize);
CheckError(AudioFileGetProperty(aqData.mAudioFile, kAudioFilePropertyPacketSizeUpperBound, &size, &maxPacketSize),
"Error getting max packet size");
// adjust buffer size to represent about a second of audio based on this format
CalculateBytesForTime(aqData.mDataFormat, maxPacketSize, 1.0/*seconds*/, &aqData.bufferByteSize, &aqData.mNumPacketsToRead);
if (isFormatVBR) {
aqData.mPacketDescs = new AudioStreamPacketDescription [aqData.mNumPacketsToRead];
} else {
aqData.mPacketDescs = NULL; // we don't provide packet descriptions for constant bit rate formats (like linear PCM)
}
printf ("Buffer Byte Size: %d, Num Packets to Read: %d\n", (int)aqData.bufferByteSize, (int)aqData.mNumPacketsToRead);
}
// if the file has a magic cookie, we should get it and set it on the AQ
size = sizeof(UInt32);
result = AudioFileGetPropertyInfo(aqData.mAudioFile, kAudioFilePropertyMagicCookieData, &size, NULL);
if (!result && size) {
char* cookie = new char [size];
CheckError(AudioFileGetProperty(aqData.mAudioFile, kAudioFilePropertyMagicCookieData, &size, cookie),
"Error getting cookie from file");
CheckError(AudioQueueSetProperty(aqData.mQueue, kAudioQueueProperty_MagicCookie, cookie, size),
"Error setting cookie to file");
delete[] cookie;
}
aqData.mCurrentPacket = 0;
for (int i = 0; i < kBufferNum; ++i) {
CheckError(AudioQueueAllocateBuffer (aqData.mQueue,
aqData.bufferByteSize,
&aqData.mBuffers[i]),
"Error AudioQueueAllocateBuffer");
HandleOutputBuffer (&aqData,
aqData.mQueue,
aqData.mBuffers[i]);
}
// set queue's gain
Float32 gain = 1.0;
CheckError(AudioQueueSetParameter (aqData.mQueue,
kAudioQueueParam_Volume,
gain),
"Error AudioQueueSetParameter");
aqData.mIsRunning = true;
CheckError(AudioQueueStart(aqData.mQueue,
NULL),
"Error AudioQueueStart");
}
And the output when I press play:
Buffer Byte Size: 40310, Num Packets to Read: 38
HandleOutput start
HandleOutput start
HandleOutput start
I tried replacing CFRunLoopGetCurrent() with CFRunLoopGetMain(), and kCFRunLoopCommonModes with kCFRunLoopDefaultMode, but nothing changed.
Shouldn't the primed buffers start playing right away when I start the queue? When I start the queue, no callbacks are fired.
What am I doing wrong? Thanks for any ideas.
What you are basically trying to do here is a basic example of audio playback using Audio Queues. Without looking at your code in detail to see what's missing (that could take a while), I'd rather recommend that you follow the steps in this basic sample code, which does exactly what you're doing, without the extras that aren't really relevant (for example, why are you trying to set the audio gain?).
Somewhere else you were trying to play audio using audio units. Audio units are more complex than basic audio queue playback, and I wouldn't attempt them before being very comfortable with audio queues. But you can look at this example project for a basic example of audio queues.
In general, when it comes to Core Audio programming on iOS, it's best to take your time with the basic examples and build your way up. The problem with a lot of tutorials online is that they add extra stuff and often mix it with Obj-C code, when Core Audio is purely C (i.e. the extra stuff won't add anything to the learning process). I strongly recommend you go over the book Learning Core Audio if you haven't already. All the sample code is available online, but you can also clone it from this repo for convenience. That's how I learned Core Audio. It takes time :)