I've got some code that plays a MIDI file using the AudioToolbox framework's MusicPlayer, MusicSequence, and AUGraph.
Some time after playback completes, the code below is used to tidy up. This code runs without issues in iOS 6–8.
However, in iOS 9, the call to DisposeAUGraph fails, returning the error code kAUGraphErr_CannotDoInCurrentContext.
The documentation for DisposeAUGraph is almost non-existent, but the documentation for the return code itself states:
To avoid spinning or waiting in the render thread (a bad idea!), many of the calls to AUGraph can return: kAUGraphErr_CannotDoInCurrentContext. This result is only generated when you call an AUGraph API from its render callback. It means that the lock that it required was held at that time, by another thread. If you see this result code, you can generally attempt the action again - typically the NEXT render cycle (so in the mean time the lock can be cleared), or you can delegate that call to another thread in your app. You should not spin or put-to-sleep the render thread.
The code below is not being called from the AUGraph's render callback — indeed, no such callback exists — the code is (currently, in my debug code) manually initiated by the user.
What is causing this error, and is there any way I can avoid it?
OSStatus result = MusicPlayerStop(g_player);
if (result != noErr)
    DebugLog("Error calling MusicPlayerStop.");

UInt32 trackCount;
result = MusicSequenceGetTrackCount(g_sequence, &trackCount);
if (result != noErr)
    DebugLog("Error calling MusicSequenceGetTrackCount.");

while (trackCount > 0)
{
    MusicTrack track;
    result = MusicSequenceGetIndTrack(g_sequence, 0, &track);
    if (result != noErr)
        DebugLog("Error calling MusicSequenceGetIndTrack.");

    result = MusicSequenceDisposeTrack(g_sequence, track);
    if (result != noErr)
        DebugLog("Error calling MusicSequenceDisposeTrack.");

    result = MusicSequenceGetTrackCount(g_sequence, &trackCount);
    if (result != noErr)
        DebugLog("Error calling MusicSequenceGetTrackCount.");
}

result = DisposeMusicPlayer(g_player);
if (result != noErr)
    DebugLog("Error calling DisposeMusicPlayer.");

result = DisposeMusicSequence(g_sequence);
if (result != noErr)
    DebugLog("Error calling DisposeMusicSequence.");

result = DisposeAUGraph(g_processingGraph);
if (result != noErr)
    DebugLog("Error calling DisposeAUGraph.");
I worked around this problem by rewriting our playback code to use the newer AVMIDIPlayer* instead of MusicPlayer et al. when running on iOS 9.
* available as of iOS 8
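For anyone taking the same route, a minimal sketch of the AVMIDIPlayer replacement; the file names and soundbank URL here are placeholders, not our actual assets:

#import <AVFoundation/AVFoundation.h>

NSError *error = nil;
NSURL *midiURL = [NSURL fileURLWithPath:@"song.mid"];  // placeholder
NSURL *bankURL = [NSURL fileURLWithPath:@"bank.sf2"];  // placeholder
AVMIDIPlayer *player = [[AVMIDIPlayer alloc] initWithContentsOfURL:midiURL
                                                      soundBankURL:bankURL
                                                             error:&error];
[player prepareToPlay];
[player play:^{
    NSLog(@"Playback complete.");  // no AUGraph left to dispose of afterwards
}];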
I'm working on an application for tvOS platform for playing back audio using WebRTC (https://webrtc.org/). WebRTC uses AudioUnit for audio playout (https://chromium.googlesource.com/external/webrtc/+/7a82467d0db0d61f466a1da54b94f6a136726a3c/sdk/objc/native/src/audio/voice_processing_audio_unit.mm). It works perfectly on iOS, but produces errors on tvOS.
First of all, I've disabled audio capture entirely. The first error happens when creating the Voice Processing I/O audio unit:
// Create an audio component description to identify the Voice Processing
// I/O audio unit.
AudioComponentDescription vpio_unit_description;
vpio_unit_description.componentType = kAudioUnitType_Output;
vpio_unit_description.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
vpio_unit_description.componentManufacturer = kAudioUnitManufacturer_Apple;
vpio_unit_description.componentFlags = 0;
vpio_unit_description.componentFlagsMask = 0;

// Obtain an audio unit instance given the description.
AudioComponent found_vpio_unit_ref =
    AudioComponentFindNext(nullptr, &vpio_unit_description);

// Create a Voice Processing IO audio unit.
OSStatus result = noErr;
result = AudioComponentInstanceNew(found_vpio_unit_ref, &vpio_unit_);
if (result != noErr) {
    vpio_unit_ = nullptr;
    RTCLogError(@"AudioComponentInstanceNew failed. Error=%ld.", (long)result);
    return false;
}
AudioComponentInstanceNew returns OSStatus -3000 (which I assume means an invalid component ID). This issue can be fixed by replacing kAudioUnitSubType_VoiceProcessingIO with kAudioUnitSubType_GenericOutput (I'm not sure this is the correct replacement, but the error goes away).
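For reference, the substitution in code; hedged as above, GenericOutput merely instantiates without error and may not be the right unit for playout:

// Same description as before, with only the subtype swapped.
vpio_unit_description.componentSubType = kAudioUnitSubType_GenericOutput;
AudioComponent generic_unit_ref =
    AudioComponentFindNext(nullptr, &vpio_unit_description);
result = AudioComponentInstanceNew(generic_unit_ref, &vpio_unit_);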
After that, WebRTC tries to enable output:
// Enable output on the output scope of the output element.
UInt32 enable_output = 1;
result = AudioUnitSetProperty(vpio_unit_, kAudioOutputUnitProperty_EnableIO,
                              kAudioUnitScope_Output, kOutputBus,
                              &enable_output, sizeof(enable_output));
if (result != noErr) {
    DisposeAudioUnit();
    RTCLogError(@"Failed to enable output on output scope of output element. "
                 "Error=%ld.",
                (long)result);
    return false;
}
and this doesn't work either: it returns OSStatus -10879 (which I assume means an invalid property). I think the problem is in providing the kAudioOutputUnitProperty_EnableIO property, but I have no idea what should be used instead.
Any ideas or advice are very much appreciated. Thanks in advance.
I am working on an audio and video call feature in my application. I have succeeded in making audio calls, but I am stuck on video calling. For video calling I am using the following code:
pjsua_call_setting opt;
pjsua_call_setting_default(&opt);
opt.aud_cnt = 1;
opt.vid_cnt = 1;

char *destUri = "sip:XXXXXX@sipserver";
pj_status_t status;
pj_str_t uri = pj_str(destUri);
status = pjsua_call_make_call(voipManager._sip_acc_id, &uri, &opt,
                              NULL, NULL, NULL);
if (status != PJ_SUCCESS)
    NSLog(@"%d", status);
else
    NSLog(@"%d", status);
When the pjsua_call_make_call function is performed, it shows me this error:
Assertion failed: (opt->vid_cnt == 0), function apply_call_setting, file ../src/pjsua-lib/pjsua_call.c, line 606.
What you are getting is an assertion error from the library's check for video support: you must build the lib with video support enabled.
To enable video, append this to config_site.h and rebuild:
#define PJMEDIA_HAS_VIDEO 1
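Once the library is rebuilt with video enabled, you typically also opt the account into video. A hedged sketch; the field names below are from the pjsua 2.x API, so check them against your version:

/* Enable automatic video on the account (pjsua 2.x field names). */
pjsua_acc_config acc_cfg;
pjsua_acc_config_default(&acc_cfg);
acc_cfg.vid_in_auto_show = PJ_TRUE;       /* render incoming video */
acc_cfg.vid_out_auto_transmit = PJ_TRUE;  /* transmit our video on calls */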
According to the documentation here: https://developer.apple.com/library/mac/documentation/MusicAudio/Reference/AudioQueueReference/#//apple_ref/c/func/AudioQueueDispose
err = AudioQueueDispose(queue, true);
I pass true so that disposal of the AudioQueue happens immediately. Although it does sometimes dispose of the queue immediately, other times it delays 3–4 seconds, and up to 13 seconds, on the device. err = AudioQueueStop(queue, true) has the same problem.
My understanding is that both functions try to flush and release the buffers that are already enqueued or about to be enqueued, so I even help my callback function flush the buffers when AudioQueueDispose is going to be called:
static void MyAQOutputCallBack(void *inUserData, AudioQueueRef inAQ,
                               AudioQueueBufferRef inCompleteAQBuffer)
{
    if (player.shouldDispose) {
        printf("player shouldDispose !!!!!!!!!!!\n\n\n\n\n\n");
        OSStatus dispose = AudioQueueFlush(inAQ);
        return;
    }
}
Since I am going to record something using Audio Queues after playing a track, I need these functions to return without delays. A couple of hundred milliseconds is okay, but 3–4 seconds? That is unacceptable.
Other AudioQueue functions are also called on the same thread, and they seem to work fine.
I have also tried calling this on the main thread, to see whether it would change anything:
[self performSelectorOnMainThread:@selector(tryOnMain) withObject:nil waitUntilDone:NO];
or
dispatch_sync(dispatch_get_main_queue(), ^{ ... });
Neither made any difference.
Any idea what might be happening?
I successfully stop my audio playback immediately with:
- (void)stopAudio
{
    @synchronized(audioLock) {
        audioLock = [NSNumber numberWithBool:false];
        OSStatus err;
        err = AudioQueueReset(_audioQueue);
        if (err != noErr)
        {
            NSLog(@"AudioQueueReset() error: %d", (int)err);
        }
        err = AudioQueueStop(_audioQueue, YES);
        if (err != noErr)
        {
            NSLog(@"AudioQueueStop() error: %d", (int)err);
        }
        err = AudioQueueDispose(_audioQueue, YES);
        if (err != noErr)
        {
            NSLog(@"AudioQueueDispose() error: %d", (int)err);
        }
    }
}
And in my:
void audioCallback(void *custom_data, AudioQueueRef queue, AudioQueueBufferRef buffer)
I only put more stuff in my queue if:
myObject *weakSelf = (__bridge myObject *)custom_data;
@synchronized(weakSelf->audioLock) {
    if ([weakSelf->audioLock boolValue]) {
        Put_more_stuff_on_queue
    }
}
In my particular case I playback AAC-LC audio.
My pthread_detach calls fail with a "Bad file descriptor" error. The calls are in the destructor for my class and look like this -
if(pthread_detach(get_sensors) != 0)
    printf("\ndetach on get_sensors failed with error %m", errno);
if(pthread_detach(get_real_velocity) != 0)
    printf("\ndetach on get_real_velocity failed with error %m", errno);
I have only ever dealt with this error when using sockets. What could be causing this to happen in a pthread_detach call that I should look for? Or is it likely something in the thread callback that could be causing it? Just in case, the callbacks look like this -
void* Robot::get_real_velocity_thread(void* threadid) {
    Robot* r = (Robot*)threadid;
    r->get_real_velocity_thread_i();
    return NULL;  // a void* thread routine must return a value
}
inline void Robot::get_real_velocity_thread_i() {
    while(1) {
        usleep(14500);
        sensor_packet temp = get_sensor_value(REQUESTED_VELOCITY);
        real_velocity = temp.values[0];
        if(temp.values[1] != -1)
            real_velocity += temp.values[1];
    } //end while
}
/* Callback for get sensors thread */
void* Robot::get_sensors_thread(void* threadid) {
    Robot* r = (Robot*)threadid;
    r->get_sensors_thread_i();
    return NULL;  // a void* thread routine must return a value
} //END GETSENSORS_THREAD
inline void Robot::get_sensors_thread_i() {
    while(1) {
        usleep(14500);
        if(sensorsstreaming) {
            unsigned char receive;
            int read = 0;
            read = connection.PollComport(port, &receive, sizeof(unsigned char));
            if((int)receive == 19) {
                read = connection.PollComport(port, &receive, sizeof(unsigned char));
                unsigned char rest[54];
                read = connection.PollComport(port, rest, 54);
                /* ***SET SENSOR VALUES*** */
                //bump + wheel drop
                sensor_values[0] = (int)rest[1];
                sensor_values[1] = -1;
                //wall
                sensor_values[2] = (int)rest[2];
                sensor_values[3] = -1;
                // ... lots more settings just like the two above
            } //end if header == 19
        } //end if sensors streaming
    } //end while
} //END GET_SENSORS_THREAD_I
Thank you for any help.
The pthread_* functions return an error code; they do not set errno. (Well, they may of course, but not in any way that is documented.)
Your code should capture the value returned by pthread_detach and print that.
Single Unix Spec documents two return values for this function: ESRCH (no thread by that ID was found) and EINVAL (the thread is not joinable).
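A minimal sketch of the corrected error reporting; strerror (from <string.h>) converts the returned code, whereas errno is not meaningful here:

int rc = pthread_detach(get_sensors);
if (rc != 0)
    printf("\ndetach on get_sensors failed: %s", strerror(rc));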
Detaching threads in the destructor of an object seems silly. Firstly, if they are going to be detached eventually, why not just create them that way?
If there is any risk that the threads can use the object that is being destroyed, they need to be stopped, not detached. I.e. you somehow indicate to the threads that they should shut down, and then wait for them to reach some safe place after which they will not touch the object any more. pthread_join is useful for this.
Also, it is a little late to be doing that from the destructor. A destructor should only be run when the thread executing it is the only thread with a reference to that object. If threads are still using the object, then you're destroying it from under them.
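A hedged sketch of that stop-then-join pattern; keep_running here is a hypothetical flag (e.g. a volatile member of Robot) that the worker loops would test instead of while(1):

void Robot::shutdown() {
    keep_running = false;                  // ask the loops to exit...
    pthread_join(get_sensors, NULL);       // ...then wait for both threads
    pthread_join(get_real_velocity, NULL); // before the object goes away
}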
Okay, here's the scenario: I have a real-time recording app using ExtAudioFileWriteAsync targeted for iOS 4.3. The first time I record with the app, it works perfectly. If I press stop, then record again, better than half the time I will get an EXC_BAD_ACCESS in AudioRingBuffer::GetTimeBounds right when recording starts.
That is to say that ExtAudioFileWriteAsync fails on GetTimeBounds when starting the second recording. Here is the bit of code that is fired when recording starts, which creates the ExtAudioFile reference:
- (void)setActive:(NSString *)file
{
    if (mExtAFRef) {
        ExtAudioFileDispose(mExtAFRef);
        mExtAFRef = nil;
        NSLog(@"mExtAFRef Disposed.");
    }
    if (mOutputAudioFile)
    {
        ExtAudioFileDispose(mOutputAudioFile);
        mOutputAudioFile = nil;
        NSLog(@"mOutputAudioFile Disposed.");
    }
    NSURL *outUrl = [NSURL fileURLWithPath:file];
    OSStatus setupErr = ExtAudioFileCreateWithURL((CFURLRef)outUrl, kAudioFileWAVEType, &mOutputFormat, NULL, kAudioFileFlags_EraseFile, &mOutputAudioFile);
    NSAssert(setupErr == noErr, @"Couldn't create file for writing");
    setupErr = ExtAudioFileSetProperty(mOutputAudioFile, kExtAudioFileProperty_ClientDataFormat, sizeof(AudioStreamBasicDescription), &audioFormat);
    NSAssert(setupErr == noErr, @"Couldn't set client data format on file");
    setupErr = ExtAudioFileWriteAsync(mOutputAudioFile, 0, NULL);
    NSAssert(setupErr == noErr, @"Couldn't initialize write buffers for audio file");
    isActive = TRUE;
}
Does anyone have any thoughts whatsoever on what may be causing this? I assume, given EXC_BAD_ACCESS, that it is a memory leak or something's ref count getting knocked to zero, but I can't for the life of me figure out what it might be, and the Googles are drawing a complete blank. I posted this same thing on the Apple dev forum for CoreAudio, but not a soul took pity on me, even to make a pithy comment. HALP!
EDIT: Found the problem. The error was happening when ExtAudioFileWriteAsync was trying to write a new file before the old file was "optimized." A little mutex love solved the problem.
I'm having almost the same issue in a recording app. Can anyone please explain how to solve it with "a little mutex love"?
EDIT
Thanks to Chris Randall I managed to solve my problem. This is how I implemented the mutex:
#include <pthread.h>
static pthread_mutex_t outputAudioFileLock;
then in my init:
pthread_mutex_init(&outputAudioFileLock,NULL);
and in the callback:
if (THIS.mIsRecording) {
    if (0 == pthread_mutex_trylock(&outputAudioFileLock)) {
        OSStatus err = ExtAudioFileWriteAsync(THIS.mRecordFile, inNumberFrames, THIS.recordingBufferList);
        if (noErr != err) {
            NSLog(@"ExtAudioFileWriteAsync Failed: %ld!!!", err);
        }
        pthread_mutex_unlock(&outputAudioFileLock);
    }
}
finally in the stopRecord method:
if (mRecordFile) {
    pthread_mutex_lock(&outputAudioFileLock);
    OSStatus setupErr;
    setupErr = ExtAudioFileDispose(mRecordFile);
    mRecordFile = NULL;
    pthread_mutex_unlock(&outputAudioFileLock);
    NSAssert(setupErr == noErr, @"Couldn't dispose audio file");
    NSLog(@"Stopping Record");
    mIsRecording = NO;
}
Thanks again for the help; I hope this saves someone some time.
Include pthread.h, declare pthread_mutex_t outputAudioFileLock, and initialize it in your constructor with pthread_mutex_init. Then, in your audio callback, when you want to write, do something like this (adjusting the variable names to whatever you're using):
if (0 == pthread_mutex_trylock(&outputAudioFileLock)) {
    OSStatus err = ExtAudioFileWriteAsync(mOutputAudioFile, frames, bufferList);
    if (noErr != err) {
        NSLog(@"ExtAudioFileWriteAsync Failed: %ld!!!", err);
    }
    pthread_mutex_unlock(&outputAudioFileLock);
}
The pthread_mutex_trylock checks whether the mutex is already locked (and thus the file is being "optimized"). If it is not, it allows the write. I then wrap both the audio file setup (as seen above) and the audio file cleanup like so, so that the lock is held whenever the file system is doing anything that could cause the AudioRingBuffer BAD_ACCESS error:
pthread_mutex_lock(&outputAudioFileLock);
OSStatus setupErr;
setupErr = ExtAudioFileDispose(mOutputAudioFile);
mOutputAudioFile = NULL;
pthread_mutex_unlock(&outputAudioFileLock);
NSAssert(setupErr == noErr, @"Couldn't dispose audio file");
This makes setup and cleanup hold the lock so that you can't write to a file that is being "optimized," which is the source of the error. Hope this helps!
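For completeness, the setup side wrapped the same way; a sketch reusing the variable names from the question above:

// Hold the lock while creating the new file so the write callback can't
// touch a file that is still being set up or optimized.
pthread_mutex_lock(&outputAudioFileLock);
OSStatus setupErr = ExtAudioFileCreateWithURL((CFURLRef)outUrl, kAudioFileWAVEType,
                                              &mOutputFormat, NULL,
                                              kAudioFileFlags_EraseFile,
                                              &mOutputAudioFile);
pthread_mutex_unlock(&outputAudioFileLock);
NSAssert(setupErr == noErr, @"Couldn't create file for writing");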
EDIT: I do my audio callback in the Obj-C part of the audio controller; if you're doing it in the C++ part, this would be structured quite a bit differently; perhaps someone else can answer that?