I'm using an AudioUnit to play back audio from a TeamSpeak server, but when I call AudioUnitInitialize in the iOS Simulator I constantly get the macOS prompt asking to allow microphone access, even though I only want playback.
On a real device everything works fine without any native prompts, but it is really annoying when running the app in the simulator because the prompt appears every time I run the app.
- (void)setupRemoteIO
{
    AudioUnit audioUnit;

    // Describe audio component
    AudioComponentDescription desc;
    desc.componentType = kAudioUnitType_Output;
    desc.componentSubType = kAudioUnitSubType_RemoteIO;
    desc.componentFlags = 0;
    desc.componentFlagsMask = 0;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    // Get component
    AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);

    // Get audio unit
    OSStatus status = AudioComponentInstanceNew(inputComponent, &audioUnit);
    if (status != noErr)
    {
        printf("AudioIO could not create new audio component: status = %i\n", status);
    }

    UInt32 enableIO;
    AudioUnitElement inputBus = 1;
    AudioUnitElement outputBus = 0;

    // Disabling IO for recording
    enableIO = 0;
    AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, inputBus, &enableIO, sizeof(enableIO));

    // Enabling IO for playback
    enableIO = 1;
    AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output, outputBus, &enableIO, sizeof(enableIO));

    // Initialize
    status = AudioUnitInitialize(audioUnit);
    if (status != noErr)
    {
        printf("AudioIO could not initialize audio unit: status = %i\n", status);
    }
}
This is a known bug with Xcode (prior to 10.2) on macOS Mojave (I say known because it has happened to me many times when playing video, and because when I looked into it I found a lot of people with the same issue), although I couldn't find any official report from Apple.
There may be workarounds depending on the environment, the way you launch the app, and the versions of Xcode and macOS Mojave you have.
This happens only in the simulator; as you said, it won't happen on a real device, since most apps don't need microphone access just to play audio/video.
Until this bug gets resolved, you can try:
1. Go to "Security & Privacy" settings on your macOS.
2. Select "Microphone" in the left panel.
3. In the right panel, disable the option for Xcode.
Another thing you can try to get rid of the message is to change the Hardware Audio Input to Internal Microphone.
Update in Xcode 10.2:
You’re now only prompted once to authorize microphone access to all simulator devices. (45715977)
Related
I have an app that has been published on the iTunes App Store, and it has background mode enabled for audio.
After updating to Xcode 8, I published an update for my app, after which I found that the app stops playing whenever the screen locks. I had not otherwise made any changes to background play. I'm not sure whether the behavior or coding requirements changed for iOS 9+.
Here's what my code does:
App plist file:
<key>UIBackgroundModes</key>
<array>
<string>audio</string>
<string>remote-notification</string>
</array>
AudioController.m
-(void)setBackgroundPlay:(bool)backgroundPlay
{
    NSLog(@"setBackgroundPlay %d", backgroundPlay);
    AVAudioSession *mySession = [AVAudioSession sharedInstance];
    NSError *audioSessionError = nil;

    if (backgroundPlay) {
        // Assign the Playback category to the audio session.
        [mySession setCategory: AVAudioSessionCategoryPlayback
                         error: &audioSessionError];

        OSStatus propertySetError = 0;
        UInt32 allowMixing = true;
        propertySetError = AudioSessionSetProperty (
                               kAudioSessionProperty_OverrideCategoryMixWithOthers, // 1
                               sizeof (allowMixing),                                // 2
                               &allowMixing                                         // 3
                           );
        if (propertySetError != 0) {
            NSLog (@"Error setting audio property MixWithOthers");
        }
    } else {
        // Assign the Playback category to the audio session.
        [mySession setCategory: AVAudioSessionCategoryPlayback
                         error: &audioSessionError];
    }
    if (audioSessionError != nil) {
        NSLog (@"Error setting audio session category.");
    }
}
The audio does continue playing when I minimize the app, and it continues playing until the screen auto-locks. Whenever the screen turns on (like when a notification is received), audio resumes, and then shuts off when the screen goes black.
As mentioned, this stuff used to work, and it seems to have changed behavior after the update to Xcode 8 / iOS 9.
I've tried searching this forum and other places for people experiencing similar issues, but haven't been able to locate anything.
Any suggestions, or a fresh pair of eyes looking at this would be appreciated!
Thanks,
Sridhar
OK, I found the problem! Everything was fine with regard to how I had set up background audio.
The key giveaway was looking at the console of the device when the screen lock had turned on:
Jan 17 11:03:59 My-iPad Talanome[1179] : kAudioUnitErr_TooManyFramesToProcess : inFramesToProcess=4096, mMaxFramesPerSlice=1156
A little searching led me to this Technical note - https://developer.apple.com/library/content/qa/qa1606/_index.html
The key is this --
// set the mixer unit to handle 4096 samples per slice since we want to keep rendering during screen lock
UInt32 maxFPS = 4096;
AudioUnitSetProperty(mMixer, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0,
                     &maxFPS, sizeof(maxFPS));
I had not set maxFramesPerSlice, so it was defaulting to 1156, which is too small for the 4096-frame slices requested while auto-lock is on. Setting maxFramesPerSlice to 4096 in my audio initialization ensured that I have enough headroom for when the screen locks.
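For reference, a minimal sketch of where this fits (mMixer and mGraph here are just illustrative names): kAudioUnitProperty_MaximumFramesPerSlice generally has to be set while the units are still uninitialized, i.e. before AUGraphInitialize/AudioUnitInitialize, or it won't take effect.

UInt32 maxFPS = 4096;
// Set the property on the non-I/O units in the chain (e.g. a mixer)...
AudioUnitSetProperty(mMixer, kAudioUnitProperty_MaximumFramesPerSlice,
                     kAudioUnitScope_Global, 0, &maxFPS, sizeof(maxFPS));
// ...and only then initialize the graph, so the units pick up the new maximum.
AUGraphInitialize(mGraph);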
Hope this helps others who may face similar issues!
-Sridhar
I have a simple requirement at this point: an iOS app that reads from an audio file and outputs to the speaker using Audio Units. The reason for not using high-level APIs is that, at some point, I need to process the samples coming out of the audio file and eventually send them across the network.
I have code that works: it reads the audio file and plays it back through the speaker. The only issue is that the render callback isn't working. The callback never gets called, and I don't receive any error while registering it. Help is much appreciated (I am a beginner with Core Audio and this is my first question on Stack Overflow, so please pardon any basic mistakes/oversights). The piece of code I use for initializing the graph is attached.
void createMyAUGraph (MyAUGraphPlayerST *player) {
    // Create a new AUGraph
    CheckError(NewAUGraph(&player->graph), "New AUGraph failed");

    // Generate description for output
    AudioComponentDescription outputcd = {0};
    outputcd.componentType = kAudioUnitType_Output;
    outputcd.componentSubType = kAudioUnitSubType_RemoteIO;
    outputcd.componentManufacturer = kAudioUnitManufacturer_Apple;
    outputcd.componentFlags = 0;
    outputcd.componentFlagsMask = 0;

    // Add new node
    AUNode outputNode;
    CheckError(AUGraphAddNode(player->graph, &outputcd, &outputNode), "Add output node failed");

    // Node for file player
    AudioComponentDescription fileplayercd = {0};
    fileplayercd.componentType = kAudioUnitType_Generator;
    fileplayercd.componentSubType = kAudioUnitSubType_AudioFilePlayer;
    fileplayercd.componentManufacturer = kAudioUnitManufacturer_Apple;

    // Add new node
    AUNode fileNode;
    CheckError(AUGraphAddNode(player->graph, &fileplayercd, &fileNode), "Add file node failed");

    // Open graph
    CheckError(AUGraphOpen(player->graph), "Graph open failed");

    // Retrieve AudioUnits
    CheckError(AUGraphNodeInfo(player->graph, outputNode, NULL, &player->outputAU), "output unit retrieve failed");
    CheckError(AUGraphNodeInfo(player->graph, fileNode, NULL, &player->fileAU), "file unit retrieve failed");

    // Connect nodes
    CheckError(AUGraphConnectNodeInput(player->graph, fileNode, 0, outputNode, 0), "failed to connect nodes");

    // Some other setup
    UInt32 flag = 1;
    CheckError(AudioUnitSetProperty(player->outputAU,
                                    kAudioOutputUnitProperty_EnableIO,
                                    kAudioUnitScope_Output,
                                    0,
                                    &flag,
                                    sizeof (flag)), "Set io property failed");

    // Register render callback
    AURenderCallbackStruct output_cb;
    output_cb.inputProc = recording_cb;
    output_cb.inputProcRefCon = player;
    CheckError(AudioUnitSetProperty(player->outputAU, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Global, 0, &output_cb, sizeof (output_cb)), "callback register failed");

    // Initialize graph
    CheckError(AUGraphInitialize(player->graph), "graph initialization failed");
}
You told the graph to connect your RemoteIO's input to the file player node, not to your render callback. Then you initialized the graph, which overrode your render property.
If you want to pull samples from a file to process, your processing routine or render callback will have to do so, not the connection from the player output to the RemoteIO input. So don't let the graph make that connection.
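For example, a rough sketch of that approach (the callback body here is a placeholder; the file player unit still needs its file region scheduled with ScheduleAudioFileRegion and friends, and the stream formats have to be compatible):

static OSStatus renderCallback(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData)
{
    MyAUGraphPlayerST *player = (MyAUGraphPlayerST *)inRefCon;
    // Pull the next block of samples out of the file player unit...
    OSStatus err = AudioUnitRender(player->fileAU, ioActionFlags, inTimeStamp,
                                   0, inNumberFrames, ioData);
    if (err) return err;
    // ...and process/copy ioData here (e.g. hand it to the network code) before RemoteIO plays it.
    return noErr;
}

// In createMyAUGraph, in place of the AUGraphConnectNodeInput call:
AURenderCallbackStruct cb = { renderCallback, player };
CheckError(AUGraphSetNodeInputCallback(player->graph, outputNode, 0, &cb),
           "set node input callback failed");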
Updated answer:
On any recent iOS version, you first need to use the Audio Session API to request microphone permission before starting the Audio Units; otherwise you will only get silence from the microphone.
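For example, with AVAudioSession (iOS 7 and later):

// Ask for microphone permission before enabling/starting the input side.
[[AVAudioSession sharedInstance] requestRecordPermission:^(BOOL granted) {
    if (granted) {
        // safe to enable the RemoteIO input bus and start the unit
    } else {
        // the input callbacks will only ever deliver silence
    }
}];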
To use Audio Units to play thru as well as record, try putting callbacks on both the output and input of RemoteIO, then pass the sample data between the two callbacks using a circular buffer. Inside one or both of the callbacks you can record or modify the samples as needed. Make sure to heed real-time restrictions inside the audio context (no locks, no memory management, etc.).
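A very stripped-down sketch of that shape (assumptions: mono 16-bit samples, inNumberFrames never exceeding 4096, the input callback installed with kAudioOutputUnitProperty_SetInputCallback for bus 1 and the render callback with kAudioUnitProperty_SetRenderCallback for bus 0; the single-reader/single-writer ring buffer below is deliberately simplistic, not production code):

#define kRingFrames 16384
static SInt16 gRing[kRingFrames];
static volatile UInt32 gWriteIndex = 0, gReadIndex = 0;

// Input side: pull the captured samples from bus 1 and push them into the ring.
static OSStatus inputCallback(void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags,
                              const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber,
                              UInt32 inNumberFrames, AudioBufferList *ioData)
{
    AudioUnit ioUnit = (AudioUnit)inRefCon;  // passed in via inputProcRefCon
    SInt16 samples[4096];
    AudioBufferList abl = { 1, { { 1, inNumberFrames * (UInt32)sizeof(SInt16), samples } } };
    OSStatus err = AudioUnitRender(ioUnit, ioActionFlags, inTimeStamp, 1, inNumberFrames, &abl);
    if (err) return err;
    for (UInt32 i = 0; i < inNumberFrames; i++)
        gRing[(gWriteIndex + i) % kRingFrames] = samples[i];  // record/modify here if needed
    gWriteIndex += inNumberFrames;
    return noErr;
}

// Output side: drain the ring into the buffers RemoteIO is about to play.
static OSStatus outputCallback(void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber,
                               UInt32 inNumberFrames, AudioBufferList *ioData)
{
    SInt16 *out = (SInt16 *)ioData->mBuffers[0].mData;
    for (UInt32 i = 0; i < inNumberFrames; i++)
        out[i] = (gReadIndex + i < gWriteIndex) ? gRing[(gReadIndex + i) % kRingFrames] : 0; // underrun -> silence
    gReadIndex += inNumberFrames;
    return noErr;
}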
MIDI noob in training here...
I have been using MusicPlayer/MusicSequence/MusicTrack to play MIDI notes on devices running iOS. The notes are playing fine. I am struggling to change the instrument being played. As far as I can figure this is how to do it:
-(void) setInstrument:(MIDIInstruments) program channel:(int) channel MusicTrack:(MusicTrack*) track time:(float) time {
    if(channel < 0 || channel > 15 || program >= MIDI_INSTRUMENT_COUNT || time < 0) {
        return;
    }
    MIDIChannelMessage programChange = { ((UInt8)0xC) << 4 | ((UInt8)channel), ((UInt8)program), 0, 0};
    OSStatus result = MusicTrackNewMIDIChannelEvent(*track, time, &programChange);
    if(result != noErr) {
        [NSException raise:@"Set Instrument" format:@"Failed to set instrument error: %@", [NSError errorWithDomain:NSOSStatusErrorDomain code:result userInfo:nil]];
    }
}
In this case channel is 0 or 1; I tried several instruments throughout the range of valid instrument enumerations; the time is 0.0; and the MusicTrack is valid and has ~30 seconds of note events. The call to set the channel event passes back noErr. I am stumped... Anyone?
I had read in other posts that I would be able to generate MIDI using MusicPlayer and friends, and it provides for program changes, so I had figured it was supported. After exhausting all theories, I turned to AUGraph. I added a *.sf2 file that I found online, and instantiated the AUGraph, two AudioUnits, a MIDIEndpointRef, and a MIDIClientRef according to this tutorial.
It was in the endpoint callback that I had to turn notes on and off using MusicDeviceMIDIEvent on the samplerUnit, which seemed to allow for the program change. Whereas before, I was just loading note events into a MusicTrack and playing/stopping the MusicPlayer.
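For anyone following the same route, the relevant calls look roughly like this (samplerUnit comes out of AUGraphNodeInfo for the sampler node; the channel/program/note values are just examples, and whether the program change actually switches presets depends on how the sound bank was loaded):

UInt8 channel = 0;
UInt8 program = 24;   // example program number from the loaded .sf2 bank

// Program change (status byte 0xC0 | channel), sent straight to the sampler unit
OSStatus result = MusicDeviceMIDIEvent(samplerUnit, 0xC0 | channel, program, 0, 0);
if (result != noErr) NSLog(@"program change failed: %d", (int)result);

// Note on / note off are sent the same way from the MIDI endpoint callback
MusicDeviceMIDIEvent(samplerUnit, 0x90 | channel, 60, 100, 0);   // note on, middle C, velocity 100
MusicDeviceMIDIEvent(samplerUnit, 0x80 | channel, 60, 0, 0);     // note off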
I am implementing a C listener for Audio Session interruptions. When it is called for an interruption, I deactivate my audio session; then, when my app resumes, I activate the audio session again. I have set a number of properties and a category on my audio session. Do I have to reset everything after re-activation?
Thanks in advance.
Some code for reference:
Initialization, setting category:
OSStatus error = AudioSessionInitialize(NULL, NULL, interuptListenerCallBack, (__bridge void *)(self));
UInt32 category = kAudioSessionCategory_PlayAndRecord;
error = AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, sizeof(category), &category);
if (error) printf("couldn't set audio category!");

// Use speaker as default output
UInt32 doChangeDefaultOutput = 1;
AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryDefaultToSpeaker, sizeof(doChangeDefaultOutput), &doChangeDefaultOutput);

// Allow Bluetooth input
UInt32 allowBluetoothInput = 1;
AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryEnableBluetoothInput, sizeof(allowBluetoothInput), &allowBluetoothInput);
The interuptListenerCallBack is where I deactivate and reactivate the Audio Session because of the interruption, using
OSStatus error = AudioSessionSetActive(false);
if (error) printf("couldn't deactivate audio session!");
Or
OSStatus error = AudioSessionSetActive(true);
if (error) printf("AudioSessionSetActive (true) failed");
If you are correctly using the Audio Session interruption listener, then no, you should not have to reset the properties. You just need to make sure that you actually handle kAudioSessionBeginInterruption and kAudioSessionEndInterruption. I am not sure what your listener looks like, but if you are doing something like this:
if (inInterruptionState == kAudioSessionBeginInterruption) {
    AudioSessionSetActive(NO);
}
if (inInterruptionState == kAudioSessionEndInterruption) {
    AudioSessionSetActive(YES);
}
and are following the rules of the Audio Session, then theoretically you should not have to reset your properties.
I don't know what you are using the Audio Session for, but you could also pause and resume playback by using kAudioSessionInterruptionType_ShouldResume and kAudioSessionInterruptionType_ShouldNotResume.
You can use these as stated in the Docs:
kAudioSessionInterruptionType_ShouldResume
Indicates that the interruption that has just ended was one for which it is appropriate to immediately resume playback; for example, an incoming phone call was rejected by the user.
Available in iOS 4.0 and later. Declared in AudioSession.h.
kAudioSessionInterruptionType_ShouldNotResume
Indicates that the interruption that has just ended was one for which it is not appropriate to resume playback; for example, your app had been interrupted by iPod playback.
Available in iOS 4.0 and later. Declared in AudioSession.h.
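For example, in the end-interruption branch of the listener you can ask the session which of the two applies (if I remember correctly, kAudioSessionProperty_InterruptionType is the read-only property that carries these values; it is only meaningful right after the interruption ends):

UInt32 interruptionType = 0;
UInt32 size = sizeof(interruptionType);
OSStatus err = AudioSessionGetProperty(kAudioSessionProperty_InterruptionType,
                                       &size, &interruptionType);
if (err == noErr && interruptionType == kAudioSessionInterruptionType_ShouldResume) {
    AudioSessionSetActive(true);
    // resume playback here
}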
You should read the docs because there is a lot of info in there about pausing, resuming, and handling interruptions for the AudioSession.
NOTE:
AudioSession has been deprecated since iOS 7. Use AVAudioSession methods instead, or handle pause and resume by checking the AVAudioSessionInterruptionType and AVAudioSessionInterruptionOptions constants (available since iOS 6).
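With AVAudioSession, the equivalent is observing AVAudioSessionInterruptionNotification, along these lines:

// Replaces the C interruption listener (assumes ARC and iOS 6+).
[[NSNotificationCenter defaultCenter] addObserverForName:AVAudioSessionInterruptionNotification
                                                  object:[AVAudioSession sharedInstance]
                                                   queue:[NSOperationQueue mainQueue]
                                              usingBlock:^(NSNotification *note) {
    AVAudioSessionInterruptionType type =
        [note.userInfo[AVAudioSessionInterruptionTypeKey] unsignedIntegerValue];
    if (type == AVAudioSessionInterruptionTypeBegan) {
        // pause playback
    } else {
        AVAudioSessionInterruptionOptions options =
            [note.userInfo[AVAudioSessionInterruptionOptionKey] unsignedIntegerValue];
        if (options & AVAudioSessionInterruptionOptionShouldResume) {
            [[AVAudioSession sharedInstance] setActive:YES error:nil];
            // resume playback
        }
    }
}];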
For some reason, it seems that stopping at a breakpoint during debugging will kill my audio queue playback.
1. AudioQueue will be playing audio output.
2. Trigger a breakpoint to pause my iPhone app.
3. On the subsequent resume, audio no longer gets played. (However, AudioQueue callback functions are still getting called. No AudioSession or AudioQueue errors are found.)
Since the debugger pauses the application (rather than an incoming phone call, for example), it's not a typical iPhone interruption, so AudioSession interruption callbacks do not get triggered as in this solution.
I am using three AudioQueue buffers at 4096 samples at 22kHz and filling them in a circular manner.
Problem occurs for both multi-threaded and single-threaded mode.
Is there some known problem that you can't pause and resume AudioSessions or AudioQueues during a debugging session?
Is it running out of "queued buffers" and destroying/killing the AudioQueue object (but then my AQ callback shouldn't trigger)?
Anyone have insight into inner workings of iPhone AudioQueues?
After playing around with it for the last several days, before posting to StackOverflow, I figured out the answer just today. Go figure!
Just recreate the AudioQueue again by calling my "preparation functions"
SetupNewQueue(mDataFormat.mSampleRate, mDataFormat.mChannelsPerFrame);
StartQueue(true);
So detect when your AudioQueue may have "died". In my case, I write data into an input buffer to be "pulled" by the AudioQueue callback. If that pull doesn't occur within a certain time, or after X number of bytes of the input buffer have been filled, I then recreate the AudioQueue (a rough sketch of this watchdog idea follows after the code below).
This seems to solve the issue where audio halts/fails when you hit a debugging breakpoint.
The simplified versions of these functions are the following:
void AQPlayer::SetupNewQueue(double inSampleRate, UInt32 inChannelsPerFrame)
{
    // Prep AudioStreamBasicDescription
    mDataFormat.mSampleRate = inSampleRate;
    mDataFormat.SetCanonical(inChannelsPerFrame, YES);

    XThrowIfError(AudioQueueNewOutput(&mDataFormat, AQPlayer::AQBufferCallback, this,
                                      NULL, kCFRunLoopCommonModes, 0, &mQueue), "AudioQueueNew failed");

    // Adjust buffer size to represent about a half second of audio based on this format
    CalculateBytesForTime(mDataFormat, kBufferDurationSeconds, &mBufferByteSize, &mNumPacketsToRead);
    ctl->cmsg(CMSG_INFO, VERB_NOISY, "AQPlayer Buffer Byte Size: %d, Num Packets to Read: %d\n", (int)mBufferByteSize, (int)mNumPacketsToRead);

    mBufferWaitTime = mNumPacketsToRead / mDataFormat.mSampleRate * 0.9;

    XThrowIfError(AudioQueueAddPropertyListener(mQueue, kAudioQueueProperty_IsRunning, isRunningProc, this), "adding property listener");

    // Allocate AQ buffers (assume we are using CBR (constant bitrate))
    for (int i = 0; i < kNumberBuffers; ++i) {
        XThrowIfError(AudioQueueAllocateBuffer(mQueue, mBufferByteSize, &mBuffers[i]), "AudioQueueAllocateBuffer failed");
    }
    ...
}
OSStatus AQPlayer::StartQueue(BOOL inResume)
{
    // If we are not resuming, we also should restart the file read index
    if (!inResume)
        mCurrentPacket = 0;

    // Prime the queue with some data before starting
    for (int i = 0; i < kNumberBuffers; ++i) {
        mBuffers[i]->mAudioDataByteSize = mBuffers[i]->mAudioDataBytesCapacity;
        memset(mBuffers[i]->mAudioData, 0, mBuffers[i]->mAudioDataByteSize);
        XThrowIfError(AudioQueueEnqueueBuffer(mQueue, mBuffers[i], 0, NULL), "AudioQueueEnqueueBuffer failed");
    }

    OSStatus status;
    status = AudioSessionSetActive(true);
    XThrowIfError(status, "\n\n*** AudioSession failed to become active *** \n\n");

    status = AudioQueueStart(mQueue, NULL);
    XThrowIfError(status, "\n\n*** AudioQueue failed to start *** \n\n");

    return status;
}
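As a rough illustration of the "detect when the AudioQueue may have died" part described above, here is a small watchdog sketch. All names below (WatchdogPlayer, lastPullTime, rebuild) are hypothetical and not part of the real AQPlayer class; the idea is just that the output callback stamps a time whenever it pulls data, and a periodic check rebuilds the queue if that stamp goes stale.

#include <AudioToolbox/AudioToolbox.h>

typedef struct {
    AudioQueueRef  queue;
    CFAbsoluteTime lastPullTime;    // updated by the AudioQueue output callback on every pull
    double         bufferWaitTime;  // expected seconds between callback pulls
} WatchdogPlayer;

static void CheckQueueStillAlive(WatchdogPlayer *p, void (*rebuild)(WatchdogPlayer *))
{
    // No pull for several buffer periods: assume the queue died (e.g. after sitting
    // at a breakpoint) and rebuild it, i.e. call SetupNewQueue(...) and StartQueue(true).
    if (CFAbsoluteTimeGetCurrent() - p->lastPullTime > p->bufferWaitTime * 3.0) {
        AudioQueueDispose(p->queue, true);
        rebuild(p);
    }
}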