In my app I need to switch between these two different Audio Units (VPIO and RemoteIO).
Whenever I switch from VPIO to RemoteIO, there is a drop in my recording volume. Quite a significant drop.
There is no change in the playback volume, though. Has anyone experienced this?
Here's the code where I do the switch, which is triggered by a routing change. (I'm not too sure whether I did the switch correctly, so I'm asking about that here as well.)
How do I solve the problem of the recording volume drop?
Thanks, appreciate any help I can get.
Pier.
- (void)switchInputBoxTo:(OSType)inputBoxSubType
{
    OSStatus result;
    if (!remoteIONode) return; // NULL check

    // Get info about current output node
    AudioComponentDescription outputACD;
    AudioUnit currentOutputUnit;
    AUGraphNodeInfo(theGraph, remoteIONode, &outputACD, &currentOutputUnit);

    if (outputACD.componentSubType != inputBoxSubType)
    {
        AUGraphStop(theGraph);
        AUGraphUninitialize(theGraph);

        result = AUGraphDisconnectNodeInput(theGraph, remoteIONode, 0);
        NSCAssert(result == noErr, @"Unable to disconnect the nodes in the audio processing graph. Error code: %d '%.4s'", (int)result, (const char *)&result);

        AUGraphRemoveNode(theGraph, remoteIONode);

        // Re-init as other type
        outputACD.componentSubType = inputBoxSubType;

        // Add the RemoteIO unit node to the graph
        result = AUGraphAddNode(theGraph, &outputACD, &remoteIONode);
        NSCAssert(result == noErr, @"Unable to add the replacement IO unit to the audio processing graph. Error code: %d '%.4s'", (int)result, (const char *)&result);

        result = AUGraphConnectNodeInput(theGraph, mixerNode, 0, remoteIONode, 0);

        // Obtain a reference to the I/O unit from its node
        result = AUGraphNodeInfo(theGraph, remoteIONode, 0, &_remoteIOUnit);
        NSCAssert(result == noErr, @"Unable to obtain a reference to the I/O unit. Error code: %d '%.4s'", (int)result, (const char *)&result);

        //result = AudioUnitUninitialize(_remoteIOUnit);

        [self setupRemoteIOTest]; // reinit all that remoteIO/voiceProcessing stuff
        [self configureAndStartAudioProcessingGraph:theGraph];
    }
}
I used my Apple developer support for this.
Here's what the support said :
The presence of the Voice I/O will result in the input/output being processed very differently. We don't expect these units to have the same gain levels at all, but the levels shouldn't be drastically off as it seems you indicate.
That said, Core Audio engineering indicated that your results may be related to the fact that when the voice block is created it also affects the RIO instance. Upon further discussion, Core Audio engineering felt that since you say the level difference is very drastic, it would be good if you could file a bug with some recordings that highlight the level difference you are hearing between voice I/O and remote I/O, along with your test code, so we can attempt to reproduce it in house and see if this is indeed a bug. It would be a good idea to include the results of the single I/O unit tests outlined above as well as further comparative results.
There is no API that controls this gain level, everything is internally setup by the OS depending on Audio Session Category (for example VPIO is expected to be used with PlayAndRecord always) and which IO unit has been setup. Generally it is not expected that both will be instantiated simultaneously.
Conclusion? I think it's a bug. :/
There is some talk about low-volume issues if you don't dispose of your audio unit correctly. Basically, the first audio component stays in memory, and any subsequent playback will be ducked under your app's or other apps' audio, causing the volume drop.
Solution:
Audio units are AudioComponentInstance's and must be freed using AudioComponentInstanceDispose().
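For illustration, a minimal teardown sketch, assuming a bare I/O unit held in an ivar named _ioUnit that was created with AudioComponentInstanceNew (if the unit instead lives inside an AUGraph, removing its node disposes it for you):

AudioOutputUnitStop(_ioUnit);            // stop I/O first
AudioUnitUninitialize(_ioUnit);          // release the unit's internal resources
AudioComponentInstanceDispose(_ioUnit);  // free the component instance itself
_ioUnit = NULL;                          // avoid dangling references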
I've had success when I change the audio session category when going from Voice Processing I/O (PlayAndRecord) to Remote I/O (SoloAmbient). Make sure you pause the audio session before changing this. You'll also have to uninitialize your audio graph.
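A rough sketch of that sequence, borrowing the graph name theGraph from the question (the exact ordering may need tweaking for your setup):

AUGraphStop(theGraph);
AUGraphUninitialize(theGraph);

NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setActive:NO error:&error];                                   // pause the session first
[session setCategory:AVAudioSessionCategorySoloAmbient error:&error];  // SoloAmbient for plain RemoteIO, PlayAndRecord for VPIO
[session setActive:YES error:&error];

AUGraphInitialize(theGraph);
AUGraphStart(theGraph);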
From a talk I had with an Apple AVAudioSession engineer:
VPIO - adds processing to the audio samples (this is also what provides the echo cancellation), and that processing creates the drop in the audio level.
RemoteIO - won't do any audio processing, so the volume level will remain high.
If you are looking for echo cancellation while using the RemoteIO option, you should create your own audio processing in the render callback.
Core Audio is always a bit of a mystery due to the lack of documentation. Recently I hit a wall again:
In my program, I switch between RemoteIO and VoiceProcessingIO (VPIO) back and forth, and also change the AVAudioSession in between. I tried to turn off AGC on VPIO with the following code:
if (ASBD.componentSubType == kAudioUnitSubType_VoiceProcessingIO) {
    UInt32 turnOff = 0;
    status = AudioUnitSetProperty(_myAudioUnit,
                                  kAUVoiceIOProperty_VoiceProcessingEnableAGC,
                                  kAudioUnitScope_Global,
                                  0,
                                  &turnOff,
                                  sizeof(turnOff));
    NSAssert1(status == noErr, @"Error setting AGC status: %d", (int)status);
}
Well I'm still not sure if this code disables AGC on the microphone side or the speaker side on VPIO, but anyways, let's continue. Here's the sequence to reproduce the problem:
Create a RemoteIO output audio unit with PlayAndRecord audio session category, work with it and destroy the unit;
Switch audio session to Playback only category;
Switch audio session to PlayAndRecord again and create another VPIO, work with it and destroy it;
Switch audio session to Playback and then PlayAndRecord category;
After these steps, whatever RemoteIO/VPIO unit is created later will carry this amplified microphone signal (as if a huge AGC were always applied), and there is no way to go back until I manually kill the app and start over.
Maybe it's my particular sequence that triggered this. I wonder if anyone has seen this before and knows a correct workaround?
Try setting the mode AVAudioSessionModeMeasurement (AVAudioSession.Mode.measurement in Swift) when configuring your app's Audio Session.
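A minimal sketch of what that looks like, assuming you configure the shared session yourself before starting your units:

NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
// Measurement mode asks the system to apply as little input processing
// (including automatic gain) as possible.
[session setMode:AVAudioSessionModeMeasurement error:&error];
[session setActive:YES error:&error];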
I have an app where audio recording is the main and most important part. However, the user can switch to a table view controller where all recordings are displayed and no recording is performed.
The question is which approach is better: "start & stop the audio system, or just start it". It may seem obvious that the first one is more correct, as in "allocate when you need it, deallocate when you're done with it". I will lay out my thoughts on this question, and I hope to get reasoned agreement or disagreement from people with more experience.
When I first constructed AudioController.m I implemented methods to open/close the audio session and to start/stop the audio unit. I wanted to stop the audio system when recording is not active. I used the following code:
- (BOOL)startAudioSystem {
    // open audio session
    AVAudioSession *audioSession = [AVAudioSession sharedInstance];
    NSError *err = nil;
    if (![audioSession setActive:YES error:&err]) {
        NSLog(@"Couldn't activate audio session: %@", err);
    }
    // start audio unit
    OSStatus status;
    status = AudioOutputUnitStart([self audioUnit]);
    BOOL noErrors = err == nil && status == noErr;
    return noErrors;
}
and
- (BOOL)stopAudioSystem {
    // stop audio unit
    BOOL result;
    result = AudioOutputUnitStop([self audioUnit]) == noErr;
    HANDLE_RESULT(result);
    // close audio session
    NSError *err = nil;
    HANDLE_RESULT([[AVAudioSession sharedInstance] setActive:NO
                                                 withOptions:AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation
                                                       error:&err]);
    HANDLE_ERROR(err);
    BOOL noErrors = err == nil && result;
    return noErrors;
}
I found this approach problematic because of the following reasons:
The audio system starts with a delay. That means recording_callback() is not called for some time. I suspect AudioOutputUnitStart is responsible for that; when I moved that call to initialization, the delay was gone.
If the user switches between the recording view and the table view very quickly (so the audio system starts and stops very quickly too), it causes the media services to die (I know that observing AVAudioSessionMediaServicesWereResetNotification could help here, but that is not the point).
To resolve these issues I modified AudioController.m with another approach I discovered: start the audio system when the application becomes active and do not stop it before the app is terminated. In this case there are also several issues:
CPU usage
If the audio category is set to record-only, then no other audio can be played while the user is browsing the table view controller.
The first one, surprisingly, is not a big deal if you skip any kind of processing in recording_callback(), like this:
static OSStatus recordingCallback(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData) {
    AudioController *input = (__bridge AudioController *)inRefCon;
    if (!input->shouldPerformProcessing)
        return noErr;
    // processing
    // ...
    //
    return noErr;
}
By doing this, CPU usage is essentially 0% on a real device when no recording is needed and no other actions are performed.
And the second issue can be solved by switching the audio category to PlayAndRecord and enabling mixing (see the sketch below), or by just ignoring the problem. For example, in my case the app requires the mini-jack to be used by an external device, so no headphones can be used in parallel.
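A minimal sketch of that category switch, assuming nothing beyond the standard AVAudioSession API:

NSError *error = nil;
BOOL ok = [[AVAudioSession sharedInstance]
              setCategory:AVAudioSessionCategoryPlayAndRecord
              withOptions:AVAudioSessionCategoryOptionMixWithOthers
                    error:&error];
if (!ok) {
    NSLog(@"Failed to set category: %@", error);
}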
Despite all this, the first approach appeals to me more, since I like to close/clean up every stream/resource when it is no longer needed. And I want to be sure that there is indeed no better option than just keeping the audio system started. Please reassure me that I'm not the only one who came to this solution and that it is the correct one.
The key to solving this problem is to note that the audio system actually runs in another (real-time) thread. You can't really stop and deallocate something running in another thread exactly when you (or the app's main UI thread) "don't need it"; you have to delay in order to allow the other thread to realize it needs to finish up and clean up after itself. This can potentially take up to many hundreds of milliseconds for audio.
Given that, strategy 2 (just start) is safer and more realistic.
Or perhaps set a delay of many seconds of non-use before attempting to stop audio, and possibly another short delay after that before attempting any restart.
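As an illustration of that deferred-stop idea (not the answerer's code), a sketch assuming a hypothetical pendingStopTimer property on the AudioController from the question:

- (void)scheduleAudioShutdown {
    // Restart the countdown every time this is called.
    [self.pendingStopTimer invalidate];
    self.pendingStopTimer = [NSTimer scheduledTimerWithTimeInterval:10.0
                                                             target:self
                                                           selector:@selector(pendingStopFired:)
                                                           userInfo:nil
                                                            repeats:NO];
}

- (void)pendingStopFired:(NSTimer *)timer {
    [self stopAudioSystem];   // the method shown earlier in the question
    self.pendingStopTimer = nil;
}

- (void)cancelPendingAudioShutdown {
    // Call this when the user returns to the recording screen.
    [self.pendingStopTimer invalidate];
    self.pendingStopTimer = nil;
}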
What I'm trying to do:
Record up to a specified duration of audio/video, where the resulting output file will have pre-defined background music from an external audio file added, without further encoding/exporting after recording.
As if you were recording video using the iPhone's Camera app, and all the recorded videos in 'Camera Roll' had background songs. No exporting or loading after recording ends, and not in a separate audio track.
How I'm trying to achieve this:
By using AVCaptureSession, in the delegate-method where the (CMSampleBufferRef)sample buffers are passed through, I'm pushing them to an AVAssetWriter to write to file. As I don't want multiple audio tracks in my output file, I can't pass the background-music through a separate AVAssetWriterInput, which means I have to add the background-music to each sample buffer from the recording while it's recording to avoid having to merge/export after recording.
The background music is a specific, pre-defined audio file (format/codec: m4a aac) and will need no time-editing, just adding it beneath the entire recording, from start to end. The recording will never be longer than the background-music file.
Before starting to write to the file, I've also prepared an AVAssetReader that reads the specified audio file.
Some pseudo-code(threading excluded):
- (void)startRecording
{
    /*
     Initialize writer and reader here: [...]
     */

    backgroundAudioTrackOutput = [AVAssetReaderTrackOutput
                                  assetReaderTrackOutputWithTrack:backgroundAudioTrack
                                  outputSettings:nil];

    if ([backgroundAudioReader canAddOutput:backgroundAudioTrackOutput])
        [backgroundAudioReader addOutput:backgroundAudioTrackOutput];
    else
        NSLog(@"This doesn't happen");

    [backgroundAudioReader startReading];

    /* Some more code */

    recording = YES;
}
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    if (!recording)
        return;

    if (connection == videoConnection)
        [self writeVideoSampleBuffer:sampleBuffer];
    else if (connection == audioConnection)
        [self writeAudioSampleBuffer:sampleBuffer];
}
The AVCaptureSession is already streaming the camera video and microphone audio, and is just waiting for the BOOL recording to be set to YES. This isn't exactly how I'm doing it, but a short, roughly equivalent representation. When the delegate method receives a CMSampleBufferRef of type Audio, I call my own method writeAudioSamplebuffer:sampleBuffer. If this were to be done normally, without the background track I'm trying to add, I'd simply put something like this: [assetWriterAudioInput appendSampleBuffer:sampleBuffer]; instead of calling my method. In my case, though, I need to overlap two buffers before writing:
- (void)writeAudioSamplebuffer:(CMSampleBufferRef)recordedSampleBuffer
{
    CMSampleBufferRef backgroundSampleBuffer =
        [backgroundAudioTrackOutput copyNextSampleBuffer];

    /* DO MAGIC HERE */
    CMSampleBufferRef resultSampleBuffer =
        [self overlapBuffer:recordedSampleBuffer
       withBackgroundBuffer:backgroundSampleBuffer];
    /* END MAGIC HERE */

    [assetWriterAudioInput appendSampleBuffer:resultSampleBuffer];
}
The problem:
I have to add incremental sample buffers from a local file to the live buffers coming in. The method I have created named overlapBuffer:withBackgroundBuffer: isn't doing much right now. I know how to extract AudioBufferList, AudioBuffer and mData etc. from a CMSampleBufferRef, but I'm not sure how to actually add them together - however - I haven't been able to test different ways to do that, because the real problem happens before that. Before the Magic should happen, I am in possession of two CMSampleBufferRefs, one received from microphone, one read from file, and this is the problem:
The sample buffer received from the background-music file is different from the one I receive from the recording session. It seems like the call to [self.backgroundAudioTrackOutput copyNextSampleBuffer]; receives a large number of samples. I realize this might be obvious to some people, but I've never before worked at this level of media technology. I see now that it was wishful thinking to call copyNextSampleBuffer each time I receive a sampleBuffer from the session, but I don't know when/where to put it.
As far as I can tell, the recording session gives one audio sample in each sample buffer, while the file reader gives multiple samples in each sample buffer. Can I somehow create a counter to count each received recorded sample/buffer, and then use the first file sampleBuffer to extract each sample, until the current file sampleBuffer has no more samples 'to give', and then call [..copyNext..] and do the same to that buffer?
As I'm in full control of both the recording's and the file's codecs, formats etc., I am hoping that such a solution wouldn't ruin the 'alignment'/synchronization of the audio. Given that both have the same sample rate, could this still be a problem?
Note
I'm not even sure if this is possible, but I see no immediate reason why it shouldn't.
Also worth mentioning: when I try to use a video file instead of an audio file, and try to continually pull video sampleBuffers, they line up perfectly.
I am not familiar with AVCaptureOutput, since all my sound/music sessions were built using AudioToolbox instead of AVFoundation. However, I guess you should be able to set the size of the recording capture buffer. If not, and you still get only one sample per buffer, I would recommend storing each chunk of data obtained from the capture output in an auxiliary buffer. When the auxiliary buffer reaches the same size as the file-reading buffer, then call [self overlapBuffer:auxiliarySampleBuffer withBackgroundBuffer:backgroundSampleBuffer];
I hope this helps. If not, I can provide an example of how to do this using Core Audio. Using Core Audio I have been able to obtain 1024-sample LPCM buffers from both microphone capture and file reading, so the overlapping is immediate.
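To make the overlap step itself concrete, here is a hedged sketch of the mixing arithmetic once both sides have been reduced to mono 32-bit float LPCM at the same sample rate (getting the floats out of a CMSampleBufferRef, e.g. via CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer, is a separate step; the function name and parameters here are made up):

static void MixFloatBuffers(const float *mic, const float *music,
                            float *out, UInt32 frameCount, float musicGain)
{
    for (UInt32 i = 0; i < frameCount; i++) {
        float sample = mic[i] + musicGain * music[i];
        // Clamp to [-1, 1] so the sum doesn't clip in the written file.
        if (sample > 1.0f)  sample = 1.0f;
        if (sample < -1.0f) sample = -1.0f;
        out[i] = sample;
    }
}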
I have a multichannel mixer and a Remote I/O in a graph, set up to play uncompressed caf files. So far so good.
Next, I am experimenting with doing weird stuff on the render callback (say, generate white noise, or play a sine wave, etc. - procedurally generated sounds).
Instead of adding conditionals to the existing render callback (which is assigned on setup to all the buses of the mixer), I would like to be able to switch the render callback attached to each bus, at runtime.
So far I'm trying this code, but it doesn't work: My alternative render callback does not get called.
- (void)playNoise
{
    if (_noiseBus != -1) {
        // Already playing
        return;
    }
    _noiseBus = [self firstFreeMixerBus];

    AUGraphDisconnectNodeInput(processingGraph,
                               mixerNode,
                               _noiseBus);

    inputCallbackStructArray[_noiseBus].inputProc = &noiseRenderCallback;
    inputCallbackStructArray[_noiseBus].inputProcRefCon = NULL;

    OSStatus result = AUGraphSetNodeInputCallback(processingGraph,
                                                  mixerNode,
                                                  _noiseBus,
                                                  &inputCallbackStructArray[_noiseBus]);
    if (result != noErr) {
        NSLog(@"AUGraphSetNodeInputCallback failed");
        return;
    }

    result = AudioUnitSetParameter(_mixerUnit,
                                   kMultiChannelMixerParam_Enable,
                                   kAudioUnitScope_Input,
                                   _noiseBus,
                                   1,
                                   0);
    if (result != noErr) {
        NSLog(@"Failed to enable bus");
        return;
    }

    result = AudioUnitSetParameter(_mixerUnit,
                                   kMultiChannelMixerParam_Volume,
                                   kAudioUnitScope_Input,
                                   _noiseBus,
                                   0.5,
                                   0);
    if (result != noErr) {
        NSLog(@"AudioUnitSetParameter (set mixer unit input volume) failed");
        return;
    }

    Boolean updated;
    AUGraphUpdate(processingGraph, &updated);
    // updated ends up being zero ('\0')
}
In the code above, none of the error conditions are met (no function call fails), but the boolean 'updated' remains false until the end.
Am I missing a step, or is it not possible to switch render callbacks after setup? Do I need to set aside dedicated buses to these alternative callbacks? I would like to be able to set custom callbacks from the client code (the side calling my sound engine)...
EDIT Actually it is working, but only after the second time: I must call -playNoise, then -stopNoise, and from then on it will play normally. I couldn't tell, because I was giving up at the first try...
BTW, The updated flag is still 0.
I added lots of audio unit calls out of desperation, but perhaps some are not necessary. I'll see which ones I can trim, then keep looking for the reason it needs two calls to work...
EDIT 2: After poking around, adding/removing calls and fixing bugs, I got to the point where the noise render callback works from the first time. But after playing the noise at least once, if I attempt to reuse that bus for playing PCM (a caf file), it still uses the noise render callback (despite having disconnected it). I'm going with the solution suggested by @hotpaw2 in the comments, using a 'stub' callback and further function pointers...
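For anyone curious what that stub approach can look like, a hedged sketch (the BusContext struct and its fields are made up; real code also has to make sure swapping the pointer is safe with respect to the render thread):

typedef struct {
    AURenderCallback currentProc;   // swapped from the main thread when the bus changes roles
    void            *currentRefCon;
} BusContext;

static OSStatus stubRenderCallback(void *inRefCon,
                                   AudioUnitRenderActionFlags *ioActionFlags,
                                   const AudioTimeStamp *inTimeStamp,
                                   UInt32 inBusNumber,
                                   UInt32 inNumberFrames,
                                   AudioBufferList *ioData)
{
    BusContext *ctx = (BusContext *)inRefCon;
    AURenderCallback proc = ctx->currentProc;
    if (proc == NULL) {
        // Nothing selected for this bus: output silence.
        for (UInt32 i = 0; i < ioData->mNumberBuffers; i++) {
            memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);
        }
        *ioActionFlags |= kAudioUnitRenderAction_OutputIsSilence;
        return noErr;
    }
    return proc(ctx->currentRefCon, ioActionFlags, inTimeStamp,
                inBusNumber, inNumberFrames, ioData);
}

The stub stays permanently attached via AUGraphSetNodeInputCallback, so the graph never has to be updated when switching sounds; you only swap ctx->currentProc.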
Recently, I was looking at aurioTouch.
But I can't understand this line:
OSStatus err = AudioUnitRender(THIS->rioUnit, ioActionFlags, inTimeStamp, 1, inNumberFrames, ioData);
According to Apple's documentation, it "Initiates a rendering cycle for an audio unit."
But I still find that ambiguous. What does it actually do?
Core Audio works on a "pull" model, where the output unit starts the process off by asking for audio samples from the unit connected to its input bus. Likewise, the unit connected to the output unit asks for samples from whatever is connected to its own input bus. Each of those "asks" is a rendering cycle.
AudioUnitRender() typically passes in a buffer of samples that your audio unit can optionally process in some way. That buffer is the last argument in the function, ioData. inNumberFrames is the number of frames being requested into ioData. 1 is the element, or 'bus', to render from; for the Remote I/O unit, bus 1 is the input (microphone) element (this could change depending on your configuration). rioUnit is the audio unit in question that is doing the processing.
Apple's Audio Unit Hosting Guide contains a section on rendering which I've found helpful.
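As a hedged illustration of where that aurioTouch line typically lives: inside the input callback of the Remote I/O unit, AudioUnitRender() pulls the captured microphone frames into a buffer you supply (the MyAudioState struct below is made up for the example; aurioTouch keeps this state in its own class):

typedef struct {
    AudioUnit        rioUnit;     // the Remote I/O unit
    AudioBufferList *bufferList;  // pre-allocated buffer for the captured frames
} MyAudioState;

static OSStatus inputCallback(void *inRefCon,
                              AudioUnitRenderActionFlags *ioActionFlags,
                              const AudioTimeStamp *inTimeStamp,
                              UInt32 inBusNumber,
                              UInt32 inNumberFrames,
                              AudioBufferList *ioData)
{
    MyAudioState *state = (MyAudioState *)inRefCon;
    // Ask the I/O unit to fill bufferList with inNumberFrames of microphone audio.
    // Bus 1 is the input (microphone) element of the Remote I/O unit.
    OSStatus err = AudioUnitRender(state->rioUnit,
                                   ioActionFlags,
                                   inTimeStamp,
                                   1,
                                   inNumberFrames,
                                   state->bufferList);
    // state->bufferList now holds the captured samples for further processing.
    return err;
}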