I'm trying to use a file player audio unit (kAudioUnitSubType_AudioFilePlayer) to play multiple files (not at the same time, of course). That's on iOS.
So I've successfully opened the files and stored their details in an array of AudioFileIDs that I set on the audio unit using kAudioUnitProperty_ScheduledFileIDs. Now I would like to define 2 ScheduledAudioFileRegions, one per file, and use them with the file player...
But I can't seem to find out:
How to set the kAudioUnitProperty_ScheduledFileRegion property to store these 2 regions (actually, how to define the index of each region)?
How to trigger the playback of a specific region? My guess is that the kAudioTimeStampSampleTimeValid flag should enable this, but how do you specify which region you want to play?
Maybe I'm just plain wrong about the way I should use this audio unit, but documentation is hard to come by and I haven't found any example showing the playback of 2 regions on the same player!
Thanks in advance.
You need to schedule a region every time you want to play a file. In the ScheduledAudioFileRegion you must set the AudioFileID to play. Playback begins when the unit's current time (in samples) is equal to or greater than the sample time in the scheduled region.
Example:
// get current unit time
AudioTimeStamp timeStamp;
UInt32 propSize = sizeof(AudioTimeStamp);
AudioUnitGetProperty(m_playerUnit, kAudioUnitProperty_CurrentPlayTime, kAudioUnitScope_Global, 0, &timeStamp, &propSize);
// when to start playback
timeStamp.mSampleTime += 100;
// schedule region
ScheduledAudioFileRegion region;
memset(&region, 0, sizeof(ScheduledAudioFileRegion));
region.mAudioFile = ...; // your AudioFileID
region.mFramesToPlay = ...; // count of frames to play
region.mLoopCount = 1;
region.mStartFrame = 0;
region.mTimeStamp = timeStamp;
AudioUnitSetProperty(m_playerUnit, kAudioUnitProperty_ScheduledFileRegion, kAudioUnitScope_Global, 0, &region, sizeof(region));
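As far as I can tell there is no separate region index: every time you set kAudioUnitProperty_ScheduledFileRegion you add another region to the player's schedule, and the mAudioFile field determines which file that region plays from. So for your second file you simply schedule another region whenever you want it to play. A rough sketch continuing the code above (m_secondFileID and secondFileFrames are placeholder names, not from the original post); here the second region is timed to start right after the first one ends:
// schedule a region from the second file, starting right after the first region
ScheduledAudioFileRegion secondRegion;
memset(&secondRegion, 0, sizeof(secondRegion));
secondRegion.mAudioFile = m_secondFileID;        // the other AudioFileID (placeholder)
secondRegion.mStartFrame = 0;
secondRegion.mFramesToPlay = secondFileFrames;   // frames to play from the second file (placeholder)
secondRegion.mLoopCount = 1;                     // same looping as the first region above
secondRegion.mTimeStamp.mFlags = kAudioTimeStampSampleTimeValid;
secondRegion.mTimeStamp.mSampleTime = timeStamp.mSampleTime + region.mFramesToPlay;
AudioUnitSetProperty(m_playerUnit, kAudioUnitProperty_ScheduledFileRegion, kAudioUnitScope_Global, 0, &secondRegion, sizeof(secondRegion));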
Is there a way to cancel pre-processing like echo cancellation and noise suppression in audio recorder in iOS?
I'm using AVAudioRecorder with meteringEnabled=true, and I get the average decibel level using averagePowerForChannel (docs).
I am trying to measure ambient noise near the phone, and iPhone 8 seems to amplify low noises or cancel them out if I start to speak. For example, if background music has an absolute decibel level of 30 - iOS seems to amplify it. When I start to speak even quietly - the dB level drops significantly.
But since I want to measure ambient noise - I don't want this pre-processing.
I tried setInputGain (docs) but isInputGainSettable is always false - therefore, I can't take this approach.
Is there a way to cancel any amplification or pre-processing like echo cancellation and noise suppression?
You can enable and disable AEC and AGC using AudioUnitSetProperty:
https://developer.apple.com/documentation/audiotoolbox/1440371-audiounitsetproperty
Here is a code snippet for the same:
// lAUAudioUnit is assumed to be a voice-processing I/O unit
// (kAudioUnitSubType_VoiceProcessingIO); the kAUVoiceIOProperty_* properties belong to that unit.
UInt32 lFalse = 0;              // 0 = off / false
AudioUnitElement lInputBus = 1; // input element of the I/O unit
OSStatus lResult = noErr;
// 0 keeps voice processing (AEC etc.) active; pass 1 to bypass it entirely.
lResult = AudioUnitSetProperty(lAUAudioUnit,
                               kAUVoiceIOProperty_BypassVoiceProcessing,
                               kAudioUnitScope_Global,
                               lInputBus,
                               &lFalse,
                               sizeof(lFalse));
// 0 disables automatic gain control; pass 1 to enable it.
lResult = AudioUnitSetProperty(lAUAudioUnit,
                               kAUVoiceIOProperty_VoiceProcessingEnableAGC,
                               kAudioUnitScope_Global,
                               lInputBus,
                               &lFalse,
                               sizeof(lFalse));
What the app needs is access to unprocessed audio after disabling the AGC (Auto Gain Control) filters on the audio channel. To get access to raw and unprocessed audio, turn on Measurement mode in iOS.
As described in the iOS documentation here, "Measurement" mode indicates that your app is performing measurement of audio input or output.
This mode is intended for apps that need to minimize the amount of system-supplied signal processing to input and output signals. If recording on devices with more than one built-in microphone, the primary microphone is used.
The JavaScript code I used to set this (using NativeScript) before recording is:
// Disable AGC
const avSession = AVAudioSession.sharedInstance();
avSession.setCategoryModeOptionsError(AVAudioSessionCategoryRecord, AVAudioSessionModeMeasurement, null);
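For reference, the native equivalent is just the standard AVAudioSession category/mode setters. A minimal Objective-C sketch (error handling kept to a bare minimum):
#import <AVFoundation/AVFoundation.h>

// Record category plus Measurement mode minimizes system-supplied input processing.
NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setCategory:AVAudioSessionCategoryRecord error:&error];
[session setMode:AVAudioSessionModeMeasurement error:&error];
[session setActive:YES error:&error];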
I tried the above solution, but it didn't work for me.
Below is my code:
var componentDesc: AudioComponentDescription
= AudioComponentDescription(
componentType: OSType(kAudioUnitType_Output),
componentSubType: OSType(kAudioUnitSubType_RemoteIO),
componentManufacturer: OSType(kAudioUnitManufacturer_Apple),
componentFlags: UInt32(0),
componentFlagsMask: UInt32(0) )
var osErr: OSStatus = noErr
let component: AudioComponent! = AudioComponentFindNext(nil, &componentDesc)
var tempAudioUnit: AudioUnit?
osErr = AudioComponentInstanceNew(component, &tempAudioUnit)
var outUnit = tempAudioUnit!
var lFalse = UInt32(1)
let lInputBus = AudioUnitElement(1)
let outputBus = AudioUnitElement(0)
let lResult = AudioUnitSetProperty(outUnit,
kAUVoiceIOProperty_BypassVoiceProcessing,
kAudioUnitScope_Global,
lInputBus,
&lFalse,
0);
var flag = Int32(1)
let Result = AudioUnitSetProperty(outUnit,
kAUVoiceIOProperty_VoiceProcessingEnableAGC,
kAudioUnitScope_Global,
outputBus,
&flag,
0);
I've written some code to play multi-instrument general MIDI files on iOS. It works fine in iOS 7, but stopped working on iOS 8.
I've stripped it down to its essence here. Instead of creating 16 channels for my multi-channel mixer, I just create one sampler node, and map all the tracks to that channel. It still exhibits the same problem as the multi-sampler version. None of the Audio Toolbox calls return an error code (they all return 0) in iOS 7 or iOS 8. The sequence plays through the speakers in iOS 7, on both the simulator and on iPhone/iPad devices. Run the exact same code on the iOS 8 simulator, or an iPhone/iPad device, and no sound is produced.
If you comment out the call to [self initGraphFromMIDISequence], it plays on iOS 8 with the default sine-wave sound.
@implementation MyMusicPlayer {
MusicPlayer _musicPlayer;
MusicSequence _musicSequence;
AUGraph _processingGraph;
}
- (void)playMidi:(NSURL*)midiFileURL {
NewMusicSequence(&_musicSequence);
MusicSequenceFileLoad(_musicSequence, CFBridgingRetain(midiFileURL), 0, 0);
NewMusicPlayer(&_musicPlayer);
MusicPlayerSetSequence(_musicPlayer, _musicSequence);
[self initGraphFromMIDISequence];
MusicPlayerPreroll(_musicPlayer);
MusicPlayerStart(_musicPlayer);
}
// Sets up an AUGraph with one channel whose instrument is loaded from a sound bank.
// Maps all the tracks of the MIDI sequence onto that channel. Basically this is a
// way to replace the default sine-wave sound with another (single) instrument.
- (void)initGraphFromMIDISequence {
NewAUGraph(&_processingGraph);
// Add one sampler unit to the graph.
AUNode samplerNode;
AudioComponentDescription cd = {};
cd.componentManufacturer = kAudioUnitManufacturer_Apple;
cd.componentType = kAudioUnitType_MusicDevice;
cd.componentSubType = kAudioUnitSubType_Sampler;
AUGraphAddNode(_processingGraph, &cd, &samplerNode);
// Add a Mixer unit node to the graph
cd.componentType = kAudioUnitType_Mixer;
cd.componentSubType = kAudioUnitSubType_MultiChannelMixer;
AUNode mixerNode;
AUGraphAddNode(_processingGraph, &cd, &mixerNode);
// Add the Output unit node to the graph
cd.componentType = kAudioUnitType_Output;
cd.componentSubType = kAudioUnitSubType_RemoteIO; // Output to speakers.
AUNode ioNode;
AUGraphAddNode(_processingGraph, &cd, &ioNode);
AUGraphOpen(_processingGraph);
// Obtain the mixer unit instance from its corresponding node, and set the bus count to 1.
AudioUnit mixerUnit;
AUGraphNodeInfo(_processingGraph, mixerNode, NULL, &mixerUnit);
UInt32 const numChannels = 1;
AudioUnitSetProperty(mixerUnit,
kAudioUnitProperty_ElementCount,
kAudioUnitScope_Input,
0,
&numChannels,
sizeof(numChannels));
// Connect the sampler node's output 0 to mixer node output 0.
AUGraphConnectNodeInput(_processingGraph, samplerNode, 0, mixerNode, 0);
// Connect the mixer unit to the output unit.
AUGraphConnectNodeInput(_processingGraph, mixerNode, 0, ioNode, 0);
// Obtain reference to the audio unit from its node.
AudioUnit samplerUnit;
AUGraphNodeInfo(_processingGraph, samplerNode, 0, &samplerUnit);
MusicSequenceSetAUGraph(_musicSequence, _processingGraph);
// Set the destination for each track to our single sampler node.
UInt32 trackCount;
MusicSequenceGetTrackCount(_musicSequence, &trackCount);
MusicTrack track;
for (int i = 0; i < trackCount; i++) {
MusicSequenceGetIndTrack(_musicSequence, i, &track);
MusicTrackSetDestNode(track, samplerNode);
}
// You can use either a DLS or an SF2 file bundled with your app; both work in iOS 7.
//NSString *soundBankPath = [[NSBundle mainBundle] pathForResource:@"GeneralUserv1.44" ofType:@"sf2"];
NSString *soundBankPath = [[NSBundle mainBundle] pathForResource:@"gs_instruments" ofType:@"dls"];
NSURL *bankURL = [NSURL fileURLWithPath:soundBankPath];
AUSamplerBankPresetData bpdata;
bpdata.bankURL = (__bridge CFURLRef) bankURL;
bpdata.bankMSB = kAUSampler_DefaultMelodicBankMSB;
bpdata.bankLSB = kAUSampler_DefaultBankLSB;
bpdata.presetID = 0;
UInt8 instrumentNumber = 46; // pick any GM instrument 0-127
bpdata.presetID = instrumentNumber;
AudioUnitSetProperty(samplerUnit,
kAUSamplerProperty_LoadPresetFromBank,
kAudioUnitScope_Global,
0,
&bpdata,
sizeof(bpdata));
}
I have some code, not included here, which polls to see if the sequence is still playing, by calling MusicPlayerGetTime on the MusicPlayer instance. In iOS 7, the result of that call each time is the number of seconds that have elapsed since it started playing. In iOS 8, the call always returns 0, which presumably means the MusicPlayer does not start playing the sequence on the call to MusicPlayerStart.
The code above is highly order-dependent -- you have to make certain calls before others; e.g., opening the graph before calling getInfo on a node, and not loading instruments until you've assigned the tracks to channels. I've followed all the advice in other StackOverflow threads, and have verified that getting the order correct makes error codes disappear.
Any iOS MIDI experts know what might have changed between iOS 7 and iOS 8 to make this code stop working?
In iOS 8 Apple introduced AVAudioEngine, a slick Obj-C abstraction of the Core Audio API.
You should probably check it out. https://developer.apple.com/videos/wwdc/2014/#502
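If you go that route, a minimal Objective-C sketch of the engine-based setup looks roughly like this (it reuses the gs_instruments.dls bank from the question; this only covers loading the instrument and playing a test note, not sequencing the MIDI file):
#import <AVFoundation/AVFoundation.h>
#import <AudioToolbox/AudioToolbox.h>

AVAudioEngine *engine = [[AVAudioEngine alloc] init];
AVAudioUnitSampler *sampler = [[AVAudioUnitSampler alloc] init];

// Build the graph: sampler -> main mixer -> output.
[engine attachNode:sampler];
[engine connect:sampler to:engine.mainMixerNode format:nil];

// Load an instrument from the bundled sound bank.
NSURL *bankURL = [[NSBundle mainBundle] URLForResource:@"gs_instruments" withExtension:@"dls"];
NSError *error = nil;
[sampler loadSoundBankInstrumentAtURL:bankURL
                              program:46
                              bankMSB:kAUSampler_DefaultMelodicBankMSB
                              bankLSB:kAUSampler_DefaultBankLSB
                                error:&error];

[engine startAndReturnError:&error];
[sampler startNote:60 withVelocity:100 onChannel:0]; // quick audible test
Note that AVAudioEngine by itself does not sequence a MIDI file; you would still drive the sampler from your MusicSequence/MusicPlayer (or, from iOS 9 on, AVAudioSequencer).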
I want to implement an equalizer on an iOS device, and it seems to be achievable with kAudioUnitSubType_NBandEQ. However, after I set the gain of each band, I hear no change in the sound.
I use the sample code from BandEQDemo (https://github.com/springlo/BandEQDemo):
OSStatus TOAUGraphAddNode(OSType inComponentType, OSType inComponentSubType, AUGraph inGraph, AUNode *outNode)
{
AudioComponentDescription desc;
desc.componentType = inComponentType;
desc.componentSubType = inComponentSubType;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
return AUGraphAddNode(inGraph, &desc, outNode);
}
TOAUGraphAddNode(kAudioUnitType_Effect,
kAudioUnitSubType_NBandEQ,
graph,
&eqNode);
AUGraphNodeInfo(graph,
eqNode,
NULL,
&equalizerUnit);
// @[ @32, @64, @128, @256, @512, @1025, @2048, @4096, @8192, @16384 ]
Then I set the gain and check it again with these two functions:
- (AudioUnitParameterValue)gainForBandAtPosition:(NSUInteger)bandPosition
{
AudioUnitParameterValue gain;
AudioUnitParameterID parameterID = kAUNBandEQParam_Gain + bandPosition;
TOThrowOnError(AudioUnitGetParameter(equalizerUnit,
parameterID,
kAudioUnitScope_Global,
0,
&gain));
return gain;
}
- (void)setGain:(AudioUnitParameterValue)gain forBandAtPosition:(NSUInteger)bandPosition
{
AudioUnitParameterID parameterID = kAUNBandEQParam_Gain + bandPosition;
TOThrowOnError(AudioUnitSetParameter(equalizerUnit,
parameterID,
kAudioUnitScope_Global,
0,
gain,
0));
NSLog(#"After set gain#[%d]->%f", bandPosition, [self gainForBandAtPosition:bandPosition]);
}
When I get the gain value of each band after I set it (-96 dB to 24 dB), it does correspond to the value I set. But I cannot hear any difference in the sound.
Hi fellow audio enthusiast!
I am also creating an app with an equaliser feature and have stumbled upon this web page:
http://www.deluge.co/?q=content/coreaudio-iphone-creating-graphic-equalizer
With a little cross-referencing, I have found that the code from your Git source lacks some key setup. After line 142, additional parameter setup must be done.
For example, the setNumBands method is not used, and neither is setBands. Also, make sure to include the bypass-setting loop - it really won't work without it! Most importantly, check which pieces of code are missing from your implementation, based on the link above.
I am sorry I can't give you a more thorough explanation; I only started learning Audio Units a few days ago, but your question (as well as the BandEQ project) and the page I stumbled upon solved some of my problems, so I am trying to help you as well.
Nevertheless, there is an official guide for Audio Units - do study it, as will I! Good luck!
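To make that concrete, here is a minimal sketch of the setup that is usually missing (it assumes your equalizerUnit and the ten frequencies from the commented-out array above; it illustrates the relevant NBandEQ property and parameters, and is not code from the linked page):
// Tell the EQ how many bands you intend to use (this is a property, not a parameter).
NSArray *frequencies = @[ @32, @64, @128, @256, @512, @1025, @2048, @4096, @8192, @16384 ];
UInt32 numBands = (UInt32)frequencies.count;
AudioUnitSetProperty(equalizerUnit,
                     kAUNBandEQProperty_NumberOfBands,
                     kAudioUnitScope_Global,
                     0,
                     &numBands,
                     sizeof(numBands));

// Set each band's center frequency and, crucially, un-bypass the band.
for (UInt32 i = 0; i < numBands; i++) {
    AudioUnitSetParameter(equalizerUnit,
                          kAUNBandEQParam_Frequency + i,
                          kAudioUnitScope_Global,
                          0,
                          [frequencies[i] floatValue],
                          0);
    AudioUnitSetParameter(equalizerUnit,
                          kAUNBandEQParam_BypassBand + i,
                          kAudioUnitScope_Global,
                          0,
                          0,   // 0 = band active, 1 = band bypassed
                          0);
}
As far as I can tell, the number of bands can only be changed while the unit is uninitialized, so set it before AUGraphInitialize.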
I'm new to Audio Units.
I'm confused about how to generate a binaural tone. I created two sounds, one left-only and one right-only, and added a kAudioUnitSubType_LowPassFilter filter to each sound. While playing, I'm using a UISlider to change kLowPassParam_CutoffFrequency for each player. Here is the code:
Float32 value = slider.value; //only 160-190 hz
AEAudioUnitFilter *toneLeft = [self.sound objectForKey:@"binaural_left"];
AEAudioUnitFilter *toneRight = [self.sound objectForKey:@"binaural_right"];
if(toneLeft && toneRight){
Float32 leftFreq = value - self.rangeSlider.value; // i have two slider, for frequency and range
Float32 rightFreq = value + self.rangeSlider.value;
AudioUnitSetParameter(toneLeft.audioUnit,
kLowPassParam_CutoffFrequency,
kAudioUnitScope_Global,
0,
leftFreq,
0);
AudioUnitSetParameter(toneRight.audioUnit,
kLowPassParam_CutoffFrequency,
kAudioUnitScope_Global,
0,
rightFreq,
0);
}
But when the sound plays, I don't hear a binaural effect, only the frequency changes.
I got an idea from : Idea
I'm using theamazingaudioengine.com framework.
Thanks for your help.
Can I get a little help with this?
In a test project, I have an AUSampler -> MixerUnit -> ioUnit and have a render callback set up. It all works. I am using the MusicDeviceMIDIEvent method as defined in MusicDevice.h to play a MIDI noteOn & noteOff. So in the hack test code below, a noteOn occurs for 0.5 sec. every 2 seconds.
MusicDeviceMIDIEvent (below) takes a param: inOffsetSampleFrame in order to schedule an event at a future time. What I would like to be able to do is play a noteOn and schedule the noteOff at the same time (without the hack time check I am doing below). I just don't understand what the inOffsetSampleFrame value should be (ex: to play a .5 sec or .2 second note. (in other words, I don't understand the basics of audio timing...).
So, if someone could walk me through the arithmetic to get proper values from the incoming AudioTimeStamp, that would be great! Also perhaps correct me/clarify any of these:
AudioTimeStamp->mSampleTime - is sampleTime the time of the current sample "slice"? Is it in milliseconds?
AudioTimeStamp->mHostTime - the host is the computer the app is running on, and this is the time (in milliseconds?) since the computer started? It is a HUGE number. Doesn't it roll over and then cause problems?
inNumberFrames - this seems to be 512 on iOS 5 (set through kAudioUnitProperty_MaximumFramesPerSlice). So the slice is made up of 512 frames?
I've seen lots of admonitions not to overload the render callback function - in particular to avoid Objective-C calls. I understand the reason, but how does one then message the UI or do other processing?
I guess that's it. Thanks for bearing with me!
inOffsetSampleFrame (from the header docs):
If you are scheduling the MIDI Event from the audio unit's render thread, then you can supply a sample offset that the audio unit may apply when applying that event in its next audio unit render. This allows you to schedule to the sample, the time when a MIDI command is applied and is particularly important when starting new notes. If you are not scheduling in the audio unit's render thread, then you should set this value to 0.
// MusicDeviceMIDIEvent function def:
extern OSStatus
MusicDeviceMIDIEvent( MusicDeviceComponent inUnit,
UInt32 inStatus,
UInt32 inData1,
UInt32 inData2,
UInt32 inOffsetSampleFrame)
//my callback
OSStatus MyCallback( void * inRefCon,
AudioUnitRenderActionFlags * ioActionFlags,
const AudioTimeStamp * inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList * ioData)
{
Float64 sampleTime = inTimeStamp->mSampleTime;
UInt64 hostTime = inTimeStamp->mHostTime;
[(__bridge Audio*)inRefCon audioEvent:sampleTime andHostTime:hostTime];
return noErr; // render callbacks should return noErr on success
}
// OBJ-C method
- (void)audioEvent:(Float64) sampleTime andHostTime:(UInt64)hostTime
{
OSStatus result = noErr;
Float64 nowTime = (sampleTime/self.graphSampleRate); // sample rate: 44100.0
if (nowTime - lastTime > 2) {
UInt32 noteCommand = kMIDIMessage_NoteOn << 4 | 0;
result = MusicDeviceMIDIEvent (mySynthUnit, noteCommand, 60, 120, 0);
lastTime = sampleTime/self.graphSampleRate;
}
if (nowTime - lastTime > .5) {
UInt32 noteCommand = kMIDIMessage_NoteOff << 4 | 0;
result = MusicDeviceMIDIEvent (mySynthUnit, noteCommand, 60, 0, 0);
}
}
The answer here is that I misunderstood the purpose of inOffsetSampleFrame despite it being aptly named. I thought I could use it to schedule a noteOff event at some arbitrary time in the future so I wouldn't have to manage noteOffs, but its scope is simply the current render slice (the buffer of inNumberFrames samples being rendered). Oh well.
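For anyone landing here: the arithmetic is that a duration in frames is seconds times the sample rate (0.5 s at 44100 Hz is 22050 frames), and mSampleTime counts samples, not milliseconds. A small sketch of how the offset can be used correctly (my own illustration, not from the post above): keep the absolute sample time at which the noteOff is due, and in each render callback check whether it falls inside the slice about to be rendered; if it does, the offset is just the difference.
// Placeholders: mySynthUnit is assumed to be set up elsewhere (as in the code above);
// gNoteOffSampleTime holds the absolute mSampleTime at which the pending noteOff
// should fire (e.g. noteOnSampleTime + 0.5 * sampleRate), or -1 if none is pending.
static Float64 gNoteOffSampleTime = -1;

OSStatus MyRenderCallbackSketch(void *inRefCon,
                                AudioUnitRenderActionFlags *ioActionFlags,
                                const AudioTimeStamp *inTimeStamp,
                                UInt32 inBusNumber,
                                UInt32 inNumberFrames,
                                AudioBufferList *ioData)
{
    Float64 sliceStart = inTimeStamp->mSampleTime;
    // Does the pending noteOff fall within the slice about to be rendered?
    if (gNoteOffSampleTime >= sliceStart &&
        gNoteOffSampleTime < sliceStart + inNumberFrames) {
        UInt32 offsetFrames = (UInt32)(gNoteOffSampleTime - sliceStart);
        UInt32 noteOffCommand = kMIDIMessage_NoteOff << 4 | 0; // same constant as above
        MusicDeviceMIDIEvent(mySynthUnit, noteOffCommand, 60, 0, offsetFrames);
        gNoteOffSampleTime = -1; // consumed
    }
    return noErr;
}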