I'm new to Audio Units.
I'm trying to build a binaural tone using filters. I created two sounds, one with the left channel only and one with the right channel only, and added a kAudioUnitSubType_LowPassFilter to each one. While playing, I use a UISlider to change kLowPassParam_CutoffFrequency for each player. Here is the code:
Float32 value = slider.value; // only 160-190 Hz
AEAudioUnitFilter *toneLeft = [self.sound objectForKey:@"binaural_left"];
AEAudioUnitFilter *toneRight = [self.sound objectForKey:@"binaural_right"];
if (toneLeft && toneRight) {
    // I have two sliders, one for frequency and one for range
    Float32 leftFreq = value - self.rangeSlider.value;
    Float32 rightFreq = value + self.rangeSlider.value;
    AudioUnitSetParameter(toneLeft.audioUnit,
                          kLowPassParam_CutoffFrequency,
                          kAudioUnitScope_Global,
                          0,
                          leftFreq,
                          0);
    AudioUnitSetParameter(toneRight.audioUnit,
                          kLowPassParam_CutoffFrequency,
                          kAudioUnitScope_Global,
                          0,
                          rightFreq,
                          0);
}
But when the sound plays, I don't hear a binaural effect, only the frequency changing.
I got the idea from: Idea
I'm using the framework from theamazingaudioengine.com.
Thanks for your help.
Related
Is there a way to cancel pre-processing like echo cancellation and noise suppression in an audio recorder on iOS?
I'm using AVAudioRecorder with meteringEnabled=true, and I get the average decibel level using averagePowerForChannel (docs).
I am trying to measure ambient noise near the phone, and the iPhone 8 seems to amplify quiet sounds or cancel them out when I start to speak. For example, if background music has an absolute level of 30 dB, iOS seems to amplify it; when I start to speak, even quietly, the reported dB level drops significantly.
But since I want to measure ambient noise, I don't want this pre-processing.
I tried setInputGain (docs), but isInputGainSettable is always false, so I can't take that approach.
Is there a way to cancel any amplification or pre-processing like echo cancellation and noise suppression?
You can enable and disable AEC and AGC using AudioUnitSetProperty:
https://developer.apple.com/documentation/audiotoolbox/1440371-audiounitsetproperty
Here is a code snippet for that:
UInt32 lFalse = 0;                  // 0 = disable the flag being set below
AudioUnitElement lInputBus = 1;     // input element of the voice-processing I/O unit

// keep voice processing in place (do not bypass it)
lResult = AudioUnitSetProperty(lAUAudioUnit,
                               kAUVoiceIOProperty_BypassVoiceProcessing,
                               kAudioUnitScope_Global,
                               lInputBus,
                               &lFalse,
                               sizeof(lFalse));

// turn off automatic gain control on the input bus
lResult = AudioUnitSetProperty(lAUAudioUnit,
                               kAUVoiceIOProperty_VoiceProcessingEnableAGC,
                               kAudioUnitScope_Global,
                               lInputBus,
                               &lFalse,
                               sizeof(lFalse));
What the app needs is access to unprocessed audio after disabling the AGC (Auto Gain Control) filters on the audio channel. To get access to raw and unprocessed audio, turn on Measurement mode in iOS.
As described in the iOS documentation here, "Measurement" mode indicates that your app is performing measurement of audio input or output.
This mode is intended for apps that need to minimize the amount of system-supplied signal processing to input and output signals. If recording on devices with more than one built-in microphone, the primary microphone is used.
The JavaScript code I used to set this (using NativeScript), before recording, is:
// Disable AGC
const avSession = AVAudioSession.sharedInstance();
avSession.setCategoryModeOptionsError(AVAudioSessionCategoryRecord, AVAudioSessionModeMeasurement, null);
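For reference, a minimal native sketch of the same configuration in Objective-C, assuming iOS 10+ and the setCategory:mode:options:error: API; the zero options mask and the error handling are only illustrative:
#import <AVFoundation/AVFoundation.h>

// Record category + Measurement mode minimizes system-supplied input processing.
NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
BOOL ok = [session setCategory:AVAudioSessionCategoryRecord
                          mode:AVAudioSessionModeMeasurement
                       options:0
                         error:&error];
if (!ok) {
    NSLog(@"Failed to configure the audio session: %@", error);
}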
I tried the above solution, but it didn't work for me.
Below is my code:
var componentDesc: AudioComponentDescription = AudioComponentDescription(
    componentType: OSType(kAudioUnitType_Output),
    componentSubType: OSType(kAudioUnitSubType_RemoteIO),
    componentManufacturer: OSType(kAudioUnitManufacturer_Apple),
    componentFlags: UInt32(0),
    componentFlagsMask: UInt32(0))

var osErr: OSStatus = noErr
let component: AudioComponent! = AudioComponentFindNext(nil, &componentDesc)

var tempAudioUnit: AudioUnit?
osErr = AudioComponentInstanceNew(component, &tempAudioUnit)
var outUnit = tempAudioUnit!

var lFalse = UInt32(1)
let lInputBus = AudioUnitElement(1)
let outputBus = AudioUnitElement(0)

let lResult = AudioUnitSetProperty(outUnit,
                                   kAUVoiceIOProperty_BypassVoiceProcessing,
                                   kAudioUnitScope_Global,
                                   lInputBus,
                                   &lFalse,
                                   0)

var flag = Int32(1)
let Result = AudioUnitSetProperty(outUnit,
                                  kAUVoiceIOProperty_VoiceProcessingEnableAGC,
                                  kAudioUnitScope_Global,
                                  outputBus,
                                  &flag,
                                  0)
I am using the AudioUnit class for recording and playback. While recording I can hear the sound being monitored. The problem is that with a sample rate of 44100 it works fine, but with a sample rate of 8000 it generates noise. When I play back a recording made at 8000 there is no noise, just the actual sound.
In other words, the noise accompanies the actual sound only while recording.
My AudioStreamBasicDescription settings are:
audioStreamDescription.Format = AudioFormatType.LinearPCM;
audioStreamDescription.FormatFlags = AudioFormatFlags.LinearPCMIsSignedInteger |
AudioFormatFlags.LinearPCMIsPacked;
audioStreamDescription.SampleRate = 8000; // 44100;
audioStreamDescription.BitsPerChannel = 16;
audioStreamDescription.ChannelsPerFrame = 1;
audioStreamDescription.BytesPerFrame = (16 / 8);
audioStreamDescription.FramesPerPacket = 1;
audioStreamDescription.BytesPerPacket = audioStreamDescription.BytesPerFrame * audioStreamDescription.FramesPerPacket;
audioStreamDescription.Reserved = 0;
My AudioUnit setup is:
public void prepareAudioUnit()
{
    // Getting AudioComponent Remote output
    _audioComponent = AudioComponent.FindComponent(AudioTypeOutput.Remote);

    // creating an audio unit instance
    audioUnit = new AudioUnit.AudioUnit(_audioComponent);

    // turning on microphone
    audioUnit.SetEnableIO(true, AudioUnitScopeType.Input, 1);
    audioUnit.SetEnableIO(true, AudioUnitScopeType.Output, 0);

    // setting audio format
    var austat = audioUnit.SetFormat(audioStreamDescription, AudioUnitScopeType.Output, 1);
    var austatInput = audioUnit.SetFormat(audioStreamDescription, AudioUnitScopeType.Input, 0);

    //audioUnit.SetSampleRate(8000.0f, AudioUnitScopeType.Output, 0);
    //audioUnit.SetSampleRate(8000.0f, AudioUnitScopeType.Input, 1);

    // setting callback method
    audioUnit.SetRenderCallback(render_CallBack, AudioUnitScopeType.Input, 0);

    audioUnit.Initialize();
}
Now, my main question is: how can I remove the noise that comes along with the actual sound?
If I haven't explained this properly, please let me know.
I have an app that handles playing multiple MIDI instruments. Everything works great except for playing percussion instruments. I understand that in order to play percussion in General MIDI you must send the events to channel 10. I've tried a bunch of different things and I can't figure out how to get it to work. Here's an example of how I'm doing it for melodic instruments vs. percussion:
// Melodic instrument
MusicDeviceMIDIEvent(self.samplerUnit, 0x90, (UInt8)pitch, 127, 0);
// Percussion Instruments
MusicDeviceMIDIEvent(self.samplerUnit, 0x99, (UInt8)pitch, 127, 0);
The sampler unit is an AudioUnit, and the pitch is given as an int through my UI.
Thanks in advance!
Assuming you have some sort of General MIDI sound font or similar loaded, you need to set the correct status byte before sending pitch/velocity information. So in the case of a standard MIDI drum kit (channel 10, i.e. channel 9 zero-based), you'd do something like this in Swift:
var status = OSStatus(noErr)
let drumCommand = UInt32(0xC9 | 0)         // program change (0xC0) on channel 9, the drum channel
let noteOnCommand = UInt32(0x90 | channel) // note on (0x90) on the chosen channel (9 for drums)
status = MusicDeviceMIDIEvent(self._samplerUnit, drumCommand, 0, 0, 0)                // select the drum program
status = MusicDeviceMIDIEvent(self._samplerUnit, noteOnCommand, noteNum, velocity, 0) // send the note-on message
No need to undertake anything special for MIDI note off messages.
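For completeness, a note off is just the corresponding 0x80-family status byte; a hedged sketch mirroring the Objective-C call from the question:
// Note off (0x80) on channel 9, the drum channel; a note on with velocity 0 also works
MusicDeviceMIDIEvent(self.samplerUnit, 0x89, (UInt8)pitch, 0, 0);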
OK, so I got it working. I guess the way I load the sound font means the channel doesn't actually matter. Instead, I had to set the bankMSB field of AUSamplerBankPresetData to kAUSampler_DefaultPercussionBankMSB instead of kAUSampler_DefaultMelodicBankMSB.
I added a separate sound-font loading method specifically for percussion:
- (OSStatus)loadPercussionWithSoundFont:(NSURL *)bankURL {
    OSStatus result = noErr;

    // fill out a bank preset data structure
    AUSamplerBankPresetData bpdata;
    bpdata.bankURL  = (__bridge CFURLRef)bankURL;
    bpdata.bankMSB  = kAUSampler_DefaultPercussionBankMSB;
    bpdata.bankLSB  = kAUSampler_DefaultBankLSB;
    bpdata.presetID = (UInt8)32;

    // set the kAUSamplerProperty_LoadPresetFromBank property
    result = AudioUnitSetProperty(self.samplerUnit,
                                  kAUSamplerProperty_LoadPresetFromBank,
                                  kAudioUnitScope_Global,
                                  0,
                                  &bpdata,
                                  sizeof(bpdata));

    // check for errors
    NSCAssert(result == noErr,
              @"Unable to set the preset property on the Sampler. Error code:%d '%.4s'",
              (int)result,
              (const char *)&result);

    return result;
}
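A hedged usage sketch; the sound font file name here is only a placeholder for whatever General MIDI bank the app bundles:
// Hypothetical call site: load a bundled .sf2 bank as the percussion preset
NSURL *bankURL = [[NSBundle mainBundle] URLForResource:@"GeneralMIDI" withExtension:@"sf2"];
OSStatus status = [self loadPercussionWithSoundFont:bankURL];
NSLog(@"loadPercussionWithSoundFont returned %d", (int)status);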
I'm trying to use a file player audio unit (kAudioUnitSubType_AudioFilePlayer) to play multiple files (not at the same time, of course). That's on iOS.
So I've successfully opened the files and stored their details in an array of AudioFileIDs that I set on the audio unit using kAudioUnitProperty_ScheduledFileIDs. Now I would like to define two ScheduledAudioFileRegions, one per file, and use them with the file player...
But I can't seem to find out:
How to set the kAudioUnitProperty_ScheduledFileRegion property to store these 2 regions (actually, how to define the index of each region)?
How to trigger the playback of a specific region? My guess is that the kAudioTimeStampSampleTimeValid flag should enable this, but how do you define which region you want to play?
Maybe I'm just plain wrong about the way I should use this audio unit, but the documentation is hard to come by and I haven't found any example showing the playback of two regions on the same player!
Thanks in advance.
You need to schedule a region every time you want to play a file. In the ScheduledAudioFileRegion you must set the AudioFileID to play. Playback begins when the unit's current time (in samples) is equal to or greater than the sample time of the scheduled region.
Example:
// get the player unit's current time
AudioTimeStamp timeStamp;
UInt32 propSize = sizeof(AudioTimeStamp);
AudioUnitGetProperty(m_playerUnit, kAudioUnitProperty_CurrentPlayTime,
                     kAudioUnitScope_Global, 0, &timeStamp, &propSize);

// when to start playback
timeStamp.mSampleTime += 100;

// schedule the region
ScheduledAudioFileRegion region;
memset(&region, 0, sizeof(ScheduledAudioFileRegion));
region.mAudioFile    = ...; // your AudioFileID
region.mFramesToPlay = ...; // count of frames to play
region.mLoopCount    = 1;
region.mStartFrame   = 0;
region.mTimeStamp    = timeStamp;

AudioUnitSetProperty(m_playerUnit, kAudioUnitProperty_ScheduledFileRegion,
                     kAudioUnitScope_Global, 0, &region, sizeof(region));
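One more piece that this snippet does not show, offered here as an assumption about the rest of the setup: the file player also needs a schedule start time stamp before it begins rendering scheduled regions. A minimal sketch:
// Tell the file player when to start rendering; -1 means "as soon as possible".
AudioTimeStamp startTime;
memset(&startTime, 0, sizeof(startTime));
startTime.mFlags = kAudioTimeStampSampleTimeValid;
startTime.mSampleTime = -1;
AudioUnitSetProperty(m_playerUnit, kAudioUnitProperty_ScheduleStartTimeStamp,
                     kAudioUnitScope_Global, 0, &startTime, sizeof(startTime));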
I've managed to add a reverb unit to my graph, more or less like so:
AudioComponentDescription auEffectUnitDescription;
auEffectUnitDescription.componentType = kAudioUnitType_Effect;
auEffectUnitDescription.componentSubType = kAudioUnitSubType_Reverb2;
auEffectUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;

AUGraphAddNode(processingGraph,
               &auEffectUnitDescription,
               &auEffectNode);
Now how can I change some of the parameters on the reverb unit? I'd like to change the wet/dry ratio, and reduce the decay time.
First, you have to get a reference to the actual reverb Audio Unit:
AudioUnit reverbAU = NULL;
AUGraphNodeInfo(processingGraph, auEffectNode, NULL, &reverbAU);
Now that you have the Audio Unit, you can set parameters on it, like:
// set the decay time at 0 Hz to 5 seconds
AudioUnitSetParameter(reverbAU, kReverb2Param_DecayTimeAt0Hz, kAudioUnitScope_Global, 0, 5.f, 0);
// set the decay time at Nyquist to 2.5 seconds
AudioUnitSetParameter(reverbAU, kReverb2Param_DecayTimeAtNyquist, kAudioUnitScope_Global, 0, 2.5f, 0);
You can find the parameters for the reverb unit (and all Apple-supplied Audio Units) in AudioUnit/AudioUnitParameters.h (Reverb param enum is on line 521)
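Since the question also asks about the wet/dry ratio, here is a hedged sketch using the same pattern; kReverb2Param_DryWetMix is the parameter from that header, and the 40% value is arbitrary:
// set the wet/dry mix to 40% wet (the parameter is a percentage from 0 to 100)
AudioUnitSetParameter(reverbAU, kReverb2Param_DryWetMix, kAudioUnitScope_Global, 0, 40.f, 0);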