AudioUnit generating noise at 8000 sample rate - Xamarin.iOS (MonoTouch) - iOS

I am using the AudioUnit class for recording and playback. While recording, I can hear the captured sound. The problem is that with a sample rate of 44100 everything works fine, but with a sample rate of 8000 it generates noise. When I play back a recording made at 8000, there is no noise, only the actual sound.
In other words, the noise accompanies the actual sound only while recording.
My AudioStreamBasicDescription settings are:
audioStreamDescription.Format = AudioFormatType.LinearPCM;
audioStreamDescription.FormatFlags = AudioFormatFlags.LinearPCMIsSignedInteger |
AudioFormatFlags.LinearPCMIsPacked;
audioStreamDescription.SampleRate = 8000; // 44100;
audioStreamDescription.BitsPerChannel = 16;
audioStreamDescription.ChannelsPerFrame = 1;
audioStreamDescription.BytesPerFrame = (16 / 8);
audioStreamDescription.FramesPerPacket = 1;
audioStreamDescription.BytesPerPacket = audioStreamDescription.BytesPerFrame * audioStreamDescription.FramesPerPacket;
audioStreamDescription.Reserved = 0;
The AudioUnit setup is:
public void prepareAudioUnit()
{
// Getting AudioComponent Remote output
_audioComponent = AudioComponent.FindComponent(AudioTypeOutput.Remote);
// creating an audio unit instance
audioUnit = new AudioUnit.AudioUnit(_audioComponent);
// turning on microphone
audioUnit.SetEnableIO(true, AudioUnitScopeType.Input, 1 );
audioUnit.SetEnableIO(true, AudioUnitScopeType.Output, 0 );
// setting audio format
var austat = audioUnit.SetFormat(audioStreamDescription, AudioUnitScopeType.Output, 1);
var austatInput = audioUnit.SetFormat(audioStreamDescription, AudioUnitScopeType.Input, 0);
//audioUnit.SetSampleRate(8000.0f, AudioUnitScopeType.Output, 0);
//audioUnit.SetSampleRate(8000.0f, AudioUnitScopeType.Input, 1);
// setting callback method
audioUnit.SetRenderCallback(render_CallBack, AudioUnitScopeType.Input, 0);
audioUnit.Initialize();
}
Now, my main question is: how can I remove the noise that comes along with the actual sound?
If I have not explained this clearly, please let me know.

Related

iOS: Changing sample rate dynamically in Audio Unit

Is it possible to change/set the sample rate in the middle of a running AudioSession/AudioUnit without stopping/restarting the current AudioSession/AudioUnit (just like the audio route)?
I have an active audio session whose sample rate is 44.1 kHz:
AudioStreamBasicDescription.mSampleRate = 44100
I want to change the sample rate to 8 kHz without uninitializing [AudioUnitUninitialize(audioUnit)], stopping [AudioOutputUnitStop(audioUnit)], or deactivating the Audio Unit/Session.
These are my audio unit settings:
audioComponentDescription.componentType = kAudioUnitType_Output;
audioComponentDescription.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
audioComponentDescription.componentFlags = 0;
audioComponentDescription.componentFlagsMask = 0;
audioComponentDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
audioStreamBasicDescription.mSampleRate = 44100;
audioStreamBasicDescription.mFormatID = kAudioFormatLinearPCM;
audioStreamBasicDescription.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
audioStreamBasicDescription.mFramesPerPacket = 1;
audioStreamBasicDescription.mChannelsPerFrame = 1;
audioStreamBasicDescription.mBitsPerChannel = 16;
audioStreamBasicDescription.mBytesPerPacket = 2;
audioStreamBasicDescription.mBytesPerFrame = 2;
Any help is highly appreciated.
No, because each sample rate change requires some startup time, during which samples at the previous rate are flushed from the Audio Unit's buffers and sample rate converters.
Your best bet, if you need to process another sample rate, is to resample in software inside your own app.
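As an illustration of that suggestion, here is a minimal, hedged sketch of offline resampling with AudioConverter (not code from the answer); it converts 16-bit mono LPCM at 44.1 kHz to 8 kHz, and the struct and function names are made up for the example.

#import <AudioToolbox/AudioToolbox.h>

typedef struct {
    const SInt16 *samples;   // remaining source samples at 44.1 kHz
    UInt32 framesLeft;       // frames not yet handed to the converter
} ConverterSource;

// Input callback: hands the converter whatever source frames remain.
static OSStatus SupplyInput(AudioConverterRef converter, UInt32 *ioNumberDataPackets,
                            AudioBufferList *ioData,
                            AudioStreamPacketDescription **outPacketDescription,
                            void *inUserData)
{
    ConverterSource *src = (ConverterSource *)inUserData;
    if (src->framesLeft == 0) { *ioNumberDataPackets = 0; return noErr; } // no more input
    UInt32 frames = (*ioNumberDataPackets < src->framesLeft) ? *ioNumberDataPackets : src->framesLeft;
    ioData->mBuffers[0].mNumberChannels = 1;
    ioData->mBuffers[0].mData = (void *)src->samples;
    ioData->mBuffers[0].mDataByteSize = frames * sizeof(SInt16);
    src->samples    += frames;
    src->framesLeft -= frames;
    *ioNumberDataPackets = frames;
    return noErr;
}

// Converts inFrames mono 16-bit frames at 44.1 kHz into out[]; on return,
// *ioOutFrames holds the number of 8 kHz frames actually produced.
static void ResampleTo8k(const SInt16 *in, UInt32 inFrames, SInt16 *out, UInt32 *ioOutFrames)
{
    AudioStreamBasicDescription src = {0}, dst = {0};
    src.mSampleRate = 44100;  dst.mSampleRate = 8000;
    src.mFormatID         = dst.mFormatID         = kAudioFormatLinearPCM;
    src.mFormatFlags      = dst.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    src.mChannelsPerFrame = dst.mChannelsPerFrame = 1;
    src.mBitsPerChannel   = dst.mBitsPerChannel   = 16;
    src.mBytesPerFrame    = dst.mBytesPerFrame    = 2;
    src.mFramesPerPacket  = dst.mFramesPerPacket  = 1;
    src.mBytesPerPacket   = dst.mBytesPerPacket   = 2;

    AudioConverterRef converter = NULL;
    AudioConverterNew(&src, &dst, &converter);

    ConverterSource state = { in, inFrames };
    AudioBufferList outList = {0};
    outList.mNumberBuffers = 1;
    outList.mBuffers[0].mNumberChannels = 1;
    outList.mBuffers[0].mData = out;
    outList.mBuffers[0].mDataByteSize = *ioOutFrames * sizeof(SInt16);

    // ioOutFrames is the output capacity on input, the frames produced on output.
    AudioConverterFillComplexBuffer(converter, SupplyInput, &state,
                                    ioOutFrames, &outList, NULL);
    AudioConverterDispose(converter);
}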
Yes, you can do it dynamically with the kAudioUnitSubType_SpatialMixer Audio Unit.
In pseudocode:
AudioUnitSetParameter(mixerUnit, k3DMixerParam_PlaybackRate, kAudioUnitScope_Input, 0, sampleRateRatio(from 0.0 to 2.0), 0);
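For instance, a hedged version of that pseudocode with a concrete ratio (assuming mixerUnit is an already-initialized spatial mixer instance and the 8 kHz material is fed to input bus 0):

Float32 ratio = 8000.0f / 44100.0f;  // playback-rate ratio for 8 kHz material on a 44.1 kHz unit
AudioUnitSetParameter(mixerUnit,
                      k3DMixerParam_PlaybackRate,
                      kAudioUnitScope_Input,
                      0,        // input bus
                      ratio,
                      0);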

Why is my multi-channel mixer no longer playing in iOS 8?

I've written some code to play multi-instrument general MIDI files on iOS. It works fine in iOS 7, but stopped working on iOS 8.
I've stripped it down to its essence here. Instead of creating 16 channels for my multi-channel mixer, I just create one sampler node, and map all the tracks to that channel. It still exhibits the same problem as the multi-sampler version. None of the Audio Toolbox calls return an error code (they all return 0) in iOS 7 or iOS 8. The sequence plays through the speakers in iOS 7, on both the simulator and on iPhone/iPad devices. Run the exact same code on the iOS 8 simulator, or an iPhone/iPad device, and no sound is produced.
If you comment out the call to [self initGraphFromMIDISequence], it plays on iOS 8 with the default sine-wave sound.
@implementation MyMusicPlayer {
MusicPlayer _musicPlayer;
MusicSequence _musicSequence;
AUGraph _processingGraph;
}
- (void)playMidi:(NSURL*)midiFileURL {
NewMusicSequence(&_musicSequence);
MusicSequenceFileLoad(_musicSequence, CFBridgingRetain(midiFileURL), 0, 0);
NewMusicPlayer(&_musicPlayer);
MusicPlayerSetSequence(_musicPlayer, _musicSequence);
[self initGraphFromMIDISequence];
MusicPlayerPreroll(_musicPlayer);
MusicPlayerStart(_musicPlayer);
}
// Sets up an AUGraph with one channel whose instrument is loaded from a sound bank.
// Maps all the tracks of the MIDI sequence onto that channel. Basically this is a
// way to replace the default sine-wave sound with another (single) instrument.
- (void)initGraphFromMIDISequence {
NewAUGraph(&_processingGraph);
// Add one sampler unit to the graph.
AUNode samplerNode;
AudioComponentDescription cd = {};
cd.componentManufacturer = kAudioUnitManufacturer_Apple;
cd.componentType = kAudioUnitType_MusicDevice;
cd.componentSubType = kAudioUnitSubType_Sampler;
AUGraphAddNode(_processingGraph, &cd, &samplerNode);
// Add a Mixer unit node to the graph
cd.componentType = kAudioUnitType_Mixer;
cd.componentSubType = kAudioUnitSubType_MultiChannelMixer;
AUNode mixerNode;
AUGraphAddNode(_processingGraph, &cd, &mixerNode);
// Add the Output unit node to the graph
cd.componentType = kAudioUnitType_Output;
cd.componentSubType = kAudioUnitSubType_RemoteIO; // Output to speakers.
AUNode ioNode;
AUGraphAddNode(_processingGraph, &cd, &ioNode);
AUGraphOpen(_processingGraph);
// Obtain the mixer unit instance from its corresponding node, and set the bus count to 1.
AudioUnit mixerUnit;
AUGraphNodeInfo(_processingGraph, mixerNode, NULL, &mixerUnit);
UInt32 const numChannels = 1;
AudioUnitSetProperty(mixerUnit,
kAudioUnitProperty_ElementCount,
kAudioUnitScope_Input,
0,
&numChannels,
sizeof(numChannels));
// Connect the sampler node's output 0 to the mixer node's input 0.
AUGraphConnectNodeInput(_processingGraph, samplerNode, 0, mixerNode, 0);
// Connect the mixer unit to the output unit.
AUGraphConnectNodeInput(_processingGraph, mixerNode, 0, ioNode, 0);
// Obtain reference to the audio unit from its node.
AudioUnit samplerUnit;
AUGraphNodeInfo(_processingGraph, samplerNode, 0, &samplerUnit);
MusicSequenceSetAUGraph(_musicSequence, _processingGraph);
// Set the destination for each track to our single sampler node.
UInt32 trackCount;
MusicSequenceGetTrackCount(_musicSequence, &trackCount);
MusicTrack track;
for (int i = 0; i < trackCount; i++) {
MusicSequenceGetIndTrack(_musicSequence, i, &track);
MusicTrackSetDestNode(track, samplerNode);
}
// You can use either a DLS or an SF2 file bundled with your app; both work in iOS 7.
//NSString *soundBankPath = [[NSBundle mainBundle] pathForResource:@"GeneralUserv1.44" ofType:@"sf2"];
NSString *soundBankPath = [[NSBundle mainBundle] pathForResource:@"gs_instruments" ofType:@"dls"];
NSURL *bankURL = [NSURL fileURLWithPath:soundBankPath];
AUSamplerBankPresetData bpdata;
bpdata.bankURL = (__bridge CFURLRef) bankURL;
bpdata.bankMSB = kAUSampler_DefaultMelodicBankMSB;
bpdata.bankLSB = kAUSampler_DefaultBankLSB;
bpdata.presetID = 0;
UInt8 instrumentNumber = 46; // pick any GM instrument 0-127
bpdata.presetID = instrumentNumber;
AudioUnitSetProperty(samplerUnit,
kAUSamplerProperty_LoadPresetFromBank,
kAudioUnitScope_Global,
0,
&bpdata,
sizeof(bpdata));
}
I have some code, not included here, which polls to see if the sequence is still playing, by calling MusicPlayerGetTime on the MusicPlayer instance. In iOS 7, the result of that call each time is the number of seconds that have elapsed since it started playing. In iOS 8, the call always returns 0, which presumably means the MusicPlayer does not start playing the sequence on the call to MusicPlayerStart.
The code above is highly order-dependent -- you have to make certain calls before others; e.g., opening the graph before calling getInfo on a node, and not loading instruments until you've assigned the tracks to channels. I've followed all the advice in other StackOverflow threads, and have verified that getting the order correct makes error codes disappear.
Any iOS MIDI experts know what might have changed between iOS 7 and iOS 8 to make this code stop working?
In iOS 8 Apple introduced AVAudioEngine, a slick Objective-C abstraction over the Core Audio API.
You should probably check it out: https://developer.apple.com/videos/wwdc/2014/#502
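As a rough, hedged sketch of that route (not from the answer; it assumes the same bundled gs_instruments.dls and GM program 46 as the question's code), an AVAudioUnitSampler attached to an AVAudioEngine replaces the AUGraph plumbing, though you still have to drive the note events yourself:

#import <AVFoundation/AVFoundation.h>
#import <AudioToolbox/AudioToolbox.h>

AVAudioEngine *engine = [[AVAudioEngine alloc] init];
AVAudioUnitSampler *sampler = [[AVAudioUnitSampler alloc] init];

// Attach the sampler and route it through the engine's built-in mixer to the output.
[engine attachNode:sampler];
[engine connect:sampler to:engine.mainMixerNode format:nil];

NSError *error = nil;
NSURL *bankURL = [[NSBundle mainBundle] URLForResource:@"gs_instruments" withExtension:@"dls"];
[sampler loadSoundBankInstrumentAtURL:bankURL
                              program:46   // same GM instrument as the original code
                              bankMSB:kAUSampler_DefaultMelodicBankMSB
                              bankLSB:kAUSampler_DefaultBankLSB
                                error:&error];

[engine startAndReturnError:&error];

// Trigger notes yourself (e.g. from your parsed MIDI events):
[sampler startNote:60 withVelocity:100 onChannel:0];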

iOS binaural audio unit

I'm new to Audio Units.
I'm trying to generate a binaural tone. I created two sounds, one left-only and one right-only, and added a kAudioUnitSubType_LowPassFilter filter to each. While playing, I use a UISlider to change kLowPassParam_CutoffFrequency for each player. This is the code:
Float32 value = slider.value; // only 160-190 Hz
AEAudioUnitFilter *toneLeft = [self.sound objectForKey:@"binaural_left"];
AEAudioUnitFilter *toneRight = [self.sound objectForKey:@"binaural_right"];
if(toneLeft && toneRight){
Float32 leftFreq = value - self.rangeSlider.value; // I have two sliders, one for frequency and one for range
Float32 rightFreq = value + self.rangeSlider.value;
AudioUnitSetParameter(toneLeft.audioUnit,
kLowPassParam_CutoffFrequency,
kAudioUnitScope_Global,
0,
leftFreq,
0);
AudioUnitSetParameter(toneRight.audioUnit,
kLowPassParam_CutoffFrequency,
kAudioUnitScope_Global,
0,
rightFreq,
0);
}
But when the sound plays, I don't hear a binaural effect, only the frequency changing.
I got the idea from: Idea
I'm using the framework from theamazingaudioengine.com.
Thanks for your help.

When AudioQueue plays LPCM decoded from ffmpeg, the elapsed time of the audio queue exceeds the duration of the media

When playing LPCM data decoded from ffmpeg with AudioQueue, the elapsed time obtained from AudioQueueGetCurrentTime exceeds the duration of the media. But when the same media is decoded with the AVFoundation framework, the elapsed time equals the duration of the media; likewise, when the media is read by ffmpeg without decoding and the compressed data is sent to the audio queue, the elapsed time also equals the duration. The AudioStreamBasicDescription is set as follows:
asbd.mSampleRate = 44100;
asbd.mFormatID = kAudioFormatLinearPCM;
asbd.mFormatFlags = kAudioFormatFlagsCanonical;
asbd.mBytesPerPacket = 4;
asbd.mFramesPerPacket = 1;
asbd.mBytesPerFrame = 4;
asbd.mChannelsPerFrame = 2;
asbd.mBitsPerChannel = 16;
asbd.mReserved = 0;
When playing data decoded by AVFoundation, the AudioStreamBasicDescription settings are the same as above. In my tests I found that the AudioTimeStamp.mSampleTime returned by AudioQueueGetCurrentTime differs between ffmpeg and AVFoundation; the ffmpeg value is greater than the AVFoundation one. I want to know how this happens and how to fix it.
Thanks!
The mistake here is that asbd.mSampleRate = 44100 is not always correct, so sometimes the result is right and sometimes it is wrong. You should set asbd.mSampleRate = audioCodecCtx->sample_rate instead; this always works.
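A short, hedged sketch of that fix (assuming audioCodecCtx is the AVCodecContext obtained when opening the stream with ffmpeg, and queue is the AudioQueueRef): derive the rate and channel count from the decoder instead of hard-coding them. Since AudioQueueGetCurrentTime reports mSampleTime in the queue's own sample rate, a wrong mSampleRate distorts any elapsed time computed from it.

AudioStreamBasicDescription asbd = {0};
asbd.mSampleRate       = audioCodecCtx->sample_rate;   // e.g. 22050 rather than an assumed 44100
asbd.mFormatID         = kAudioFormatLinearPCM;
asbd.mFormatFlags      = kAudioFormatFlagsCanonical;
asbd.mChannelsPerFrame = audioCodecCtx->channels;
asbd.mBitsPerChannel   = 16;
asbd.mBytesPerFrame    = asbd.mChannelsPerFrame * (asbd.mBitsPerChannel / 8);
asbd.mFramesPerPacket  = 1;
asbd.mBytesPerPacket   = asbd.mBytesPerFrame;

// Elapsed seconds from the queue's timestamp:
AudioTimeStamp ts;
AudioQueueGetCurrentTime(queue, NULL, &ts, NULL);
Float64 elapsedSeconds = ts.mSampleTime / asbd.mSampleRate;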

Audio graph initialization error with kAudioUnitSubType_VoiceProcessingIO audio IO unit subtype

I am working on an iOS project that needs acoustic echo cancellation, so the kAudioUnitSubType_VoiceProcessingIO subtype seems to be a good choice.
Below is my audio unit description:
//io unit description
AudioComponentDescription ioUnitDescription;
ioUnitDescription.componentType = kAudioUnitType_Output;
ioUnitDescription.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
ioUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
ioUnitDescription.componentFlags = 0;
ioUnitDescription.componentFlagsMask = 0;
Based on my experience with the RemoteIO subtype, I enabled the input element:
UInt32 enable = 1;
AudioUnitSetProperty(ioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, 1, &enable, sizeof(enable));
However, I get an error when initializing the audio graph. The same audio graph works well if VoiceProcessingIO is replaced by RemoteIO.
Is there any difference between RemoteIO and VoiceProcessingIO that needs special attention?
Thanks,
Chuankai
In my experience the VoiceProcessingIO audio unit is much more finicky regarding buffer size and sample rate. Try a sample rate below 32000 Hz (perhaps start with 8000 Hz and work your way upward) and a fairly large buffer size (say 2048 samples or so). This is not documented; an rdar number will follow once I have a chance to file one.
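One hedged way to request those values (an illustration, not part of the original answer) is through AVAudioSession before initializing the unit; the numbers are simply the ones suggested above:

#import <AVFoundation/AVFoundation.h>

AVAudioSession *session = [AVAudioSession sharedInstance];
NSError *error = nil;
[session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
[session setPreferredSampleRate:8000.0 error:&error];                   // start low, as suggested
[session setPreferredIOBufferDuration:(2048.0 / 8000.0) error:&error];  // roughly 2048 samples per buffer
[session setActive:YES error:&error];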
I use the following format during set-up:
size_t bytesPerSample = sizeof(AudioSampleType);
AudioStreamBasicDescription canonicalFormat;
canonicalFormat.mSampleRate = self.samplerate;
canonicalFormat.mFormatID = kAudioFormatLinearPCM;
canonicalFormat.mFormatFlags = kAudioFormatFlagsCanonical;
canonicalFormat.mFramesPerPacket = 1;
canonicalFormat.mChannelsPerFrame = 1;
canonicalFormat.mBitsPerChannel = 8 * bytesPerSample;
canonicalFormat.mBytesPerPacket = bytesPerSample;
canonicalFormat.mBytesPerFrame = bytesPerSample;
