iOS AVAudioSession applying high-pass filter

I have an app that uses AVAudioSession with AVAudioSessionCategoryPlayAndRecord to capture mic input. I'm looking to apply high pass (and possibly other) filters to the captured audio data, but cannot find any good documentation about this subject, especially with something that uses AVAudioSession for data capture. Any pointers appreciated.

I'm pretty sure this question is no longer relevant for you four years later, but for anyone asking in the future...
AVAudioSession is an intermediary between your application and the iOS media daemon; it does not record or play audio itself, it just sets up audio preferences. For your purpose, look at AudioUnit or AudioQueue. If all you need is audio capture, try a high-level API like AVAudioRecorder.
Hope it helps someone.
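For anyone landing here today, here is a minimal sketch of one way to do this (my own illustration, not from the answer above): route the mic input through AVAudioEngine with an AVAudioUnitEQ band configured as a high-pass filter, and tap the filtered node. The 80 Hz cutoff, buffer size, and function name are placeholder choices.

```swift
import AVFoundation

// Illustrative sketch: capture mic input through a high-pass filter using
// AVAudioEngine + AVAudioUnitEQ. Cutoff, buffer size, and routing are placeholders.
func startFilteredCapture() throws -> AVAudioEngine {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default, options: [])
    try session.setActive(true)

    let engine = AVAudioEngine()
    let eq = AVAudioUnitEQ(numberOfBands: 1)
    eq.bands[0].filterType = .highPass
    eq.bands[0].frequency = 80      // cutoff in Hz (placeholder)
    eq.bands[0].bypass = false
    engine.attach(eq)

    let input = engine.inputNode
    let format = input.outputFormat(forBus: 0)
    engine.connect(input, to: eq, format: format)
    engine.connect(eq, to: engine.mainMixerNode, format: format)

    // Tap the EQ node to receive the filtered PCM buffers.
    eq.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
        // Process or write the filtered samples here.
    }

    try engine.start()
    return engine
}
```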

Related

Using AVAudioRecorder inside of AudioKit

Is there a known way to convert an AVAudioRecorder object into an AKNode object?
My use case for this is that I have an application that is pulling an audio stream from a custom piece of Bluetooth hardware. I've already written all the handlers for this, and the output of the hardware ends up as an AVAudioRecorder.
I'd like to make use of all the nicer visualisation of audio that AK offers - specifically the plotting of amplitude on a graph in my view as it is recorded, but to get it to work, it appears that I need to turn the AVAudioRecorder into an AKNode.
Is there an easy way to do this without going back through all the code that interfaces with the hardware and replacing it to use AKNode from the start?
I have gone through the documentation of AK and it doesn't seem possible at this time to use an existing AVAudioRecorder as a source node.
Thanks!
I don't believe so. AVAudioPlayer is also unavailable as a node, but we do have AVAudioPlayerNode; there is no corresponding AVAudioRecorderNode.

Meter audio level from Embedded YouTube video that plays inline in iOS

I'm trying to find a way to get the average power level for a channel of the audio coming out of an embedded video. I'm using YouTube's iOS helper library for embedding the video: https://developers.google.com/youtube/v3/guides/ios_youtube_helper
A lot of the answers I've found on Stack Overflow refer to AVAudioPlayer, but that's not my case. I also looked in the docs of the AudioKit framework for something that can give the output level of the current audio, but I couldn't find anything related; maybe I missed something there. I also looked at the EZAudio framework, even though it's deprecated, and couldn't find anything that fits my case either.
My thinking was to find a way to get the actual level coming out of the device, but I found one answer on SO saying this is not allowed in iOS, although it didn't cite any source for that statement.
https://stackoverflow.com/a/12664340/4711172
So, any help would be much appreciated.
The iOS security sandbox blocks apps from seeing the device's digital audio output stream, or any other app's internal audio output (unless explicitly shared, e.g. via inter-app audio), at least when using public APIs permitted in the Apple App Store.
(Just a guess, but this was probably implemented in iOS originally to prevent apps from capturing samples of DRM'd music and/or recording phone call conversations.)
Might be a bit off/weird, but just in case:
Have you considered closing the loop? Meaning, record the incoming audio using AVAudioRecorder and get the audio levels from there.
See Apple's documentation for AVAudioRecorder (the overview notes that you can "Obtain input audio-level data that you can use to provide level metering"):
AVAudioRecorder documentation
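If it helps, a rough sketch of that loop (my own illustration; the file URL, recorder settings, and 0.1 s polling interval are arbitrary placeholders) would enable metering on the recorder and poll averagePower(forChannel:):

```swift
import AVFoundation

// Rough sketch of the "close the loop" idea: record through the mic and read
// AVAudioRecorder's level meters. URL, settings, and interval are placeholders.
final class LevelMeter {
    private var recorder: AVAudioRecorder?
    private var timer: Timer?

    func start() throws {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.playAndRecord, mode: .default, options: [.defaultToSpeaker])
        try session.setActive(true)

        let url = FileManager.default.temporaryDirectory.appendingPathComponent("meter.caf")
        let settings: [String: Any] = [
            AVFormatIDKey: kAudioFormatAppleLossless,
            AVSampleRateKey: 44_100,
            AVNumberOfChannelsKey: 1
        ]
        let recorder = try AVAudioRecorder(url: url, settings: settings)
        recorder.isMeteringEnabled = true
        recorder.record()
        self.recorder = recorder

        timer = Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { [weak self] _ in
            guard let recorder = self?.recorder else { return }
            recorder.updateMeters()
            let level = recorder.averagePower(forChannel: 0)   // dBFS, 0 is full scale
            print("average power: \(level) dB")
        }
    }

    func stop() {
        timer?.invalidate()
        recorder?.stop()
    }
}
```

Note that this meters what the microphone picks up (the speaker output plus room noise), not the YouTube player's internal signal.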

iOS: apply an effect to my audio engine's overall sound output

Is there a way in iOS to put a layer over my audio engine that applies DSP to everything it outputs?
In effect, I want to reproduce something like inserting a piece of DSP hardware between a mixer and the speakers, applying, say, an echo to the resulting sound without dealing with the stream myself.
For example: take the global sound and apply a high-pass EQ to it, that's it.
Thanks for your help
If you're playing through AVPlayer you can use MTAudioProcessingTap to do this. It isn't the simplest task, but here are some resources that should help:
MTAudioProcessingTap Audio Processor shows how to apply a bandpass filter to the audio data: https://developer.apple.com/library/ios/samplecode/AudioTapProcessor/Introduction/Intro.html
Processing AVPlayer’s audio with MTAudioProcessingTap is a fairly complete example showing how to create a tap: http://chritto.wordpress.com/2013/01/07/processing-avplayers-audio-with-mtaudioprocessingtap/
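For reference, a bare-bones sketch of wiring a tap into an AVPlayerItem's audio mix might look roughly like this (my own illustration based on the resources above; the pass-through process callback is where the real DSP, e.g. a high-pass filter or echo, would go):

```swift
import AVFoundation
import MediaToolbox

// Bare-bones sketch: attach an MTAudioProcessingTap to an AVPlayerItem's audio
// mix. The process callback pulls the source audio through unchanged; real DSP
// would modify the samples in bufferListInOut in place.
func attachTap(to item: AVPlayerItem) {
    guard let track = item.asset.tracks(withMediaType: .audio).first else { return }

    var callbacks = MTAudioProcessingTapCallbacks(
        version: kMTAudioProcessingTapCallbacksVersion_0,
        clientInfo: nil,
        init: nil,
        finalize: nil,
        prepare: nil,
        unprepare: nil,
        process: { tap, numberFrames, _, bufferListInOut, numberFramesOut, flagsOut in
            // Pull the source audio into bufferListInOut...
            _ = MTAudioProcessingTapGetSourceAudio(tap, numberFrames, bufferListInOut,
                                                   flagsOut, nil, numberFramesOut)
            // ...then apply your filter to the samples in bufferListInOut here.
        })

    var tap: Unmanaged<MTAudioProcessingTap>?
    let status = MTAudioProcessingTapCreate(kCFAllocatorDefault, &callbacks,
                                            kMTAudioProcessingTapCreationFlag_PostEffects, &tap)
    guard status == noErr, let tap = tap else { return }

    let params = AVMutableAudioMixInputParameters(track: track)
    params.audioTapProcessor = tap.takeRetainedValue()

    let mix = AVMutableAudioMix()
    mix.inputParameters = [params]
    item.audioMix = mix
}
```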

Reverb effect in iPhone app

Can anyone please give pointers on how we can add a reverb effect to a recording in an iPhone app?
Vocal Live Free on the App Store is a pretty good example of how I would want to include the reverb effect.
Core Audio Overview in iOS documentation references reverb as an audio unit.
Any help beyond this will be helpful.
You can use the ObjectAL library. See the link below:
https://github.com/kstenerud/ObjectAL-for-iPhone
If you have access to the raw audio data, you can simply convolve it with a corresponding reverberation finite impulse response (FIR) filter kernel.
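As a rough illustration of the "reverb as an audio unit" route mentioned above (not ObjectAL; the preset, wet/dry mix, and function name are my own placeholders), AVAudioEngine's built-in reverb unit can be dropped between a player and the mixer:

```swift
import AVFoundation

// Illustrative sketch: play a recorded file through Apple's built-in reverb
// unit via AVAudioEngine. Preset and mix values are placeholders.
func playWithReverb(fileURL: URL) throws -> AVAudioEngine {
    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    let reverb = AVAudioUnitReverb()
    reverb.loadFactoryPreset(.largeHall)
    reverb.wetDryMix = 40               // 0 = dry, 100 = fully wet

    engine.attach(player)
    engine.attach(reverb)

    let file = try AVAudioFile(forReading: fileURL)
    engine.connect(player, to: reverb, format: file.processingFormat)
    engine.connect(reverb, to: engine.mainMixerNode, format: file.processingFormat)

    player.scheduleFile(file, at: nil)
    try engine.start()
    player.play()
    return engine
}
```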

Reading raw audio in the iPhone SDK

Hi,
I want to read raw audio and then perform some operations on it. What is the best way to do this?
Audio File Services, Audio Converter Services, and Extended Audio File Services, all in Core Audio. AV Foundation + Core Media (specifically AVAssetReader) may also be an option, but it's really new, and therefore even less documented and less well understood than Core Audio at this point.
If you are looking for sample code, "Audio Graph" is a good starting point. The developer has provided a bit of his own documentation that will help you quite a bit.
It will depend on the use for the audio. If latency is an issue, go for Audio Units. But if it is not, a higher-level API such as Audio Queues may be what you need.
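To make the AVAssetReader option mentioned above concrete, a rough sketch (my own; the output settings and processing stub are placeholders) of decoding a file's samples to linear PCM looks like this:

```swift
import AVFoundation
import CoreMedia

// Rough sketch of the AVAssetReader route: decode an audio file's samples to
// 16-bit linear PCM. Output settings and the processing stub are placeholders.
func readRawSamples(from url: URL) throws {
    let asset = AVURLAsset(url: url)
    guard let track = asset.tracks(withMediaType: .audio).first else { return }

    let outputSettings: [String: Any] = [
        AVFormatIDKey: kAudioFormatLinearPCM,
        AVLinearPCMBitDepthKey: 16,
        AVLinearPCMIsFloatKey: false,
        AVLinearPCMIsBigEndianKey: false,
        AVLinearPCMIsNonInterleaved: false
    ]
    let reader = try AVAssetReader(asset: asset)
    let output = AVAssetReaderTrackOutput(track: track, outputSettings: outputSettings)
    reader.add(output)
    reader.startReading()

    while let sampleBuffer = output.copyNextSampleBuffer(),
          let blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer) {
        var length = 0
        var dataPointer: UnsafeMutablePointer<Int8>?
        let status = CMBlockBufferGetDataPointer(blockBuffer, atOffset: 0,
                                                 lengthAtOffsetOut: nil,
                                                 totalLengthOut: &length,
                                                 dataPointerOut: &dataPointer)
        if status == kCMBlockBufferNoErr, let bytes = dataPointer {
            // `bytes` points at `length` bytes of interleaved 16-bit PCM; process them here.
            _ = bytes
        }
    }
}
```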
