Using AVAudioRecorder inside of AudioKit - iOS

Is there a known way to convert an AVAudioRecorder object into an AKNode object?
My use case for this is that I have an application that is pulling an audio stream from a custom piece of bluetooth hardware. I've already written all the handlers for this, and the output of the hardware ends up as an AVAudioRecorder.
I'd like to make use of the nicer audio visualisation that AK offers - specifically, plotting the amplitude on a graph in my view as audio is recorded - but to get that working it appears that I need to turn the AVAudioRecorder into an AKNode.
Is there an easy way to do this without going back through all the code that interfaces with the hardware and replacing it to use AKNode from the start?
I have gone through the documentation of AK and it doesn't seem possible at this time to use an existing AVAudioRecorder as a source node.
Thanks!

I don't believe so. AVAudioPlayer can't be used as a node either, but at least there is an AVAudioPlayerNode; there is no corresponding AVAudioRecorderNode.
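If all you need is the amplitude plot, one workaround that sidesteps AudioKit entirely is AVAudioRecorder's own metering API. A minimal sketch, assuming `recorder` is your existing recorder and that polling from a timer is acceptable for driving the plot (the `updatePlot` callback is a hypothetical hook into whatever view you draw with):

    import AVFoundation

    /// Polls the recorder's meters and hands a 0...1 amplitude to your plot.
    func startLevelPolling(for recorder: AVAudioRecorder,
                           updatePlot: @escaping (Float) -> Void) -> Timer {
        recorder.isMeteringEnabled = true                  // metering is off by default
        return Timer.scheduledTimer(withTimeInterval: 0.05, repeats: true) { _ in
            recorder.updateMeters()
            let db = recorder.averagePower(forChannel: 0)  // roughly -160 dB ... 0 dB
            updatePlot(powf(10, db / 20))                  // convert dB to a linear 0...1 value
        }
    }

It won't look as polished as AudioKit's plots, but it needs no changes to the hardware-facing code.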

Related

iOS: Custom real-time audio effect for audioEngine?

What is the best way to create a custom real-time audio effect for audioEngine in iOS?
I want to process audio at a low level - how do I do it right? Do I have to use an Audio Unit Extension? By "simpler", I meant: is it possible to inherit from an Audio Unit and use C code to change the audio data and send it back into the audio unit connection chain in audioEngine?
You may find that the AudioKit framework will let you do what you want. The problem with manipulating audio at a low level is that you have to deal with a lot of complex stuff tangential to what you are trying to achieve: just changing the playback rate of a sample means you have to deal with interpolation and anti-aliasing filters. AudioKit handles all of that for you, but it may mean you have to change the way you think about what you are trying to do.
https://github.com/AudioKit/AudioKit
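Not AudioKit, but before dropping to C it may also be worth checking whether one of the built-in AVAudioUnit effect nodes already does what you need; they slot straight into an AVAudioEngine chain without any extension. A minimal sketch (the file URL is a placeholder, and any stock effect node wires up the same way):

    import AVFoundation

    // player -> distortion -> main mixer -> output
    func makeEffectChain(fileURL: URL) throws -> (engine: AVAudioEngine, player: AVAudioPlayerNode) {
        let engine = AVAudioEngine()
        let player = AVAudioPlayerNode()
        let distortion = AVAudioUnitDistortion()
        distortion.loadFactoryPreset(.multiEcho1)
        distortion.wetDryMix = 50                          // 0 = dry only, 100 = wet only

        engine.attach(player)
        engine.attach(distortion)

        let file = try AVAudioFile(forReading: fileURL)
        engine.connect(player, to: distortion, format: file.processingFormat)
        engine.connect(distortion, to: engine.mainMixerNode, format: file.processingFormat)

        player.scheduleFile(file, at: nil, completionHandler: nil)
        try engine.start()
        player.play()
        return (engine, player)                            // keep both alive or playback stops
    }

If you really do need per-sample custom DSP, the supported route is what you suspected: an Audio Unit Extension, i.e. a subclass of AUAudioUnit whose render block can call into your C code.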

iOS: How to increase the bass and treble of an AVAudioPlayer in Swift?

Does anyone know how to increase the bass and treble of a track?
Within the same track, if I split it into 3 sections, can I adjust and have, say, 3 different levels of reverb, i.e. one in each section?
Thanks
I don't think it is possible to use EQ effects with an AVAudioPlayer.
A quick search gave me answers like this from StackOverflow:
can I use AVAudioPlayer to make an equalizer player?
Or this sadly unanswered question from Apple Developer Forums:
https://forums.developer.apple.com/thread/46998
Instead
What you can do instead is use AVAudioEngine (https://developer.apple.com/reference/avfoundation/avaudioengine), which gives you the opportunity to add an EQ node (or other effect nodes) to your AVAudioPlayerNode.
AVAudioEngine may seem daunting at first, but think of it as a mixer. You have some input nodes that generate sound (AVAudioPlayerNodes, for instance), and you can then attach and connect those nodes to your AVAudioEngine. The AVAudioEngine has an AVAudioMixerNode so you can control things like volume and so forth.
Between your input nodes and your mixer you can attach effect nodes, like an EQ node for instance, and you can add a "tap" to record the final output to a file if so desired.
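A minimal sketch of that player -> EQ -> mixer chain, assuming a local file and using two shelf bands of AVAudioUnitEQ as bass and treble controls (the frequencies and gains are only illustrative):

    import AVFoundation

    func playWithBassAndTreble(fileURL: URL) throws -> (AVAudioEngine, AVAudioPlayerNode) {
        let engine = AVAudioEngine()
        let player = AVAudioPlayerNode()
        let eq = AVAudioUnitEQ(numberOfBands: 2)

        eq.bands[0].filterType = .lowShelf      // bass
        eq.bands[0].frequency = 120             // Hz
        eq.bands[0].gain = 6                    // dB boost
        eq.bands[0].bypass = false

        eq.bands[1].filterType = .highShelf     // treble
        eq.bands[1].frequency = 8_000
        eq.bands[1].gain = 4
        eq.bands[1].bypass = false

        engine.attach(player)
        engine.attach(eq)

        let file = try AVAudioFile(forReading: fileURL)
        engine.connect(player, to: eq, format: file.processingFormat)
        engine.connect(eq, to: engine.mainMixerNode, format: file.processingFormat)

        player.scheduleFile(file, at: nil, completionHandler: nil)
        try engine.start()
        player.play()
        return (engine, player)                 // keep references alive while playing
    }

For the three-sections question, one option is to schedule each section separately (scheduleSegment, or separate buffers) and change the wetDryMix of an attached AVAudioUnitReverb between them.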
Reading material
This SlideShare introduction helped me a great deal in understanding what AVAudioEngine is (the code is Objective-C, but it should be understandable).
The AVAudioEngine in Practice from WWDC 2014 is a great introduction too.
So I hope you are not frightened by the above. As said, it may seem daunting at first, but once you get it wired together it works fine, and you have the option to add effects other than just EQ (pitch shifting, slowing down a file and so on).
Hope that helps you.
Unfortunately, it doesn't allow you to stream remote URLs. The only way around that is to download the track, convert it from MP4 or M4A to LPCM using the audio services APIs, and then schedule a buffer to run through the audio engine. AVPlayer, on the other hand, allows you to stream remote media, but it's extremely hard to attach an EQ to it... you may be able to look into MTAudioProcessingTap, but that only works with local files as well.
There is a good write-up on doing this through AVAudioEngine here
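For reference, the "download first, then schedule a buffer" route mentioned above looks roughly like this once the track is on disk; AVAudioFile decodes compressed formats to LPCM as it reads, so a separate conversion step is often unnecessary (`localURL` is assumed to be the already-downloaded file):

    import AVFoundation

    func makeBuffer(from localURL: URL) throws -> AVAudioPCMBuffer {
        let file = try AVAudioFile(forReading: localURL)        // m4a, mp3, caf, ...
        guard let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat,
                                            frameCapacity: AVAudioFrameCount(file.length)) else {
            throw NSError(domain: "BufferAllocation", code: -1)
        }
        try file.read(into: buffer)                             // decoded to PCM here
        return buffer
    }

    // Later, on an attached and connected AVAudioPlayerNode:
    // player.scheduleBuffer(buffer, at: nil, options: [], completionHandler: nil)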

Getting Volume Output from AVSpeechSynthesizer in iOS via Swift

I am trying to add a Siri-like button to a game I am working on. I am using AVSpeechSynthesizer and I am trying to create an animation that will move with the speech output volume.
I don't see a method to get the output volume of AVSpeechSynthesizer. Is there a way to get it via another framework?
An easy solution might be to show some visuals by registering an AVSpeechSynthesizerDelegate and listening for calls to speechSynthesizer:willSpeakRangeOfSpeechString:utterance:
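A minimal sketch of that delegate approach; pulse() is a hypothetical hook into whatever animation you drive:

    import AVFoundation

    final class SpeechAnimator: NSObject, AVSpeechSynthesizerDelegate {
        private let synthesizer = AVSpeechSynthesizer()
        private let pulse: () -> Void                      // hypothetical animation hook

        init(pulse: @escaping () -> Void) {
            self.pulse = pulse
            super.init()
            synthesizer.delegate = self
        }

        func speak(_ text: String) {
            synthesizer.speak(AVSpeechUtterance(string: text))
        }

        // Called just before each range is spoken - a cheap proxy for "the voice is active now".
        func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer,
                               willSpeakRangeOfSpeechString characterRange: NSRange,
                               utterance: AVSpeechUtterance) {
            pulse()
        }
    }

It does not give you an actual level, but for a Siri-style pulse it is often close enough.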
NOTE: This is not a solution but more a hint of what can be done based on experience.
A setup that is a bit harder to do would be to route the sound through AVFoundation's audio buffers and apply a simple Fourier transform to get the amplitudes, then use the amplitude as the "volume".
I use a similar technique to visualize, in real time, both the music being played and the live recording from the microphone input in a karaoke app I made.
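If you do go the buffer route (for example via an engine tap, or AVSpeechSynthesizer's write(_:toBufferCallback:) on newer iOS versions), a plain RMS per buffer is usually enough as a "volume"; a full FFT is only needed if you want per-band levels. A rough sketch:

    import AVFoundation
    import Accelerate

    // RMS of the first channel of one float PCM buffer, usable directly as a small 0...1-ish level.
    func rmsLevel(of buffer: AVAudioPCMBuffer) -> Float {
        guard let samples = buffer.floatChannelData?[0], buffer.frameLength > 0 else { return 0 }
        var rms: Float = 0
        vDSP_rmsqv(samples, 1, &rms, vDSP_Length(buffer.frameLength))
        return rms
    }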

Modifying Low, Mid, High Frequencies with Core Audio on iOS

I see that the only effect unit on iOS is the iPod EQ. Is there any other way to change the high, mid and low frequencies of an audio unit on iOS?
Unfortunately, the iPhone doesn't really allow custom AudioUnits (ie. an AudioUnit's ID cannot be registered for use by an AUGraph). What you can do is register a render callback and process the raw PCM data yourself. Sites like musicdsp.org have sample DSP code that you can utilize to implement any effect you can imagine.
Also, here is a similar StackOverflow question for reference: How to make a simple EQ AudioUnit
There are a bunch of built-in Audio Units, including a set of filters, delay and even reverb. A good clue is to look in AUComponent.h. You will need to get their ASBDs (AudioStreamBasicDescriptions) set up properly, otherwise they throw an error or produce silence. But they do work.
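For the ASBD part, something along these lines is what the built-in filter/delay/reverb units typically want on their input scope; the values assume the 32-bit float, non-interleaved layout, so adjust the sample rate and channel count to match your graph:

    import AudioToolbox

    // 44.1 kHz, stereo, 32-bit float, non-interleaved linear PCM.
    var streamFormat = AudioStreamBasicDescription(
        mSampleRate: 44_100,
        mFormatID: kAudioFormatLinearPCM,
        mFormatFlags: kAudioFormatFlagIsFloat
                    | kAudioFormatFlagIsPacked
                    | kAudioFormatFlagIsNonInterleaved,
        mBytesPerPacket: 4,
        mFramesPerPacket: 1,
        mBytesPerFrame: 4,              // per channel, since the data is non-interleaved
        mChannelsPerFrame: 2,
        mBitsPerChannel: 32,
        mReserved: 0
    )

    // `effectUnit` would be an AudioUnit created from one of the effect subtypes in AUComponent.h:
    // AudioUnitSetProperty(effectUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0,
    //                      &streamFormat, UInt32(MemoryLayout<AudioStreamBasicDescription>.size))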

Virtual Instrument App Recording Functionality With RemoteIO

I'm developing a virtual instrument app for iOS and am trying to implement a recording function so that the app can record and playback the music the user makes with the instrument. I'm currently using the CocosDenshion sound engine (with a few of my own hacks involving fades etc) which is based on OpenAL. From my research on the net it seems I have two options:
Keep a record of the user's inputs (ie. which notes were played at what volume) so that the app can recreate the sound (but this cannot be shared/emailed).
Hack my own low-level sound engine using AudioUnits & specifically RemoteIO so that I manually mix all the sounds and populate the final output buffer by hand and hence can save said buffer to a file. This will be able to be shared by email etc.
I have implemented a RemoteIO callback for rendering the output buffer in the hope that it would give me the previously played data in the buffer, but alas the buffer is always all zeros.
So my question is: is there an easier way to sniff/listen to what my app is sending to the speakers than my option 2 above?
Thanks in advance for your help!
I think you should use RemoteIO. I had a similar project several months ago and wanted to avoid RemoteIO and Audio Units as much as possible, but in the end, after I had written tons of code and read lots of documentation for third-party libraries (including CocosDenshion), I ended up using Audio Units anyway. More than that, it's not that hard to set up and work with. If you do look for a library to do most of the work for you, though, you should look for one written on top of Core Audio, not OpenAL.
You might want to take a look at the AudioCopy framework. It does a lot of what you seem to be looking for, and will save you from potentially reinventing some wheels.
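One more data point: if you are able to route playback through AVAudioEngine instead of OpenAL, "sniffing what goes to the speakers" reduces to a tap on the main mixer. A sketch, assuming the engine is already set up and playing (it will only capture audio that actually flows through this engine, so OpenAL output will not appear):

    import AVFoundation

    func startRecordingOutput(of engine: AVAudioEngine, to url: URL) throws -> AVAudioFile {
        let format = engine.mainMixerNode.outputFormat(forBus: 0)
        let outputFile = try AVAudioFile(forWriting: url, settings: format.settings)

        // Every buffer that reaches the main mixer gets appended to the file.
        engine.mainMixerNode.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
            try? outputFile.write(from: buffer)   // individual write errors are ignored in this sketch
        }
        return outputFile
    }

    // Stop with: engine.mainMixerNode.removeTap(onBus: 0)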

Resources