Create polyphonic DSP node in AudioKit 5 - signal-processing

I am looking for resources on how to create a polyphonic DSP node in AudioKit 5, so that I can connect it to and use it with AudioEngine. For the C++ DSP I am using Faust.
A single-voice Faust node works for me via faust2audiokit (AudioKit 5.0.1), but I haven't had any success with a polyphonic node.

I'm not sure about Faust DSP nodes, but the AudioKit oscillators are monophonic. For polyphonic synths they recommend using the DunneAudioKit Synth class. There is a polyphonic oscillator example in the AudioKit Cookbook, but it is basically a round-robin oscillator pool.
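For reference, the Cookbook's polyphonic example boils down to something like the following rough sketch (my own condensation, assuming AudioKit 5's Oscillator, Mixer, and AudioEngine; envelopes and voice stealing are omitted):

```swift
import AudioKit
// In AudioKit 5.3+ Oscillator lives in SoundpipeAudioKit; in 5.0.x it is in AudioKit itself.

// A minimal round-robin voice pool: each note grabs the next oscillator in the array.
final class RoundRobinSynth {
    private let voices = (0..<8).map { _ in Oscillator() }
    private var nextVoice = 0
    let output = Mixer()

    init() {
        for osc in voices {
            osc.amplitude = 0
            osc.start()
            output.addInput(osc)
        }
    }

    func noteOn(frequency: AUValue) {
        let osc = voices[nextVoice]
        osc.frequency = frequency
        osc.amplitude = 0.5
        nextVoice = (nextVoice + 1) % voices.count
    }

    func allNotesOff() {
        voices.forEach { $0.amplitude = 0 }
    }
}

// Usage: plug the pool's mixer into the engine as usual.
// let engine = AudioEngine()
// let synth = RoundRobinSynth()
// engine.output = synth.output
// try engine.start()
// synth.noteOn(frequency: 440)
```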

Related

How to make an AudioKit polyphonic node using STK physical models?

I have been looking for a way to turn the STK physical models into a polyphonic AudioKit node. There are examples inside the AudioKit repo, but the full process of triggering notes gets complex.
If someone can point out a simpler way to integrate an STK model as an AudioKit node, that would be very helpful.

iOS: how to combine AVAudioEngine speech recognition with AudioKit microphone pitch detection?

I would like to use AVFoundation's microphone input for speech detection, as shown in this iOS example, and simultaneously detect the pitch of the user's voice through the same microphone input using AudioKit. The latter API is probably a wrapper around the former, but it has its own classes and initialization. Is there a way to hand AudioKit an existing microphone configuration, as in the speech example, or some other way to use the Speech API and AudioKit's microphone pitch detection simultaneously? How might I achieve this?
EDIT: The question is a little more complex.
I need to be able to synchronize three things: touch events, AudioKit detection times, and speech detection times. Each of these operates on a different timebase. Speech gives me segment timestamps relative to the beginning of the audio recording, the timestamps for UITouch events will be different, and I am not sure what AudioKit uses for its timestamps. There is some mention of host time and AV timestamps here, but I'm not sure this will get me anywhere.
Synchronizing speech and audio this way is still unclear to me. Could someone give me a lead on how this might work?
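No answer is recorded here, but one common pattern (a hedged sketch, not from the original thread) is to own the AVAudioEngine yourself, install a single tap on the input node, and feed those buffers to an SFSpeechAudioBufferRecognitionRequest; the same buffers and their AVAudioTime host-time stamps are then available to whatever pitch detection you run, which keeps everything on one microphone stream and one clock:

```swift
import AVFoundation
import Speech

// A minimal sketch: one input tap feeds the Speech framework, and the same
// buffers/timestamps are available for pitch detection, so both share one
// microphone stream. (Speech authorization and error handling omitted.)
let engine = AVAudioEngine()
let recognizer = SFSpeechRecognizer()
let request = SFSpeechAudioBufferRecognitionRequest()

let input = engine.inputNode
let format = input.outputFormat(forBus: 0)

input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, when in
    // Speech recognition consumes the buffer...
    request.append(buffer)
    // ...and `when` is an AVAudioTime whose hostTime can be compared against
    // touch-event timestamps, which helps with the synchronization problem.
    // Pass `buffer` to your pitch-tracking code here as well.
}

_ = recognizer?.recognitionTask(with: request) { result, _ in
    if let result = result {
        // Segment timestamps are relative to the start of this audio stream.
        print(result.bestTranscription.formattedString)
    }
}

engine.prepare()
try? engine.start()
```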

AudioKit - Frequency Modulated Tones at different dB levels?

Is it possible to generate frequency-modulated tones with AudioKit inside an iOS application using Swift?
My goal is to play pure tones AND frequency-modulated tones (also known as "warble tones") on a button press. I have managed to play pure tones at different frequencies, but I am struggling to generate a warble tone.
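There is no accepted answer here, but as a hedged sketch: AudioKit's FMOscillator node (in SoundpipeAudioKit for recent AudioKit 5 releases, in AudioKit itself for earlier 5.x) produces exactly this kind of frequency-modulated "warble"; the parameter values below are only illustrative:

```swift
import AudioKit
// FMOscillator ships in SoundpipeAudioKit for AudioKit 5.3+.

let engine = AudioEngine()

// A frequency-modulated tone: the warble rate and depth come from the
// modulating multiplier and the modulation index.
let fm = FMOscillator(
    baseFrequency: 1000,         // carrier pitch in Hz
    carrierMultiplier: 1,
    modulatingMultiplier: 0.005, // modulator at ~5 Hz relative to a 1 kHz carrier
    modulationIndex: 20,         // modulation depth
    amplitude: 0.5
)

engine.output = fm

// e.g. on a button press:
try? engine.start()
fm.start()

// and to stop:
// fm.stop()
```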

How to do real-time audio convolution in Swift for iOS app?

I am trying to build an iOS app that takes one mono input (real-time from the mic) and a two-channel impulse response, convolves them in real time, and produces a two-channel (stereo) output. Is there a way to do that on iOS with Apple's Audio Toolbox?
You should first decide whether you will do the convolution in the time domain or the frequency domain - there are benefits to both depending on the length of your signal and impulse response, and this is somewhere you should do your own research.
For the time domain, rolling your own convolution is straightforward enough. For the frequency domain you will need an FFT; you could roll your own, but more efficient implementations already exist - the Accelerate framework, for example, has this built in.
For basic I/O, Audio Toolbox is a valid choice.
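To make the time-domain option concrete (this is an illustration, not part of the original answer), Accelerate's vDSP can convolve a mono block against each channel of a stereo impulse response directly; a minimal sketch:

```swift
import Accelerate

// A hedged sketch of the time-domain route: directly convolve one block of mono
// input against each channel of a stereo impulse response.
// Note: vDSP.convolve returns input.count - kernel.count + 1 samples, so for
// streaming audio the caller must prepend the last (IR length - 1) samples of the
// previous block to keep the output continuous. Long impulse responses are usually
// better served by FFT-based (partitioned) convolution.
func convolveStereo(monoBlock: [Float],
                    irLeft: [Float],
                    irRight: [Float]) -> (left: [Float], right: [Float]) {
    let left  = vDSP.convolve(monoBlock, withKernel: irLeft)
    let right = vDSP.convolve(monoBlock, withKernel: irRight)
    return (left, right)
}
```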

Difference between AKFrequencyTracker and AKMicrophoneTracker in AudioKit?

I found that there are AKFrequencyTracker and AKMicrophoneTracker in the library, and both provide frequency as a parameter.
The questions are:
What is the difference between AKFrequencyTracker and AKMicrophoneTracker?
Which class is better for real-time detection of singing from the microphone?
Many thanks in advance.
AKMicrophoneTracker is a standalone class that just reads from the microphone and nothing more, whereas AKFrequencyTracker is a node that can be inserted at any point in your signal chain. They both use the same frequency-detection algorithm; it's just that AKMicrophoneTracker is easier to use for the common case where all you need is pitch detection and nothing else AudioKit provides.
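A rough sketch of that difference in practice, using the AudioKit 4-era API these class names come from (exact initializers vary slightly across 4.x releases):

```swift
import AudioKit

// AKMicrophoneTracker: standalone, reads the microphone directly, no signal chain needed.
let micTracker = AKMicrophoneTracker()
micTracker.start()
print(micTracker.frequency, micTracker.amplitude)

// AKFrequencyTracker: a node, so it can sit at any point in a signal chain,
// here directly after the microphone.
let mic = AKMicrophone()
let tracker = AKFrequencyTracker(mic)
AudioKit.output = AKBooster(tracker, gain: 0) // keep the chain alive but silent
try? AudioKit.start()
print(tracker.frequency, tracker.amplitude)
```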
