AudioKit - Frequency Modulated Tones at different dB levels? - iOS

Is it possible to generate frequency-modulated tones using AudioKit inside an iOS application using Swift?
My goal is to have pure tones AND frequency-modulated tones (also known as "warble tones") played on a button press. I have managed to play pure tones at different frequencies, but I am struggling to generate a warble tone.
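One way to get a warble tone, as a minimal sketch assuming AudioKit 4.x (where the class is AKFMOscillator; AudioKit 5 renames it to FMOscillator and replaces the AudioKit singleton with AudioEngine), is an FM oscillator whose modulator runs at a few hertz, so the carrier wavers around the pure-tone frequency:

```swift
import AudioKit

// Sketch, assuming AudioKit 4.x. The parameter values are illustrative:
// a 1 kHz carrier warbled at ~5 Hz with roughly ±50 Hz of deviation.
let warble = AKFMOscillator(
    waveform: AKTable(.sine),
    baseFrequency: 1_000,         // the pure-tone (carrier) frequency in Hz
    carrierMultiplier: 1,
    modulatingMultiplier: 0.005,  // modulator = 1_000 * 0.005 = 5 Hz warble rate
    modulationIndex: 10,          // deviation ≈ index * modulator frequency ≈ ±50 Hz
    amplitude: 0.5)               // for a level in dB re full scale: pow(10, dB / 20)

func playWarble() throws {
    AudioKit.output = warble
    try AudioKit.start()
    warble.start()                // call from the button handler; warble.stop() to end
}
```

Setting modulationIndex to 0 removes the deviation and gives back a plain pure tone, so the same oscillator can serve both kinds of button.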

Related

iOS: how to combine AVAudioEngine speech recognition with AudioKit microphone pitch detection?

I would like to use AVFoundation for microphone input for speech detection, as shown in this iOS example, and simultaneously detect the pitch of the user's voice through the same microphone input using AudioKit. The latter API is probably a wrapper around the former, but it has its own classes and initialization. Is there a way to provide AudioKit with an existing microphone configuration, as in the speech example, or some alternative way to use the Speech API and AudioKit's microphone pitch detection simultaneously? How might I achieve this?
EDIT: The question is a little more complex than it first appears.
I need to be able to synchronize three things: touch events, AudioKit detection times, and speech detection times. Each of these operates on a different timebase. Speech gives me segment timestamps relative to the beginning of the audio recording. The timestamps for UITouch events will be different. I am not sure what AudioKit uses for its timestamps. There is some mention of host time and AV timestamps here, but I'm not sure this will get me anywhere.
The synchronization between speech and audio is still a little unclear to me. Could someone give me a lead on how this might work?
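One possible lead, sketched under the assumption that everything can be referenced to the mach host-time clock: UITouch.timestamp is seconds since system startup, AVAudioTime.seconds(forHostTime:) converts an audio buffer's host time to that same reference, and Speech segment timestamps are relative to the start of the audio you feed it. Recording the host time of the first tap buffer then gives a common origin (names such as handleTapBuffer below are just placeholders):

```swift
import AVFoundation
import UIKit

/// Host-time seconds of the first audio buffer delivered to the input tap.
var recordingStartSeconds: TimeInterval?

// Called from the AVAudioEngine input-node tap that also feeds the
// SFSpeechAudioBufferRecognitionRequest and AudioKit's tracker.
func handleTapBuffer(_ buffer: AVAudioPCMBuffer, at when: AVAudioTime) {
    if recordingStartSeconds == nil, when.isHostTimeValid {
        // Convert mach host time to seconds on the same clock UITouch uses.
        recordingStartSeconds = AVAudioTime.seconds(forHostTime: when.hostTime)
    }
    // ... append the buffer to the recognition request, etc.
}

/// A touch expressed on the recording's timeline, comparable with
/// SFTranscriptionSegment.timestamp values.
func touchTimeRelativeToRecording(_ touch: UITouch) -> TimeInterval? {
    guard let start = recordingStartSeconds else { return nil }
    // UITouch.timestamp is seconds since system startup, which is the same
    // reference that mach host time is derived from.
    return touch.timestamp - start
}
```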

How to do real-time audio convolution in Swift for iOS app?

I am trying to build an iOS app that takes a mono input (real-time from the mic) and convolves it in real time with a two-channel impulse response, producing a two-channel (stereo) output. Is there a way to do that on iOS with Apple's Audio Toolbox?
You should first decide whether you will be doing the convolution in the time domain or the frequency domain; there are benefits to both, depending on the length of your signal and impulse response. This is somewhere you should do your own research.
For the time domain, rolling your own convolution is straightforward enough. For the frequency domain you will need an FFT function; you could roll your own, but more efficient implementations already exist. For example, the Accelerate framework has this implemented already.
But for basic I/O, Audio Toolbox is a valid choice.
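As an illustration of the time-domain route, a minimal sketch of direct convolution with Accelerate's vDSP_conv (the reversed-filter pointer and negative stride turn vDSP's correlation into a convolution; for long impulse responses you would switch to FFT-based fast convolution, and for stereo output you would run this once per impulse-response channel):

```swift
import Accelerate

/// Direct (time-domain) convolution of a mono signal with one impulse-response
/// channel. Returns signal.count + impulse.count - 1 samples.
func convolve(_ signal: [Float], with impulse: [Float]) -> [Float] {
    guard !signal.isEmpty, !impulse.isEmpty else { return [] }
    let resultCount = signal.count + impulse.count - 1

    // vDSP_conv reads a full filter-length window for every output sample,
    // so pad the input with impulse.count - 1 zeros on each side.
    var padded = [Float](repeating: 0, count: resultCount + impulse.count - 1)
    padded.replaceSubrange((impulse.count - 1) ..< (impulse.count - 1 + signal.count),
                           with: signal)

    var result = [Float](repeating: 0, count: resultCount)
    impulse.withUnsafeBufferPointer { f in
        // Pointing at the last filter element with stride -1 performs
        // convolution rather than correlation.
        vDSP_conv(padded, 1,
                  f.baseAddress!.advanced(by: impulse.count - 1), -1,
                  &result, 1,
                  vDSP_Length(resultCount), vDSP_Length(impulse.count))
    }
    return result
}
```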

How to amplify voice recorded from a far distance

When a person speaks far away from the phone, the recorded voice is quiet; when the person speaks near the phone, the recorded voice is loud. I want to play back the human voice at a consistent volume no matter how far away (within reason) the speaker is from the phone when the voice is recorded.
What I have already tried:

1. Adjusting the volume based on the dB level, e.g. via AVAudioPlayer. But the problem is that the dB level includes all the environmental sound, so it only works when the human voice varies heavily.
2. Then I thought I should find a way to sample the intensity of the human voice in the recording, which leads me to voice recognition. But this is a huge topic, and I cannot narrow down the areas that could solve my problem.
Voice recorded from a distance suffers from significant corruption: one problem is noise, another is echo. To amplify it you first need to clean the voice of echo and noise. Ideally you would do that with a better microphone, but if only a single microphone is available you have to apply signal processing. The signal-processing algorithms you are interested in are:

- Noise cancellation. You can find many examples on Google, from simple to very advanced ones.
- Echo cancellation. Again, you can find many implementations.

There is no ready-made library to do the above; you will have to implement a large part yourself. You can look at the WebRTC code, which has both noise and echo cancellation, as described in this question:
Is it possible to reduce background noise while streaming audio on the iPhone?
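Whatever cleanup you apply, the amplification step itself is simple. A crude per-buffer normalization sketch with Accelerate follows; note that it boosts whatever is in the buffer, residual noise included, which is exactly why the noise and echo suppression has to come first (targetRMS and maxGain are illustrative values):

```swift
import Accelerate

/// Scale a buffer of samples so its RMS level approaches a target level.
/// A crude automatic-gain-control step; run it after noise/echo suppression.
func normalize(_ samples: inout [Float], targetRMS: Float = 0.1, maxGain: Float = 20) {
    var rms: Float = 0
    vDSP_rmsqv(samples, 1, &rms, vDSP_Length(samples.count))
    guard rms > 0 else { return }                 // silence: nothing to scale

    var gain = min(targetRMS / rms, maxGain)      // cap the gain to limit noise blow-up
    samples.withUnsafeMutableBufferPointer { buf in
        vDSP_vsmul(buf.baseAddress!, 1, &gain,
                   buf.baseAddress!, 1, vDSP_Length(buf.count))
    }
}
```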

Playing multiple audio streams and setting volume with Novocaine in iOS

How do I play multiple audio streams and change their volume using Novocaine?
Thanks!
There is a similar question for which I wrote quite a lengthy response:
Using Novocaine in an audio app
Basically, playing multiple sounds at once involves mixing the various sounds down sample by sample. Changing the volume involves multiplying the samples in the audio buffer by some amplitude value; that is, if you want your output to be twice as loud, simply multiply every sample by 2.0f. The Accelerate framework can help you with this.
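For instance, a small sketch of both operations with Accelerate (the buffers and volume values are placeholders; in Novocaine you would do this inside the output block before copying into the hardware buffer):

```swift
import Accelerate

/// Mix two mono buffers with per-source volume into one output buffer.
func mix(_ a: [Float], volumeA: Float,
         _ b: [Float], volumeB: Float) -> [Float] {
    let n = min(a.count, b.count)
    var scaledA = [Float](repeating: 0, count: n)
    var scaledB = [Float](repeating: 0, count: n)
    var out = [Float](repeating: 0, count: n)

    var vA = volumeA
    var vB = volumeB
    vDSP_vsmul(a, 1, &vA, &scaledA, 1, vDSP_Length(n))          // volume: scale each sample
    vDSP_vsmul(b, 1, &vB, &scaledB, 1, vDSP_Length(n))
    vDSP_vadd(scaledA, 1, scaledB, 1, &out, 1, vDSP_Length(n))  // mixing: sample-wise sum
    return out
}
```

Keep the combined volumes at or below 1.0 (or clip the sum) so the mixed signal does not overflow the output range.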

In iOS, is it possible to obtain waveform or spectrum data of the current track? eg: equalizer

Is it possible to read the spectrum data from the currently playing track in iOS? For example, to make an equalizer similar to the one in iTunes?
Apple has a sample program, "aurioTouch," which can display time-domain and frequency-domain waveforms.
http://developer.apple.com/library/ios/#samplecode/aurioTouch/Introduction/Intro.html
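Note that you can only analyze audio that flows through your own app (for example your own AVAudioEngine graph); iOS does not let you tap another app's playback. Under that assumption, a sketch of getting magnitude-spectrum data from a mixer tap with a vDSP FFT might look like this (the 1024-sample window and the lack of windowing and scaling are simplifications):

```swift
import AVFoundation
import Accelerate

/// Computes squared-magnitude spectra for 1024-sample windows of mono audio.
final class SpectrumAnalyzer {
    private let fftSize = 1024
    private let log2n = vDSP_Length(10)            // log2(1024)
    private let fftSetup = vDSP_create_fftsetup(10, FFTRadix(kFFTRadix2))!

    deinit { vDSP_destroy_fftsetup(fftSetup) }

    func magnitudes(for samples: [Float]) -> [Float] {
        guard samples.count >= fftSize else { return [] }
        var real = [Float](repeating: 0, count: fftSize / 2)
        var imag = [Float](repeating: 0, count: fftSize / 2)
        var output = [Float](repeating: 0, count: fftSize / 2)

        real.withUnsafeMutableBufferPointer { realPtr in
            imag.withUnsafeMutableBufferPointer { imagPtr in
                var split = DSPSplitComplex(realp: realPtr.baseAddress!,
                                            imagp: imagPtr.baseAddress!)
                // Pack the real samples into split-complex form for the real FFT.
                samples.withUnsafeBufferPointer { buf in
                    buf.baseAddress!.withMemoryRebound(to: DSPComplex.self,
                                                       capacity: fftSize / 2) {
                        vDSP_ctoz($0, 2, &split, 1, vDSP_Length(fftSize / 2))
                    }
                }
                vDSP_fft_zrip(fftSetup, &split, 1, log2n, FFTDirection(FFT_FORWARD))
                vDSP_zvmags(&split, 1, &output, 1, vDSP_Length(fftSize / 2))
            }
        }
        return output                               // one value per frequency bin
    }
}

// Usage: tap your own engine's mixer and feed each buffer to the analyzer.
let engine = AVAudioEngine()
let analyzer = SpectrumAnalyzer()
engine.mainMixerNode.installTap(onBus: 0, bufferSize: 1024, format: nil) { buffer, _ in
    guard let channel = buffer.floatChannelData?[0] else { return }
    let samples = Array(UnsafeBufferPointer(start: channel,
                                            count: Int(buffer.frameLength)))
    let spectrum = analyzer.magnitudes(for: samples)  // drive the equalizer view with this
    _ = spectrum
}
```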
