MTAudioProcessingTap - produce more output samples? (iOS)

Inside my iOS 8.0 app I need to apply custom audio processing to (non-realtime) audio playback. Typically, the audio comes from a device-local audio file.
Currently, I use an MTAudioProcessingTap on an AVMutableAudioMix. Inside the process callback I then call my processing code. In certain cases this processing code may produce more samples than are passed in, and I wonder what the best way to handle this is (think of a time-stretching effect, for example).
The process callback receives a CMItemCount *numberFramesOut argument that reports the number of outgoing frames. For in-place processing, where the number of incoming and outgoing frames is identical, this is no problem. In the case where my processing generates more samples, I need a way to keep playback going until my output buffers are drained.
Is MTAudioProcessingTap the right choice here anyway?

MTAudioProcessingTap does not support changing the number of samples between the input and the output (to skip silences for instance).
You will need a custom audio unit graph for this.

A circular buffer/FIFO is one of the most common ways to mediate between different producer and consumer rates, as long as the long-term rates match. If, over the long term, you plan on producing more samples than are played, you may need to occasionally pause production temporarily, while still playing, so that you don't fill up the entire buffer or the system's memory.
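For illustration, here is a minimal single-producer/single-consumer ring buffer sketch in C. The type and function names are invented for this example; it assumes a power-of-two capacity, one writer thread and one reader thread, and it pads with silence when the consumer runs dry:

// Minimal single-producer / single-consumer ring buffer sketch (names are
// illustrative, not from any Apple API). One thread writes, one thread reads;
// the capacity is a power of two so index masking stays cheap.
#include <stdatomic.h>
#include <stdint.h>
#include <string.h>

#define RING_CAPACITY 16384   /* frames, power of two */

typedef struct {
    float            samples[RING_CAPACITY];
    _Atomic uint32_t writeIndex;   /* total frames ever written */
    _Atomic uint32_t readIndex;    /* total frames ever read    */
} RingBuffer;

/* Producer side: returns the number of frames actually stored. */
static uint32_t RingBufferWrite(RingBuffer *rb, const float *src, uint32_t count) {
    uint32_t w = atomic_load(&rb->writeIndex);
    uint32_t r = atomic_load(&rb->readIndex);
    uint32_t space = RING_CAPACITY - (w - r);
    if (count > space) count = space;          /* buffer full: caller may retry later */
    for (uint32_t i = 0; i < count; i++)
        rb->samples[(w + i) & (RING_CAPACITY - 1)] = src[i];
    atomic_store(&rb->writeIndex, w + count);
    return count;
}

/* Consumer side (e.g. the audio callback): fills any shortfall with silence. */
static void RingBufferRead(RingBuffer *rb, float *dst, uint32_t count) {
    uint32_t w = atomic_load(&rb->writeIndex);
    uint32_t r = atomic_load(&rb->readIndex);
    uint32_t available = w - r;
    uint32_t n = (count < available) ? count : available;
    for (uint32_t i = 0; i < n; i++)
        dst[i] = rb->samples[(r + i) & (RING_CAPACITY - 1)];
    memset(dst + n, 0, (count - n) * sizeof(float));  /* underrun -> silence */
    atomic_store(&rb->readIndex, r + n);
}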

Related

Time diff between AudioOutputUnitStart() and render callback for RemoteIO

This is a pretty "minutiae" question regarding timing...
I'm using iOS's RemoteIO audio unit to do things. I just wonder how exactly the system handles the timing: after calling AudioOutputUnitStart(), the unit should be "on", and render callbacks will then be pulled by downstream units. Allow me to guess:
Possibility 1: the next render callback happens right after the execution of AudioOutputUnitStart(), and things go on from there.
Possibility 2: the system has its own render callback rhythm. After calling AudioOutputUnitStart(), the next render callback catches one of the system's "next" ticks and starts from there.
1 or 2? Or is there a 3? Thanks in advance!
The audio latency seems to depend on the specific device model, audio session and options, requested sample rate and buffer size, and whether any other audio (background or recently closed app) is or has recently been playing or recording on the system. Whether or not the internal audio amplifier circuits (etc.) need to be powered up or are already turned on may make the biggest difference. Requesting certain sample rates seems to also cause extra time due to the buffering potentially needed by the OS resampling and mixer code.
So likely (2) or (3).
The best way to minimize latency when using RemoteIO is to request very short buffers (1 to 6 ms) in the audio session setup, start the audio session and Audio Unit way ahead of time (at app startup, view load, etc.), then fill the callback buffers with zeros (or discard recorded callback data) until you need sound.
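For example, a render callback along these lines keeps the unit running but silent until sound is needed. The gSoundNeeded flag and the non-interleaved Float32 buffer layout are assumptions for this sketch:

// Hypothetical RemoteIO render callback that keeps the unit "warm" by
// emitting silence until the app actually has audio to play.
#include <AudioUnit/AudioUnit.h>
#include <stdbool.h>
#include <string.h>

static volatile bool gSoundNeeded = false;   /* flip to true when playback should start */

static OSStatus RenderCallback(void                       *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp       *inTimeStamp,
                               UInt32                      inBusNumber,
                               UInt32                      inNumberFrames,
                               AudioBufferList            *ioData) {
    if (!gSoundNeeded) {
        // Keep the hardware running but output silence.
        for (UInt32 b = 0; b < ioData->mNumberBuffers; b++)
            memset(ioData->mBuffers[b].mData, 0, ioData->mBuffers[b].mDataByteSize);
        *ioActionFlags |= kAudioUnitRenderAction_OutputIsSilence;
        return noErr;
    }
    // ...fill ioData with real audio here...
    return noErr;
}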

How to find the offset between two audio files? One is noisy and one is clear

I have a scenario in which the user captures a concert scene, recording the performer's audio in real time, while at the same time the device is downloading a live stream from an audio broadcaster device. Later I replace the noisy real-time audio (captured while recording) with the audio I have streamed and saved on my phone (good-quality audio). Right now I set the audio offset manually on a trial-and-error basis while merging, so that I can sync the audio and video activity at the exact position.
Now what I want to do is automate the audio synchronisation process. Instead of merging the video with the clear audio at a given offset, I want to merge the video with the clear audio automatically, with proper sync.
For that I need to find the offset at which I should replace the noisy audio with the clear audio. E.g. when the user starts and then stops the recording, I will take that sample of real-time audio, compare it with the live-streamed audio, take the exact matching part of that audio, and sync it at the right time.
Does anyone have any idea how to find the offset by comparing the two audio files and syncing with the video?
Here's a concise, clear answer.
• It's not easy - it will involve signal processing and math.
• A quick Google gives me this solution, code included.
• There is more info on the above technique here.
• I'd suggest gaining at least a basic understanding before you try and port this to iOS.
• I would suggest you use the Accelerate framework on iOS for fast Fourier transforms, etc.
• I don't agree with the other answer about doing it on a server - devices are plenty powerful these days. A user wouldn't mind a few seconds of processing for something seemingly magic to happen.
Edit
As an aside, I think it's worth taking a step back for a second. While math and fancy signal processing like this can give great results, and do some pretty magical stuff, there can be outlying cases where the algorithm falls apart (hopefully not often).
What if, instead of getting complicated with signal processing, there's another way? After some thought, there might be. If you meet all the following conditions:
• You are in control of the server component (audio broadcaster device)
• The broadcaster is aware of the 'real audio' recording latency
• The broadcaster and receiver are communicating in a way that allows accurate time synchronisation
...then the task of calculating audio offset becomes reasonably trivial. You could use NTP or some other more accurate time synchronisation method so that there is a global point of reference for time. Then, it is as simple as calculating the difference between audio stream time codes, where the time codes are based on the global reference time.
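As a toy illustration (the variable names are assumed, not from the question), the offset then reduces to a subtraction of the two time codes, converted into samples:

// Illustrative only: with both devices on a shared NTP-derived clock, the
// offset is the difference of the two start time codes, in samples.
#include <math.h>
#include <stdint.h>

int64_t OffsetInSamples(double cleanStreamStartSeconds,    /* broadcaster time code     */
                        double noisyRecordingStartSeconds, /* phone recording time code */
                        double sampleRate) {
    double offsetSeconds = noisyRecordingStartSeconds - cleanStreamStartSeconds;
    return (int64_t)llround(offsetSeconds * sampleRate);   /* round to nearest sample */
}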
This could prove to be a difficult problem: even though the signals are of the same event, the presence of noise makes a comparison harder. You could consider running some post-processing to reduce the noise, but noise reduction is in itself an extensive, non-trivial topic.
Another problem could be that the signals captured by the two devices actually differ a lot. For example, the good-quality audio (I guess the output from the live mix console?) will be fairly different from the live version (which I guess comes from the on-stage monitors / FOH system, captured by a phone mic?).
Perhaps the simplest possible approach to start would be to use cross correlation to do the time delay analysis.
A peak in the cross correlation function would suggest the relative time delay (in samples) between the two signals, so you can apply the shift accordingly.
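A brute-force sketch of that idea in C follows (the names are illustrative; it assumes the clean stream starts at or before the noisy recording, and a real implementation would use an FFT-based correlation, e.g. via the Accelerate framework, instead of this O(N*maxLag) loop):

// Naive cross-correlation for time-delay estimation between a noisy phone
// recording and a clean reference stream. Returns the lag (in samples) at
// which the clean signal lines up best with the noisy one.
#include <stddef.h>

long EstimateDelay(const float *noisy, size_t noisyLen,
                   const float *clean, size_t cleanLen,
                   long maxLag) {
    long   bestLag   = 0;
    double bestScore = 0.0;
    for (long lag = 0; lag <= maxLag; lag++) {
        double score = 0.0;
        for (size_t i = 0; i + (size_t)lag < noisyLen && i < cleanLen; i++)
            score += (double)noisy[i + (size_t)lag] * (double)clean[i];
        if (score > bestScore) { bestScore = score; bestLag = lag; }
    }
    return bestLag;   /* shift the clean audio by this many samples */
}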
I don't know a lot about the subject, but I think you are looking for "audio fingerprinting". Similar question here.
An alternative (and more error-prone) way is to run both sounds through a speech-to-text library (or an API) and match the relevant parts. This would of course not be very reliable: phrases frequently repeat in songs, and the concert may be instrumental.
Also, doing audio processing on a mobile device may not play well (because of low performance, high battery drain, or both). I suggest you use a server if you go that way.
Good luck.

Playing back a WAV file streamed gradually over a network connection in iOS

I'm working with a third party API that behaves as follows:
I have to connect to its URL and make my request, which involves POSTing request data;
the remote server then sends back, "chunk" at a time, the corresponding WAV data (which I receive in my NSURLConnectionDataDelegate's didReceiveData callback).
By "chunk" for argument's sake, we mean some arbitrary "next portion" of the data, with no guarantee that it corresponds to any meaningful division of the audio (e.g. it may not be aligned to a specific multiple of audio frames, the number of bytes in each chunk is just some arbitrary number that can be different for each chunk, etc).
Now (correct me if I'm wrong) I can't simply use an AVAudioPlayer, because I need to POST to my URL, so I need to pull the data back "manually" via an NSURLConnection.
So... given the above, what is then the most painless way for me to play back that audio as it comes down the wire? (I appreciate that I could concatenate all the arrays of bytes and then pass the whole thing to an AVAudioPlayer at the end; only that would delay the start of playback, as I'd have to wait for all the data.)
I will give a bird's eye view of the solution. I think this will help you a great deal in finding a concrete, coded solution.
iOS provides a zoo of audio APIs, and several of them can be used to play audio. Which one you choose depends on your particular requirements. As you wrote already, the AVAudioPlayer class is not suitable for your case, because with it you need to know all the audio data at the moment you start playing. Obviously, this is not the case for streaming, so we have to look for an alternative.
A good tradeoff between ease of use and versatility is Audio Queue Services, which I recommend for you. Another alternative would be Audio Units, but they are a low-level C API and therefore less intuitive to use, and they have many pitfalls. So stick to Audio Queues.
Audio Queues allow you to define callback functions which are called from the API when it needs more audio data for playback - similarly to the callback of your network code, which gets called when there is data available.
Now the difficulty is how to connect two callbacks, one which supplies data and one which requests data. For this, you have to use a buffer. More specifically, a queue (don't confuse this queue with the Audio Queue stuff. Audio Queue Services is the name of an API. On the other hand, the queue I'm talking about next is a container object). For clarity, I will call this one buffer-queue.
To fill data into the buffer-queue you will use the network callback function, which supplies data to you from the network. And data will be taken out of the buffer-queue by the audio callback function, which is called by the Audio Queue Services when it needs more data.
You have to find a buffer-queue implementation which supports concurrent access (i.e. it is thread-safe), because it will be accessed from two different threads, the audio thread and the network thread.
As an alternative to finding an already thread-safe buffer-queue implementation, you can take care of the thread safety yourself, e.g. by executing all code dealing with the buffer-queue on a certain dispatch queue (a third kind of queue here; yes, Apple and IT love them).
Now, what happens if either
The audio callback is called and your buffer-queue is empty, or
The network callback is called and your buffer-queue is already full?
In both cases, the respective callback function can't proceed normally. The audio callback function can't supply audio data if there is none available and the network callback function can't store incoming data if the buffer-queue is full.
In these cases, I would first try blocking further execution until more data is available or, respectively, until space becomes available to store data. On the network side, this will most likely work. On the audio side, this might cause problems. If it does, you have an easy solution: if you have no data, simply supply silence as data. That means you need to supply zero-frames to the Audio Queue Services, which it will play as silence to fill the gap until more data is available from the network.
This is the concept that all streaming players use when suddenly the audio stops and it tells you "buffering" next to some kind of spinning icon indicating that you have to wait and nobody knows for how long.
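A minimal sketch of such an output callback in C, assuming linear PCM and an application-defined, thread-safe FifoRead() helper (the helper and its behaviour are assumptions, not part of the Audio Queue API):

// Audio Queue output callback that drains a thread-safe buffer-queue and
// pads any shortfall with zeros, which the queue plays as silence.
#include <AudioToolbox/AudioToolbox.h>
#include <string.h>

// Assumed application-defined FIFO; returns the number of bytes actually copied.
extern size_t FifoRead(void *fifo, void *dst, size_t maxBytes);

static void OutputCallback(void *inUserData,
                           AudioQueueRef inAQ,
                           AudioQueueBufferRef inBuffer) {
    void  *fifo     = inUserData;
    size_t capacity = inBuffer->mAudioDataBytesCapacity;
    size_t gotBytes = FifoRead(fifo, inBuffer->mAudioData, capacity);

    if (gotBytes < capacity) {
        // Underrun: fill the remainder with zero-frames (silence).
        memset((char *)inBuffer->mAudioData + gotBytes, 0, capacity - gotBytes);
    }
    inBuffer->mAudioDataByteSize = (UInt32)capacity;

    // Hand the buffer back to the queue for playback.
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}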

Match a sound in recorded audio stream

I have a PCM stream incoming from the microphone. I am analyzing short chunks of it (in Java) to detect short spikes in sound loudness (amplitude). I have a particular sound that plays periodically, and I need to know whether a detected spike is in fact a recording of this sound. I have the PCM for the sound played; it's completely determined.
I have no clue where to start: should I perform some comparison in the time domain or the frequency domain? It would be great if someone could give me some insight into how this is done and where I should dig.
Thanks.
It sounds like you want to compare an incoming set of pulses to a reference set of pulses. Cross-correlation is probably what you want to use. You may need to precondition your data first, e.g. create an envelope instead of using raw data, or the cross-correlation may fail unless the match is perfect.
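As a rough sketch of that preconditioning step (in C; the function name and smoothing coefficient are assumptions), an envelope can be obtained by rectifying the signal and running it through a one-pole low-pass before correlating:

// Simple envelope follower: full-wave rectify, then smooth with a one-pole
// low-pass. Tune `smoothing` (e.g. 0.995) to the length of your spikes.
#include <math.h>
#include <stddef.h>

void ComputeEnvelope(const float *in, float *out, size_t n, float smoothing) {
    float env = 0.0f;
    for (size_t i = 0; i < n; i++) {
        float rectified = fabsf(in[i]);
        env = smoothing * env + (1.0f - smoothing) * rectified;
        out[i] = env;
    }
}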

Simultaneously generate multiple sine waves into sample buffer for audio unit (iOS)

Given an array (of changing length) of frequencies and amplitudes, can I generate a single audio buffer on a sample-by-sample basis that includes all the tones in the array? If not, what is the best way to generate multiple tones in a single audio unit? Have each note generate its own buffer and then sum those into an output buffer? Wouldn't that be the same thing as doing it all at once?
Working on an iOS app that generates notes from touches. I'm considering using STK, but I don't want to have to send note-off messages; I would rather just generate sinusoidal tones for the notes I'm holding in an array. Each note actually needs to produce two sinusoids, with varying frequency and amplitude. One note may be playing the same frequency as a different note, so a note-off message at that frequency could cause problems. Eventually I want to manage amplitude (ADSR) envelopes for each note outside of the audio unit. I also want response time to be as fast as possible, so I'm willing to do some extra work/learning to keep the audio stuff as low level as I can.
I've been working with sine wave single tone generator examples. Tried essentially doubling one of these, something like:
Buffer[frame] = (sin(theta1) + sin(theta2))/2
Incrementing theta1/theta2 by frequency1/frequency2 over the sample rate (I realize calling sin() like this is not the most efficient approach), but I get aliasing effects. I've yet to find an example with multiple frequencies or with data sources other than reading audio from a file.
Any suggestions/examples? I originally had each note generate its own audio unit, but that gave me too much latency from touch to note sounding (and seems inefficient too). I am newer to this level of programming than I am to digital audio in general, so please be gentle if I'm missing something obvious.
Yes, of course you can; you can do whatever you like inside your render callback. When you set this callback up, you can pass in a pointer to an object.
That object could contain the on/off states for each tone. In fact, the object could contain a method responsible for filling up the buffer. (Just make sure the object is nonatomic if it is a property; otherwise you will get artefacts due to locking issues.)
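A hypothetical sketch of that idea (the struct and field names are invented, and it assumes a mono Float32 output stream): the state object is handed to the unit via inRefCon, and the callback sums one sine per active tone while wrapping each phase to avoid drift:

// Render callback that mixes several sine tones from a state struct passed
// in through inRefCon.
#include <AudioUnit/AudioUnit.h>
#include <math.h>

#define kMaxTones 12

typedef struct {
    double phase[kMaxTones];       /* radians */
    double frequency[kMaxTones];   /* Hz */
    float  amplitude[kMaxTones];   /* 0 when the tone is off */
    double sampleRate;
} ToneBank;

static OSStatus ToneRenderCallback(void                       *inRefCon,
                                   AudioUnitRenderActionFlags *ioActionFlags,
                                   const AudioTimeStamp       *inTimeStamp,
                                   UInt32                      inBusNumber,
                                   UInt32                      inNumberFrames,
                                   AudioBufferList            *ioData) {
    ToneBank *bank = (ToneBank *)inRefCon;
    float *out = (float *)ioData->mBuffers[0].mData;   /* assumes mono Float32 */

    for (UInt32 frame = 0; frame < inNumberFrames; frame++) {
        float sample = 0.0f;
        for (int t = 0; t < kMaxTones; t++) {
            if (bank->amplitude[t] <= 0.0f) continue;   /* tone is off */
            sample += bank->amplitude[t] * (float)sin(bank->phase[t]);
            bank->phase[t] += 2.0 * M_PI * bank->frequency[t] / bank->sampleRate;
            if (bank->phase[t] > 2.0 * M_PI) bank->phase[t] -= 2.0 * M_PI;
        }
        out[frame] = sample / kMaxTones;   /* crude headroom; scale as needed */
    }
    return noErr;
}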
What exactly are you trying to achieve? Do you really need to generate on the fly?
If so, you run the risk of overloading the RemoteIO audio unit's render callback, which will give you glitches and artefacts.
You might get away with it on the simulator and then move it over to a device and find that, mysteriously, it isn't working any more, because you are running on a processor that is 50 times slower, and one callback cannot complete before the next one arrives.
Having said that, you can get away with a lot.
I have made a 12-tone player that can simultaneously play any number of individual tones.
All I do is have a ring buffer for each tone (I am using quite a complex waveform, so this takes a lot of time; in fact I calculate it the first time the application is run and subsequently load it from a file), and maintain a read head and an enabled flag for each ring.
Then I add everything up in the render callback, and this handles fine on the device, even if all 12 are playing together. I know the documentation tells you not to do this (it recommends only using this callback to fill one buffer from another), but you can get away with a lot, and it is a PITA to code up some sort of buffering system that calculates on a different thread.
