Getting audio samples from speakers - iOS

So I have two questions:
Is there another (maybe low-level) way to get float* samples of the audio that is currently playing?
Is it possible to do it from inside a framework? I mean, when you don't have access to the instance of AVPlayer (or AVAudioPlayerNode, AudioEngine, or even the low-level CoreAudio classes, whatever) that owns the audio file, is there a way to subscribe to the audio samples being played via the speakers/earphones (in order to analyze them, or maybe also to modify/equalize them)?
I've tried installing a tap on audioEngine.mainMixerNode, which works, but when I set the bufferSize to more than 4096 (in order to compute a high-density FFT), the callback is called less frequently than it should be (about 3 times per second instead of 30 times or more).
mixerNode.installTap(onBus: 0,
                     bufferSize: 16384, // or 8192
                     format: mixerNode.outputFormat(forBus: 0)) { [weak self] (buffer, time) in
    // this block is being called LESS frequently...
}
I know that CoreAudio is very powerful and there should be something for this kind of purpose.

An iOS app can only get at the raw PCM samples of audio that the app itself is playing. Any visibility into samples output by other apps or processes is blocked by the iOS security sandbox. An iOS app can, however, sample audio from the device's microphone.
With an audio engine tap-on-bus, audio samples are delivered to the application's main thread, and are therefore limited in callback frequency and latency. To get the most recent few milliseconds of microphone audio samples, an app needs to use the RemoteIO Audio Unit callback API, where audio samples can be delivered on a high-priority audio context thread.
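As a rough illustration of that RemoteIO route (a sketch only, using the AUAudioUnit wrapper available since iOS 9; microphone-permission handling, error recovery and the actual FFT/analysis step are omitted, and the 5 ms buffer duration is just a request):
import AVFoundation
import AudioToolbox

func startMicrophoneCapture() throws -> AUAudioUnit {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .measurement, options: [])
    try session.setPreferredIOBufferDuration(0.005)   // ask for ~5 ms callbacks
    try session.setActive(true)

    let ioDescription = AudioComponentDescription(componentType: kAudioUnitType_Output,
                                                  componentSubType: kAudioUnitSubType_RemoteIO,
                                                  componentManufacturer: kAudioUnitManufacturer_Apple,
                                                  componentFlags: 0,
                                                  componentFlagsMask: 0)
    let unit = try AUAudioUnit(componentDescription: ioDescription)
    let format = AVAudioFormat(standardFormatWithSampleRate: session.sampleRate, channels: 1)!
    try unit.outputBusses[1].setFormat(format)        // bus 1 = the microphone side
    unit.isInputEnabled = true

    let render = unit.renderBlock
    unit.inputHandler = { flags, timeStamp, frameCount, busNumber in
        // Runs on a high-priority audio thread: no locks, no allocation, no ObjC messaging.
        var bufferList = AudioBufferList(mNumberBuffers: 1,
                                         mBuffers: AudioBuffer(mNumberChannels: 1,
                                                               mDataByteSize: frameCount * 4,
                                                               mData: nil))   // let the unit supply the buffer
        _ = render(flags, timeStamp, frameCount, busNumber, &bufferList, nil)
        // bufferList.mBuffers.mData now points at the newest frameCount Float32 samples.
    }

    try unit.allocateRenderResources()
    try unit.startHardware()
    return unit
}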

Related

How to stream audio as data is downloaded?

How can I take data as it is being downloaded/received by my device and then play it through the iPhone speaker? I do not want to wait until the audio is fully downloaded.
Platform: iOS 8.0 +
File type: WAV
Sample Rate: 4000 Hz
Audio Type: PCM, 16 bit
Audio Channels: 1
To minimize latency, pre-enable the app's audio session and request very short buffer durations. Start the RemoteIO Audio Unit output running, with the output callback polling a circular buffer and otherwise playing a bit of silence. Then, as portions of the wave file are received, format them (resampling if needed) and store the samples in the circular buffer.
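A sketch of what that output callback can look like with the AUAudioUnit wrapper (illustration only; here nextSample stands in for whatever lock-free circular buffer the app fills with the received WAV samples after resampling them from 4000 Hz to the hardware rate, returning nil when the buffer has run dry):
import AVFoundation
import AudioToolbox

func startStreamingOutput(unit: AUAudioUnit,       // a RemoteIO AUAudioUnit created as usual
                          sampleRate: Double,
                          nextSample: @escaping () -> Float?) throws {
    let format = AVAudioFormat(standardFormatWithSampleRate: sampleRate, channels: 1)!
    try unit.inputBusses[0].setFormat(format)      // bus 0 = the speaker side
    unit.outputProvider = { _, _, frameCount, _, bufferListPtr in
        let buffers = UnsafeMutableAudioBufferListPointer(bufferListPtr)
        guard let out = buffers[0].mData?.assumingMemoryBound(to: Float.self) else { return noErr }
        for frame in 0..<Int(frameCount) {
            out[frame] = nextSample() ?? 0.0       // play a bit of silence when nothing is buffered yet
        }
        return noErr
    }
    unit.isOutputEnabled = true
    try unit.allocateRenderResources()
    try unit.startHardware()
}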

Play sound without latency iOS

I can't find a method to play a sound with really low latency.
I tried the AVFoundation audio player, but the latency is huge, around 500 ms.
So I tried creating a system sound, also without luck: the latency is around 200 ms, which is not much but still not usable for me. I need 50 ms max.
Note that my sound sample is a clean tone without leading silence.
SystemSoundID cID;
BOOL spinitialized;

- (IBAction)doInit
{
    if (spinitialized) {
        AudioServicesPlaySystemSound(cID);
        return;
    }
    NSURL *uref = [[NSURL alloc] initFileURLWithPath:[NSString stringWithFormat:@"%@/soundlib/1.wav", [[NSBundle mainBundle] resourcePath]]];
    OSStatus error = AudioServicesCreateSystemSoundID((__bridge CFURLRef)uref, &cID);
    if (error) NSLog(@"SoundPlayer doInit Error is %d", (int)error);
    AudioServicesPlaySystemSound(cID);
    spinitialized = YES;
}
I trigger it on the button's touch-down event.
Using an already running RemoteIO Audio Unit (or AVAudioUnit) with PCM waveform data that is already loaded into memory provides the lowest latency method to produce sound on iOS devices.
Zero latency is impossible due to buffering, but on all current iOS devices the buffer duration is usually 5.3 to 5.8 milliseconds or lower. On the newest iOS devices you can get audio callbacks even more often. Your audio callback code has to be ready to manually copy the proper sequential slice of the desired waveform data into an audio buffer. It will be called on a non-UI thread, so the callback needs to be thread-safe and must avoid locks, memory management, or even Objective-C messaging.
Using other AV audio playing methods may result in far higher latency due to the time it takes to load the sound into memory (including potential unpacking or decompression) and to power up the audio hardware (etc.), as well as typically using longer audio buffers. Even starting the RemoteIO Audio Unit has its own latency; but it can be started ahead of time, potentially playing silence, until your app needs to play a sound with the lowest possible (but non-zero) latency, upon receiving some event.
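To make the "copy the proper sequential slice" part concrete, here is a rough sketch of such a render callback for a waveform that has already been decoded into memory; playIndex stands in for a properly atomic trigger variable that the UI thread sets to 0 on the touch event (a plain Int is used here only to keep the sketch short):
import AVFoundation
import AudioToolbox

let waveform: [Float] = (0..<4410).map { sinf(2 * Float.pi * 440 * Float($0) / 44100) } // preloaded tone
var playIndex = -1                                 // -1 means "keep playing silence"

let outputProvider: AURenderPullInputBlock = { _, _, frameCount, _, bufferListPtr in
    let buffers = UnsafeMutableAudioBufferListPointer(bufferListPtr)
    guard let out = buffers[0].mData?.assumingMemoryBound(to: Float.self) else { return noErr }
    for frame in 0..<Int(frameCount) {
        if playIndex >= 0 && playIndex < waveform.count {
            out[frame] = waveform[playIndex]       // copy the next sequential slice
            playIndex += 1
        } else {
            out[frame] = 0                         // otherwise keep feeding silence
        }
    }
    return noErr
}
// Attach to an already-running RemoteIO unit: audioUnit.outputProvider = outputProvider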
AVAudioEngine with AVAudioUnitSampler is a really easy way to get low latency audio file triggering.
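For instance (a minimal sketch; "1.wav" is only a placeholder resource name, and error handling is left out):
import AVFoundation

let engine = AVAudioEngine()
let sampler = AVAudioUnitSampler()

func prepareSampler() throws {
    engine.attach(sampler)
    engine.connect(sampler, to: engine.mainMixerNode, format: nil)
    let url = Bundle.main.url(forResource: "1", withExtension: "wav")!   // placeholder
    try sampler.loadAudioFiles(at: [url])      // load and decode up front
    try engine.start()                         // keep the engine running before the button press
}

func playSample() {                            // call from the button's touch-down handler
    sampler.startNote(60, withVelocity: 127, onChannel: 0)
}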
I would suggest looking into incorporating The Amazing Audio Engine into your project http://theamazingaudioengine.com/
It has very nice tools for buffering audio files and for playback. As hotpaw2 mentioned, you're running into an issue with the system starting the buffer when you press the button. You will need to buffer the audio before the button is pressed to reduce your latency.
Michael at TAAE has created this class, AEAudioFilePlayer: http://theamazingaudioengine.com/doc/interface_a_e_audio_file_player.html
Initializing an AEAudioFilePlayer will load the buffer for you. You can then ask the Player to play the audio back when the button is pressed.
Configure AVAudioSession's preferredIOBufferDuration property.
preferredIOBufferDuration
The preferred I/O buffer duration, in seconds. (read-only)
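For example (a short sketch; the value is only a request, so read ioBufferDuration back to see what the hardware actually granted):
import AVFoundation

func requestShortAudioBuffers() {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.playback, mode: .default, options: [])
        try session.setPreferredIOBufferDuration(0.005)   // ask for ~5 ms buffers
        try session.setActive(true)
    } catch {
        print("audio session setup failed: \(error)")
    }
    // Typically ends up around 5-6 ms (256 frames at 48 kHz) on recent devices.
    print("granted I/O buffer duration: \(session.ioBufferDuration) s")
}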

CoreAudio based app stops playing if any UI is added in app

I have a CoreAudio based player that streams remote mp3s.
It uses NSURLConnection to retrieve the mp3 data -> uses AudioConverter to convert the stream into PCM -> and feeds the stream into an AUGraph to play audio.
The player works completely fine in my demo app (which only contains a play button), but when I add the player to another project that already makes networking calls and updates the UI, the player fails to play audio past a few seconds.
Am I possibly experiencing a threading issue? What are some preventative approaches I can take, or things I can look into, to stop this from happening?
You do not mention anything in your software architecture about buffering the data between when it is received via NSURLConnection and when it is sent to your player.
Data will arrive in chunks with inconsistent arrival rates.
Please see these answers I posted regarding buffering and network jitter.
Network jitter
and
Network jitter and buffering queue
In a nutshell, you cannot receive data and immediately send it to your player, because the next chunk of data may not arrive in time.
You don't mention the rate that the mp3 file is delivered. If it is delivered very quickly over a fast connection... are you buffering all of the data received or is it getting lost somewhere in your app? There is a chance that your problem is that you are receiving way too much data too fast and not properly buffering up the data received.
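As a minimal sketch of that buffering idea (not the asker's code; a production player would normally use a lock-free ring buffer sized for the expected jitter instead of a lock):
import Foundation

final class JitterBuffer {
    private var data = Data()
    private let lock = NSLock()

    func append(_ chunk: Data) {                 // called from the networking callback
        lock.lock(); defer { lock.unlock() }
        data.append(chunk)
    }

    func dequeue(byteCount: Int) -> Data? {      // called by the playback side
        lock.lock(); defer { lock.unlock() }
        guard data.count >= byteCount else { return nil }   // not enough buffered yet: wait / play silence
        let block = Data(data.prefix(byteCount))
        data.removeFirst(byteCount)
        return block
    }
}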

Audio driver for iOS and OSX based on Core Audio

Is it possible to have a common implementation of a Core Audio based audio driver bridge for iOS and OSX? Or is there a difference in the Core Audio API for iOS versus the Core Audio API for OSX?
The audio bridge only needs to support the following methods:
Set desired sample rate
Set desired audio block size (in samples)
Start/Stop microphone stream
Start/Stop speaker stream
The application supplies 2 callback function pointers to the audio bridge and the audio bridge sets everything up so that:
The speaker callback is called on regular time intervals where it's requested to return an audio block
The microphone callback is called on regular time intervals where it receives an audio block
I was told that it's not possible to have a single implementation which works on both iOS and OSX as there are differences between the iOS Core Audio API and the OSX Core Audio API.
Is this true?
There are no significant differences between the Core Audio API on OS X and on iOS. However there are significant differences in obtaining the correct Audio Unit for the microphone and the speaker to use. There are only 2 units on iOS (RemoteIO and one for VOIP), but more and potentially many more on a Mac, plus the user might change the selection. There are also differences in some of the Audio Unit parameters (buffer size, sample rates, etc.) allowed/supported by the hardware.
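One concrete place where the bridge has to differ is which I/O Audio Unit it asks for; a sketch of handling that with a build-time check:
import AudioToolbox

func ioUnitDescription() -> AudioComponentDescription {
    #if os(iOS)
    let subType = kAudioUnitSubType_RemoteIO      // iOS also has a voice-processing variant
    #else
    let subType = kAudioUnitSubType_HALOutput     // the HAL I/O unit on the Mac
    #endif
    return AudioComponentDescription(componentType: kAudioUnitType_Output,
                                     componentSubType: subType,
                                     componentManufacturer: kAudioUnitManufacturer_Apple,
                                     componentFlags: 0,
                                     componentFlagsMask: 0)
}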

Difference between AudioQueue time and AudioQueue Device time

I'm trying to sync music sent from a host iPhone to a client iPhone. The audio is read using AVAssetReader and sent in packets to the client, which in turn feeds it to a ring buffer, which in turn populates the AudioQueue buffers and starts playing.
I was going over the AudioQueue docs and there seem to be two different concepts of a timestamp related to the AudioQueue: Audio Queue Time and Audio Queue Device Time. I'm not sure how those two are related and when one should be used rather than (or in conjunction with) the other.
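For reference, these are the two calls the question is contrasting (a sketch, assuming queue is a running AudioQueueRef): AudioQueueGetCurrentTime reports a timestamp on the queue's own timeline, while AudioQueueDeviceGetCurrentTime reports the current time of the audio hardware device the queue is attached to.
import AudioToolbox

func logQueueTimes(_ queue: AudioQueueRef) {
    var queueTime = AudioTimeStamp()
    var deviceTime = AudioTimeStamp()
    _ = AudioQueueGetCurrentTime(queue, nil, &queueTime, nil)     // "audio queue time"
    _ = AudioQueueDeviceGetCurrentTime(queue, &deviceTime)        // "audio queue device time"
    print("queue sample time: \(queueTime.mSampleTime), device sample time: \(deviceTime.mSampleTime)")
}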
