I'm trying to sync music sent from a host iPhone to a client iPhone. The audio is read with AVAssetReader and sent in packets to the client, which feeds it into a ring buffer; the ring buffer in turn populates the AudioQueue buffers, and playback starts.
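The ring-buffer stage described above can be sketched roughly as follows. This is a minimal single-producer/single-consumer illustration with assumed names and Float samples; a production version would need real thread-safety (atomics or a lock-free design), since the network thread writes while the audio thread reads.

```swift
// Illustrative ring buffer for Float PCM samples (names/sizes are assumptions).
struct RingBuffer {
    private var storage: [Float]
    private var readIndex = 0
    private var writeIndex = 0
    private var count = 0

    init(capacity: Int) {
        storage = [Float](repeating: 0, count: capacity)
    }

    // Network side: append decoded samples as packets arrive.
    // Returns how many samples actually fit.
    mutating func write(_ samples: [Float]) -> Int {
        var written = 0
        for s in samples where count < storage.count {
            storage[writeIndex] = s
            writeIndex = (writeIndex + 1) % storage.count
            count += 1
            written += 1
        }
        return written
    }

    // Audio side: fill an AudioQueue buffer from whatever is available.
    // Returns how many samples were copied out.
    mutating func read(into out: inout [Float]) -> Int {
        let n = min(out.count, count)
        for i in 0..<n {
            out[i] = storage[readIndex]
            readIndex = (readIndex + 1) % storage.count
            count -= 1
        }
        return n
    }
}
```

The key property for this use case is that a short read (fewer samples than requested) signals an underrun, which is exactly the condition the jitter discussion later in this thread is about.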
Going over the AudioQueue docs, there seem to be two different timestamp concepts related to an AudioQueue: audio queue time and audio queue device time. I'm not sure how the two are related, or when one should be used rather than (or in conjunction with) the other.
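To make the distinction concrete: queue time (from AudioQueueGetCurrentTime) counts sample frames on that queue's own timeline, while device time (from AudioQueueDeviceGetCurrentTime) is the shared hardware clock, useful for aligning two queues or devices to start together. A small sketch of the sample-time-to-seconds relationship, with an assumed 44.1 kHz rate (real code would read mSampleTime from the AudioTimeStamp those calls fill in):

```swift
// Assumed sample rate; in real code this comes from the queue's format.
let sampleRate = 44_100.0

// Convert a queue's mSampleTime (frames played on its timeline) to seconds.
func secondsPlayed(sampleTime: Double) -> Double {
    sampleTime / sampleRate
}

let elapsed = secondsPlayed(sampleTime: 44_100) // one second of audio played
```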
So I have two questions:
Is there another (maybe low-level) way to get float* samples of the audio that is currently playing?
Is it possible to do it from inside a framework? I mean when you don't have access to the instance of AVPlayer (or AVAudioPlayerNode, AudioEngine, or even low-level Core Audio classes, whatever) that owns the audio file? Is there a way to subscribe to the audio samples being played through the speakers/earphones (in order to analyze them, or perhaps to modify/equalize them)?
I've tried installing a tap on audioEngine.mainMixerNode, which works, but when I set the bufferSize to more than 4096 (in order to compute a high-density FFT), the callback is called less frequently than it should be (about 3 times per second instead of 30 or more).
mixerNode.installTap(onBus: 0,
                     bufferSize: 16384, // or 8192
                     format: mixerNode.outputFormat(forBus: 0)) { [weak self] (buffer, time) in
    // this block is being called LESS frequently...
}
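The reduced callback rate actually follows directly from the buffer size: the tap delivers one callback per filled buffer, so the rate is roughly sampleRate / bufferSize. A quick back-of-envelope check (numbers are illustrative, assuming 44.1 kHz):

```swift
// Tap callback rate is approximately sampleRate / bufferSize.
let sampleRate = 44_100.0

func callbacksPerSecond(bufferSize: Double) -> Double {
    sampleRate / bufferSize
}

let slow = callbacksPerSecond(bufferSize: 16_384) // ~2.7 per second
let fast = callbacksPerSecond(bufferSize: 1_024)  // ~43 per second
```

So ~3 callbacks per second at 16384 frames is expected behavior, not a bug. One common workaround is to keep the tap buffer small and accumulate samples into your own 16384-sample window, running overlapping FFTs for both density and update rate.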
I know that Core Audio is very powerful and there should be something for this kind of purpose.
An iOS app can only get played audio samples from the raw PCM samples that the app itself is playing. Any visibility into samples output by other apps or processes is blocked by the iOS security sandbox. (An iOS app can, however, sample audio from the device's microphone.)
In an audio engine tap-on-bus, audio samples are delivered on the application's main thread, which limits callback frequency and latency. To get the most recent few milliseconds of microphone samples, an app needs to use the RemoteIO Audio Unit callback API, where samples are delivered on a high-priority audio context thread.
I have a CoreAudio based player that streams remote mp3s.
It uses NSURLConnection to retrieve the mp3 data -> uses AudioConverter to convert the stream into PCM -> and feeds the stream into an AUGraph to play audio.
The player works completely fine in my demo app (it contains only a play button), but when coupled with a project that already makes networking calls and updates the UI, the player fails to play audio past a few seconds.
Am I possibly experiencing a threading issue? What are some preventative approaches I can take, or look into, to prevent this from happening?
You do not mention anything in your software architecture about buffering your data between receiving it via NSURLConnection and when you send it to your player.
Data will arrive in chunks with inconsistent arrival rates.
Please see these answers I posted regarding buffering and network jitter.
Network jitter
and
Network jitter and buffering queue
In a nutshell, you can't receive data and immediately send it to your player, because the next data may not arrive in time.
You don't mention the rate at which the mp3 file is delivered. If it is delivered very quickly over a fast connection, are you buffering all of the data received, or is it getting lost somewhere in your app? There is a chance that your problem is that you are receiving far too much data too fast and not properly buffering it.
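The buffering advice above can be sketched as a simple pre-roll jitter buffer: hold incoming chunks and only start feeding the player once a threshold of audio is banked, so bursty arrival can't starve playback. The threshold and names below are illustrative assumptions, not the answerer's actual code.

```swift
// Pre-roll jitter buffer sketch: don't start playback until enough
// data is buffered to ride out gaps in network arrival.
final class JitterBuffer {
    private var chunks: [[UInt8]] = []
    private(set) var buffered = 0      // bytes currently held
    private(set) var started = false
    let preRollBytes: Int              // illustrative threshold

    init(preRollBytes: Int) {
        self.preRollBytes = preRollBytes
    }

    // Network side: bank every chunk; flip to "started" once pre-roll is met.
    func push(_ chunk: [UInt8]) {
        chunks.append(chunk)
        buffered += chunk.count
        if !started && buffered >= preRollBytes {
            started = true             // safe to begin feeding the player
        }
    }

    // Player side: returns nil until pre-roll is satisfied or buffer is empty.
    func pop() -> [UInt8]? {
        guard started, !chunks.isEmpty else { return nil }
        let c = chunks.removeFirst()
        buffered -= c.count
        return c
    }
}
```

In practice the pre-roll would be sized in seconds of decoded audio rather than raw bytes, tuned to the expected network jitter.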
Is it possible to have a common implementation of a Core Audio based audio driver bridge for iOS and OS X? Or is there a difference between the Core Audio API for iOS and the Core Audio API for OS X?
The audio bridge only needs to support the following methods:
Set desired sample rate
Set desired audio block size (in samples)
Start/Stop microphone stream
Start/Stop speaker stream
The application supplies 2 callback function pointers to the audio bridge and the audio bridge sets everything up so that:
The speaker callback is called on regular time intervals where it's requested to return an audio block
The microphone callback is called on regular time intervals where it receives an audio block
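The bridge surface described above could be expressed as one shared protocol, with per-platform implementations behind it (RemoteIO on iOS, an AUHAL/default-device unit on OS X). This is a sketch with illustrative names, plus a trivial in-memory implementation to show the callback contract:

```swift
// Platform-neutral bridge surface (names are illustrative assumptions).
typealias SpeakerCallback = (_ outBlock: inout [Float]) -> Void
typealias MicCallback = (_ inBlock: [Float]) -> Void

protocol AudioBridge {
    var sampleRate: Double { get set }
    var blockSize: Int { get set }     // in samples
    func startMicrophone(_ callback: @escaping MicCallback)
    func stopMicrophone()
    func startSpeaker(_ callback: @escaping SpeakerCallback)
    func stopSpeaker()
}

// Trivial in-memory implementation, useful only to exercise the contract.
final class MockBridge: AudioBridge {
    var sampleRate = 48_000.0
    var blockSize = 512
    private var speakerCB: SpeakerCallback?
    private var micCB: MicCallback?

    func startMicrophone(_ callback: @escaping MicCallback) { micCB = callback }
    func stopMicrophone() { micCB = nil }
    func startSpeaker(_ callback: @escaping SpeakerCallback) { speakerCB = callback }
    func stopSpeaker() { speakerCB = nil }

    // Simulate one hardware render cycle asking the app for a block.
    func renderOnce() -> [Float] {
        var block = [Float](repeating: 0, count: blockSize)
        speakerCB?(&block)
        return block
    }
}
```

The application code then depends only on AudioBridge, which is what makes a single cross-platform implementation of the *application* side plausible even if the two bridge backends differ.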
I was told that it's not possible to have a single implementation which works on both iOS and OSX as there are differences between the iOS Core Audio API and the OSX Core Audio API.
Is this true?
There are no significant differences between the Core Audio API on OS X and on iOS. However there are significant differences in obtaining the correct Audio Unit for the microphone and the speaker to use. There are only 2 units on iOS (RemoteIO and one for VOIP), but more and potentially many more on a Mac, plus the user might change the selection. There are also differences in some of the Audio Unit parameters (buffer size, sample rates, etc.) allowed/supported by the hardware.
Looking to send 9600 baud symbols generated with AudioQueue, synchronized with audio; both will be output via the audio-out port. If the serial data is at 19.2 kHz, is that effectively out of hearing range? Trying to get the audio out clean, without audible distortion from the serial data.
Thanks for input.
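A quick sanity check of the numbers involved, assuming a 44.1 kHz output sample rate (my assumption, not stated in the question):

```swift
// Back-of-envelope check for a 19.2 kHz serial tone on a 44.1 kHz output.
let outputRate = 44_100.0
let serialFreq = 19_200.0            // frequency of the serial data in question
let nyquist = outputRate / 2         // 22,050 Hz: highest representable tone
let representable = serialFreq < nyquist           // fits the output path
let nearHearingLimit = serialFreq > 17_000         // above typical adult rolloff
```

So 19.2 kHz is representable at a 44.1 kHz output rate and sits above the rolloff of most adult hearing, though it is not strictly outside the nominal 20 Hz to 20 kHz audible band.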
I'm using MPMoviePlayerController to stream audio from a server, but after playing for more than two minutes, the audio starts to stop and resume a lot. I'm streaming several files one after another, so because of the interruptions, some of the audio files are skipped, with these two console messages:
Took background task assertion (38) for playback stall
Ending background task assertion (38) for playback stall
I'm losing a lot of tracks because of this error.
At first I thought it was a memory issue, but the console shows that each time I lose a track, it prints those messages.
Check your network connectivity and the stream encoding.
This console output says pretty much exactly what your problem is: the stream runs out of content and cannot keep up playing without interruption.
Either your network connection is unstable, or the content is encoded at bitrates far too high for your network connection.
For clarification: even if your local internet peering offers high bandwidth, you should still check the bandwidth of the entire route. For example, you could try downloading the streamed files in your browser to test the throughput.
Are you trying it on the simulator or on a device? It may be a simulator issue.
Also, on a device, try streaming over multiple networks (e.g., LTE, Wi-Fi) and see if there is any difference.