iOS AudioQueue playing AAC data always delays for 2-3 seconds

Recently I was using AudioQueue to play network AAC data. My plan is that as soon as an AAC packet is received, the audio queue enqueues that buffer and plays it immediately.
When I start the audio queue, I use AudioQueueStart(audioQueue, NULL) to start it as soon as possible.
However, looking at the logs, I noticed that the AAC buffer was enqueued by AudioQueueEnqueueBuffer(audioQueue, buffer, 0, NULL) immediately, but the sound only played around 2-3 seconds after the buffer was enqueued. In other words, I received and enqueued the data at the beginning, but the first sound started 2-3 seconds later.
I wonder if it is because Audio Queue Services decodes the AAC to PCM itself, so the sound is delayed. If so, should I decode the AAC myself and use an Audio Unit instead?
I've been confused for a while and hope someone can enlighten me!

Finally I found the reason. When the AAC data arrives late, I pause the audio queue and start it again when the data comes in. The pause operation leads to the delayed playback.

Related

How to stream audio as data is downloaded?

How can I take data as it is being downloaded/received by my device and then play it through the iPhone speaker? I do not want to wait until the audio is fully downloaded.
Platform: iOS 8.0 +
File type: WAV
Sample Rate: 4000 Hz
Audio Type: PCM, 16 bit
Audio Channels: 1
To minimize latency, pre-enable the app's audio session and request very short buffer durations. Start the RemoteIO Audio Unit running, with the output callback polling a circular buffer and otherwise playing a bit of silence. Then, as portions of the WAV file are received, format the samples (resampling if needed) and store them in the circular buffer.

Play sound without latency iOS

I can't find a method to play a sound with really low latency.
I tried AVFoundation's audio player: huge latency, around 500 ms.
So I tried creating a system sound, also without luck: latency around 200 ms. That's not much, but not usable for me; I need 50 ms max.
To be sure, my sound sample is a clear tone without leading silence.
SystemSoundID cID;
BOOL spinitialized;

- (IBAction)doInit
{
    if (spinitialized) {
        AudioServicesPlaySystemSound(cID);
        return;
    }
    NSURL *uref = [[NSURL alloc] initFileURLWithPath:[NSString stringWithFormat:@"%@/soundlib/1.wav", [[NSBundle mainBundle] resourcePath]]];
    OSStatus error = AudioServicesCreateSystemSoundID((__bridge CFURLRef)uref, &cID);
    if (error) NSLog(@"SoundPlayer doInit Error is %d", (int)error);
    AudioServicesPlaySystemSound(cID);
    spinitialized = YES;
}
So I try to call it on button press down.
Using an already running RemoteIO Audio Unit (or AVAudioUnit) with PCM waveform data that is already loaded into memory provides the lowest latency method to produce sound on iOS devices.
Zero latency is impossible due to buffering, but on all current iOS devices the buffer size is usually 5.3 to 5.8 milliseconds or lower. On the newest iOS devices you can get audio callbacks even more often. Your audio callback code has to be ready to manually copy the proper sequential slice of the desired waveform data into an audio buffer. It will be called on a non-UI thread, so the callback needs to be thread-safe and must not take locks, do memory management, or even use Objective-C messaging.
Using other AV audio playing methods may result in far higher latency due to the time it takes to load the sound into memory (including potential unpacking or decompression) and to power up the audio hardware (etc.), as well as typically using longer audio buffers. Even starting the RemoteIO Audio Unit has its own latency; but it can be started ahead of time, potentially playing silence, until your app needs to play a sound with the lowest possible (but non-zero) latency, upon receiving some event.
AVAudioEngine with AVAudioUnitSampler is a really easy way to get low latency audio file triggering.
I would suggest looking into incorporating The Amazing Audio Engine into your project http://theamazingaudioengine.com/
It has very nice tools for buffering audio files and playback. As hotpaw2 mentioned, you're running into an issue with the system starting up the buffer when you press the button. You will need to buffer the audio before the button is pressed to reduce your latency.
Michael at TAAE has created the class AEAudioFilePlayer http://theamazingaudioengine.com/doc/interface_a_e_audio_file_player.html
Initializing an AEAudioFilePlayer will load the buffer for you. You can then ask the player to play the audio back when the button is pressed.
Configure AVAudioSession's preferredIOBufferDuration property.
preferredIOBufferDuration
The preferred I/O buffer duration, in seconds. (read-only)

difference between AudioQueue time and AudioQueue Device time

I'm trying to sync music sent from a host iPhone to a client iPhone. The audio is read using AVAssetReader and sent in packets to the client, which in turn feeds it to a ring buffer, which in turn populates the audio queue buffers and starts playing.
I was going over the AudioQueue docs and there seem to be two different concepts of a timestamp related to the audio queue: Audio Queue Time and Audio Queue Device Time. I'm not sure how the two are related and when one should be used rather than (or in conjunction with) the other.

Audio output queue on iOS 5

Has anyone ever experienced that an audio output queue in iOS 5 is silent even though the queue is running and no errors are returned?
I downloaded sample code that had the same issue.
If you fill the Audio Queue output buffers with zero (or any constant, or very small values close to zero), the audio output will be or seem to be silent.

How to resolve "Hardware In Use" issue (error code: 'hwiu')?

I have created an iPhone app with AudioUnit recording, conversion, audio editing and merging parts. I have done everything except the conversion. This app will work only on iOS 4 or higher.
I tried to convert a .caf file to .m4a, but I am getting the kAudioConverterErr_HardwareInUse error. Then I tried to convert the .caf file to .wav, and then the .wav file to .m4a, but I get the same issue.
I am not clear about this issue. The Apple documentation says:
"Returned from the AudioConverterFillComplexBuffer function if the underlying hardware codec has become unavailable, probably due to an audio interruption.
On receiving this error, your application must stop calling AudioConverterFillComplexBuffer. You can check the value of the kAudioConverterPropertyCanResumeFromInterruption property to determine if the converter you are using can resume processing after an interruption. If so, then wait for an interruption-ended call from Audio Session Services, reactivate the audio session, and finally resume using the codec.
If the converter cannot resume processing after an interruption, then on interruption you must abandon the conversion, re-instantiate the converter, and perform the conversion again."
Please help me to resolve it.
I just resolved such a problem.
In my case, I have an MPMoviePlayerController, an audio queue player and an audio recorder in the application.
The movie player needs a manual call to its "stop" method when the content ends.
Otherwise its playback state stays locked at MPMoviePlaybackStatePlaying. After that I can no longer play MP3 and get "hwiu" when I try, but PCM still works.
Maybe it's because compressed audio (MP3, AAC, ...) is handled by a single hardware device. If you are using different techniques (MPMoviePlayerController and Audio Queue Services) to play back compressed audio, you need to release the device once you finish playing, since they all share the same device.
