Audio output queue on iOS 5

Has anyone ever experienced that an audio output queue in iOS 5 is silent even though the queue is running and no errors are returned?
I downloaded sample code that exhibited the same issue.

If you fill the Audio Queue output buffers with zeros (or any constant value, or very small values close to zero), the audio output will be, or will appear to be, silent.
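To confirm the queue itself is producing output, here is a minimal sketch of an output callback that writes an audible sine tone instead of zeros (assumptions: 16-bit mono PCM at 44.1 kHz; the gPhase accumulator is purely illustrative):

#include <AudioToolbox/AudioToolbox.h>
#include <math.h>

static double gPhase = 0;   // illustrative phase accumulator for the test tone

static void MyOutputCallback(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer)
{
    SInt16 *samples = (SInt16 *)inBuffer->mAudioData;
    UInt32 frames = inBuffer->mAudioDataBytesCapacity / sizeof(SInt16);
    for (UInt32 i = 0; i < frames; i++) {
        // A 440 Hz tone; writing 0 here instead would produce silence.
        samples[i] = (SInt16)(sin(gPhase) * 8000);
        gPhase += 2.0 * M_PI * 440.0 / 44100.0;
    }
    inBuffer->mAudioDataByteSize = frames * sizeof(SInt16);
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}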

Related

Play sound without latency iOS

I can't find a way to play a sound with really low latency.
I tried AVFoundation's audio player, but the latency is huge, around 500 ms.
So I tried creating a system sound, again without luck: the latency is around 200 ms, which isn't much, but still not usable for me. I need 50 ms at most.
To be clear, my sound sample is a clean tone with no leading silence.
SystemSoundID cID;
BOOL spinitialized;

- (IBAction)doInit
{
    if (spinitialized) {
        AudioServicesPlaySystemSound(cID);
        return;
    }
    // Build a file URL to the bundled sample and register it as a system sound.
    NSURL *uref = [[NSURL alloc] initFileURLWithPath:[NSString stringWithFormat:@"%@/soundlib/1.wav", [[NSBundle mainBundle] resourcePath]]];
    OSStatus error = AudioServicesCreateSystemSoundID((__bridge CFURLRef)uref, &cID);
    if (error) NSLog(@"SoundPlayer doInit error is %d", (int)error);
    AudioServicesPlaySystemSound(cID);
    spinitialized = YES;
}
I call this on the button's touch-down event.
Using an already running RemoteIO Audio Unit (or AVAudioUnit) with PCM waveform data that is already loaded into memory provides the lowest latency method to produce sound on iOS devices.
Zero latency is impossible due to buffering, but on all current iOS devices the buffer duration is usually 5.3 to 5.8 milliseconds or lower, and on the newest devices you can get audio callbacks even more often. Your audio callback code has to be ready to manually copy the proper sequential slice of the desired waveform data into the audio buffer. It will be called on a non-UI thread, so the callback needs to be thread safe and must avoid locks, memory management, and even Objective-C messaging.
Using other AV audio playing methods may result in far higher latency due to the time it takes to load the sound into memory (including potential unpacking or decompression) and to power up the audio hardware (etc.), as well as typically using longer audio buffers. Even starting the RemoteIO Audio Unit has its own latency; but it can be started ahead of time, potentially playing silence, until your app needs to play a sound with the lowest possible (but non-zero) latency, upon receiving some event.
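For illustration, a rough sketch of what such a render callback might look like, assuming the waveform is mono 16-bit PCM already in memory (the names gWaveform, gWaveformFrames, and gPlayhead are hypothetical, and the RemoteIO unit's output format is assumed to match):

#import <AudioToolbox/AudioToolbox.h>

static SInt16  *gWaveform;        // preloaded PCM samples (hypothetical)
static UInt32   gWaveformFrames;  // total frame count (hypothetical)
static UInt32   gPlayhead;        // current read position (hypothetical)

// Render callback attached to the RemoteIO unit via kAudioUnitProperty_SetRenderCallback.
// Runs on a real-time thread: no locks, no allocation, no Objective-C messaging.
static OSStatus RenderTone(void *inRefCon,
                           AudioUnitRenderActionFlags *ioActionFlags,
                           const AudioTimeStamp *inTimeStamp,
                           UInt32 inBusNumber,
                           UInt32 inNumberFrames,
                           AudioBufferList *ioData)
{
    SInt16 *out = (SInt16 *)ioData->mBuffers[0].mData;
    for (UInt32 i = 0; i < inNumberFrames; i++) {
        // Copy the next sequential slice of the waveform, or silence once it has ended.
        out[i] = (gPlayhead < gWaveformFrames) ? gWaveform[gPlayhead++] : 0;
    }
    return noErr;
}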
AVAudioEngine with AVAudioUnitSampler is a really easy way to get low latency audio file triggering.
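A minimal sketch of that approach (the file name 1.wav and MIDI note 60 are assumptions; error handling omitted):

#import <AVFoundation/AVFoundation.h>

AVAudioEngine *engine = [[AVAudioEngine alloc] init];
AVAudioUnitSampler *sampler = [[AVAudioUnitSampler alloc] init];

// Attach the sampler and route it to the main mixer.
[engine attachNode:sampler];
[engine connect:sampler to:engine.mainMixerNode format:nil];

// Load the sample ahead of time so triggering it later is cheap.
NSURL *url = [[NSBundle mainBundle] URLForResource:@"1" withExtension:@"wav"];
[sampler loadAudioFileAtURL:url error:nil];
[engine startAndReturnError:nil];

// Later, e.g. on touch-down: trigger the preloaded sample.
[sampler startNote:60 withVelocity:127 onChannel:0];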
I would suggest looking into incorporating The Amazing Audio Engine into your project http://theamazingaudioengine.com/
It has very nice tools for buffering audio files and playback. As hotpaw2 has mentioned, you're running into an issue with the system starting the buffer when you press the button. You will need to buffer the audio before the button is pressed to reduce your latency.
Michael at TAAE has created the class AEAudioFilePlayer: http://theamazingaudioengine.com/doc/interface_a_e_audio_file_player.html
Initializing an AEAudioFilePlayer will load the buffer for you. You can then ask the Player to play the audio back when the button is pressed.
Configure the audio session's I/O buffer duration with AVAudioSession's setPreferredIOBufferDuration:error: method (the preferredIOBufferDuration property itself is read-only). From the documentation:
preferredIOBufferDuration
The preferred I/O buffer duration, in seconds. (read-only)
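For example, a sketch of requesting a shorter I/O buffer (the 5 ms target is an assumption; the system may grant a different value):

#import <AVFoundation/AVFoundation.h>

NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setCategory:AVAudioSessionCategoryPlayback error:&error];
// Ask for roughly 5 ms buffers; the actual duration is reported by IOBufferDuration.
[session setPreferredIOBufferDuration:0.005 error:&error];
[session setActive:YES error:&error];
NSLog(@"Granted IO buffer duration: %f", session.IOBufferDuration);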

The audio keeps stopping on MPMoviePlayerController Audio streaming

I'm using MPMoviePlayerController to stream audio from a server, but after playing for more than two minutes the audio starts stopping and resuming a lot. I'm streaming several files one after another, so because of these interruptions some of the audio files are skipped, with these two console messages:
Took background task assertion (38) for playback stall
Ending background task assertion (38) for playback stall
I'm losing a lot of tracks because of this error.
At first I thought it was a memory issue, but the console shows that each time I lose a track, it prints those messages.
Check your network connectivity and the stream encoding.
This console output pretty much says exactly what your problem is: the stream runs out of buffered content and cannot keep playing without interruption.
Either your network connection is unstable or the content is encoded in bandwidths that are far too high for your network connection.
For clarification; even if your local internet peering is offering high bandwidths, you should still check the bandwidths of the entire route. For example, you could try to download the streamed files via your browser for testing the throughput.
Are you trying it on a simulator or a device? It may be a simulator issue.
Also, on a device, try streaming over different networks (e.g., LTE, Wi-Fi) to see whether there is any difference.

iOS 5/6: low volume after first usage of CoreAudio

I work on a VoIP app. The AudioSession's mode is set to kAudioSessionMode_VoiceChat.
For a call, I open a CoreAudio AudioUnit with subtype kAudioUnitSubType_VoiceProcessingIO. Everything works fine. After the first call, I close the AudioUnit with AudioUnitUninitialize() and I deactivate the audio session.
Now, however, it seems as if the audio device is not correctly released: the ringer volume is very low, the media player's volume is lower than usual. And for a subsequent call, I cannot activate kAudioUnitSubType_VoiceProcessingIO anymore. It works to create an AudioUnit with kAudioUnitSubType_RemoteIO instead, but also the call's volume is very low (both receiver and speaker).
This first occurred on iOS 5. With the iPhone 5 on iOS 6, it is even worse (even lower volume).
Has anyone seen this? Do I need to do more than AudioUnitUninitialize() to release the Voice Processing unit?
I've found the solution: I've incorrectly used AudioUnitUninitialize() to free the audio component retrieved with AudioComponentInstanceNew(). Correct is to use AudioComponentInstanceDispose().
Yes, you need to dispose of the audio unit when using the VoiceProcessingIO subtype. For some reason there is no problem when using the RemoteIO subtype. So whenever you get OSStatus -66635 (kAudioQueueErr_MultipleVoiceProcessors), check for missing AudioComponentInstanceDispose() calls.
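For reference, a sketch of the teardown sequence described above, assuming audioUnit is the instance created with AudioComponentInstanceNew():

#import <AudioToolbox/AudioToolbox.h>

// Stop I/O first, then uninitialize, then dispose of the component instance.
AudioOutputUnitStop(audioUnit);
AudioUnitUninitialize(audioUnit);
AudioComponentInstanceDispose(audioUnit);   // actually releases the voice-processing unit
audioUnit = NULL;

// Finally deactivate the audio session (C Audio Session API, as used in the question).
AudioSessionSetActive(false);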

difference between AudioQueue time and AudioQueue Device time

I'm trying to sync music sent from a host iPhone to a client iPhone. The audio is read using AVAssetReader and sent in packets to the client, which feeds it into a ring buffer, which in turn populates the AudioQueue buffers and starts playing.
I was going over the AudioQueue docs, and there seem to be two different concepts of a timestamp related to the AudioQueue: Audio Queue Time and Audio Queue Device Time. I'm not sure how those two are related and when one should be used rather than (or in conjunction with) the other.
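For context, the two timestamps the docs refer to can be read with the following calls (a sketch only; queue is assumed to be a running AudioQueueRef):

#import <AudioToolbox/AudioToolbox.h>

AudioTimeStamp queueTime, deviceTime;
Boolean discontinuity = false;

// Time on the queue's own timeline: starts near 0 when the queue starts
// and advances only while the queue is running.
AudioQueueGetCurrentTime(queue, NULL, &queueTime, &discontinuity);

// Time of the audio hardware device associated with the queue: advances
// continuously, independent of this particular queue's state.
AudioQueueDeviceGetCurrentTime(queue, &deviceTime);

NSLog(@"queue sample time: %f, device sample time: %f",
      queueTime.mSampleTime, deviceTime.mSampleTime);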

How to resolve "Hardware In Use" issue (error code: 'hwiu')?

I have created an iPhone app with recording (via AudioUnit), conversion, audio editing, and merging parts. I have done everything except the conversion. The app targets iOS 4 or higher.
I tried to convert a .caf file to .m4a, but I am getting the kAudioConverterErr_HardwareInUse error. Then I tried converting the .caf file to .wav, and then the .wav to .m4a, but I get the same issue.
I don't understand this issue. The Apple documentation says:
"Returned from the AudioConverterFillComplexBuffer function if the underlying hardware codec has become unavailable, probably due to an audio interruption.
On receiving this error, your application must stop calling AudioConverterFillComplexBuffer. You can check the value of the kAudioConverterPropertyCanResumeFromInterruption property to determine if the converter you are using can resume processing after an interruption. If so, then wait for an interruption-ended call from Audio Session Services, reactivate the audio session, and finally resume using the codec.
If the converter cannot resume processing after an interruption, then on interruption you must abandon the conversion, re-instantiate the converter, and perform the conversion again."
Please help me to resolve it.
I just resolved such a problem.
In my case, I have MPMoviePlayerController, audio queue player, audio recorder in the application.
The movie player needs its "stop" method called manually when the content ends.
Otherwise the playback state stays locked at MPMoviePlaybackStatePlaying. After that I can no longer play MP3 and get "hwiu" when I try, although PCM still works.
Maybe it's because compressed audio (MP3, AAC, ...) is handled by a single hardware codec. If you are using different techniques (MPMoviePlayerController and Audio Queue Services) to play back compressed audio, you need to release the device after you finish playing, since they all share the same hardware.
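A sketch of the fix described above, assuming self.moviePlayer is the MPMoviePlayerController instance:

#import <MediaPlayer/MediaPlayer.h>

// e.g. right after creating the player, observe the end of playback:
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(playbackDidFinish:)
                                             name:MPMoviePlayerPlaybackDidFinishNotification
                                           object:self.moviePlayer];

- (void)playbackDidFinish:(NSNotification *)note
{
    // Stop explicitly so the playback state does not stay at MPMoviePlaybackStatePlaying
    // and the hardware codec is released for other audio APIs.
    [self.moviePlayer stop];
}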