How to resolve "Hardware In Use" issue (error code: 'hwiu')?

I have created an iPhone app with recording (via Audio Unit), conversion, audio editing, and merging parts. I have done everything except the conversion. This app will only run on iOS 4 or higher.
I tried to convert a .caf file to an .m4a file, but I am getting the kAudioConverterErr_HardwareInUse error. Then I tried to convert the .caf file to a .wav file, and then the .wav file to an .m4a file, but I am getting the same error.
I am not clear on what causes this issue. The Apple documentation says:
"Returned from the AudioConverterFillComplexBuffer function if the underlying hardware codec has become unavailable, probably due to an audio interruption.
On receiving this error, your application must stop calling AudioConverterFillComplexBuffer. You can check the value of the kAudioConverterPropertyCanResumeFromInterruption property to determine if the converter you are using can resume processing after an interruption. If so, then wait for an interruption-ended call from Audio Session Services, reactivate the audio session, and finally resume using the codec.
If the converter cannot resume processing after an interruption, then on interruption you must abandon the conversion, re-instantiate the converter, and perform the conversion again."
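For reference, checking that property looks roughly like the following sketch (Swift; "converter" stands in for the AudioConverterRef created earlier, which is assumed to exist):
import AudioToolbox

// Ask the converter whether it can resume after an audio interruption.
var canResume: UInt32 = 0
var size = UInt32(MemoryLayout<UInt32>.size)
let status = AudioConverterGetProperty(converter,
                                       kAudioConverterPropertyCanResumeFromInterruption,
                                       &size,
                                       &canResume)
if status == noErr && canResume != 0 {
    // Wait for the interruption-ended notification, reactivate the
    // audio session, then keep calling AudioConverterFillComplexBuffer.
} else {
    // Abandon the conversion, re-create the converter, and start over.
}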
Please help me to resolve it.

I just resolved such a problem.
In my case, I have an MPMoviePlayerController, an audio queue player, and an audio recorder in the application.
The movie player needs its "stop" method to be called manually when the content ends.
Otherwise the playback state stays locked at MPMoviePlaybackStatePlaying. After that I can no longer play MP3 and get "hwiu" when I try; PCM still works.
Maybe it's because compressed audio (MP3, AAC, ...) is handled by a single hardware codec. If you are using different techniques (MPMoviePlayerController and Audio Queue Services) to play back compressed audio, you need to release the device once you finish playing, since they all share the same device. A sketch of the fix is below.
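A minimal sketch of that fix in Swift, assuming moviePlayer is your MPMoviePlayerController instance (the API is long deprecated, but it matches the question's era):
import MediaPlayer

// Stop the player explicitly when its content finishes, so the hardware
// codec is released for the other audio APIs.
NotificationCenter.default.addObserver(
    forName: .MPMoviePlayerPlaybackDidFinish,
    object: moviePlayer,
    queue: .main
) { _ in
    moviePlayer.stop()   // otherwise the state stays MPMoviePlaybackStatePlaying
}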

Related

Using MPNowPlayingInfoCenter without actually playing audio

I am trying to build an iOS app which controls a music player which runs on a separate machine. I would like to use MPNowPlayingInfoCenter for inspecting and controlling this player. As far as I can tell so far, the app actually has to output audio for this to work (see also this answer).
However, for instance, the Spotify app is actually capable of doing this without playing audio on the iOS device. If you use Spotify Connect to play the audio on a different device, the MPNowPlayingInfoCenter still displays the correct song and the controls are functional.
What's the catch here? What does one (conceptually) have to do to achieve this? I can think of continuously emitting a "silent" audio stream, but that seems a bit brute-force.
Streaming silence will work, but you don't need to stream it all the time. Just long enough to send your Now Playing info. Using AVAudioPlayer, I've found approaches as short as this will send the data (assuming the player is loaded with a 1s silent audio file):
player.play()
let nowPlayingInfoCenter = MPNowPlayingInfoCenter.default()
nowPlayingInfoCenter.nowPlayingInfo = [...]
player.stop()
I was very surprised this worked within a single event loop. Any other approach to playing silence seems to work as well. Again, it can be very brief (in fact the above code in my tests doesn't even get as far as playing the silence).
I'm very interested in whether this works reliably for other people, so please comment if you make discoveries about it.
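For concreteness, an end-to-end sketch of the same trick (the file name and metadata values are placeholders; assumes a 1-second silent file is bundled with the app):
import AVFoundation
import MediaPlayer

// Load a short silent file and "play" it just long enough to publish
// Now Playing metadata on behalf of the remote player.
let url = Bundle.main.url(forResource: "silence", withExtension: "wav")!
let player = try AVAudioPlayer(contentsOf: url)

player.play()
MPNowPlayingInfoCenter.default().nowPlayingInfo = [
    MPMediaItemPropertyTitle: "Remote Track",
    MPMediaItemPropertyArtist: "Remote Player",
    MPNowPlayingInfoPropertyPlaybackRate: 1.0
]
player.stop()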
I've explored the Spotify app a bit. I'm not 100% certain if this is the same technique they're using. They're able to mix with other audio somehow. So you can be playing local audio on the phone and also playing Spotify Connect to some other device, and the "Now Playing" info will kind of stomp on each other. For my purposes, that would actually be better, but I haven't been able to duplicate it. I still have to make the audio session non-mixable for at least a moment (so you get ~ 1s audio drop if you're playing local audio at the same time). I did find that the Spotify app was not highly reliable about playing to my Connect device when I was also playing other audio locally. Sometimes it would get confused and switch around where it wanted to play. (Admittedly this is a bizarre use case; I was intentionally trying to break it to figure out how they're doing it.)
EDIT: I don't believe Spotify is actually doing anything special here. I think they just play silent audio. I think getting two streams playing at the same time has to do with AirPlay rather than Spotify (I can't make it happen with Bluetooth, only AirPlay).

Play sound without latency iOS

I can't find a way to play a sound with really low latency.
I tried AVFoundation's audio player: huge latency, around 500 ms.
So I tried creating a system sound, also without luck: the latency is around 200 ms, which is not much but still not usable for me. I need 50 ms max.
To be clear, my sound sample is a clean tone with no leading silence.
SystemSoundID cID;
BOOL spinitialized;

- (IBAction)doInit
{
    if (spinitialized) {
        AudioServicesPlaySystemSound(cID);
        return;
    }
    // Register the bundled sample as a system sound the first time through.
    NSURL *uref = [[NSURL alloc] initFileURLWithPath:
        [NSString stringWithFormat:@"%@/soundlib/1.wav", [[NSBundle mainBundle] resourcePath]]];
    OSStatus error = AudioServicesCreateSystemSoundID((__bridge CFURLRef)uref, &cID);
    if (error) NSLog(@"SoundPlayer doInit Error is %d", (int)error);
    AudioServicesPlaySystemSound(cID);
    spinitialized = YES;
}
I trigger this on the button's touch-down event.
Using an already-running RemoteIO Audio Unit (or AVAudioUnit) with PCM waveform data already loaded into memory is the lowest-latency method to produce sound on iOS devices.
Zero latency is impossible due to buffering, but on all current iOS devices the buffer size is usually 5.3 to 5.8 milliseconds or lower. On the newest iOS devices you can get audio callbacks even more often. Your audio callback code has to be ready to manually copy the proper sequential slice of the desired waveform data into an audio buffer. It will be called on a non-UI thread, so the callback needs to be thread safe and must avoid locks, memory management, and even Objective-C messaging.
Using other AV audio playing methods may result in far higher latency due to the time it takes to load the sound into memory (including potential unpacking or decompression) and to power up the audio hardware (etc.), as well as typically using longer audio buffers. Even starting the RemoteIO Audio Unit has its own latency; but it can be started ahead of time, potentially playing silence, until your app needs to play a sound with the lowest possible (but non-zero) latency, upon receiving some event.
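A minimal Swift sketch of that idea using AVAudioSourceNode (a newer API than the question's era, but the same render-callback pattern); samples, playhead, and isPlaying are assumed to be managed by the app:
import AVFoundation

let engine = AVAudioEngine()
var samples = [Float]()   // PCM waveform preloaded into memory
var playhead = 0          // next sample index to copy
var isPlaying = false     // flipped from the UI on a button press

let format = AVAudioFormat(standardFormatWithSampleRate: 44_100, channels: 1)!
let source = AVAudioSourceNode(format: format) { _, _, frameCount, audioBufferList in
    let buffers = UnsafeMutableAudioBufferListPointer(audioBufferList)
    let out = buffers[0].mData!.assumingMemoryBound(to: Float.self)
    for frame in 0..<Int(frameCount) {
        if isPlaying && playhead < samples.count {
            out[frame] = samples[playhead]   // copy the next slice of the waveform
            playhead += 1
        } else {
            out[frame] = 0                   // render silence between triggers
        }
    }
    return noErr
}
engine.attach(source)
engine.connect(source, to: engine.mainMixerNode, format: format)
try? engine.start()   // start ahead of time so triggering is nearly instant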
AVAudioEngine with AVAudioUnitSampler is a really easy way to get low latency audio file triggering.
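For example, something along these lines (a sketch; assumes 1.wav is bundled with the app):
import AVFoundation

let engine = AVAudioEngine()
let sampler = AVAudioUnitSampler()
engine.attach(sampler)
engine.connect(sampler, to: engine.mainMixerNode, format: nil)

// Preload the file so a later trigger doesn't pay the loading cost.
let url = Bundle.main.url(forResource: "1", withExtension: "wav")!
try engine.start()
try sampler.loadAudioFiles(at: [url])

// On the button's touch-down: trigger the preloaded sample via a MIDI note.
sampler.startNote(60, withVelocity: 127, onChannel: 0)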
I would suggest looking into incorporating The Amazing Audio Engine into your project: http://theamazingaudioengine.com/
It has very nice tools for buffering audio files and for playback. As hotpaw2 has mentioned, you're running into an issue with the system filling the buffer only when you press the button; you need to buffer the audio before the button is pressed to reduce your latency.
Michael at TAAE has created the class AEAudioFilePlayer: http://theamazingaudioengine.com/doc/interface_a_e_audio_file_player.html
Initializing an AEAudioFilePlayer will load the buffer for you. You can then ask the player to play the audio back when the button is pressed.
Configure AVAudioSession's I/O buffer duration. The preferredIOBufferDuration property itself is read-only; you request a value with setPreferredIOBufferDuration(_:). From the documentation:
preferredIOBufferDuration
The preferred I/O buffer duration, in seconds. (read-only)
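A sketch of requesting a smaller buffer (the system treats the value as a hint and may grant a larger one):
import AVFoundation

let session = AVAudioSession.sharedInstance()
try session.setCategory(.playback, mode: .default)
try session.setPreferredIOBufferDuration(0.005)   // ask for ~5 ms buffers
try session.setActive(true)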

iOS - Download and play audio file with AVAudioPlayer in the same time from buffer

I have a somewhat complex situation where I need to play an audio stream from a URL.
First I need to send some cookies along with the URL request, and the stream that comes back must be played with AVAudioPlayer.
So is it possible to use AFNetworking or a standard NSURLConnection to send the cookies, save the stream to a temporary file, and then play it? And can this be done asynchronously? I mean, the player must start once data begins arriving, without waiting for the download to finish, because it can be a large file.
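For the cookie half of this, a sketch with URLSession (NSURLConnection's modern replacement; the URL and cookie value are placeholders):
import Foundation

var request = URLRequest(url: URL(string: "https://example.com/stream.m4a")!)
request.setValue("session=abc123", forHTTPHeaderField: "Cookie")

// Download to a temporary file; AVAudioPlayer can then play the finished file.
// Progressive play-while-downloading is not something AVAudioPlayer supports
// directly, which is the crux of the question.
let task = URLSession.shared.downloadTask(with: request) { tempURL, response, error in
    if let tempURL = tempURL {
        print("Downloaded to \(tempURL)")
    }
}
task.resume()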

The audio keeps stopping on MPMoviePlayerController Audio streaming

I'm using MPMoviePlayerController to stream audio from a server, but after playing audio for more than two minutes, the audio starts to stop and resume a lot. I'm streaming more than one file, one after another, so because of the interruptions some of the audio files are being skipped, with these two console messages:
Took background task assertion (38) for playback stall
Ending background task assertion (38) for playback stall
I'm losing a lot of tracks because of this error.
At first I thought it was a memory issue, but the console shows that each time I lose a track, it prints those messages.
Check your network connectivity and the stream encoding.
This console output pretty much says exactly what your problem is: the stream runs out of content and cannot keep playing without interruption.
Either your network connection is unstable, or the content is encoded at bitrates that are far too high for your network connection.
For clarification: even if your local internet peering offers high bandwidth, you should still check the bandwidth of the entire route. For example, you could try to download the streamed files via your browser to test the throughput.
Are you trying it on a simulator or a device? It may be a simulator issue.
Also, on a device, try streaming over multiple networks (e.g., LTE, Wi-Fi) to see if there is any difference.

play sound while recording fails

I have a 3rd party SDK that handles an audio recording. It has a callback when the recording starts. In the callback I'm trying to play a sound to indicate to the user that the device is now listening (like Siri, or any other speech recognition tends to do), but when I try I get the following error:
AURemoteIO::ChangeHardwareFormats: error -10875
I have tried playing the sound using AudioServicesPlaySystemSound as well as an AVAudioPlayer both with the same result. The sound plays fine at other times, and per the error my assumption is there's an incompatibility between the playback and recording on the hardware level. Can anyone clarify this error, or give me a hint as to a possible workaround?
Make sure that the audio session is initialized and configured with the PlayAndRecord category before you start the RemoteIO Audio Unit recording (see the sketch below).
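A minimal sketch of that configuration (assuming nothing else in the SDK overrides it afterwards):
import AVFoundation

let session = AVAudioSession.sharedInstance()
try session.setCategory(.playAndRecord, mode: .default, options: [.defaultToSpeaker])
try session.setActive(true)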
You shouldn't, and likely can't, start playing a sound from inside a RemoteIO recording callback. Instead, set a boolean flag in the callback to indicate that a sound should be played, and play the sound from the main UI run loop, as sketched below.
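A sketch of that flag pattern (a plain Bool is used for brevity; production code should use an atomic. cueSoundID is a hypothetical, previously created SystemSoundID):
import Foundation
import AudioToolbox

var recordingStarted = false   // set to true from inside the audio callback

// On the main run loop, poll the flag and play the cue sound safely.
Timer.scheduledTimer(withTimeInterval: 0.05, repeats: true) { _ in
    if recordingStarted {
        recordingStarted = false
        AudioServicesPlaySystemSound(cueSoundID)
    }
}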
My problem is related specifically to the external SDK and how it handles the audio interface. It overrides everything when you ask the SDK to start recording; if you take control back, you break the recording session. So within the context of that SDK, there's no way to work around it unless they fix the SDK.
