Play sound while recording fails - iOS

I have a third-party SDK that handles audio recording. It has a callback for when the recording starts. In the callback I'm trying to play a sound to indicate to the user that the device is now listening (as Siri, or any other speech recognition, tends to do), but when I try I get the following error:
AURemoteIO::ChangeHardwareFormats: error -10875
I have tried playing the sound using AudioServicesPlaySystemSound as well as an AVAudioPlayer, both with the same result. The sound plays fine at other times, and per the error, my assumption is that there's an incompatibility between playback and recording at the hardware level. Can anyone clarify this error, or give me a hint as to a possible workaround?

Make sure that the audio session is initialised and configured with the PlayAndRecord category before you start the RemoteIO Audio Unit recording.
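For example (a minimal sketch, not from the SDK in question; assumes you can configure the session before handing control to the SDK):

import AVFoundation

// Configure the shared session for simultaneous playback and recording
// *before* the RemoteIO unit starts, so playing a cue sound doesn't force
// a hardware format change mid-recording.
func configureSessionForPlayAndRecord() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default, options: [.defaultToSpeaker])
    try session.setActive(true)
}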

You shouldn't, and likely can't, start playing a sound inside a RemoteIO recording callback. Only set a boolean flag in the callback to indicate that a sound should be played, then play your sound from the main UI run loop, as sketched below.
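A rough sketch of that pattern (names like CuePlayer and cueSoundID are placeholders, not from any SDK):

import Foundation
import AudioToolbox

final class CuePlayer {
    private var shouldPlayCue = false   // consider an atomic in real code
    private var timer: Timer?
    private let cueSoundID: SystemSoundID

    init(cueSoundID: SystemSoundID) {
        self.cueSoundID = cueSoundID
        // Poll from the main run loop; never play from the render callback itself.
        timer = Timer.scheduledTimer(withTimeInterval: 0.05, repeats: true) { [weak self] _ in
            guard let self = self, self.shouldPlayCue else { return }
            self.shouldPlayCue = false
            AudioServicesPlaySystemSound(self.cueSoundID)
        }
    }

    // Call this (and nothing heavier) from the recording callback.
    func markCuePending() { shouldPlayCue = true }
}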

My problem relates specifically to the external SDK and how it handles the audio interface. It overrides everything when you ask it to start recording; if you take control back, you break the recording session. So within the context of that SDK there's no way to work around it unless they fix the SDK.

Related

Using MPNowPlayingInfoCenter without actually playing audio

I am trying to build an iOS app which controls a music player running on a separate machine. I would like to use MPNowPlayingInfoCenter for inspecting and controlling this player. As far as I can tell so far, the app actually has to output audio for this to work (see also this answer).
However, for instance, the Spotify app is actually capable of doing this without playing audio on the iOS device. If you use Spotify Connect to play the audio on a different device, the MPNowPlayingInfoCenter still displays the correct song and the controls are functional.
What's the catch here? What does one (conceptually) have to do to achieve this? I can think of continuously emitting a "silent" audio stream, but that seems a bit brute-force.
Streaming silence will work, but you don't need to stream it all the time. Just long enough to send your Now Playing info. Using AVAudioPlayer, I've found approaches as short as this will send the data (assuming the player is loaded with a 1s silent audio file):
player.play()                                     // start the 1s silent file
let nowPlayingInfoCenter = MPNowPlayingInfoCenter.default()
nowPlayingInfoCenter.nowPlayingInfo = [...]       // your metadata dictionary
player.stop()
I was very surprised this worked within a single event loop. Any other approach to playing silence seems to work as well. Again, it can be very brief (in fact the above code in my tests doesn't even get as far as playing the silence).
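For reference, a fleshed-out version of the snippet (a sketch: "silence" is an assumed 1-second silent file in the bundle, and the metadata values are placeholders):

import AVFoundation
import MediaPlayer

func pushNowPlayingInfo() throws {
    try AVAudioSession.sharedInstance().setCategory(.playback, mode: .default, options: [])
    try AVAudioSession.sharedInstance().setActive(true)

    // Assumed bundled file; handle the missing-resource case in real code.
    let url = Bundle.main.url(forResource: "silence", withExtension: "m4a")!
    let player = try AVAudioPlayer(contentsOf: url)

    player.play()
    MPNowPlayingInfoCenter.default().nowPlayingInfo = [
        MPMediaItemPropertyTitle: "Remote Track",
        MPMediaItemPropertyArtist: "Remote Artist",
        MPNowPlayingInfoPropertyPlaybackRate: 1.0
    ]
    player.stop()
}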
I'm very interested in whether this works reliably for other people, so please comment if you make discoveries about it.
I've explored the Spotify app a bit. I'm not 100% certain this is the same technique they're using. They're able to mix with other audio somehow: you can be playing local audio on the phone while also playing via Spotify Connect to some other device, and the "Now Playing" info will kind of stomp on each other. For my purposes, that would actually be better, but I haven't been able to duplicate it. I still have to make the audio session non-mixable for at least a moment (so you get a ~1s audio drop if you're playing local audio at the same time). I did find that the Spotify app was not highly reliable about playing to my Connect device when I was also playing other audio locally. Sometimes it would get confused and switch around where it wanted to play. (Admittedly this is a bizarre use case; I was intentionally trying to break it to figure out how they're doing it.)
EDIT: I don't believe Spotify is actually doing anything special here. I think they just play silent audio. I think getting two streams playing at the same time has to do with AirPlay rather than Spotify (I can't make it happen with Bluetooth, only AirPlay).

How to schedule a task at accurate time on Jailbroken iPhone in deep sleep

I'm developing a background (daemon) application that will schedule a task for an exact time: for example, do something at 3 PM, or do something after 3 hours. I've tried NSTimer and scheduling an NSThread, but the task does not run at the scheduled time because the iPhone is in deep sleep.
Note that this application runs on a jailbroken device as a daemon, so it doesn't have a UIApplication instance.
I had the same problem with my daemon. I couldn't find any working method for scheduling device wakes. Instead, I prevent the device from ever falling into deep sleep by playing a silent audio file in an infinite loop. That way you don't need IOKit to cancel sleep, and your device stays awake. I can't find the code now, but it's very simple: a few calls to AVAudioPlayer. You also need to set up the audio session for playback and mixing. These are all public and well-known APIs, so there shouldn't be any problems implementing it.
There are problems with the approach, though. For example, playing an audio file will reroute audio to the receiver. By default, audio plays through the speaker, so you need to take care of that. You also need to detect when the screen is turned on or off, because the device will not sleep while the screen is on. When the screen turns off, you start playing silence; when it turns on, you stop. That will also solve mixing problems with other apps that are trying to play audio.
I don't have my original code with me right now, but a minimal sketch of the approach is below.
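Something like this captures the idea (a hedged sketch; assumes a bundled silence.wav, and that a mixable session behaves the same in a daemon as in an app):

import AVFoundation

func startSilenceLoop() throws -> AVAudioPlayer {
    let session = AVAudioSession.sharedInstance()
    // Mixable playback so other apps' audio is unaffected.
    try session.setCategory(.playback, mode: .default, options: [.mixWithOthers])
    try session.setActive(true)

    let url = Bundle.main.url(forResource: "silence", withExtension: "wav")!
    let player = try AVAudioPlayer(contentsOf: url)
    player.numberOfLoops = -1   // loop indefinitely
    player.volume = 0
    player.play()
    return player               // keep a strong reference while blocking sleep
}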

iOS 5/6: low volume after first usage of CoreAudio

I work on a VoIP app. The AudioSession's mode is set to kAudioSessionMode_VoiceChat.
For a call, I open a CoreAudio AudioUnit with subtype kAudioUnitSubType_VoiceProcessingIO. Everything works fine. After the first call, I close the AudioUnit with AudioUnitUninitialize() and deactivate the audio session.
Now, however, it seems as if the audio device is not correctly released: the ringer volume is very low, and the media player's volume is lower than usual. For a subsequent call, I cannot activate kAudioUnitSubType_VoiceProcessingIO anymore. Creating an AudioUnit with kAudioUnitSubType_RemoteIO instead works, but the call's volume is also very low (both receiver and speaker).
This first occurred on iOS 5. With the iPhone 5 on iOS 6, it is even worse (even lower volume).
Has anyone seen this? Do I need to do more than AudioUnitUninitialize() to release the Voice Processing unit?
I've found the solution: I was incorrectly using AudioUnitUninitialize() to free the audio component instance retrieved with AudioComponentInstanceNew(). The correct call is AudioComponentInstanceDispose().
Yes, you need to dispose of the audio unit when using the VoiceProcessingIO subtype; for some reason there is no problem with the RemoteIO subtype. So whenever you get OSStatus -66635 (kAudioQueueErr_MultipleVoiceProcessors), check for missing AudioComponentInstanceDispose() calls. A teardown sketch follows.
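A minimal teardown sketch (audioUnit is assumed to be the voice-processing unit created with AudioComponentInstanceNew()):

import AudioToolbox

func shutDown(_ audioUnit: AudioUnit) {
    AudioOutputUnitStop(audioUnit)            // stop I/O
    AudioUnitUninitialize(audioUnit)          // undo AudioUnitInitialize()
    AudioComponentInstanceDispose(audioUnit)  // actually releases the instance
}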

How to resolve "Hardware In Use" issue (error code: 'hwiu')?

I have created an iPhone app with recording (via AudioUnit), conversion, audio editing, and merging parts. I've done everything except conversion. The app will only work on iOS 4 or higher.
I tried to convert a .caf file to .m4a, but I get the kAudioConverterErr_HardwareInUse error. I then tried converting the .caf file to .wav, and then the .wav file to .m4a, but I get the same issue.
I am not clear on what causes this. The Apple documentation says:
"Returned from the AudioConverterFillComplexBuffer function if the underlying hardware codec has become unavailable, probably due to an audio interruption.
On receiving this error, your application must stop calling AudioConverterFillComplexBuffer. You can check the value of the kAudioConverterPropertyCanResumeFromInterruption property to determine if the converter you are using can resume processing after an interruption. If so, then wait for an interruption-ended call from Audio Session Services, reactivate the audio session, and finally resume using the codec.
If the converter cannot resume processing after an interruption, then on interruption you must abandon the conversion, re-instantiate the converter, and perform the conversion again."
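For reference, checking that property looks roughly like this (a sketch; converter is an assumed existing AudioConverterRef):

import AudioToolbox

func canResumeAfterInterruption(_ converter: AudioConverterRef) -> Bool {
    var canResume: UInt32 = 0
    var size = UInt32(MemoryLayout<UInt32>.size)
    let status = AudioConverterGetProperty(converter,
                                           kAudioConverterPropertyCanResumeFromInterruption,
                                           &size,
                                           &canResume)
    return status == noErr && canResume == 1
}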
Please help me to resolve it.
I just resolved such a problem.
In my case, I have an MPMoviePlayerController, an Audio Queue player, and an audio recorder in the application.
The movie player needs its stop method called manually when the content ends.
Otherwise the playback state is stuck at MPMoviePlaybackStatePlaying. Then I can no longer play MP3 and get "hwiu" when I try, though PCM still works.
Maybe it's because compressed audio (MP3, AAC, ...) is handled by a single hardware codec. If you are using different techniques (MPMoviePlayerController and Audio Queue Services) to play back compressed audio, you need to release the device once you finish playing, since they all share the same device.
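A hedged sketch of that fix for the (long-deprecated) MPMoviePlayerController API: stop the player explicitly when playback finishes so the codec is released.

import MediaPlayer

func watchForPlaybackEnd(of moviePlayer: MPMoviePlayerController) -> NSObjectProtocol {
    // Keep the returned token and remove the observer when done.
    return NotificationCenter.default.addObserver(
        forName: .MPMoviePlayerPlaybackDidFinish,
        object: moviePlayer,
        queue: .main
    ) { _ in
        // Without this, the state can stay stuck at MPMoviePlaybackStatePlaying.
        moviePlayer.stop()
    }
}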

How to handle AVPlayer errors while the app is running in the background?

I am using AVPlayer to play an audio stream, and it's possible to keep it playing in the background. I'm wondering how I could handle a situation where the user loses internet connectivity, so that I could provide some feedback or maybe try to re-establish playback after a few seconds.
EDIT: I know the question is about AVPlayer, but an answer using MPMoviePlayerController might be useful as well. Right now, using MPMoviePlayerController, I'm trying to get the MPMovieFinishReasonPlaybackError case of MPMoviePlayerPlaybackDidFinishReasonUserInfoKey by subscribing to MPMoviePlayerPlaybackDidFinishNotification. But if, for example, my audio is playing in the background and I turn airplane mode on, I never get this notification; I only get MPMovieFinishReasonPlaybackEnded, and I don't know how to distinguish that from the user stopping the audio himself.
I tried looking for the actual source, but I remember reading somewhere that if the audio playback stops (for whatever reason), it kills the background thread. The person writing about the issue talked about possibly feeding the stream some empty audio content to keep the thread alive. You might be able to send a local notification from an error callback, notifying the user that the audio experienced an error and will have to be manually restarted from within the application. I haven't played around with the API enough to know which callback is the best one to use in this case; one possibility is sketched below. If I find the link I'm looking for, I'll update.
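One possible hook (a sketch, not from the original answer; player is an assumed, already-playing AVPlayer) is to observe AVPlayerItemFailedToPlayToEndTime:

import AVFoundation

func observePlaybackFailure(of player: AVPlayer) -> NSObjectProtocol {
    return NotificationCenter.default.addObserver(
        forName: .AVPlayerItemFailedToPlayToEndTime,
        object: player.currentItem,
        queue: .main
    ) { note in
        let error = note.userInfo?[AVPlayerItemFailedToPlayToEndTimeErrorKey] as? Error
        print("Playback failed: \(String(describing: error))")
        // Surface this to the user (e.g. a local notification) or retry here.
    }
}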
EDIT: Here's Grant Pannell's take on audio streaming and multitasking.
