Using MPNowPlayingInfoCenter without actually playing audio (iOS)

I am trying to build an iOS app which controls a music player that runs on a separate machine. I would like to use MPNowPlayingInfoCenter for inspecting and controlling this player. As far as I can tell so far, the app actually has to output audio for this to work (see also this answer).
However, for instance, the Spotify app is actually capable of doing this without playing audio on the iOS device. If you use Spotify Connect to play the audio on a different device, the MPNowPlayingInfoCenter still displays the correct song and the controls are functional.
What's the catch here? What does one (conceptually) have to do to achieve this? I can think of continuously emitting a "silent" audio stream, but that seems a bit brute-force.

Streaming silence will work, but you don't need to stream it all the time; just long enough to send your Now Playing info. Using AVAudioPlayer, I've found that an approach as short as this will send the data (assuming the player is loaded with a 1-second silent audio file):
import AVFoundation
import MediaPlayer

// player is an AVAudioPlayer preloaded with the silent file
player.play()
let nowPlayingInfoCenter = MPNowPlayingInfoCenter.default()
nowPlayingInfoCenter.nowPlayingInfo = [...]
player.stop()
I was very surprised this worked within a single event loop. Any other approach to playing silence seems to work as well. Again, it can be very brief (in fact the above code in my tests doesn't even get as far as playing the silence).
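For reference, here's a slightly fuller sketch of the same idea as a single function. The silent file name and the metadata values are placeholders, and the session configuration is an assumption based on what worked for me:

import AVFoundation
import MediaPlayer

// Sketch: assumes a short silent file named "silence.wav" is bundled with the app.
func publishNowPlayingInfo() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playback, mode: .default, options: [])
    try session.setActive(true)

    let url = Bundle.main.url(forResource: "silence", withExtension: "wav")!
    let player = try AVAudioPlayer(contentsOf: url)

    player.play()
    MPNowPlayingInfoCenter.default().nowPlayingInfo = [
        MPMediaItemPropertyTitle: "Song Title",        // placeholder metadata
        MPMediaItemPropertyArtist: "Artist Name",
        MPMediaItemPropertyPlaybackDuration: 215.0
    ]
    player.stop()
}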
I'm very interested in whether this works reliably for other people, so please comment if you make discoveries about it.
I've explored the Spotify app a bit. I'm not 100% certain this is the same technique they're using. They're able to mix with other audio somehow: you can be playing local audio on the phone and also playing Spotify Connect to some other device, and the "Now Playing" info will kind of stomp on each other. For my purposes that would actually be better, but I haven't been able to duplicate it. I still have to make the audio session non-mixable for at least a moment (so you get about a 1s audio drop if you're playing local audio at the same time).

I did find that the Spotify app was not highly reliable about playing to my Connect device when I was also playing other audio locally. Sometimes it would get confused and switch around where it wanted to play. (Admittedly this is a bizarre use case; I was intentionally trying to break it to figure out how they're doing it.)
EDIT: I don't believe Spotify is actually doing anything special here. I think they just play silent audio. I think getting two streams playing at the same time has to do with AirPlay rather than Spotify (I can't make it happen with Bluetooth, only AirPlay).

Related

Reroute Audio to Bluetooth Speaker

I am creating a video & audio capturing app. Every time I start recording, the music that was playing on the Bluetooth speaker switches to the phone's speaker. When I exit the app, the music goes back to playing on the Bluetooth speaker.
My first attempt to solve this is to provide the necessary options for the audioSession, like this:
try audioSession.setCategory(AVAudioSessionCategoryPlayAndRecord, withOptions: [AVAudioSessionCategoryOptions.MixWithOthers, AVAudioSessionCategoryOptions.AllowBluetooth])
But that didn't work. So the second solution I'm considering is to reroute the music output back to the Bluetooth speaker.
I researched this and found the function audioSession.setOutputDataSource, but I don't really know what parameters to pass to it.
I'm also not sure whether, the moment I start video recording, the phone (or my code) disables the Bluetooth connection or just reroutes the playback to the phone's speaker.
UPDATE: I commented out this line: // try audioSession.setMode(AVAudioSessionModeMoviePlayback) and the music pauses briefly, then keeps playing on the Bluetooth speaker. But the problem now is that the captured video has no audio.
UPDATE 2: Would this question have a solution if I provide you with my code?
I'll go ahead and take a shot at answering the original question. From Apple's documentation I got this:
func setOutputDataSource(_ dataSource: AVAudioSessionDataSourceDescription?) throws
Parameters:
dataSource: The data source for the audio session's output.
outError: On input, a pointer to an error object. If an error occurs, the pointer is set to an NSError object that describes the error. If you do not want error information, pass in nil.
This page should help you figure out what AVAudioSessionDataSourceDescription does/returns, but in summary:
You obtain data source descriptions from the shared AVAudioSession object or the AVAudioSessionPortDescription objects corresponding to its input and output ports. Only built-in microphone ports on certain devices support the location, orientation, and polar pattern properties; if a port does not support these features, the value of its dataSources property is nil.
Are you trying to route music from your own app to the speaker (is that the music that's playing?), or is the music coming from another app and you would like dual output?
For error checking you could make sure the speaker is still available, using something like the output data source. If it returns nil, it means you are not able to switch between data sources.
It's probably also worth noting that the user must give you permission to record. However, I doubt this is the problem, since you seem to have already been recording at one point, just while the audio was playing through the phone rather than the speaker.
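For what it's worth, here is a minimal sketch of the session setup that usually addresses this on iOS 10 and later. The mode and options are assumptions on my part and haven't been tested against the asker's SDK; the key point is that .allowBluetooth alone only enables the low-quality HFP profile, while .allowBluetoothA2DP keeps high-quality output on a Bluetooth speaker during recording:

import AVFoundation

// Sketch: configure the session so recording doesn't pull audio off the Bluetooth speaker.
func configureSessionForRecordingWithBluetoothPlayback() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord,
                            mode: .videoRecording,
                            options: [.mixWithOthers, .allowBluetoothA2DP])
    try session.setActive(true)
}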

MPMusicPlayerController does not prepare/preload correctly

I am using MPMusicPlayerController so my app can play music that the user has bought through iTunes. When I select a song and start to play, there is a lag before the sound starts. I am assuming that the song is being buffered from the cloud.
The problem is that I have not found a way to know when the buffering is complete and the audio actually starts.
To play the song I use:
_mediaController = [MPMusicPlayerController applicationMusicPlayer];
[_mediaController setQueueWithItemCollection:collection];
[_mediaController beginGeneratingPlaybackNotifications];
[_mediaController play];
As soon as I call "play", the playback state change notification is called and the playback state is "MPMusicPlaybackStatePlaying" even though the user can't hear any music. I've noticed that even though the mediaController is in the "playing" playback state, the _mediaController.currentPlaybackTime always equals 0 until the music can be heard at which time the currentPlaybackTime properly tracks with the music.
I thought I could use the [_mediaController prepareToPlay] method to preload the audio file, but when I use that, the MPMediaPlaybackIsPreparedToPlayDidChangeNotification notification is never called. So the mediaController is never marked as "prepared".
With all this being said, I have not found a way to prebuffer songs using the MPMusicPlayerController. I know this issue has been around for a while because there is an old question from a few years ago with essentially the same issue, but no answer. Does anyone know how to make this work?
MPMediaPlaybackIsPreparedToPlayDidChangeNotification looks deprecated.
The MPMusicPlayerController getters and notifications are kind of garbage, and aren't consistent at all: you can set a value and not get the same value back when you read it again.
I solved this by "buffering" the song first, as my app frequently starts in the middle of a song. So my buffering algorithm is: play the song, wait for the playback state changed notification, then pause it again and wait for another notification. This process will without a doubt trigger the MPMusicPlayerControllerNowPlayingItemDidChangeNotification, and then finally the song is ready to play or be mutated (set currentPlaybackTime or rate). This seems to work very well.
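A rough Swift sketch of that warm-up flow follows. The class and method names are mine; the general shape (play, pause on the first state change, treat the now-playing-item change as "ready") is what worked in my testing:

import MediaPlayer

final class SongWarmUp {
    private let player = MPMusicPlayerController.applicationMusicPlayer
    private var observers: [NSObjectProtocol] = []

    func warmUp(_ collection: MPMediaItemCollection, onReady: @escaping () -> Void) {
        player.setQueue(with: collection)
        player.beginGeneratingPlaybackNotifications()

        observers.append(NotificationCenter.default.addObserver(
            forName: .MPMusicPlayerControllerPlaybackStateDidChange,
            object: player, queue: .main) { [weak self] _ in
                // First state change after play(): pause again.
                if self?.player.playbackState == .playing {
                    self?.player.pause()
                }
        })

        observers.append(NotificationCenter.default.addObserver(
            forName: .MPMusicPlayerControllerNowPlayingItemDidChange,
            object: player, queue: .main) { _ in
                // Only now is it safe to set currentPlaybackTime/rate or play.
                onReady()
        })

        player.play()
    }

    deinit { observers.forEach { NotificationCenter.default.removeObserver($0) } }
}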
The prepareToPlay completion handler is also garbage. It seems to fire before the song is actually ready to play, and the method actually seems to start playback :( , which is more than it lets on. This seems to be commonly reported as a bug in the Apple dev forums.
In the prepareToPlay callback, setting currentPlaybackTime or Rate won't actually mutate the player - you need to wait for the additional MPMusicPlayerControllerNowPlayingItemDidChangeNotification sometime after playback starts for the first time on the song before you'll have any luck mutating any of the player properties.
currentPlaybackRate and Time also aren't very reliable, although they are more reliable once playback actually starts. I'll cache it to what the user sets it to until playback starts to solve the consistency problem. There is a lot more here to get the currentPlaybackRate or time consistently, let me know if you want a code sample, as grabbing those properties for reading will yield different results depending on the executing thread :(
Checking the playback state isn't reliable either, unfortunately. Often I've found that the player reports MPMusicPlaybackStatePlaying when it's fully paused and not playing; it stays like this indefinitely in certain situations. I recommend abstracting out any determination of whether the MPMusicPlayerController is actually playing based on calls to play or pause and a subsequent follow-up MPMusicPlayerControllerPlaybackStateDidChangeNotification confirming the player has started that action.
I should mention I'm on iOS 12.1.3, as Apple seems to intermittently fix a small bug or add more API breakage as time goes on. Since it's pretty broken now, any improvements may break any abstraction layer you build to fix it, so I'm testing each iOS release to make sure everything still "works".
Hope this helps a bit; it's a real struggle.

How to schedule a task at accurate time on Jailbroken iPhone in deep sleep

I'm developing a background (daemon) application that schedules a task for an exact time; for example, do something at 3 PM, or do something after 3 hours. I've tried NSTimer and scheduling an NSThread, but the task does not run at the scheduled time because the iPhone is in deep sleep.
Note that this is an application on a jailbroken device that runs as a daemon, so it doesn't have a UIApplication instance.
I had the same problem with my daemon. I couldn't find any working method for scheduling device wakes. Instead, I prevent the device from ever falling into deep sleep by playing a silent audio file on an infinite loop. That way you don't need IOKit to cancel sleep, and your device will stay awake. I can't find the code now, but it's very simple: a few calls to AVAudioPlayer. You also need to set up the audio session for playback and mixing. These are all public and very well-known APIs, so there shouldn't be any problems implementing this.
There are problems with the approach, though. For example, playing an audio file will reroute audio to the device receiver; by default audio plays through the speaker, so you need to take care of that. You also need to detect when the screen is turned on/off, because the device will not sleep while the screen is on. When the screen is turned off you start playing silence; when it's turned on you stop. That also solves mixing problems with other apps that are trying to play audio.
Unfortunately I don't have any code with me right now to show you examples, but a rough sketch of the idea follows. I can add my actual code later if you need it.
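Something along these lines (the file name is a placeholder and this is untested as written):

import AVFoundation

// Sketch: prevent deep sleep by looping a silent file forever.
// Call stop() on the returned player when the screen comes back on.
func startSilenceLoop() throws -> AVAudioPlayer {
    let session = AVAudioSession.sharedInstance()
    // .mixWithOthers avoids fighting other apps over audio.
    try session.setCategory(.playback, mode: .default, options: [.mixWithOthers])
    try session.setActive(true)

    let url = Bundle.main.url(forResource: "silence", withExtension: "wav")!
    let player = try AVAudioPlayer(contentsOf: url)
    player.numberOfLoops = -1   // loop indefinitely
    player.play()
    return player
}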

play sound while recording fails

I have a 3rd-party SDK that handles audio recording. It has a callback that fires when the recording starts. In the callback I'm trying to play a sound to indicate to the user that the device is now listening (like Siri, or any other speech recognition tends to do), but when I try I get the following error:
AURemoteIO::ChangeHardwareFormats: error -10875
I have tried playing the sound using AudioServicesPlaySystemSound as well as an AVAudioPlayer both with the same result. The sound plays fine at other times, and per the error my assumption is there's an incompatibility between the playback and recording on the hardware level. Can anyone clarify this error, or give me a hint as to a possible workaround?
Make sure that the audio session is initialized and configured for play-and-record (AVAudioSessionCategoryPlayAndRecord) before you start the RemoteIO Audio Unit recording.
You shouldn't and likely can't start playing a sound in a RemoteIO recording callback. Only set a boolean flag in the callback to indicate that a sound should be played. Play your sound from the main UI run loop.
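A sketch of that flag pattern, with all names hypothetical; the point is that the recording callback only flips a flag, and the sound itself is played from the main thread:

import AVFoundation

final class RecordingChime {
    // Set from the recording callback; polled on the main thread.
    // In production this should be an atomic, not a plain Bool.
    private var shouldChime = false
    private var timer: Timer?
    private var player: AVAudioPlayer?

    // Cheap enough to call from the audio callback.
    func recordingDidStart() {
        shouldChime = true
    }

    // Call once from the main thread before recording begins.
    func startPolling() {
        timer = Timer.scheduledTimer(withTimeInterval: 0.05, repeats: true) { [weak self] _ in
            guard let self = self, self.shouldChime else { return }
            self.shouldChime = false
            self.timer?.invalidate()
            if let url = Bundle.main.url(forResource: "listening", withExtension: "caf") {
                self.player = try? AVAudioPlayer(contentsOf: url)
                self.player?.play()
            }
        }
    }
}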
My problem is related specifically to the external SDK and how they handle the audio interface. They override everything when you ask the SDK to start recording, if you take control back you break the recording session. So within the context of that SDK there's no way to work around it unless they fix the SDK.

How to handle AVPlayer errors while the app is running in the background?

I am using AVPlayer to play an audio stream, and it's possible to keep it playing in the background. I'm wondering how I could handle a situation where the user loses internet connectivity, so that I could provide some feedback or maybe try to re-establish playback after a few seconds.
EDIT: I know the question is about AVPlayer, but if you have an answer involving MPMoviePlayerController it might be useful as well. Right now, using MPMoviePlayerController, I'm trying to catch the MPMovieFinishReasonPlaybackError case of the MPMoviePlayerPlaybackDidFinishReasonUserInfoKey by subscribing to the MPMoviePlayerPlaybackDidFinishNotification. But if, e.g., my audio is playing in the background and I turn airplane mode on, I never get this notification; I only get MPMovieFinishReasonPlaybackEnded, and I don't know how to distinguish that from the case where the user stops the audio themselves.
I tried looking around for the actual source, but I remember reading somewhere that if audio playback stops (for whatever reason), it kills the background thread. The person writing about the issue talked about possibly feeding the stream some empty audio content to keep the thread alive. You might be able to send a local notification from an error callback, notifying the user that the audio experienced an error and will have to be manually restarted from within the application. I haven't played around with the API enough to know which callback is the best one to use in this case. If I find the link I'm looking for, I'll update.
EDIT: Here's Grant Pannell's take on audio streaming and multitasking.
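For reference, these are the hooks AVPlayer does expose for failures and stalls; a sketch only, with the reconnection logic left to the caller:

import AVFoundation

final class PlayerErrorWatcher {
    private var statusObservation: NSKeyValueObservation?
    private var noteObservers: [NSObjectProtocol] = []

    func watch(_ player: AVPlayer) {
        // Posted when an item can't play to its end (e.g. the network drops).
        noteObservers.append(NotificationCenter.default.addObserver(
            forName: .AVPlayerItemFailedToPlayToEndTime,
            object: player.currentItem, queue: .main) { note in
                let error = note.userInfo?[AVPlayerItemFailedToPlayToEndTimeErrorKey] as? Error
                print("Playback failed: \(String(describing: error))")
        })

        // Posted when playback stalls because the buffer ran dry.
        noteObservers.append(NotificationCenter.default.addObserver(
            forName: .AVPlayerItemPlaybackStalled,
            object: player.currentItem, queue: .main) { _ in
                print("Stalled; schedule a retry here")
        })

        // The item's status flips to .failed on unrecoverable errors.
        statusObservation = player.currentItem?.observe(\.status) { item, _ in
            if item.status == .failed {
                print("Item failed: \(String(describing: item.error))")
            }
        }
    }

    deinit { noteObservers.forEach { NotificationCenter.default.removeObserver($0) } }
}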
