I'm working on an app that plays sound files. If I open the Apple Music app, the Control Center slider lets me scrub to any position within the current song.
Other apps like Spotify or Overcast do not allow this behaviour.
So far I have been able to control every parameter of Control Center except that one. Is there any way of making this slider usable?
I'm using something like the following code:
MPRemoteCommandCenter *commandCenter = [MPRemoteCommandCenter sharedCommandCenter];
NSArray *commands = @[commandCenter.playCommand, commandCenter.pauseCommand, commandCenter.nextTrackCommand, commandCenter.previousTrackCommand, commandCenter.bookmarkCommand, commandCenter.changePlaybackPositionCommand, commandCenter.changePlaybackRateCommand, commandCenter.dislikeCommand, commandCenter.enableLanguageOptionCommand, commandCenter.likeCommand, commandCenter.ratingCommand, commandCenter.seekBackwardCommand, commandCenter.seekForwardCommand, commandCenter.skipBackwardCommand, commandCenter.skipForwardCommand, commandCenter.stopCommand, commandCenter.togglePlayPauseCommand];
for (MPRemoteCommand *command in commands) {
    [command removeTarget:nil];
    [command setEnabled:NO];
}
[commandCenter.playCommand addTarget:self action:@selector(playTrack)];
[commandCenter.pauseCommand addTarget:self action:@selector(pauseTrack)];
[commandCenter.playCommand setEnabled:YES];
[commandCenter.pauseCommand setEnabled:YES];
The New Way for iOS 12+ and Swift 4+
The following is an improved way of handling scrubbing from the remote command center:
// Handle remote events
func setupRemoteTransportControls() {
    let commandCenter = MPRemoteCommandCenter.shared()

    // Scrubber
    commandCenter.changePlaybackPositionCommand.addTarget { [weak self] (remoteEvent) -> MPRemoteCommandHandlerStatus in
        guard let self = self else { return .commandFailed }
        if let player = self.player {
            let playerRate = player.rate
            if let event = remoteEvent as? MPChangePlaybackPositionCommandEvent {
                player.seek(to: CMTime(seconds: event.positionTime, preferredTimescale: CMTimeScale(1000)), completionHandler: { [weak self] (success) in
                    guard let self = self else { return }
                    if success {
                        // Restore the rate the player had before seeking.
                        self.player?.rate = playerRate
                    }
                })
                return .success
            }
        }
        return .commandFailed
    }

    // Register to receive events
    UIApplication.shared.beginReceivingRemoteControlEvents()
}
There is the changePlaybackPositionCommand API, with the associated event MPChangePlaybackPositionCommandEvent.positionTime (see https://developer.apple.com/library/ios/releasenotes/General/iOS91APIDiffs/Objective-C/MediaPlayer.html).
I tried
[commandCenter.changePlaybackPositionCommand
    addTarget: self
    action: @selector(onChangePlaybackPositionCommand:)]
with the associated method
- (MPRemoteCommandHandlerStatus)onChangePlaybackPositionCommand:
    (MPChangePlaybackPositionCommandEvent *)event
{
    NSLog(@"changePlaybackPosition to %f", event.positionTime);
    return MPRemoteCommandHandlerStatusSuccess;
}
but the cursor is still not movable and the method is never called. I guess I'm still missing something.
In addition to implementing the callback, it seems that you also have to set the canBeControlledByScrubbing property to YES. Unfortunately, there is no public accessor for it, so you have to do it as follows (and won't be able to submit your app to the App Store):
NSNumber *shouldScrub = [NSNumber numberWithBool:YES];
[[[MPRemoteCommandCenter sharedCommandCenter] changePlaybackPositionCommand]
    performSelector:@selector(setCanBeControlledByScrubbing:) withObject:shouldScrub];
[[[MPRemoteCommandCenter sharedCommandCenter] changePlaybackPositionCommand]
    addTarget:self
    action:@selector(handleChangePlaybackPositionCommand:)];
If you do it like this, you will get the callback on your handleChangePlaybackPositionCommand: method, taking an MPChangePlaybackPositionCommandEvent as its only parameter.
If you want to support versions of iOS older than 9.1, I'd suggest checking the iOS version before executing the above code, to prevent crashes. You can either do it using the new API introduced with iOS 8, or, if you want to support iOS 7 and earlier as well, using something like:
[[[UIDevice currentDevice] systemVersion] compare:@"9.1" options:NSNumericSearch] != NSOrderedAscending
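(For reference, on a modern toolchain the same gate can be expressed with Swift's availability check; this is a sketch, not part of the original answer.)
import MediaPlayer

if #available(iOS 9.1, *) {
    // changePlaybackPositionCommand exists from iOS 9.1 onwards.
    MPRemoteCommandCenter.shared().changePlaybackPositionCommand.isEnabled = true
}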
I hope this helps :-)
As of 2020, with Swift 5: using only commandCenter.changePlaybackPositionCommand won't work. We also need to set the now-playing metadata, including MPMediaItemPropertyPlaybackDuration, for the player; then the slider becomes usable (see the sketch below).
Check this article from Apple:
https://developer.apple.com/documentation/avfoundation/media_assets_playback_and_editing/creating_a_basic_video_player_ios_and_tvos/controlling_background_audio
Look at the section: Provide Display Metadata
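As a minimal sketch of that metadata step (assuming an AVPlayer-based setup; the title string is a placeholder):
import AVFoundation
import MediaPlayer

// Sketch: publish duration and elapsed time so the lock-screen slider works.
func updateNowPlayingInfo(for player: AVPlayer) {
    guard let item = player.currentItem else { return }
    var info = MPNowPlayingInfoCenter.default().nowPlayingInfo ?? [:]
    info[MPMediaItemPropertyTitle] = "Track title"                      // placeholder
    info[MPMediaItemPropertyPlaybackDuration] = item.duration.seconds  // NaN until the item is ready
    info[MPNowPlayingInfoPropertyElapsedPlaybackTime] = player.currentTime().seconds
    info[MPNowPlayingInfoPropertyPlaybackRate] = player.rate
    MPNowPlayingInfoCenter.default().nowPlayingInfo = info
}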
There's no API to support that; as you've noticed, the built-in Music app is the only one that gets to use it. If you'd like an addition to the API, your best option is to file an enhancement request.
Update: it looks like as of iOS 9.1 this may no longer be the case; see PatrickV's answer.
I have answered this here: command center Scrubber on lock screen swift.
Basically you just need the now-playing metadata; then, for each function you want, you enable the command and add a handler for it (a sketch of the pattern follows below).
I realised that the comments claiming this was Apple-only were wrong when I saw that the Librivox app had a working scrubber.
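As a rough illustration of that enable-plus-handler pattern (not code from the linked answer; the 15-second interval and the seekForward(by:) call are placeholders):
import MediaPlayer

let center = MPRemoteCommandCenter.shared()
center.skipForwardCommand.isEnabled = true
center.skipForwardCommand.preferredIntervals = [15]
center.skipForwardCommand.addTarget { event in
    guard let skip = event as? MPSkipIntervalCommandEvent else { return .commandFailed }
    // `seekForward(by:)` is a placeholder for however your player skips ahead.
    seekForward(by: skip.interval)
    return .success
}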
I have a camera view using AVFoundation, and if a phone call or Skype call is active we can't use the camera.
How can I check whether AVFoundation won't open, so that I can show another view that doesn't use the camera?
If I check this:
BOOL isPlayingWithOthers = [[AVAudioSession sharedInstance] isOtherAudioPlaying];
then the view won't open when any other app is playing audio, either.
Any suggestions?
The CTCallCenter object has a currentCalls property, which is an NSSet of the current calls. If there is a call, the currentCalls property will be != nil.
If you want to know whether any of the calls is actually connected, you'll have to iterate through the current calls and check each callState to determine whether it is CTCallStateConnected or not.
#import <CoreTelephony/CTCallCenter.h>
#import <CoreTelephony/CTCall.h>

- (BOOL)isOnPhoneCall {
    // Returns YES if the user is currently on a phone call.
    CTCallCenter *callCenter = [[[CTCallCenter alloc] init] autorelease];
    for (CTCall *call in callCenter.currentCalls) {
        if (call.callState == CTCallStateConnected) {
            return YES;
        }
    }
    return NO;
}
I am using this method in Swift; Tarun's answer helped me.
import CallKit
func isOnPhoneCall() -> Bool {
    // Returns true if the user is currently on a phone call.
    for call in CXCallObserver().calls {
        if call.hasEnded == false {
            return true
        }
    }
    return false
}
Your app delegate will receive the -applicationDidResignActive message, and your app can listen for the UIApplicationDidResignActiveNotification. These will be received when your app is interrupted by a call, as well as in other cases where the app is interrupted, such as when the screen locks or the user presses the lock button.
For more details on handling interruptions, see Responding to Interruptions.
Also refer to the Stack Overflow post How can we detect call interruption in our iphone application?
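For example, a minimal sketch of observing that notification (using the modern Swift name for it):
import UIKit

// Pause the camera/capture session when the app is interrupted
// (incoming call, screen lock, etc.). Keep the returned token so the
// observer can be removed later.
let token = NotificationCenter.default.addObserver(
    forName: UIApplication.didResignActiveNotification,
    object: nil,
    queue: .main
) { _ in
    // Stop or pause the AVCaptureSession here.
}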
I'm trying to test the iOS 8.1 Handoff feature with NSUserActivity between my iPhone and my iPad. For this, I tried both implementing my own solution and using Apple's PhotoHandoff project. However, it's not working.
If I provide a webpageURL, the handover works fine, but when I try to use userData or addUserInfoEntriesFromDictionary, nothing works, and I can't for the life of me figure out what the catch is to make the data work.
Sample code:
NSUserActivity *activity = [[NSUserActivity alloc] initWithActivityType:@"com.company.MyTestApp.activity"];
activity.title = @"My Activity";
activity.userInfo = @{};
// activity.webpageURL = [NSURL URLWithString:@"http://google.com"];
self.userActivity = activity;
[self.userActivity becomeCurrent];
[self.userActivity addUserInfoEntriesFromDictionary:@{ @"nanananan": @[ @"totoro", @"monsters" ] }];
(I'm also unable to make it work with a Mac app with a corresponding activity type)
I hope you have found the solution already, but in case somebody stumbles upon this problem too, here is a solution. (Actually not very different from the previous answer.)
Create the user activity without userInfo; anything set there at creation time will be ignored:
NSUserActivity *activity = [[NSUserActivity alloc] initWithActivityType:@"..."];
activity.title = @"Test activity";
activity.delegate = self;
activity.needsSave = YES;
self.userActivity = activity;
[self.userActivity becomeCurrent];
Implement the delegate to react to needsSave events:
- (void)userActivityWillSave:(NSUserActivity *)userActivity {
    userActivity.userInfo = @{ @"KEY" : @"VALUE" };
}
When needsSave is set to YES, this method will be called and the userActivity will be updated.
Hope this helps.
To update the activity object’s userInfo dictionary, you need to configure its delegate and set its needsSave property to YES whenever the userInfo needs updating.
This process is described in the best practices section of the Adopting Handoff guide.
For example, with a simple UITextView, you need to declare the activity type identifier ("com.company.app.edit") in the NSUserActivityTypes array of your Info.plist property list file, then:
- (NSUserActivity *)customUserActivity
{
    if (!_customUserActivity) {
        _customUserActivity = [[NSUserActivity alloc] initWithActivityType:@"com.company.app.edit"];
        _customUserActivity.title = @"Editing in app";
        _customUserActivity.delegate = self;
    }
    return _customUserActivity;
}

- (void)textViewDidBeginEditing:(UITextView *)textView
{
    [self.customUserActivity becomeCurrent];
}

- (void)textViewDidChange:(UITextView *)textView
{
    self.customUserActivity.needsSave = YES;
}

- (BOOL)textViewShouldEndEditing:(UITextView *)textView
{
    [self.customUserActivity invalidate];
    return YES;
}

- (void)userActivityWillSave:(NSUserActivity *)userActivity
{
    [userActivity addUserInfoEntriesFromDictionary:@{ @"editText" : self.textView.text }];
}
FWIW, I was having this issue. I was lucky that one of my Activity types worked and the other didn't:
Activity: Walking
(UserInfo x1,y1)
(UserInfo x2,y2)
(UserInfo x3,y3)
Activity: Standing
(UserInfo x4,y4)
Activity: Walking
etc.
I got userInfo if the handoff occurred while standing, but not while walking. I got other properties such as webpageURL in all cases; only userInfo came through null.
The fix for me was to invalidate and recreate the NSUserActivity object every time (e.g. when walking to x2/y2 from x1/y1), instead of only when the activity type changed (e.g. from walking to standing). This is very much not the way the doc is written, but it fixed the issue on iOS 9.
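In code, that workaround looked roughly like this (a sketch; the activity type and userInfo keys are placeholders, not the original code):
import Foundation

var currentActivity: NSUserActivity?

// Recreate the activity on every coordinate change instead of mutating
// userInfo in place.
func publishWalkingActivity(x: Double, y: Double) {
    currentActivity?.invalidate()
    let activity = NSUserActivity(activityType: "com.company.app.walking")
    activity.title = "Walking"
    activity.userInfo = ["x": x, "y": y]
    activity.becomeCurrent()
    currentActivity = activity
}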
UPDATE: This workaround doesn't work on iOS 8. There you need to implement this via the userActivityWillSave delegate method, as gregoryM specified. Per Apple's doc:
To update the activity object's userInfo dictionary efficiently, configure its delegate and set its needsSave property to YES whenever the userInfo needs updating. At appropriate times, Handoff invokes the delegate's userActivityWillSave: callback, and the delegate can update the activity state.
This isn't a "best practice", it is required!
[Note: the issue occurred on iOS 9 devices running code built with Xcode 6.x. I haven't tested Xcode 7 yet, and the issue may not occur on iOS 8.]
You can get a list of the keyboards installed on the iOS device using:
NSUserDefaults *userDefaults = [NSUserDefaults standardUserDefaults];
NSDictionary *userDefaultsDict = [userDefaults dictionaryRepresentation];
NSLog(@"%@", userDefaultsDict);
This yields something in the console like:
{
    ...
    AppleKeyboards = (
        "en_US@hw=US;sw=QWERTY",
        "es_ES@hw=Spanish - ISO;sw=QWERTY-Spanish",
        "emoji@sw=Emoji",
        "com.swiftkey.SwiftKeyApp.Keyboard"
    );
    AppleKeyboardsExpanded = 1;
    ...
}
This tells me that the device has the Spanish, Emoji, and SwiftKey keyboards installed, but it tells me nothing about which one will be used when the keyboard comes up.
Is there a way to tell?
There is no public API for this, but I found a solution for you which requires very little "gray area API" (I call an API "gray area" if it is not normally exposed but can be reached with little to no work).
iOS has the following class: UITextInputMode
This class gives you all the input methods the user can use. The following query will give you the currently used one, but only while the keyboard is open:
UITextInputMode *inputMode = [[[UITextInputMode activeInputModes] filteredArrayUsingPredicate:[NSPredicate predicateWithFormat:@"isDisplayed = YES"]] lastObject];
To get the display name of the extension (or regular Apple keyboard), use:
[inputMode valueForKey:@"displayName"]
or
[inputMode valueForKey:@"extendedDisplayName"]
This only works while the keyboard is visible, so you will have to monitor input-mode changes yourself using:
[[NSNotificationCenter defaultCenter] addObserverForName:UITextInputCurrentInputModeDidChangeNotification
                                                  object:nil
                                                   queue:[NSOperationQueue mainQueue]
                                              usingBlock:^(NSNotification *note)
{
    dispatch_async(dispatch_get_main_queue(), ^{
        NSLog(@"%@", [[[[UITextInputMode activeInputModes] filteredArrayUsingPredicate:[NSPredicate predicateWithFormat:@"isDisplayed = YES"]] lastObject] valueForKey:@"extendedDisplayName"]);
    });
}];
We actually need to delay obtaining the current input mode, because the notification is sent before the keyboard's internal implementation has updated the system with the new value. Obtaining it on the next runloop works well.
Leo Natan's answer is great, but I would like to add something to it. You can actually get the current input mode at any time, not just when the keyboard is open, like this:
UITextView *textView = [[UITextView alloc] init];
UITextInputMode *inputMode = textView.textInputMode;
Please note that textView.textInputMode is nil for the Emoji keyboard for some reason.
Also, in addition to displayName and extendedDisplayName, there are other keys you can retrieve, such as identifier, normalizedIdentifier (iOS 8+), hardwareLayout, ... See the full API here:
https://github.com/nst/iOS-Runtime-Headers/blob/master/Frameworks/UIKit.framework/UIKeyboardInputMode.h
Now I'm not sure if using any of those is more risky than displayName for App Store approval...
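For completeness, reading those keys from Swift is the same KVC trick (a sketch; these keys are private API, so the same App Store caveat applies):
import UIKit

if let mode = UITextInputMode.activeInputModes.first {
    // "displayName" and "identifier" are private UIKeyboardInputMode keys.
    let display = mode.value(forKey: "displayName")
    let identifier = mode.value(forKey: "identifier")
    print(display ?? "unknown", identifier ?? "unknown")
}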
This works for me in Swift 5.0:
NotificationCenter.default.addObserver(self, selector: #selector(keyBoardChanged(_:)), name: UITextInputMode.currentInputModeDidChangeNotification, object: nil)

@objc func keyBoardChanged(_ notification: NSNotification) {
    if let identifier = textField.textInputMode?.perform(NSSelectorFromString("identifier"))?.takeUnretainedValue() as? String {
        if identifier == "YOUR APP IDENTIFIER" {
            // Do whatever you require :)
        }
    }
}
@Leo Natan's answer is cool, but it may return nil when the keyboard has not been displayed yet.
So here I use a string to access the UIKeyboardInputMode property.
I can tell you that this can find the current keyboard because it comes from Apple's private API.
Code here:
+ (BOOL)isTheCustomKeyboard
{
    UITextInputMode *inputMode = [UITextInputMode currentInputMode];
    if ([inputMode respondsToSelector:NSSelectorFromString(@"identifier")])
    {
        NSString *identifier = [inputMode performSelector:NSSelectorFromString(@"identifier")];
        if ([identifier isEqualToString: YOUR_APP_ID])
        {
            return YES;
        }
    }
    return NO;
}
And more:
+ (BOOL)containsCustomKeyboard
{
    NSArray *inputModes = [UITextInputMode activeInputModes];
    for (id inputMode in inputModes)
    {
        if ([inputMode respondsToSelector:NSSelectorFromString(@"identifier")])
        {
            NSString *identifier = [inputMode performSelector:NSSelectorFromString(@"identifier")];
            if ([identifier isEqualToString: YOUR_APP_ID])
            {
                return YES;
            }
        }
    }
    return NO;
}
Actually, we can also use the displayName or the identifier, and more.
Swift 5
let inputMode = UIApplication.shared.delegate?.window??.textInputMode
if inputMode?.responds(to: NSSelectorFromString("identifier")) ?? false {
    // takeUnretainedValue() (rather than takeRetainedValue()) avoids
    // over-releasing the value returned by this getter.
    let identifier = inputMode?.perform(NSSelectorFromString("identifier"))?.takeUnretainedValue() as? String
    print(identifier ?? "") // current keyboard identifier
}
Problem:
I have a UITextField side by side with a UIButton that has send functionality. When the user presses the send button, I perform this simple action:
- (IBAction)sendMessage: (id)sender {
    [self.chatService sendMessage: self.messageTextField.text];
    self.messageTextField.text = @""; // here I get the exception
}
Now, when the user starts using dictation from the keyboard, then presses Done on the dictation view (keyboard) and immediately presses the send button, I get the exception "Range or index out of bounds".
Possible solution:
I've noticed that other applications disable this "send" button while the speech recognition server is processing data, i.e. exactly between two events: the user pressing "Done" and the results appearing in the text field. I wish to solve it in the same manner.
My problem is finding where in the documentation this notification can be received. I've found the UITextInput protocol, but this is not what I need.
Similar topics:
Using Dictation - iOS 6 - DidStart - solution not acceptable (might be rejected by Apple)
Disable Dictation button on the keyboard of iPhone 4S / new iPad - similar approach as above
What have I tried:
Simply catching and ignoring the exception. The crash no longer occurred, but the virtual keyboard became completely unresponsive.
Disabling the send button when [UITextInputMode currentInputMode].primaryLanguage equals @"dictation". The notification UITextInputCurrentInputModeDidChangeNotification, which reports the end of dictation mode, arrives before the dictation service commits the new value, so I'm still able to click the send button and cause the exception. I could add a delay after primaryLanguage loses its @"dictation" value, but I don't like this approach; most probably the required delay depends on how responsive the speech recognition service is.
I've added a bunch of actions for different events (these events looked promising: UIControlEventEditingDidBegin, UIControlEventEditingChanged, UIControlEventEditingDidEnd, UIControlEventEditingDidEndOnExit). The good thing is that UIControlEventEditingChanged fires exactly at the desired moments: when the user presses "Done" on the dictation view and when the service commits or ends dictation. So this is my best concept so far. The bad thing is that it also fires in other cases, and there is no information to distinguish why it fired, so I don't know whether I should disable the button, enable it, or do nothing.
I finally found the ultimate solution.
It is simple and elegant, it will pass Apple review, and it always works. Just react to UIControlEventEditingChanged and detect the existence of the replacement character, like this:
- (void)viewDidLoad {
    [super viewDidLoad];
    [self.textField addTarget: self
                       action: @selector(eventEditingChanged:)
             forControlEvents: UIControlEventEditingChanged];
}

- (IBAction)eventEditingChanged:(UITextField *)sender {
    NSRange range = [sender.text rangeOfString: @"\uFFFC"];
    self.sendButton.enabled = range.location == NSNotFound;
}
Old approach
Finally, I found a solution. This is improved concept no. 3 mixed with concept no. 2 (based on that answer).
- (void)viewDidLoad {
    [super viewDidLoad];
    [self.textField addTarget: self
                       action: @selector(eventEditingChanged:)
             forControlEvents: UIControlEventEditingChanged];
}

- (IBAction)eventEditingChanged:(UITextField *)sender {
    NSString *primaryLanguage = [UITextInputMode currentInputMode].primaryLanguage;
    if ([primaryLanguage isEqualToString: @"dictation"]) {
        self.sendButton.enabled = NO;
    } else {
        // restore normal text field state
        self.sendButton.enabled = self.textField.text.length > 0;
    }
}

- (IBAction)sendMessage: (id)sender {
    [self.chatService sendMessage: self.messageTextField.text];
    self.messageTextField.text = @"";
}

- (BOOL)textFieldShouldReturn:(UITextField *)textField {
    if (self.textField.text.length == 0 || !self.sendButton.enabled) {
        return NO;
    }
    [self sendMessage: textField];
    return YES;
}

// other UITextFieldDelegate methods ...
Now the problem doesn't appear, since the user is blocked exactly when it could happen (between pressing the "Done" button on the dictation view and the results coming back from the speech recognition service).
The good thing is that only public API is used (only the @"dictation" string could be a problem, but I think it should be accepted by Apple).
In iOS 7 Apple introduced TextKit, so there is new information for this question:
NSAttachmentCharacter = 0xfffc
is used to denote an attachment, as the documentation says.
So, if your deployment target is 7.0 or later, the better approach is to check the attributed string for attachments.
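A sketch of that check in Swift (assuming you have the field's attributedText):
import UIKit

// Returns true if the attributed string still contains an attachment,
// which is how the in-progress dictation placeholder (U+FFFC) shows up.
func containsAttachment(_ text: NSAttributedString) -> Bool {
    var found = false
    text.enumerateAttribute(.attachment,
                            in: NSRange(location: 0, length: text.length),
                            options: []) { value, _, stop in
        if value != nil {
            found = true
            stop.pointee = true
        }
    }
    return found
}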
I'd like to crossfade from one track to the next in a Spotify-enabled app. Both tracks are Spotify tracks, and since only one data stream at a time can come from Spotify, I suspect I need to buffer the last few seconds of the first track (I think I can read ahead at 1.5x playback speed), start the stream for track two, then fade out one and fade in two using an AudioUnit.
I've reviewed sample apps:
Viva - https://github.com/iKenndac/Viva
SimplePlayer with EQ - https://github.com/iKenndac/SimplePlayer-with-EQ
and tried to get my mind around the SPCircularBuffer, but I still need help. Could someone point me to another example or help bullet-point a track-crossfade game plan?
Update: Thanks to iKenndac, I'm about 95% there. I'll post what I have so far:
In SPPlaybackManager.m, in initWithPlaybackSession:(SPSession *)aSession, I added:
self.audioController2 = [[SPCoreAudioController alloc] init];
self.audioController2.delegate = self;
and in
- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context {
    ...
    self.audioController.audioOutputEnabled = self.playbackSession.isPlaying;
    // for crossfade, add
    self.audioController2.audioOutputEnabled = self.playbackSession.isPlaying;
and added a new method based on playTrack
- (void)crossfadeTrack:(SPTrack *)aTrack callback:(SPErrorableOperationCallback)block {
    // switch audio controller from current to other
    if (self.playbackSession.audioDeliveryDelegate == self.audioController)
    {
        self.playbackSession.audioDeliveryDelegate = self.audioController2;
        self.audioController2.delegate = self;
        self.audioController.delegate = nil;
    }
    else
    {
        self.playbackSession.audioDeliveryDelegate = self.audioController;
        self.audioController.delegate = self;
        self.audioController2.delegate = nil;
    }
    if (aTrack.availability != SP_TRACK_AVAILABILITY_AVAILABLE) {
        if (block) block([NSError spotifyErrorWithCode:SP_ERROR_TRACK_NOT_PLAYABLE]);
        self.currentTrack = nil;
    }
    self.currentTrack = aTrack;
    self.trackPosition = 0.0;
    [self.playbackSession playTrack:self.currentTrack callback:^(NSError *error) {
        if (!error)
            self.playbackSession.playing = YES;
        else
            self.currentTrack = nil;
        if (block) {
            block(error);
        }
    }];
}
This starts a timer for the crossfade:
crossfadeTimer = [NSTimer scheduledTimerWithTimeInterval: 0.5
                                                  target: self
                                                selector: @selector(crossfadeCountdown)
                                                userInfo: nil
                                                 repeats: YES];
And in order to keep the first track playing after its data has loaded, in SPCoreAudioController.m I changed the target buffer length:
static NSTimeInterval const kTargetBufferLength = 20;
and in SPSession.m, in end_of_track(sp_session *session), I commented out:
// sess.playing = NO;
I call preloadTrackForPlayback: about 15 seconds before the end of the track, then crossfadeTrack: at 10 seconds before.
Then I set crossfadeCountdownTime = [how many seconds you want the crossfade] * 2;
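In outline, that scheduling looks roughly like this (Swift-style pseudocode; duration, trackPosition, the two did* flags and the two wrapper calls are placeholders for the state and methods described above):
// Called periodically, e.g. from a playback-position timer.
func checkCrossfadeSchedule() {
    let remaining = duration - trackPosition
    if remaining <= 15 && !didPreload {
        didPreload = true
        preloadNextTrack()        // wraps preloadTrackForPlayback:callback:
    }
    if remaining <= 10 && !didStartCrossfade {
        didStartCrossfade = true
        crossfadeToNextTrack()    // wraps the crossfadeTrack:callback: above
        crossfadeCountdownTime = thisCrossfadeSeconds * 2   // the timer ticks every 0.5 s
    }
}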
I fade the volume over the crossfade with:
- (void)crossfadeCountdown
{
    [UIAppDelegate.playbackSPManager setVolume:(1 - (((float)crossfadeCountdownTime / (thisCrossfadeSeconds * 2.0)) * 0.2))];
    crossfadeCountdownTime -= 0.5;
    if (crossfadeCountdownTime == 1.0)
    {
        NSLog(@"Crossfade countdown done");
        crossfadeCountdownTime = 0;
        [crossfadeTimer invalidate];
        crossfadeTimer = nil;
        [UIAppDelegate.playbackSPManager setVolume:1.0];
    }
}
I'll keep working on it, and update if I can make it better. Thanks again to iKenndac for his always spot-on help!
There isn't a pre-written crossfade example that I'm aware of that uses CocoaLibSpotify. However, a (perhaps not ideal) game plan would be:
Make two separate audio queues. SPCoreAudioController is an encapsulation of an audio queue, so you should just be able to instantiate two of them.
Play music as normal to one queue. When you're approaching the end of the track, call SPSession's preloadTrackForPlayback:callback: method with the next track to get it ready.
When all audio data for the playing track has been delivered, SPSession will fire the audio delegate method sessionDidEndPlayback:. However, since CocoaLibSpotify buffers the audio from libspotify, there's still some time before the audio actually stops.
At this point, start playing the new track but divert the audio data to the second audio queue. Start ramping down the volume of the first queue while ramping up the volume of the next one. This should give a pleasing crossfade.
A few pointers:
In SPCoreAudioController.m, you'll find the following line, which defines how much audio CocoaLibSpotify buffers, in seconds. If you want a bigger crossfade, you'll need to increase it.
static NSTimeInterval const kTargetBufferLength = 0.5;
Since you get audio data at a maximum of 1.5x actual playback speed, be careful not to do, for example, a 5 second crossfade when the user has just skipped near to the end of the track. You might not have enough audio data available to pull it off.
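One back-of-the-envelope way to cap it (my own reasoning, not from the library): while the remaining R seconds play at 1x you can fetch up to 1.5R seconds of audio, so you gain at most 0.5R on top of what is already buffered.
import Foundation

// Hypothetical helper: cap the crossfade by what can plausibly be buffered.
func safeCrossfadeDuration(desired: TimeInterval,
                           bufferedSeconds: TimeInterval,
                           remainingSeconds: TimeInterval) -> TimeInterval {
    let attainable = bufferedSeconds + remainingSeconds * 0.5
    return min(desired, attainable)
}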
Take a good look at SPPlaybackManager.m. This class is the interface between CocoaLibSpotify and Core Audio. It's not too complicated, and understanding it will get you a long way. SPCoreAudioController and SPCircularBuffer are pretty much implementation details of getting the audio into Core Audio, and you shouldn't need to understand their implementations to achieve what you want.
Also, make sure you understand the various delegates SPSession has. The audio delivery delegate only has one job - to receive audio data. The playback delegate gets all other playback events - when audio has finished being delivered to the audio delivery delegate, etc. There's nothing stopping one class being both, but in the current implementation, SPPlaybackManager is the playback delegate, which creates an instance of SPCoreAudioController to be the audio delivery delegate. If you modify SPPlaybackManager to have two Core Audio controllers and alternate which one is the audio delivery delegate, you should be golden.