Suppose I'm making an app that a user can install several little interactive experiences (meditations) onto.
For convenience, I'd like my users to be able to start one by saying: “Hey Siri, start Beach Sunset in Meditations.”
Because of reasons, it makes sense for users to perform this action by voice, without ever first having interacted with Beach Sunset in the iOS app. (They may for example already “own” it through my service's web app.)
That is to say: I want a voice action like “Hey Siri, start Beach Sunset in Meditations” to work even without the user setting up a Shortcut for it first, or me “donating” actions for it.
Is that possible? (I feel like many of the default apps expose similar behavior, but maybe they're special.) If not, what is the next best thing I can do?
Are "donations" necessary for Siri to be aware of my app's voice actions, or are they simply a mechanism for hinting and predicting user behavior?
Are "shortcuts" necessary for Siri to be aware of my app's voice actions, or are they simply a mechanism for user phrase customization?
I've never added Siri support to an iOS app, but it seems “parameters” have gotten a lot more powerful in iOS 13. This answer suggests something similar wasn't possible in iOS 12, but I think it's also doing something somewhat different (I want to launch the app; they want to “create an object” presumably just using Intent UI. I don't know if this matters.)
What I've done
I've defined a custom intent in the Start category (LaunchMeditation) with a single parameter (meditationName).
I considered the standard Media intents, but the media here is interactive and not strictly audio/video, and I don't want to get in trouble.
I've added an Intents Extension to my app, and written a rudimentary test "handler" that just tries to pass the meditation name on to the app:
@interface IntentHandler () <LaunchMeditationIntentHandling>
@end

@implementation IntentHandler

- (id)handlerForIntent:(INIntent *)intent {
    return self;
}

- (void)handleLaunchMeditation:(nonnull LaunchMeditationIntent *)intent
                    completion:(nonnull void (^)(LaunchMeditationIntentResponse * _Nonnull))completion {
    // XXX: Maybe activity can just be nil?
    NSUserActivity *activity = [[NSUserActivity alloc] initWithActivityType:@"com.example.meditations.activity.launch"];
    activity.title = [NSString stringWithFormat:@"Launch %@ in Meditations", intent.meditationName]; // Do I need this?
    activity.userInfo = @{@"meditationName": intent.meditationName};
    completion([[LaunchMeditationIntentResponse alloc] initWithCode:LaunchMeditationIntentResponseCodeContinueInApp
                                                       userActivity:activity]);
}

- (void)resolveMeditationNameForLaunchMeditation:(nonnull LaunchMeditationIntent *)intent
                                  withCompletion:(nonnull void (^)(INStringResolutionResult * _Nonnull))completion {
    completion([INStringResolutionResult successWithResolvedString:intent.meditationName]);
}

@end
When I test the Intents Extension, I can now make a Shortcut for it in Shortcuts, set its parameter, give it a name (like “Beach time”), and launch it by telling Siri that name — which is everything I don't want users to have to do.
Other than that, Siri responds with
Meditations hasn't added support for that with Siri.
…no matter how I phrase my request to start Beach Sunset. That "hasn't added support" wording makes it sound agonizingly as if there's simply something I'm missing.
I'll try to briefly answer all of your questions.
You can't create a Siri voice action without donating it, and even donated actions aren't automatically registered with Siri: users must still create a Shortcut before they can use them.
The next best thing you can do is to inform your users about your Siri Shortcuts, for example with a pop-up or on your onboarding screens. The good part is that you can redirect your users straight to the shortcut-creation ("Add to Siri") screen with the code below, which you can trigger from a button press or tap gesture.
let shortcut = INShortcut(userActivity: shortcutActivity!) // shortcutActivity is your donated activity.
let vc = INUIAddVoiceShortcutViewController(shortcut: shortcut)
vc.delegate = self // INUIAddVoiceShortcutViewControllerDelegate
self.present(vc, animated: true, completion: nil)
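Note that the controller doesn't dismiss itself; you also have to implement the delegate. A minimal sketch, assuming a hypothetical MyViewController is the one presenting it:

import IntentsUI

extension MyViewController: INUIAddVoiceShortcutViewControllerDelegate {

    // Called when the user has recorded a phrase (or edited the shortcut).
    func addVoiceShortcutViewController(_ controller: INUIAddVoiceShortcutViewController,
                                        didFinishWith voiceShortcut: INVoiceShortcut?,
                                        error: Error?) {
        controller.dismiss(animated: true, completion: nil)
    }

    // Called when the user taps Cancel.
    func addVoiceShortcutViewControllerDidCancel(_ controller: INUIAddVoiceShortcutViewController) {
        controller.dismiss(animated: true, completion: nil)
    }
}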
Shortcuts are necessary for Siri to be aware of your implementation.
As far as I know, intent domains help you specify more parameters for your Siri Shortcuts, which enables you to create richer Siri interactions.
Apple also promotes Siri Shortcuts of commonly used apps. If your users use your app on a regular basis, or more often than other apps, they might see a Siri Shortcut suggestion on the lock screen or in Search.
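Those suggestions are fed by donations. As a rough sketch (the helper name and invocation phrase below are my own assumptions, not part of the question), you could donate the custom LaunchMeditationIntent every time a meditation is actually started in-app:

import Intents

// Hypothetical helper: call whenever the user starts a meditation in the app,
// so the system can learn the pattern and later surface a Siri suggestion.
func donateLaunch(of meditationName: String) {
    let intent = LaunchMeditationIntent()                          // custom intent from the question
    intent.meditationName = meditationName
    intent.suggestedInvocationPhrase = "Start \(meditationName)"   // shown in the Add to Siri UI

    let interaction = INInteraction(intent: intent, response: nil)
    interaction.donate { error in
        if let error = error {
            print("Intent donation failed: \(error)")
        }
    }
}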
I also think it would be great to be able to register Siri Shortcuts without any user action, but there would be certain problems, such as:
What if two or more different apps use the same phrase for a Siri Shortcut?
How would Siri distinguish an unregistered command from a simple conversation? For example, if someone created a shortcut with the phrase "Hi Siri".
Even if you donate an action with a certain phrase, Siri must still learn how its user pronounces that phrase.
These could cause more harm than good, which I think is why Apple chose the current approach. Hope this answers your questions.
Related
I am writing an app that includes text-to-speech using AVSpeechSynthesizer. The code for generating the utterance and using the speech synthesizer has been working fine.
let utterance = AVSpeechUtterance(string: text)
utterance.voice = currentVoice
speechSynthesizer.speak(utterance)
Now with iOS 11, I want to match the voice to the one selected by the user in the phone's Settings app, but I do not see any way to get that setting.
I have tried getting the list of installed voices and looking for one that has a quality of .enhanced, but sometimes there is no enhanced voice installed, and even when there is, it may or may not be the voice selected by the user in the Settings app.
static var enhanced: AVSpeechSynthesisVoice? {
    for voice in AVSpeechSynthesisVoice.speechVoices() {
        if voice.quality == .enhanced {
            return voice
        }
    }
    return nil
}
The questions are twofold:
How can I determine which voice has been selected by the user in the Setting app?
Why on some iOS 11 phones that are using the new Siri voice am I not finding an "enhanced" voice installed?
I suppose that if there were a method available for selecting the same voice as in the Settings app, it would be shown in the documentation for the AVSpeechSynthesisVoice class under the Finding Voices topic. Jumping to the definition of AVSpeechSynthesisVoice in code, I couldn't find any other methods for retrieving voices.
Here's my workaround on getting an enhanced voice over for the app I am working on:
Enhanced versions of voices are probably not present on new iOS devices by default, in order to save storage. Iterating through the available voices on my brand new iPhone, I only found Default-quality voices, such as: [AVSpeechSynthesisVoice 0x1c4e11cf0] Language: en-US, Name: Samantha, Quality: Default [com.apple.ttsbundle.Samantha-compact]
I found this article on how to enable additional VoiceOver voices and downloaded the one named "Samantha (Enhanced)". Checking the list of available voices again, I noticed the following addition:
[AVSpeechSynthesisVoice 0x1c4c03060] Language: en-US, Name: Samantha (Enhanced), Quality: Enhanced [com.apple.ttsbundle.Samantha-premium]
I was now able to select an enhanced voice in Xcode. Given that the AVSpeechSynthesisVoice.currentLanguageCode() method exposes the currently selected language, I ran the following code to pick the first enhanced voice I could find; if no enhanced version was available, I'd just pick the available default. (The code below is from a custom voice-over class I am creating to handle all speech in my app; it updates the class's voice variable.)
var voice: AVSpeechSynthesisVoice!

for availableVoice in AVSpeechSynthesisVoice.speechVoices() {
    if availableVoice.language == AVSpeechSynthesisVoice.currentLanguageCode() &&
        availableVoice.quality == AVSpeechSynthesisVoiceQuality.enhanced {
        // Found the enhanced version of the currently selected language's voice.
        // Usually there's only one.
        self.voice = availableVoice
        print("\(availableVoice.name) selected as voice for uttering speeches. Quality: \(availableVoice.quality.rawValue)")
    }
}

if let selectedVoice = self.voice {
    // Successfully unwrapped: the loop above identified an enhanced voice.
    print("The following voice identifier has been loaded: ", selectedVoice.identifier)
} else {
    // No enhanced voice found; fall back to any voice matching the device's current language.
    self.voice = AVSpeechSynthesisVoice(language: AVSpeechSynthesisVoice.currentLanguageCode())
}
I am also hoping Apple will expose a method to directly load the selected voice, but I hope this workaround can serve you in the meantime. I guess Siri's enhanced voice is downloaded on the fly, so maybe that is the reason it takes so long to answer my voice commands :)
Best regards.
It looks like the new Siri voice in iOS 11 isn't part of the AVSpeechSynthesis API, and isn't available to developers.
In macOS 10.13 High Sierra (which also gets the new voice), there seems to be a new SiriTTS framework that's probably related to this functionality, but it's in PrivateFrameworks so it doesn't have a developer API.
I'll try to provide a more detailed answer. AVSpeechSynthesizer cannot use the Siri voice. Apple has locked this voice down for privacy reasons, since a malicious app could impersonate Siri and extract private information that way.
Apple hasn't changed this for years, but there is an ongoing initiative regarding it. iOS already has a permission system for accessing privacy-sensitive features, and there is no reason the Siri voice couldn't be gated behind a user permission in the same way. You can vote for this to happen via this petition, and with some luck Apple may implement it in the future: https://www.change.org/p/apple-apple-please-allow-3rd-party-apps-to-use-siri-voices-for-improved-accessibility
I am implementing Firebase Dynamic Links in my iOS app and I can already parse the link, redirect to the App Store, etc. Now I want to detect the first run of the app when the user has installed it from a dynamic link, so I can skip the intro and show them the content the link points to.
Is there some parameter I could catch in application(_:didFinishLaunchingWithOptions:) that tells me the app was launched through a dynamic link?
The method application(_:continue:restorationHandler:) is called later, when the intro has already started.
This case is difficult to test, because you have to have your app published on the App Store.
You actually don't need to have the app published in the App Store for this to work — clicking a link, closing the App Store, and then installing an app build through Xcode (or any other beta distribution platform like TestFlight or Fabric) has exactly the same effect.
According to the Firebase docs, the method that is called for the first install is openURL (no, this makes no sense to me either). The continueUserActivity method is for Universal Links, and is only used if the app is already installed when a link is opened.
I am not aware of any way to detect when the app is opening for the first time after install from a 'deferred' link, but you could simply route directly to the shared content (skipping the intro) whenever a deep link is present. If a deep link is NOT present, show the regular intro.
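A hedged sketch of that openURL path (the call names are from the Firebase Dynamic Links SDK; routeToSharedContent is a placeholder for your own routing):

import Firebase

// In your AppDelegate
func application(_ app: UIApplication, open url: URL,
                 options: [UIApplication.OpenURLOptionsKey: Any] = [:]) -> Bool {
    // On the first launch after install, the deferred link arrives as a custom-scheme URL.
    if let dynamicLink = FIRDynamicLinks.dynamicLinks()?.dynamicLink(fromCustomSchemeURL: url),
       let linkURL = dynamicLink.url {
        routeToSharedContent(linkURL)   // placeholder: skip the intro and show the linked content
        return true
    }
    return false
}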
Alternative Option
You could check out Branch.io (full disclosure: I'm on the Branch team). Amongst other things, Branch is a great, free drop-in replacement for Firebase Dynamic Links with a ton of additional functionality. Here is an example of all the parameters Branch returns immediately in didFinishLaunchingWithOptions:
{
    "branch_view_enabled" = 0;
    "browser_fingerprint_id" = "<null>";
    data = "{
        \"+is_first_session\":false,
        \"+clicked_branch_link\":true,
        \"+match_guaranteed\":true,
        \"$canonical_identifier\":\"room/OrangeOak\",
        \"$exp_date\":0,
        \"$identity_id\":\"308073965526600507\",
        \"$og_title\":\"Orange Oak\",
        \"$one_time_use\":false,
        \"$publicly_indexable\":1,
        \"room_name\":\"Orange Oak\", // this is a custom param, of which you may have an unlimited number
        \"~channel\":\"pasteboard\",
        \"~creation_source\":3,
        \"~feature\":\"sharing\",
        \"~id\":\"319180030632948530\",
        \"+click_timestamp\":1477336707,
        \"~referring_link\":\"https://branchmaps.app.link/qTLPNAJ0Jx\"
    }";
    "device_fingerprint_id" = 308073965409112574;
    "identity_id" = 308073965526600507;
    link = "https://branchmaps.app.link/?%24identity_id=308073965526600507";
    "session_id" = 319180164046538734;
}
You can read more about these parameters on the Branch documentation here.
Hmm... as far as I'm aware, there's not really anything you can catch in the application(_:didFinishLaunchingWithOptions:) phase that would let you know the app was being opened by a dynamic link. You're going to have to wait until the continueUserActivity call, as you mentioned.
That said, FIRDynamicLinks.dynamicLinks()?.handleUniversalLink returns a boolean value nearly instantly, so you should be able to take advantage of that to short-circuit your intro animation without it being a bad user experience. The callback itself might not happen until several milliseconds later, depending on whether it's a shortened dynamic link (which requires a network call) or an expanded one (which doesn't).
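A hedged sketch of that approach (skipIntroAnimation and routeToSharedContent are placeholder helpers in your app):

// In your AppDelegate
func application(_ application: UIApplication, continue userActivity: NSUserActivity,
                 restorationHandler: @escaping ([UIUserActivityRestoring]?) -> Void) -> Bool {
    guard let url = userActivity.webpageURL else { return false }

    // The Bool is returned almost immediately, even though the completion block may
    // fire a bit later (shortened links need a network round trip to resolve).
    let handled = FIRDynamicLinks.dynamicLinks()?.handleUniversalLink(url, completion: { dynamicLink, error in
        if let linkURL = dynamicLink?.url {
            self.routeToSharedContent(linkURL)   // placeholder routing helper
        }
    }) ?? false

    if handled {
        skipIntroAnimation()   // placeholder: short-circuit the intro right away
    }
    return handled
}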
I'm trying to automate an app, but suddenly, in the middle of the run, the Google permissions window pops up for permissions like phone, location, etc. Is there any way I can make sure permission pop-ups are always allowed?
Try to set desired capabilities:
autoAcceptAlerts = true
Since you said Google permissions, I am assuming you are dealing with Android. Also, since there is no language tag, I am sticking to Java; you can frame the logic in any language you are using.
Well, it's sad to inform you that there currently seems to be no such capability for Android, though iOS has a few similar capabilities.
So, for Android, what you can do logically is:
If these pop-ups are device-dependent, change the device settings so that these pop-ups are not shown.
If these pop-ups relate to application permissions, then you know when they will occur. Just keep a check:
List<WebElement> popUp = driver.findElements(<find the pop-up using your locator strategy>);
if (popUp.size() != 0) {
    WebElement acceptOrDismiss = driver.findElement(<find the accept/dismiss button accordingly>);
    acceptOrDismiss.click();
}
I'm in a little bit of trouble supporting Siri's Smart Reminders, which use NSUserActivity in order to create a contextual reminder.
Try it with Safari or Messages: say "Remind me about this". A reminder will be created with the title of the webpage / the message as the title of the reminder, plus a deep link back to Safari/Messages.
Back to my issue: I can create a contextual reminder with the title only, without even touching my code (thanks to Handoff), but I can't add a deep link to my app the way Safari or Messages do…
Here's my code (Swift):
let webHandoff = NSUserActivity(activityType: "com.jpierna.Trophies")
webHandoff.webpageURL = URL(string: BaseURL + "/game.php?id=\(detail.id)")
webHandoff.title = detail.title
self.userActivity = webHandoff
self.userActivity?.becomeCurrent()
Siri uses webHandoff.title to give the reminder a title. At first I expected Siri to attach my webpageURL to the reminder, but nothing. Then I searched for a way to add deep linking between the reminder and my app (e.g. the reminder opens my app with the same data, like URL schemes; no issue with that), but found nothing either.
Apple's documentation talks mainly about Spotlight Proactive Search and Core Spotlight, but not really about Siri's Smart Reminders.
Could someone show me how to properly add this to my app?
Thanks for reading!
Double-check the values under the NSUserActivityTypes key in your Info.plist file. I've found that if your activityType value isn't declared there, Siri's "remind me about this"/"remember this" smart reminder will only get the title in your NSUserActivity, but no deep-linking app icon will appear.
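For example, for the activity type used in the question, the Info.plist entry (shown here as XML source; the surrounding keys depend on your project) would look roughly like this:

<key>NSUserActivityTypes</key>
<array>
    <string>com.jpierna.Trophies</string>
</array>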
My app generates some sort of text information.
The user presses a button like "Share" in my app, and a window pops up with a list of installed applications (or only the apps that can receive a string parameter). The user then selects, for example, the "Mail" app, and it opens with a new email message containing the given text from my app. Or the user selects the Skype app and it opens with the given text.
How could those scenarios be implemented in iOS?
PS: I have already seen similar behavior in an Android app (via Intent extras).
UPDATE: I posted an answer below that works for me (via UIActivityViewController) exactly how I need.
There is no single answer that will work for all target apps. You need to research each app and see if it has a facility for receiving info from other apps.
A simple way to do this is to invoke a URL that targets the other app.
For mail, you could invoke a mailto: URL that composes an email with the text in the desired field(s) (to, cc, bcc, subject, or body), as sketched below.
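A hedged sketch of that idea in Swift (the recipient, subject, and body are made-up values; UIApplication.shared.open requires iOS 10+):

import UIKit

// Build a mailto: URL that pre-fills the subject and body with text from the app.
let textToShare = "Some text generated by my app"
var components = URLComponents()
components.scheme = "mailto"
components.path = "friend@example.com"            // hypothetical recipient
components.queryItems = [
    URLQueryItem(name: "subject", value: "Shared from my app"),
    URLQueryItem(name: "body", value: textToShare)
]
if let url = components.url {
    UIApplication.shared.open(url, options: [:], completionHandler: nil)  // hands off to the Mail app
}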
If the app supports the iOS document model you may be able to pass it a document to open.
If the target app has a server then you may also be able to connect to the server and send data to it that way. Again, this is not a question you can ask in general. The answer will be different for each target app, and for some apps the answer will be "you can't, because it doesn't have any mechanism to receive data from an outside app."
Android is a different beast with different abilities than iOS. iOS is more of a "walled garden", with very limited access outside of your app.
I found that the best solution for me is:
- (IBAction)onShare:(id)sender {
    NSString *textStr = self.textToShare.text;
    NSArray *items = @[textStr];
    UIActivityViewController *activity = [[UIActivityViewController alloc]
                                          initWithActivityItems:items
                                          applicationActivities:nil];
    [self presentViewController:activity animated:YES completion:nil];
}
It does exactly what I need: it shows a popup view with the list of apps that can receive a text string. The user can then select any of them, and the controller sends the text to that app.