I'm currently working on an action using the Actions on Google SDK together with Microsoft's Bot Framework. In this action I've built a fallback that allows the user to enter a product code on their phone if they have failed to do so a couple of times through voice. This setup works fine in English, but my action is multi-lingual and supports Dutch and French too.
The problem I am running into is that when a user is using my action in Dutch or French and accepts to move the conversation to their phone, the conversation continues in English once it is on the phone. Below is the code I use in my handler.
New Surface handler
endpoint.intent(GoogleIntentTypes.NewSurface, async (conv: ActionsSdkConversation) => {
  logger.logDebug("Received new surface request");
  const locale = conv.user.locale;
  if (conv.arguments!.get('NEW_SURFACE')!.status! === 'OK') {
    conv.ask(this.messages.getResponse("AskForProductNumber_SSML", locale));
  } else {
    conv.close(this.messages.getResponse("EndConversation_SSML", locale));
  }
});
From the moment the request enters my webhook, my conversation's locale has switched to en-US. This makes me think the locale is taken from a setting on my phone, but I can't find anything about this in the docs. Does anyone know what could be causing the locale switch when performing a handover to the phone?
My understanding is that the locale is based on the locale of the device that has sent the request.
This page on "languages and locales" (emphasis mine) says:
Locales are constructed using the language set in the Assistant settings and the region set in the device settings. The combination of these needs to form a supported locale. For example, a device set to the BR region and the Assistant language set to en-US results in an en-BR locale, which is not supported by Actions on Google.
I'm building an app using the new Actions on Google Java API. As I understand from dealing with account linking in Alexa, the initial flow (when the userId in the JSON request is null) should redirect to a sign-in form to elicit user consent:
#ForIntent("RawText")
public ActionResponse launchRequestHandler(ActionRequest request) {
String userId = request.getAppRequest().getUser().getUserId();
String queryText = request.getWebhookRequest().getQueryResult().getQueryText();
String speech = null;
ResponseBuilder responseBuilder = getResponseBuilder(request);
if (isBlank(userId) || GREETING.equalsIgnoreCase(queryText)) {
speech = "I've sent a link to your Google Assistant app that will get you started and set up in just several simple steps.";
responseBuilder.add(
new SignIn()
.setContext(speech));
//...
return responseBuilder.build();
While testing in the AoG Simulator, however, I'm not seeing any redirection; instead I get an error suggesting that I continue on my phone.
In my account linking setup, the authorization URL redirects to a local mock auth service which is supposed to display a login form. The service is accessible, both via localhost and via an ssh tunnel (provided by the serveo.net reverse proxy in this case). Why doesn't Google redirect me there?
Can someone please guide me on how to do this initial handshake in the account linking flow, and where I can see the form that the Sign-In intent sent from the webhook is supposed to trigger?
I'd rather not use my phone, as the error message suggests, since the account under which I'm testing in the AoG Simulator differs from my user ID on the phone.
What is meant by using the Simulator as a Speaker? What is missing in my setup?
Is there another Google app that simulates the physical device better, similar to Alexa's simulator?
Normally you can simulate account linking by selecting the Debug tab, where you will find a URL; copy and paste it into another browser tab and you can link your account.
Once linking is done, go back to the Simulator, type 'cancel' or 'stop', and then 'Talk to speech bank'.
Note: don't press Reset or Change Version, or you will have to re-link your app.
But Google has recently removed this URL from the Debug tab, and I can't find it anywhere...
As for using the Simulator as a Speaker: the Surface dropdown is set to Phone, and you need to select Speaker instead.
But when you try that, you will receive this error:
Invocation Error
You cannot use standard Google Assistant features in the Simulator. If you want to try them, use Google Assistant on your phone or other compatible devices.
So for the moment, you can't test an Action that needs account linking using the Simulator; you can do it with your smartphone.
UPDATE 2019-03-05:
Google has added account linking to the Simulator, so this is now easier to test.
I am writing an app that includes text-to-speech using AVSpeechSynthesizer. The code for generating the utterance and using the speech synthesizer has been working fine.
let utterance = AVSpeechUtterance(string: text)
utterance.voice = currentVoice
speechSynthesizer.speak(utterance)
Now with iOS 11, I want to match the voice to the one selected by the user in the phone's Settings app, but I do not see any way to get that setting.
I have tried getting the list of installed voices and looking for one that has a quality of .enhanced, but sometimes there is no enhanced voice installed, and even when there is, it may or may not be the voice selected by the user in the Settings app.
static var enhanced: AVSpeechSynthesisVoice? {
    for voice in AVSpeechSynthesisVoice.speechVoices() {
        if voice.quality == .enhanced {
            return voice
        }
    }
    return nil
}
The questions are twofold:
How can I determine which voice the user has selected in the Settings app?
Why, on some iOS 11 phones that are using the new Siri voice, am I not finding an "enhanced" voice installed?
I suppose that if there were a method for selecting the same voice as in the Settings app, it would be shown in the documentation for the AVSpeechSynthesisVoice class under the Finding Voices topic. Jumping to the definition of AVSpeechSynthesisVoice in code, I couldn't find any additional methods for retrieving voices.
Here's my workaround for getting an enhanced voice in the app I am working on:
Enhanced versions of voices are probably not present on new iOS devices by default, in order to save storage. Iterating through the available voices on my brand new iPhone, I only found Default-quality voices, such as:
[AVSpeechSynthesisVoice 0x1c4e11cf0] Language: en-US, Name: Samantha, Quality: Default [com.apple.ttsbundle.Samantha-compact]
I found this article on how to enable additional VoiceOver voices and downloaded the one named "Samantha (Enhanced)" among them. Checking the list of available voices again, I noticed the following addition:
[AVSpeechSynthesisVoice 0x1c4c03060] Language: en-US, Name: Samantha (Enhanced), Quality: Enhanced [com.apple.ttsbundle.Samantha-premium]
I was then able to select an enhanced voice in Xcode. Since AVSpeechSynthesisVoice.currentLanguageCode() exposes the currently selected language, I ran the following code to pick the first enhanced voice I could find, falling back to an available default when no enhanced version exists. (The code below is from a custom VoiceOver class I am creating to handle all speech in my app; it updates the class's voice property.)
var voice: AVSpeechSynthesisVoice!
for availableVoice in AVSpeechSynthesisVoice.speechVoices() {
    // Look for the enhanced version of the currently selected language's voice;
    // usually there is only one.
    if availableVoice.language == AVSpeechSynthesisVoice.currentLanguageCode() &&
        availableVoice.quality == AVSpeechSynthesisVoiceQuality.enhanced {
        self.voice = availableVoice
        print("\(availableVoice.name) selected as voice for uttering speeches. Quality: \(availableVoice.quality.rawValue)")
    }
}
if let selectedVoice = self.voice {
    // Successfully unwrapped: the loop above identified an enhanced voice.
    print("The following voice identifier has been loaded:", selectedVoice.identifier)
} else {
    // No enhanced voice found: load any voice matching the device's current language.
    self.voice = AVSpeechSynthesisVoice(language: AVSpeechSynthesisVoice.currentLanguageCode())
}
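To actually speak with the selected voice, the same class can then build utterances from it. A minimal usage sketch, where speak(_:) is a hypothetical method on that class and speechSynthesizer is assumed to be a retained AVSpeechSynthesizer instance:
func speak(_ text: String) {
    let utterance = AVSpeechUtterance(string: text)
    utterance.voice = voice  // the enhanced voice found above, or the default fallback
    speechSynthesizer.speak(utterance)  // keep the synthesizer retained or audio will stop
}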
I am also hoping Apple will expose a method to load the selected voice directly, but perhaps this workaround can serve you in the meantime. I guess Siri's enhanced voice is downloaded on the go, so maybe that is why it takes so long to answer my voice commands :)
Best regards.
It looks like the new Siri voice in iOS 11 isn't part of the AVSpeechSynthesis API, and isn't available to developers.
In macOS 10.13 High Sierra (which also gets the new voice), there seems to be a new SiriTTS framework that's probably related to this functionality, but it's in PrivateFrameworks so it doesn't have a developer API.
I'll try to provide a more detailed answer. AVSpeechSynthesizer cannot use the Siri voice. Apple has locked this voice down for privacy reasons, as a malicious app could impersonate Siri and extract private information that way.
Apple hasn't changed this for years, but there is an ongoing initiative regarding it. iOS already has a pattern for accessing privacy-sensitive features through permissions, and there is no reason the Siri voice couldn't likewise be accessed with user permission. You may vote for this to happen using this petition, and with some hope Apple may implement it in the future: https://www.change.org/p/apple-apple-please-allow-3rd-party-apps-to-use-siri-voices-for-improved-accessibility
I am implementing Firebase Dynamic Links in my iOS app, and I can already parse the link, redirect to the App Store, etc. Now I want to detect the first run of the app when the user installs it from a dynamic link, so I can skip the intro and show the content they expect.
Is there some parameter I could catch in application(_:didFinishLaunchingWithOptions:) that would tell me the app was launched through a dynamic link?
The method application(_:continue:restorationHandler:) is called later, so the intro has already launched by then.
This case is difficult to test, because it seems you have to have your app published on the App Store.
You actually don't need to have the app published in the App Store for this to work — clicking a link, closing the App Store, and then installing an app build through Xcode (or any other beta distribution platform like TestFlight or Fabric) has exactly the same effect.
According to the Firebase docs, the method that is called for the first install is openURL (no, this makes no sense to me either). The continueUserActivity method is for Universal Links, and is only used if the app is already installed when a link is opened.
I am not aware of any way to detect when the app is opening for the first time after install from a 'deferred' link, but you could simply route directly to the shared content (skipping the intro) whenever a deep link is present. If a deep link is NOT present, show the regular intro.
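For illustration, here is a rough sketch of that routing in the openURL callback, using the Swift 3-era Firebase API names that appear elsewhere in this thread; showSharedContent(for:) and showIntro() are hypothetical routing helpers, not Firebase API:
import Firebase

func application(_ app: UIApplication, open url: URL,
                 options: [UIApplicationOpenURLOptionsKey: Any] = [:]) -> Bool {
    // On the first launch after an install, Firebase delivers the deferred
    // dynamic link through this callback.
    if let dynamicLink = FIRDynamicLinks.dynamicLinks()?.dynamicLink(fromCustomSchemeURL: url),
        let deepLink = dynamicLink.url {
        showSharedContent(for: deepLink)  // hypothetical: route straight to the linked content
        return true
    }
    showIntro()  // hypothetical: no deep link, so run the regular intro
    return false
}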
Alternative Option
You could check out Branch.io (full disclosure: I'm on the Branch team). Amongst other things, Branch is a great, free drop-in replacement for Firebase Dynamic Links with a ton of additional functionality. Here is an example of all the parameters Branch returns immediately in didFinishLaunchingWithOptions:
{
"branch_view_enabled" = 0;
"browser_fingerprint_id" = "<null>";
data = "{
\"+is_first_session\":false,
\"+clicked_branch_link\":true,
\"+match_guaranteed\":true,
\"$canonical_identifier\":\"room/OrangeOak\",
\"$exp_date\":0,
\"$identity_id\":\"308073965526600507\",
\"$og_title\":\"Orange Oak\",
\"$one_time_use\":false,
\"$publicly_indexable\":1,
\"room_name\":\"Orange Oak\", // this is a custom param, of which you may have an unlimited number
\"~channel\":\"pasteboard\",
\"~creation_source\":3,
\"~feature\":\"sharing\",
\"~id\":\"319180030632948530\",
\"+click_timestamp\":1477336707,
\"~referring_link\":\"https://branchmaps.app.link/qTLPNAJ0Jx\"
}";
"device_fingerprint_id" = 308073965409112574;
"identity_id" = 308073965526600507;
link = "https://branchmaps.app.link/?%24identity_id=308073965526600507";
"session_id" = 319180164046538734;
}
You can read more about these parameters on the Branch documentation here.
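For reference, reading those parameters with the Branch iOS SDK looks roughly like this. This is a sketch from memory of the SDK's initSession API; the routing comment is a placeholder rather than real code:
import Branch

func application(_ application: UIApplication,
                 didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey: Any]?) -> Bool {
    // The parameters shown above arrive in this callback as soon as the
    // session initializes, on the very first launch after install.
    Branch.getInstance().initSession(launchOptions: launchOptions) { params, error in
        if error == nil,
            let params = params as? [String: AnyObject],
            params["+clicked_branch_link"] as? Bool == true {
            // Route straight to the linked content, e.g. using params["room_name"].
        }
    }
    return true
}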
Hmm... as far as I'm aware, there's not really anything you can catch in application(_:didFinishLaunchingWithOptions:) that would let you know the app was being opened from a dynamic link. You're going to have to wait until the continueUserActivity call, as you mentioned.
That said, FIRDynamicLinks.dynamicLinks()?.handleUniversalLink returns a boolean value nearly instantly, so you should be able to take advantage of that to short-circuit your intro animation without it being a bad user experience. The callback itself might not happen until several milliseconds later, depending on whether it's a shortened dynamic link (which requires a network call) or an expanded one (which doesn't).
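For example, something along these lines; skipIntro() and showSharedContent(for:) are hypothetical hooks into your launch flow:
func application(_ application: UIApplication,
                 continue userActivity: NSUserActivity,
                 restorationHandler: @escaping ([Any]?) -> Void) -> Bool {
    guard let url = userActivity.webpageURL else { return false }
    // handleUniversalLink returns true almost instantly when the URL looks
    // like a dynamic link, so the intro can be short-circuited right away.
    let handled = FIRDynamicLinks.dynamicLinks()?.handleUniversalLink(url, completion: { dynamicLink, _ in
        // The resolved link arrives in this callback a few milliseconds later.
        if let deepLink = dynamicLink?.url {
            self.showSharedContent(for: deepLink)  // hypothetical router
        }
    }) ?? false
    if handled {
        skipIntro()  // hypothetical: suppress the intro animation immediately
    }
    return handled
}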
I'm trying to automate an app, but in the middle of a run the Google permissions dialog (for permissions like phone, location, etc.) pops up. Is there any way I can make sure permission pop-ups are always accepted?
Try setting the desired capability:
autoAcceptAlerts = true
Since you said Google permissions, I am assuming you are dealing with Android. Also, since there is no language tag, I am sticking to Java; you can frame the logic in any language you are using.
Sadly, there currently seems to be no such capability for Android, though iOS has a few similar capabilities.
So for Android, what you can do logically is:
If these pop-ups are device-dependent, change the device settings so that they are not shown.
If these pop-ups relate to application permissions, then you know when they will occur. Just keep a check:
// findElements returns an empty list (no exception) when nothing matches.
List<WebElement> popUp = driver.findElements(<find the pop-up using your locator strategy>);
if (popUp.size() != 0) {
    WebElement acceptOrDismiss = driver.findElement(<find the button accordingly>);
    acceptOrDismiss.click();
}
I'm looking for recommendations for an iOS barcode scanner app, specifically for iPad, that supports a custom URL callback so the app can be launched from a web browser.
Additionally, it needs to support a custom search URL that sends the user back to the website once the barcode has been decoded into a URN (SKU).
I have discovered ZBar, which is an excellent app; unfortunately it doesn't support a custom URL callback, and it's designed for the iPhone.
Another app, pic2shop PRO, seems to tick these boxes, but it's relatively expensive at £10.49 and the rollout will require somewhere in the region of 200 installs.
I did a similar project using the free version of pic2shop. The catch is that the free version can read only these barcode types: UPC-A, UPC-E, EAN-13 and EAN-8, according to the app's documentation:
Pic2shop is a free barcode scanner app available for iOS® and Android®. It reads UPC-A, UPC-E, EAN-13, EAN-8 and QR codes. The app also displays comparison shopping results for UPC and EAN.
From my personal experience, I can say that it scans and decodes barcodes very quickly and very accurately.
In my project the app is launched from a webpage, and it works on both Android and iOS. To get it working you have to invoke the pic2shop app from a URL and set your callback address. You will find the decoded barcode data as the value of a parameter in the callback URL; you can read that value with the JavaScript function found here.
For example:
<input type="button" onclick="scan();" value="Scan Barcode">
<script>
function scan() {
  window.location = "pic2shop://scan?callback=http://yourwebsiteurl.com/index.html?barcode=ean";
}
</script>
As soon as the item is successfully scanned, the app redirects you to the callback URL with the actual barcode number as the value of the parameter, for example http://yourwebsiteurl.com/index.html?barcode=5123548745123. You can then read that value with the JavaScript function mentioned above.
PDF417.mobi Pro barcode scanner app supports that use case.
Note: I'm a developer on that project.
Basically, the app can be launched from any other app, including a web application, when a URL of the form pdf417://scan?type=PDF417,UPCA&callback=myscheme://myaction is opened.
The app then scans for barcodes in the requested formats (PDF417 and UPCA in this example) until a result is obtained.
Then, the app opens the URL myscheme://myaction. In your case, this can be your web service, http://www.somemyscanner.com/service.
Specifically, it will open the URL using format: http://www.somemyscanner.com/service?data=[data]&type=[type].
You can then use those parameters to implement your desired functionalities.
I tried the PDF417 app and it is EXTREMELY expensive for an app ($28), and it did not work for me. I bought it anyway because I am trying to solve the same issue, and I can tell you it is not the solution for general barcode scanning.
It might work with PDF417 barcodes, but those are few and far between, and I haven't been able to get it to work. It definitely does not support standard barcode formats. It also has no settings panel (in Settings), and the tap target in the app that should open settings just takes you to the company's website.
I am still testing other apps but haven't found one that does what you ask. Red Laser used to, but it no longer has that functionality.