Adobe AIR: detect en_GB locale - localization

Is there a way to detect if the user is running the AIR application under en_GB locale on Windows? Capabilities.language returns only "en" and Capabilities.languages[0] returns "en_US" :(

Unfortunately, no.
But there will be something soon (sorry, I can't tell you more right now)!
Check here: http://www.adobe.com/cfusion/event/index.cfm?event=detail&id=1489921
"Get the inside scoop on the new
mobile features in Flash Player 10.1,
as well as the new global error
handling, UI, globalization, and media
playback features."

Now that the globalization features are out in Flash Player 10.1, you can use them. Check out the documentation for them here:
http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/globalization/package-detail.html
and more info here:
http://www.adobe.com/devnet/flashplayer/articles/flash_globalization_package.html#articlecontentAdobe_numberedheader
You can easily get the default locale as a string like so:
new StringTools(LocaleID.DEFAULT).actualLocaleIDName; // returns "en-GB" if the region is set to United Kingdom (on OS X)

Related

What controls user locale when performing handover to phone

I'm currently working on an action using the Actions on Google SDK together with Microsoft's Bot Framework. In this action I've built a fallback that allows the user to enter a product code on their phone if they have failed to do so a couple of times through voice. This setup works fine in English, but my action is multi-lingual and supports Dutch and French too.
The problem that I am running into is that when a user is using my action in Dutch or French and they accept to move the conversation to their phone, the conversation continues in English once it is on the phone. Below you can find the code I use in my handler.
New Surface handler
endpoint.intent(GoogleIntentTypes.NewSurface, async (conv: ActionsSdkConversation) => {
    logger.logDebug("Received new surface request");
    const locale = conv.user.locale;
    if (conv.arguments!.get('NEW_SURFACE')!.status! === 'OK') {
        conv.ask(this.messages.getResponse("AskForProductNumber_SSML", locale));
    } else {
        conv.close(this.messages.getResponse("EndConversation_SSML", locale));
    }
});
From the moment the request enters my webhook, my conversation's locale is switched to en-US. This makes me think that the locale is taken from a setting on my phone, but I can't find anything in the docs about this. Does anyone know what could be causing the switch in locale when performing a handover to phone?
My understanding is that the locale is based on the locale of the device that has sent the request.
This page on "languages and locales" (emphasis mine) says:
Locales are constructed using the language set in the Assistant settings and the region set in the device settings. The combination of these needs to form a supported locale. For example, a device set to the BR region and an Assistant device set to en-US results in a en-BR locale, which is not supported by Actions on Google.

Availability of installed voices for use by AVSpeechSynthesis in iOS

I would like to be able to test which text-to-speech voices are available for my iOS app to use with AVSpeechSynthesis. It is easy to generate a list of the installed voices, but Apple makes some of them off-limits for use by apps, and I would like to know which ones.
For example, consider the following test code (Swift 5.1):
import AVFoundation
...
func voiceTest() {
    let speechSynthesizer = AVSpeechSynthesizer()
    let voices = AVSpeechSynthesisVoice.speechVoices()
    for voice in voices where voice.language == "en-US" {
        print("\(voice.language) - \(voice.name) - \(voice.quality.rawValue) [\(voice.identifier)]")
        let phrase = "The voice you're now listening to is the one called \(voice.name)."
        let utterance = AVSpeechUtterance(string: phrase)
        utterance.voice = voice
        speechSynthesizer.speak(utterance)
    }
}
When I call voiceTest(), the console output is this:
en-US - Nicky (Enhanced) - 2 [com.apple.ttsbundle.siri_female_en-US_premium]
en-US - Aaron - 1 [com.apple.ttsbundle.siri_male_en-US_compact]
en-US - Fred - 1 [com.apple.speech.synthesis.voice.Fred]
en-US - Nicky - 1 [com.apple.ttsbundle.siri_female_en-US_compact]
en-US - Samantha - 1 [com.apple.ttsbundle.Samantha-compact]
en-US - Alex - 2 [com.apple.speech.voice.Alex]
Some of the voices speak in their actual voice, whereas some of them speak in the default voice instead. In my case both Nicky (com.apple.ttsbundle.siri_female_en-US_premium) and Alex (com.apple.speech.voice.Alex) are listed as high quality but sound instead like the low quality default, Samantha, when selected.
I know that Apple has said that the Siri voices are not available for use in third-party apps. When I manually load Samantha (High Quality) on my iPhone via Settings, it appears in the list and I can use it. Perhaps Alex is just the high-quality male Siri voice, even though Aaron would seem to be the low-quality Siri voice based on its identifier (com.apple.ttsbundle.siri_male_en-US_compact)? Is that why Alex and Nicky are the only two that are unavailable, so that if I have my app specifically exclude those, it will generate the true list of available voices? It would be nice to have some clarity.
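To make that concrete, the exclusion I have in mind would look something like the sketch below; the identifier checks are just guesses drawn from the console output above, not a documented list of restricted voices, and the helper name is hypothetical.
import AVFoundation

// Hypothetical helper: guess that the premium Siri bundles and Alex are the voices
// that silently fall back to the default, and filter them out of the list.
func probablyUsableVoices() -> [AVSpeechSynthesisVoice] {
    return AVSpeechSynthesisVoice.speechVoices().filter { voice in
        let id = voice.identifier
        let looksLikeSiriPremium = id.contains("siri") && id.hasSuffix("premium")
        return !looksLikeSiriPremium && id != "com.apple.speech.voice.Alex"
    }
}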
I've been looking for a way to programmatically use Siri's nice-sounding voice, such as English Siri Male (United States), and quickly discovered it is not possible using the public Speech API, even though the voice can be selected in System Preferences.
To answer your question, there are at least two other ways of finding available voices in addition to your code example.
Using the defaults command
defaults read com.apple.speech.voice.prefs > speech_prefs.txt
To find info on the voice currently selected in System Preferences, look for SelectedVoiceName in speech_prefs.txt.
For example, for English Siri Male (United States), this will be SelectedVoiceName = "Aaron Siri";.
Now, by further searching for aaron in speech_prefs.txt, you will find the following:
"VOICEID:com.apple.speech.synthesis.voice.custom.siri.aaron.premium_1" = {
BundleIdentifier = "com.apple.speech.synthesis.voice.custom.siri.aaron.premium";
I tried both of these strings when initializing a voice, but got an error saying the voice was not found.
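(The answer doesn't say which API those strings were tried with; with the public AVSpeechSynthesisVoice initializer, the equivalent attempt simply yields nil rather than an error. A minimal sketch, using the bundle identifier found in speech_prefs.txt above:)
import AVFoundation

// Try to load the Siri voice by the bundle identifier pulled from speech_prefs.txt.
// AVSpeechSynthesisVoice(identifier:) is failable and returns nil for voices that are
// not exposed to apps, which matches the "voice not found" behaviour described above.
let siriIdentifier = "com.apple.speech.synthesis.voice.custom.siri.aaron.premium"
if let voice = AVSpeechSynthesisVoice(identifier: siriIdentifier) {
    print("Loaded voice: \(voice.name)")
} else {
    print("No programmatically available voice for \(siriIdentifier)")
}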
Looking for voice directories
There seem to be three locations:
/System/Library/Speech/Voices
/Library/Speech/Voices
~/Library/Speech/Voices
The third one seems to be the location for custom voices.
Each voice has its own directory.
If you compare the Info.plist files of a programmatically available and a programmatically unavailable voice, you will see that they have different structures. For example, the programmatically unavailable voice lacks some attributes that correspond to the Speech API, such as VoiceSupportedCharacters. I believe this is because some voices are of an older generation and some are newer.
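If you want to do that comparison programmatically rather than in Finder, here is a quick sketch (macOS, assuming the three default locations above) that lists the voice bundles found in each directory:
import Foundation

// List the voice bundles in each of the three directories mentioned above.
// Directories that don't exist on this machine are simply skipped.
let voiceDirectories = [
    "/System/Library/Speech/Voices",
    "/Library/Speech/Voices",
    ("~/Library/Speech/Voices" as NSString).expandingTildeInPath
]
for directory in voiceDirectories {
    let bundles = (try? FileManager.default.contentsOfDirectory(atPath: directory)) ?? []
    print("\(directory): \(bundles.count) voices")
    bundles.forEach { print("  \($0)") }
}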
P.S.
Not directly relevant to your question, but just FYI: I'm still looking for a solution to use Siri's voice programmatically. One idea is to make a copy of the voice directory and play with its Info.plist. The other idea is to automate the macOS UI to trigger text-to-speech conversion by simulating the key press bound to the "Speak selected text when the key is pressed" option in System Preferences / Accessibility / Speech, and then recording the audio.
I'd appreciate it if anyone can share other ideas.

How to access Siri voice selected by user in Settings in iOS 11

I am writing an app that includes text-to-speech using AVSpeechSynthesizer. The code for generating the utterance and using the speech synthesizer has been working fine.
let utterance = AVSpeechUtterance(string: text)
utterance.voice = currentVoice
speechSynthesizer.speak(utterance)
Now with iOS 11, I want to match the voice to the one selected by the user in the phone's Settings app, but I do not see any way to get that setting.
I have tried getting the list of installed voices and looking for one that has a quality of .enhanced, but sometimes there is no enhanced voice installed, and even when there is, it may or may not be the voice selected by the user in the Settings app.
static var enhanced: AVSpeechSynthesisVoice? {
    for voice in AVSpeechSynthesisVoice.speechVoices() {
        if voice.quality == .enhanced {
            return voice
        }
    }
    return nil
}
The questions are twofold:
How can I determine which voice has been selected by the user in the Setting app?
Why on some iOS 11 phones that are using the new Siri voice am I not finding an "enhanced" voice installed?
I suppose that if there were a method available for selecting the same voice as in the Settings app, it would be shown in the documentation for the AVSpeechSynthesisVoice class under the Finding Voices topic. Jumping to the definition of AVSpeechSynthesisVoice in code, I couldn't find any other methods for retrieving voices.
Here's my workaround for getting an enhanced voice in the app I am working on:
Enhanced versions of voices are probably not present on new iOS devices by default, in order to save storage. Iterating through the available voices on my brand-new iPhone, I only found Default-quality voices, such as:
[AVSpeechSynthesisVoice 0x1c4e11cf0] Language: en-US, Name: Samantha, Quality: Default [com.apple.ttsbundle.Samantha-compact]
I found this article on how to enable additional VoiceOver voices and downloaded the one named "Samantha (Enhanced)" among them. Checking the list of available voices again, I noticed the following addition:
[AVSpeechSynthesisVoice 0x1c4c03060] Language: en-US, Name: Samantha (Enhanced), Quality: Enhanced [com.apple.ttsbundle.Samantha-premium]
As of now, I was able to select an enhanced voice in Xcode. Given that the AVSpeechSynthesisVoice.currentLanguageCode() method exposes the currently selected language, I ran the following code to select the first enhanced voice I could find. If no enhanced version was available, I'd just pick the available default. (The code below is from a custom VoiceOver class I am creating to handle all speech in my app; the piece below updates its voice variable.)
var voice: AVSpeechSynthesisVoice!

for availableVoice in AVSpeechSynthesisVoice.speechVoices() {
    if availableVoice.language == AVSpeechSynthesisVoice.currentLanguageCode() &&
        availableVoice.quality == AVSpeechSynthesisVoiceQuality.enhanced {
        // Found the enhanced version of the currently selected language voice
        // among the available voices. Usually there's only one.
        self.voice = availableVoice
        print("\(availableVoice.name) selected as voice for uttering speeches. Quality: \(availableVoice.quality.rawValue)")
    }
}
if let selectedVoice = self.voice {
    // Successfully unwrapped: the loop above identified one of the enhanced voices.
    print("The following voice identifier has been loaded: ", selectedVoice.identifier)
} else {
    // No enhanced voice found: load any voice matching the device's current language selection.
    self.voice = AVSpeechSynthesisVoice(language: AVSpeechSynthesisVoice.currentLanguageCode())
}
I am also hoping Apple will expose a method to directly load the voice selected in Settings, but I hope this workaround can serve you in the meantime. I guess Siri's enhanced voice is downloaded on the go, so maybe this is the reason it takes so long to answer my voice commands :)
Best regards.
It looks like the new Siri voice in iOS 11 isn't part of the AVSpeechSynthesis API, and isn't available to developers.
In macOS 10.13 High Sierra (which also gets the new voice), there seems to be a new SiriTTS framework that's probably related to this functionality, but it's in PrivateFrameworks so it doesn't have a developer API.
I'll try to provide a more detailed answer. AVSpeechSynthesizer cannot use the Siri voice. Apple has locked this voice down for privacy reasons, as a malicious app could otherwise impersonate Siri and obtain private information that way.
Apple hasn't changed this for years, but there is an ongoing initiative regarding it. iOS already has a permission model for accessing privacy-sensitive features, and there is no reason why the Siri voice couldn't be accessed with user permission. You may vote for this to happen using this petition, and with some hope Apple may implement it in the future: https://www.change.org/p/apple-apple-please-allow-3rd-party-apps-to-use-siri-voices-for-improved-accessibility

Read logs using the new Swift os_log API

Deprecated in iOS 10.0: os_log(3) has replaced asl(3)
So iOS 10.0 apparently deprecates the asl (Apple System Log) API and replaces it with the very limited os_log API.
I use something similar to the code snippet below to read out the log entries written by the running app and show them in a UITextView in-app - and now it is full of deprecation warnings. Does anyone know of a way to read the printed log using the new os_log API? Because I only see an API for writing (https://developer.apple.com/reference/os/1891852-logging).
import asl
let query = asl_new(UInt32(ASL_TYPE_QUERY))
let response = asl_search(nil, query)
while let message = asl_next(response) {
    var i: UInt32 = 0
    let key = asl_key(message, i)
    print(asl_get(message, key))
    ...
}
Edit after Will Loew-Blosser's answer
https://developer.apple.com/videos/play/wwdc2016/721/ explained nicely what is going to happen with logging in the future. The biggest giveaway was that logs are put in a compressed format and only expanded by the new Console application, which pretty much makes my mission hopeless.
The guy (Steve Szymanski) in the video mentions "All ASL logging APIs are superseded by new APIs" and "New APIs for searching new log data will not be made public this release", i.e. asl_search. And that was exactly what I was looking for!
He also mentions that a Swift API is coming.
Looks like you need to use the enhanced Console instead of your own log viewer. The logs are compressed and not expanded until viewed - this makes logging much less intrusive at debug levels. There is no text form of the logs however.
See the 2016 WWDC video session 721 "Unified Logging and Activity Tracing" https://developer.apple.com/videos/play/wwdc2016/721/
Also the Apple sample app that demos the new approach has an undocumented build setting that I had to add to my iOS app. See the setting in the 'Paper Company (Swift)' iOS app.
The setting is found in the Targets section of the top-level Xcode window. These are the steps that I followed:
On the Build Settings page, add a new "User-Defined" setting named ASSETCATALOG_COMPRESSION.
Under it add two lines:
Debug = lossless
Release = respect-asset-catalog
After adding this build setting, logging worked in my app as per the video session demo.
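A later note, beyond what was public when this was written: newer SDKs (iOS 15 / macOS 12 and later) do add OSLogStore, which can read the current process's entries back. A minimal sketch, assuming those OS versions; the function name is just illustrative:
import OSLog

// Read this process's log entries (since boot) back through OSLogStore,
// roughly what the asl_search() loop in the question was doing.
func dumpRecentLogs() throws {
    let store = try OSLogStore(scope: .currentProcessIdentifier)
    let position = store.position(timeIntervalSinceLatestBoot: 0)
    let entries = try store.getEntries(at: position)
    for entry in entries.compactMap({ $0 as? OSLogEntryLog }) {
        print("\(entry.date) [\(entry.category)] \(entry.composedMessage)")
    }
}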

App with custom URL callback and custom search URL

I'm looking for recommendations for an iOS barcode scanner app, specifically for iPad, which will support a custom URL callback to enable the app to be launched from a web browser.
Additionally, it needs to support a custom search URL which will send the user back to the website once the barcode has been decoded into a URN (SKU).
I have discovered ZBar, which is an excellent app; unfortunately it doesn't support a custom URL callback and it's designed for the iPhone.
Another app, pic2shop PRO, seems to tick these boxes, but it's relatively expensive at £10.49 and the setup will require somewhere in the region of 200 installs.
I did a similar project using the free version of pic2shop. The thing is that the free version can read only these types of barcodes: UPC-A, UPC-E, EAN-13 and EAN-8, according to the app's documentation.
Pic2shop is a free barcode scanner app available for iOS® and Android®. It reads UPC-A, UPC-E, EAN-13, EAN-8 and QR codes. The app also displays comparison shopping results for UPC and EAN.
From my personal experience, I can say that it scans and decodes barcodes very fast and very accurately.
In my project the app is launched from a webpage, and it works for both Android and iOS. In order to get it working, you have to invoke the pic2shop app from a URL and then set your callback address. You will find the decoded barcode data as the value of a parameter in the callback URL. To help you more, you can get those values using the JavaScript function found here.
For example:
<input type=button OnClick="scan();" value="Scan Barcode">
<script>
function scan() {
    window.location = "pic2shop://scan?callback=http://yourwebsiteurl.com/index.html?barcode=ean";
}
</script>
As soon as the item is successfully scanned, it will redirect you to the callback URL with the actual barcode number as the value of a parameter, for example http://yourwebsiteurl.com/index.html?barcode=5123548745123. I already told you how to get the value of a URL parameter with JavaScript.
PDF417.mobi Pro barcode scanner app supports that use case.
Note: I'm a developer on that project.
Basically, the app can be launched from any other app, including a web application, when a URL of the form pdf417://scan?type=PDF417,UPCA&callback=myscheme://myaction is opened.
The app then scans the barcode in multiple formats (PDF417 and UPCA in this example) until a result is obtained.
Then, the app opens the URL myscheme://myaction. In your case, this can be your web service, http://www.somemyscanner.com/service.
Specifically, it will open the URL using the format http://www.somemyscanner.com/service?data=[data]&type=[type].
You can then use those parameters to implement your desired functionalities.
I tried the PDF417 app and it is EXTREMELY expensive (for an app - $28) and does not work. I bought it anyway because I am trying to solve the same issue and I can tell you it is not the solution for general barcode scanning.
It might work with PDF417 barcodes, but those are few and far between, and I haven't been able to get it to work. It definitely does not support any standard barcode formats. It also has no settings panel (in Settings), and the tap target in the app that should open settings just takes you to the company's website.
I am still testing other apps but haven't found any app that does what you ask. Red Laser used to, but it no longer has that functionality.
