How to detect if an iOS device has downloaded a voice file? - ios

I'm working on an iOS text to speech app and trying to add an option to use the Alex voice, which is new for iOS 9. I need to determine whether or not the user has downloaded the Alex voice in Settings -> Accessibility. I can't seem to find out how to do this.
if ([AVSpeechSynthesisVoice voiceWithIdentifier:AVSpeechSynthesisVoiceIdentifierAlex] == "Not Found") {
    // Do something...
}
The reason is that the standard voices for other languages play back at a certain rate, different from the Alex voice. So I have a working app, but if the user hasn't downloaded the voice, iOS automatically falls back to a basic voice, which plays back at the incorrect rate. If I can detect that the voice hasn't been downloaded, I can compensate for the difference and/or advise the user.

OK, so I guess I was overthinking this and thought it was more complicated. The solution was simple.
if (![AVSpeechSynthesisVoice voiceWithIdentifier:AVSpeechSynthesisVoiceIdentifierAlex]) {
    // Normalize the speech rate since the user hasn't downloaded the voice and/or trigger a notification that they need to go into Settings and download the voice.
}
Thanks to everyone who looked at this and to @CeceXX for the edit. Hope this helps someone else.

Here's one way to do it. Let's stick with Alex as an example:
- (void)checkForAlex {
    // is Alex installed?
    BOOL alexInstalled = NO;
    NSArray *voices = [AVSpeechSynthesisVoice speechVoices];
    for (id voiceName in voices) {
        if ([[voiceName valueForKey:@"name"] isEqualToString:@"Alex"]) {
            alexInstalled = YES;
        }
    }
    // react accordingly
    if (alexInstalled) {
        NSLog(@"Alex is installed on this device.");
    } else {
        NSLog(@"Alex is not installed on this device.");
    }
}
This method loops through all installed voices and queries each voice's name. If Alex is among them, he's installed.
Other values you can query are "language" (returns a language code like en-US) and quality (1 = standard, 2 = enhanced).
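For reference, here's a minimal Swift sketch of the same check; the isAlexInstalled function name is mine, but the AVSpeechSynthesisVoice APIs it uses are standard:
import AVFoundation

// A sketch of the same check in Swift; the function name is illustrative.
func isAlexInstalled() -> Bool {
    // speechVoices() lists every voice currently installed on the device.
    return AVSpeechSynthesisVoice.speechVoices().contains { $0.name == "Alex" }
}

// You can also inspect each voice's language and quality.
for voice in AVSpeechSynthesisVoice.speechVoices() {
    print(voice.name, voice.language, voice.quality == .enhanced ? "enhanced" : "default")
}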

Related

How to detect what Bluetooth device audio is coming out of

I have two BluetoothHFP Bluetooth devices connected to my iPad (bluetoothA2DP and bluetoothLE) and I need to detect which one is currently getting the audio. Below is the code I am using to detect what Bluetooth is available:
let currentRoute = audioSession.currentRoute
for description in currentRoute.outputs {
    if description.portType == .bluetoothA2DP {
        // Do Something
        break
    } else if description.portType == .bluetoothHFP {
        // Do Something
        break
    } else if description.portType == .bluetoothLE {
        // Do Something
        break
    }
}
What can I use to find out what one of the BluetoothHFP devices is currently getting audio?
You can distinguish between ports of the same type using description.portName and description.uid. Note that the name is not promised to be unique (it comes from the device). The UID is system-assigned and is not promised to be stable. It's only promised to be consistent with the owningPortUID property, and will be unique at any given time.
It happens to be true currently (iOS 13) that the UID is based on the hardware MAC address, and is stable. It's in the format aa:bb:cc:dd:ee:ff-tacl (the MAC address followed by -tacl). This has been true for a long time. I've been using this fact since at least iOS 8, and it's very likely it's been true as long as AVAudioSession has been around. But it's not promised, and is exactly the kind of thing that Apple has been known to change without notice.
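As a minimal Swift sketch of inspecting those properties (portName, uid, and portType are the real AVAudioSessionPortDescription API; the logging is just illustrative):
import AVFoundation

// Walk the current outputs and log the identifying properties of each port.
let session = AVAudioSession.sharedInstance()
for output in session.currentRoute.outputs {
    print("type: \(output.portType.rawValue), name: \(output.portName), uid: \(output.uid)")
}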

Disabling Callkit from China Store Best Approach?

We are using CallKit framework to benefit native usage for Voip features. Users can make Voice and Video Calls in our Messenger App.
But Apple is removing CallKit apps from the China App Store at the request of the Chinese government.
What is the best approach for CallKit apps like ours for now?
We do not want to remove our app from China, and we do not want to remove all CallKit functionality from our app just because of China.
I agree with txulu that it seems that CallKit just needs to be disabled/not used for users in China - see this helpful response on the Apple Developer forums.
The general consensus seems to be that as long as you can explain to App Review how you’re disabling CallKit features for users in China, that should probably be acceptable unless/until Apple publishes specific guidelines.
For your particular problem Ahmet, it sounds like CallKit may provide some of the core functionality of your app. If this is the case and you really need to support users in China, you might want to look at rebuilding your app using another VOIP framework to make calls (VOIP is still allowed in China...just not using CallKit). Or perhaps you could disable and hide the calling features in your app if the user is in China.
My app was only using CallKit to observe when a call initiated from my app ends, so I was able to devise a workaround. For users in China I now observe UIApplicationDidBecomeActiveNotification and make my best guess about whether a phone call initiated from the app has ended, based on how much time has elapsed since the call began. It's not as good as using CallKit's CXCallObserver, but it seems to work well enough for my purpose.
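A rough Swift sketch of that fallback; callStartDate, minimumCallDuration, and handleProbableCallEnd() are hypothetical names I'm using for illustration, and only the notification itself comes from the answer above:
import UIKit

final class CallEndGuesser: NSObject {
    // Set when the app hands the call off to the dialer. Hypothetical property.
    var callStartDate: Date?
    // Calls shorter than this are assumed to have never connected. Hypothetical threshold.
    let minimumCallDuration: TimeInterval = 5

    override init() {
        super.init()
        // Without CXCallObserver, all we know is that the app became active again.
        NotificationCenter.default.addObserver(
            self,
            selector: #selector(appDidBecomeActive),
            name: UIApplication.didBecomeActiveNotification,
            object: nil)
    }

    @objc private func appDidBecomeActive() {
        guard let start = callStartDate else { return }
        // Best guess: if enough time has passed, the call probably connected and has now ended.
        if Date().timeIntervalSince(start) > minimumCallDuration {
            handleProbableCallEnd()
        }
        callStartDate = nil
    }

    private func handleProbableCallEnd() {
        // App-specific cleanup goes here.
    }
}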
Update! My app passed App Store review with the fix described.
Submitted a new version yesterday.
Included a short message in the reviewer info section saying "In this version and onwards, we do not use CallKit features for users in China. We detect the user's region using NSLocale."
App was approved around 12hr later without any questions or comments from the App Review team.
Detecting users in China
To determine if a user is in China, I use NSLocale to get the user's currentLocale and countryCode. If the countryCode contains one of the ISO codes for China (CN, CHN), I set a flag noting that I cannot use CallKit, and I do not initialize or use CallKit features in my app.
- (void)viewDidLoad {
    [super viewDidLoad];
    NSLocale *userLocale = [NSLocale currentLocale];
    if ([userLocale.countryCode containsString:@"CN"] || [userLocale.countryCode containsString:@"CHN"]) {
        NSLog(@"currentLocale is China so we cannot use CallKit.");
        self.cannotUseCallKit = YES;
    } else {
        self.cannotUseCallKit = NO;
        // setup CallKit observer
        self.callObserver = [[CXCallObserver alloc] init];
        [self.callObserver setDelegate:self queue:nil];
    }
}
To test this, you can change the region in Settings > General > Language and Region > Region. When I set Region to 'China' but left language set as English, [NSLocale currentLocale] returned "en_CN".
Swift 5
Utility Functions
func isCallKitSupported() -> Bool {
    let userLocale = NSLocale.current
    guard let regionCode = userLocale.regionCode else { return false }
    if regionCode.contains("CN") || regionCode.contains("CHN") {
        return false
    } else {
        return true
    }
}
MainViewController
class MainViewController: UIViewController {
    ...
    var callObserver = CXCallObserver()
    ...
    override func viewDidLoad() {
        super.viewDidLoad()
        if isCallKitSupported() {
            callObserver.setDelegate(self, queue: nil)
        }
        ...
    }
    ...
}
Note: countryCode is now regionCode and only returns 'US', 'CN', etc. No language before country code like 'en_CN'.
Swift 5
func isCallKitSupport() -> Bool {
    let userLocale = NSLocale.current
    // Note: compare against true; checking != nil would succeed whenever a region code exists at all.
    if userLocale.regionCode?.contains("CN") == true ||
       userLocale.regionCode?.contains("CHN") == true {
        return false
    } else {
        return true
    }
}
One thing you could try, even though it may not work: disable CallKit functionality based on the locale region. This may be enough "proof" that CallKit is disabled for China, from a legal perspective, for the app to be approved for the App Store. Then your Chinese customers could just switch the region in Settings to get CallKit; that would be "their" problem, so to speak.
Disclaimer: I'm by no means a lawyer or anything, follow this advice at your own risk.
Edit:
CXProvider.isSupported is no longer available. I'm keeping the answer here in the hope that it will be restored in an upcoming iOS 13 release.
From iOS 13 onwards, the correct way to do this is to check the new CXProvider.isSupported property.
Here's the documentation (from Xcode, as the online documentation has not been updated yet):
Go to “Pricing and Availability” in iTunes Connect.
Under “Availability”, click the blue “Edit” button.
Deselect China in the list.
Click “Done”.

AudioContext.createMediaStreamSource alternative for iOS?

I've developed an app using Cordova and the Web Audio API, that allows the user to plug in headphones, press the phone against their heart, and hear their own heartbeat.
It does this by using audio filter nodes.
//Setup userMedia
context = new (window.AudioContext || window.webkitAudioContext);
navigator.getUserMedia = (navigator.getUserMedia ||
                          navigator.webkitGetUserMedia ||
                          navigator.mozGetUserMedia ||
                          navigator.msGetUserMedia);
navigator.getUserMedia(
    {audio: true},
    userMediaSuccess,
    function(e) {
        alert("error2 " + e.message);
    });

function userMediaSuccess(stream)
{
    //set microphone as input
    input = context.createMediaStreamSource(stream);
    //amplify the incoming sounds
    volume = context.createGain();
    volume.gain.value = 10;
    //low-pass filter at 25Hz
    lowPass = context.createBiquadFilter();
    lowPass.type = 'lowpass';
    lowPass.frequency.value = 25;
    //high-pass filter at 425Hz
    highPass = context.createBiquadFilter();
    highPass.type = 'highpass';
    highPass.frequency.value = 425;
    //apply the filters and amplification to the microphone input
    input.connect(lowPass);
    input.connect(highPass);
    input.connect(volume);
    //send the result of these filters to the phone's speakers
    highPass.connect(context.destination);
    lowPass.connect(context.destination);
    volume.connect(context.destination);
}
It runs fine when I deploy to Android, but it seems most of these features aren't available on iOS mobile browsers.
I managed to make getUserMedia function using the iosRTC plugin, but createMediaStreamSource is still "not a function."
So, I'm looking for an alternative to the Web Audio API that can filter out frequencies, or if there are any plugins I could use, that would be perfect.
There's no way to do this on the web on iOS. You'd need a native app, since Apple doesn't support audio input in Safari.
Did you try to use
document.addEventListener('deviceready', function () {
    // Just for iOS devices.
    if (window.device.platform === 'iOS') {
        cordova.plugins.iosrtc.registerGlobals();
    }
});
You asked this question quite a while ago, but sadly createMediaStreamSource is still not supported in Safari Mobile (will it ever be?).
As previously said, a plugin is the only way to achieve this, and there is actually a Cordova/PhoneGap plugin that does exactly that. cordova-plugin-audioinput gives you access to the sound from the microphone using either the Web Audio API or callbacks that deliver raw audio data chunks, and it supports iOS as well as Android.
Since I don't want to post the same answer twice, I'll instead point you to the following answer here on stackoverflow, where you'll also find a code example: https://stackoverflow.com/a/38464815/6609803
I'm the creator of the plugin and any feedback is appreciated.
Good news: full support for iOS Safari:
https://developer.mozilla.org/en-US/docs/Web/API/AudioContext/createMediaStreamSource

iOS WebAudio only works on headphones

I've been running into an issue for a while now where on some iOS devices my WebAudio system only seems to work with headphones, whereas on other devices (exact same OS, model, etc.) the audio plays perfectly fine through the speakers or headphones. I've searched for a solution to this but haven't found anything on this exact issue. The only thing I can think of is that maybe it's an audio channel issue or something.
How can I fix this?
@Alastair is correct: the mute toggle switch does mute WebAudio, but it does not mute HTML5 audio tags. Thanks to his work I managed to find a workaround for the web which enables WebAudio to play even when the mute toggle switch is on. I'd post this as a comment on his reply, but I don't have the reputation.
In order to play WebAudio you must also play at least one WebAudio sound source node and one HTML5 audio tag during a user action. It is fine if these sounds are short bits of silence. I found that this self-contained code works without any extra files needed:
EDIT 11/29/19:
Removed vestigial TypeScript typedefs. Thanks @Joep. I also realized the code below is woefully out of date and janky. Just consider it an example. Editing this post prompted me to create an open source solution for this. You can see a demo of it here: https://spencer-evans.com/share/github/unmute/ and check out the repo here: https://github.com/swevans/unmute
/**
 * PLEASE DON'T USE THIS AS IT IS; THIS IS JUST EXAMPLE CODE.
 * If you want a drop-in solution I have a script on GitHub.
 * Demo:
 * @see https://spencer-evans.com/share/github/unmute/
 * Github Repo:
 * @see https://github.com/swevans/unmute
 */
// Assumes an AudioContext has been created; shown here so the example is self-contained.
var myContext = new (window.AudioContext || window.webkitAudioContext)();

var isWebAudioUnlocked = false;
var isHTMLAudioUnlocked = false;

function unlock() {
    if (isWebAudioUnlocked && isHTMLAudioUnlocked) return;

    // Unlock WebAudio - create a short silent buffer and play it
    // This will allow us to play web audio at any time in the app
    var buffer = myContext.createBuffer(1, 1, 22050); // a single silent sample
    var source = myContext.createBufferSource();
    source.buffer = buffer;
    source.connect(myContext.destination);
    source.onended = function() {
        console.log("WebAudio unlocked!");
        isWebAudioUnlocked = true;
        if (isWebAudioUnlocked && isHTMLAudioUnlocked) {
            console.log("WebAudio unlocked and playable w/ mute toggled on!");
            window.removeEventListener("mousedown", unlock);
        }
    };
    source.start();

    // Unlock HTML5 Audio - load a data url of short silence and play it
    // This will allow us to play web audio when the mute toggle is on
    var silenceDataURL = "data:audio/mp3;base64,//MkxAAHiAICWABElBeKPL/RANb2w+yiT1g/gTok//lP/W/l3h8QO/OCdCqCW2Cw//MkxAQHkAIWUAhEmAQXWUOFW2dxPu//9mr60ElY5sseQ+xxesmHKtZr7bsqqX2L//MkxAgFwAYiQAhEAC2hq22d3///9FTV6tA36JdgBJoOGgc+7qvqej5Zu7/7uI9l//MkxBQHAAYi8AhEAO193vt9KGOq+6qcT7hhfN5FTInmwk8RkqKImTM55pRQHQSq//MkxBsGkgoIAABHhTACIJLf99nVI///yuW1uBqWfEu7CgNPWGpUadBmZ////4sL//MkxCMHMAH9iABEmAsKioqKigsLCwtVTEFNRTMuOTkuNVVVVVVVVVVVVVVVVVVV//MkxCkECAUYCAAAAFVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVV";
    var tag = document.createElement("audio");
    tag.controls = false;
    tag.preload = "auto";
    tag.loop = false;
    tag.src = silenceDataURL;
    tag.onended = function() {
        console.log("HTMLAudio unlocked!");
        isHTMLAudioUnlocked = true;
        if (isWebAudioUnlocked && isHTMLAudioUnlocked) {
            console.log("WebAudio unlocked and playable w/ mute toggled on!");
            window.removeEventListener("mousedown", unlock);
        }
    };
    var p = tag.play();
    if (p) p.then(function() { console.log("play success"); }, function(reason) { console.log("play failed", reason); });
}
window.addEventListener("mousedown", unlock);
This is likely because the iPhone's side switch is on "mute". It's very confusing - HTML5 <audio> tags still play fine when the phone is muted, but WebAudio does not. Why? Who knows. But it's a restriction I currently haven't found a way around.
If the iPhone mute switch is on, meaning that the iPhone is muted, anything played through the Web Audio API will be muted.
Unfortunately there is no way to check through JavaScript whether that physical switch (located on the left edge towards the top of the iPhone) is on or off.
This issue is completely independent from the fact that in iOS Safari audio has to be started by a user action in order to be unmuted. There are some tricks that can be done to overcome that, including the one suggested here by Spencer, where you use "any action or a specific action" started by the user to play a silent audio file, which allows subsequently played audio files to play unmuted.
I had the same issue and finally understood the problem:
indeed, the WebView doesn't play sound through the internal speakers if the phone is muted.
When I dug deeper I found a workaround :)
original post => https://stackoverflow.com/a/37874619/8064246
do {
    try AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryPlayback)
    //print("AVAudioSession Category Playback OK")
    do {
        try AVAudioSession.sharedInstance().setActive(true)
        //print("AVAudioSession is Active")
    } catch _ as NSError {
        //print(error.localizedDescription)
    }
} catch _ as NSError {
    //print(error.localizedDescription)
}

Detect if headset is plugged in - iOS 5

I'm aware that this is a question already asked, I've found possible duplicates:
Detecting if headphones are plugged into iPhone
headphone plug-in plug-out event when audio route doesn't change - iOS
Detect if headphones (not microphone) are plugged in to an iOS device
...and more info on the web. But I've tried every solution given and every time I run into problems, probably because those are old threads referring to iOS 4.
How can I detect it on iOS 5.0?
Thanks
If you're okay with an iOS 6-only solution, Apple added several new AVAudioSession properties that let you detect audio routes in just a few lines (and without the use of C).
Use this method to check for headphones (or adjust it to check for other outputs - "Speaker", "Headset", etc.):
- (BOOL)isHeadsetPluggedIn
{
    // Get array of current audio outputs (there should only be one)
    NSArray *outputs = [[AVAudioSession sharedInstance] currentRoute].outputs;
    NSString *portName = [[outputs objectAtIndex:0] portName];
    if ([portName isEqualToString:@"Headphones"]) {
        return YES;
    }
    return NO;
}
If you want to respond to audio route changes passively, you can do this with the new NSNotification, AVAudioSessionRouteChangeNotification. Unfortunately, this notification doesn't tell you what the new route is, just the previous route that it switched from. But, you can just call some variation of the method above to get the current route.
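A minimal Swift sketch of that passive approach (modern API names; the original answer targets iOS 6 Objective-C, and the print is just illustrative):
import AVFoundation

// Re-check the current route whenever iOS reports a route change
// (headphones plugged in or unplugged, Bluetooth connected, etc.).
// Keep the returned token around so the observer can be removed later.
let routeChangeToken = NotificationCenter.default.addObserver(
    forName: AVAudioSession.routeChangeNotification,
    object: AVAudioSession.sharedInstance(),
    queue: .main
) { _ in
    let outputs = AVAudioSession.sharedInstance().currentRoute.outputs
    let headphonesPluggedIn = outputs.contains { $0.portType == .headphones }
    print("Headphones plugged in: \(headphonesPluggedIn)")
}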
Wes seems to have a great solution. Alas, it is not international-proof: that code only works when the device language is English. In Dutch, for instance, the headset is called 'Koptelefoon', and portName indeed contains 'Koptelefoon', which makes the test fail.
This will do the job correctly regardless of language:
if ([portDescription.portType isEqualToString:AVAudioSessionPortHeadphones]) {
    // Headphones are plugged in.
}
