I'm trying to use AVSpeechSynthesizer with AVSpeechSynthesisIPANotationAttribute but it doesn't seem to work properly.
Here is the IPA notation for the pronunciation of the word "participation" taken from Wiktionary: pɑɹˌtɪsɪˈpeɪʃən. But AVSpeechSynthesizer does not read it correctly.
When I add a custom pronunciation using Settings -> Accessibility -> Spoken Content -> Pronunciations on my iPhone, it gives me this notation: pəɻ.ˈtɪ.sɪ.ˈpe͡ɪ.ʃən, which the iPhone reads correctly.
Why does "pəɻ.ˈtɪ.sɪ.ˈpe͡ɪ.ʃən" work but a string like "pɑɹˌtɪsɪˈpeɪʃən" that is taken from a dictionary does not?
Here is some sample code you can try:
import UIKit
import AVFoundation

class ViewController: UIViewController {
    let synthesizer = AVSpeechSynthesizer()

    override func viewDidLoad() {
        super.viewDidLoad()
    }

    @IBAction func buttonPressed(_ sender: Any) {
        // Attach the IPA pronunciation to the text via the attributed-string key.
        let pronunciationKey = NSAttributedString.Key(rawValue: AVSpeechSynthesisIPANotationAttribute)
        let attrStr = NSMutableAttributedString(string: "foo",
                                                attributes: [pronunciationKey: "pɑɹˌtɪsɪˈpeɪʃən"])
        let utterance = AVSpeechUtterance(attributedString: attrStr)
        synthesizer.speak(utterance)
    }
}
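For comparison (my addition, not from the original post), here is the same call using the notation produced by the Pronunciations editor, which, as described above, the device does read correctly:

let attrStr = NSMutableAttributedString(string: "foo",
                                        attributes: [pronunciationKey: "pəɻ.ˈtɪ.sɪ.ˈpe͡ɪ.ʃən"])
synthesizer.speak(AVSpeechUtterance(attributedString: attrStr))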
Related
I have one text box, one button, and one label in my iOS app. The user inputs text and presses the button, and the input is shown in the label. The input is also persisted via UserDefaults, so it survives the app being closed and run again. The code for the iOS app is as follows:
import UIKit

class ViewController: UIViewController {
    // Connecting the front-end UI elements to the code
    @IBOutlet weak var myTextField: UITextField!
    @IBOutlet weak var myLabel: UILabel!

    // Connecting the button to the code
    @IBAction func myButton(_ sender: Any) {
        UserDefaults.standard.set(myTextField.text, forKey: "myText")
        myLabel.text = UserDefaults.standard.string(forKey: "myText")
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        // If a value was stored previously, show it in the label
        // and in the text box so it can be edited.
        if let savedText = UserDefaults.standard.string(forKey: "myText") {
            myLabel.text = savedText
            myTextField.text = savedText
        }
    }
}
But now I want to access the data stored under the key myText in the WatchKit Extension and display it in the WatchKit App. I have added a label in the WatchKit App, but I need the WatchKit Extension to read the value stored in UserDefaults.standard.object(forKey: "myText").
import WatchKit
import Foundation

class InterfaceController: WKInterfaceController {
    // Connecting the label to the code
    @IBOutlet var watchKitLabel: WKInterfaceLabel!

    override func awake(withContext context: Any?) {
        super.awake(withContext: context)
        // Note: this reads the watch's own UserDefaults, which is a
        // separate store from the iPhone app's UserDefaults.
        watchKitLabel.setText(UserDefaults.standard.string(forKey: "myText"))
    }
}
Can anyone please help me? I have read the documentation about App Groups from various sources, but it is either for Swift 3.1 or for Objective-C. Any solution other than App Groups would also work. I also want the data to be shared across both platforms: if the user inputs text once in the iOS app and quits, I want the data to be accessible on the watch right away.
Thanks in advance... :)
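For what it's worth, the standard alternative to App Groups here is WatchConnectivity. Below is a minimal sketch of that approach (my addition, not from the question; the class name PhoneSession and its methods are mine, written against the modern Swift API): the iPhone pushes the text as "application context", which the watch receives even if the watch app was not running at the time.

import WatchConnectivity

// iOS side: activate the session once (e.g. at app launch), then push updates.
final class PhoneSession: NSObject, WCSessionDelegate {
    static let shared = PhoneSession()

    func start() {
        guard WCSession.isSupported() else { return }
        WCSession.default.delegate = self
        WCSession.default.activate()
    }

    func send(text: String) {
        // Application context is delivered even if the watch app is not running.
        try? WCSession.default.updateApplicationContext(["myText": text])
    }

    // Required WCSessionDelegate methods (no-ops for this sketch).
    func session(_ session: WCSession, activationDidCompleteWith activationState: WCSessionActivationState, error: Error?) {}
    func sessionDidBecomeInactive(_ session: WCSession) {}
    func sessionDidDeactivate(_ session: WCSession) {}
}

On the watch side, activate WCSession the same way (only the activation delegate method is required there) and implement:

func session(_ session: WCSession, didReceiveApplicationContext applicationContext: [String: Any]) {
    DispatchQueue.main.async {
        self.watchKitLabel.setText(applicationContext["myText"] as? String)
    }
}

You would then call PhoneSession.shared.send(text: myTextField.text ?? "") from myButton(_:) right after writing to UserDefaults.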
I am currently developing an iOS app that converts text to speech using AVSpeechSynthesizer.
What I want to do is change the utterance rate while the synthesizer is speaking, so that moving a slider changes the speed of the speech.
I am doing this in the IBAction of the slider:
self.utterance.rate = sender.value
but the synthesizer doesn't change the speed. I've been looking for information but haven't found anything yet.
What can I do? Thanks in advance.
OK, so after playing a bit with this cool feature I wasn't aware of, I found a way to change the utterance rate. The main problem is that once an utterance has been enqueued by the synthesizer, its rate can't be changed. This matches the documentation:
/* Setting these values after a speech utterance has been enqueued will have no effect. */
open var rate: Float // Values are pinned between AVSpeechUtteranceMinimumSpeechRate and AVSpeechUtteranceMaximumSpeechRate.
open var pitchMultiplier: Float // [0.5 - 2] Default = 1
open var volume: Float // [0-1] Default = 1
So the workaround is to stop the synthesizer and feed it a new utterance built from the trimmed string.
import UIKit
import AVFoundation

class ViewController: UIViewController {
    var synthesizer: AVSpeechSynthesizer!
    var string: String!
    var currentRange: NSRange = NSRange(location: 0, length: 0)

    @IBAction func sl(_ sender: UISlider) {
        synthesizer.stopSpeaking(at: .immediate)
        if currentRange.length > 0 {
            // Drop the part that has already been spoken and keep the rest.
            let startIndex = string.index(string.startIndex, offsetBy: NSMaxRange(currentRange))
            string = String(string[startIndex...])
            synthesizer.speak(buildUtterance(for: sender.value, with: string))
        }
    }

    func buildUtterance(for rate: Float, with str: String) -> AVSpeechUtterance {
        let utterance = AVSpeechUtterance(string: str)
        utterance.rate = rate
        utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
        return utterance
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        string = "I am currently developing an iOS app that converts text to speech using AVSpeechSynthesizer. What I want to do is that while the synthesizer is speaking, the utterance rate can be changed with a slider and the speed of the speaking changes. I am doing this in the IBAction of the slider: self.utterance.rate = sender.value but the synthesizer doesn't change the speed. I've been looking for information but I haven't found something yet. What can I do? Thanks in advance."
        synthesizer = AVSpeechSynthesizer()
        synthesizer.delegate = self
        synthesizer.speak(buildUtterance(for: AVSpeechUtteranceDefaultSpeechRate, with: string))
    }
}

extension ViewController: AVSpeechSynthesizerDelegate {
    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, willSpeakRangeOfSpeechString characterRange: NSRange, utterance: AVSpeechUtterance) {
        // Remember the range about to be spoken so we can resume from there.
        debugPrint(characterRange)
        currentRange = characterRange
    }
}
1. Implement the delegate method willSpeakRangeOfSpeechString of AVSpeechSynthesizerDelegate and set the synthesizer's delegate to self: synthesizer.delegate = self.
2. In that delegate method, save the characterRange that will be spoken next.
3. Bind @IBAction func sl(_ sender: UISlider) to your slider's touchUpInside event.
4. In that IBAction, stop speaking and take the substring of the current text from the index where speech would have continued.
5. Build a new utterance and start speaking it.
6. Profit.
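If you wire the slider in code instead of Interface Builder (my addition, not from the original answer; slider stands for your UISlider outlet), the binding for step 3 looks like this:

slider.addTarget(self, action: #selector(sl(_:)), for: .touchUpInside)

@IBAction already exposes sl(_:) to the Objective-C runtime, so the #selector reference compiles as-is.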
Swift 3:
import UIKit
import AVFoundation

class ViewController: UIViewController {
    @IBOutlet weak var sliderVolume: UISlider! // for volume
    @IBOutlet weak var sliderRate: UISlider!   // for rate
    @IBOutlet weak var sliderPitch: UISlider!  // for pitch
    @IBOutlet weak var txtForSpeak: UITextField!

    let speechSynth = AVSpeechSynthesizer()

    @IBAction func btnStartToSpeak(_ sender: UIButton) {
        let speechUtt = AVSpeechUtterance(string: self.txtForSpeak.text ?? "")
        speechUtt.rate = self.sliderRate.value
        speechUtt.volume = self.sliderVolume.value
        speechUtt.pitchMultiplier = self.sliderPitch.value
        self.speechSynth.speak(speechUtt)
    }
}
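One detail worth adding (my addition, not part of the answer above): rate and pitchMultiplier are pinned to fixed ranges by AVFoundation, so the sliders should be configured to match, e.g. in viewDidLoad:

override func viewDidLoad() {
    super.viewDidLoad()
    // Map each slider to the range AVFoundation actually accepts.
    sliderRate.minimumValue = AVSpeechUtteranceMinimumSpeechRate // 0.0
    sliderRate.maximumValue = AVSpeechUtteranceMaximumSpeechRate // 1.0
    sliderRate.value = AVSpeechUtteranceDefaultSpeechRate        // 0.5
    sliderPitch.minimumValue = 0.5
    sliderPitch.maximumValue = 2.0
    sliderPitch.value = 1.0
    sliderVolume.value = 1.0 // volume already uses the slider's default 0...1 range
}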
I just want to get the name and email from the phone's contacts and print them. I'm using the following code to do this:
import UIKit
import ContactsUI

class ViewController: UIViewController, CNContactPickerDelegate {
    override func viewDidLoad() {
        super.viewDidLoad()
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
    }

    @IBAction func button(_ sender: Any) {
        let contactPicker = CNContactPickerViewController()
        contactPicker.delegate = self
        contactPicker.displayedPropertyKeys = [CNContactNicknameKey, CNContactEmailAddressesKey]
        self.present(contactPicker, animated: true, completion: nil)
    }

    func contactPicker(_ picker: CNContactPickerViewController, didSelect contact: CNContact) {
        // Print the first email address, if the contact has one.
        if let emailValue = contact.emailAddresses.first {
            print(emailValue.value as String)
        }
        print(contact.givenName + " " + contact.familyName)
    }
}
It prints the name and email that I select in the CNContactPickerViewController; even if the selected contact has no email, it just prints the name alone.
Now, what I want is to not display names that have no email in the CNContactPickerViewController. Only names that have an email stored along with them should be displayed. How can I do that?
Using Xcode 8.2, Swift 3, iOS 10.
NOTE: I don't want to check whether the email exists or not, or whether it is valid or not.
Have you tried adding a predicate to the CNContactPickerViewController? The documentation appears to cover exactly the requirement you're looking for.
In your button() method add the following before calling present().
contactPicker.predicateForEnablingContact = NSPredicate(format: "emailAddresses.@count > 0")
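In context, your button() method would become (my arrangement of the asker's own code, for clarity):

@IBAction func button(_ sender: Any) {
    let contactPicker = CNContactPickerViewController()
    contactPicker.delegate = self
    contactPicker.displayedPropertyKeys = [CNContactNicknameKey, CNContactEmailAddressesKey]
    // Contacts with no email address are grayed out and cannot be selected.
    contactPicker.predicateForEnablingContact = NSPredicate(format: "emailAddresses.@count > 0")
    self.present(contactPicker, animated: true, completion: nil)
}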
I added a text view and assigned its delegate to the view controller.
@IBOutlet var textview: UITextView!
var attributedString = NSMutableAttributedString()

override func viewDidLoad() {
    super.viewDidLoad()
    textview.delegate = self
    attributedString = NSMutableAttributedString(attributedString: textview.attributedText!)
}
I have a button which, when clicked, appends a custom emoji to the text view as an NSAttributedString containing an NSTextAttachment, using the following code.
@IBAction func actAngry(_ sender: Any) {
    let attachment = NSTextAttachment()
    attachment.image = UIImage(named: "Angry")
    let attributedEmoji = NSAttributedString(attachment: attachment)
    attributedString.append(attributedEmoji)
    textview.attributedText = attributedString
}
Now, when I tap the send button, I want to replace that custom emoji with a code, e.g. #43567, which is the custom code for that emoji.
@IBAction func sendNow(_ sender: Any) {
    print(textview.attributedText)
    print(textview.text)
}
So if I type Hi, How are you? and then press the actAngry button,
it will look like Hi, How are you?[Angry Image].
It should convert to Hi, How are you?#43567, so that I can send it to the server.
What have I tried? I tried the following delegate method to get some idea, but failed:
func textView(_ textView: UITextView, shouldInteractWith textAttachment: NSTextAttachment, in characterRange: NSRange, interaction: UITextItemInteraction) -> Bool {
    return true
}
I found THIS amazing library to perform this task; to use it you need to add the two files listed below to your project:
Emoji.swift
String+Emoji.swift
The code is quite long, so I can't paste it here.
Now you can use it this way when you want to send text to the server:
let emojiWithString = "Hi, How are you? 😁"
print(emojiWithString.emojiEscapedString)
This will print: Hi, How are you? :grin:
You can send this text to the server now, and when you receive the same string back from the server you can convert it to emoji using the same library:
let str = "Hi, How are you? :grin:"
print(str.emojiUnescapedString)
And this will print:
Hi, How are you? 😁
Hope this helps.
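One caveat (my addition, not part of the answer above): the library maps standard Unicode emoji to shortcodes, so it won't touch NSTextAttachment images like the custom Angry one. If you need those converted to codes like #43567, one possible direction is a sketch along these lines, using a hypothetical EmojiAttachment subclass that carries its own code:

import UIKit

// Hypothetical subclass: stores the server-side code next to the image.
class EmojiAttachment: NSTextAttachment {
    var code = "" // e.g. "#43567"
}

// Walks the attributed string, substituting each EmojiAttachment with its
// code and copying plain text through unchanged.
func encodedString(from attributedText: NSAttributedString) -> String {
    var result = ""
    let fullRange = NSRange(location: 0, length: attributedText.length)
    attributedText.enumerateAttributes(in: fullRange) { attributes, range, _ in
        if let emoji = attributes[.attachment] as? EmojiAttachment {
            result += emoji.code
        } else {
            result += attributedText.attributedSubstring(from: range).string
        }
    }
    return result
}

In actAngry you would then create an EmojiAttachment (setting both image and code) instead of a plain NSTextAttachment, and sendNow would print encodedString(from: textview.attributedText).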
Between the pod spec and what is currently on S.O., I had a tough time figuring out how to get speech-to-text working using SpeechKit + CocoaPods + Swift. I finally got it working, so I figured I'd help the next poor soul who comes looking! :)
First install the CocoaPod: https://cocoapods.org/pods/SpeechKit
Add #import <SpeechKit/SpeechKit.h> to your bridging header
Login to Nuance's dev portal and create an app: https://developer.nuance.com/
I also cleaned up the demo code so that it is more organized. I wanted as much of the code in one place as possible so you can see a fully working implementation.
Then create a UIViewController and add the following code with the correct credentials:
import UIKit
import SpeechKit

class SpeechKitDemo: UIViewController, SKTransactionDelegate {
    override func viewDidLoad() {
        super.viewDidLoad()
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }

    //!! Link this to a corresponding button on the UIViewController in I.B.
    @IBAction func tappedButton(sender: AnyObject) {
        // All fields are required.
        // Your credentials can be found in your Nuance Developers portal, under "Manage My Apps".
        let SKSAppKey = "[Get this from the Nuance app info page]"
        let SKSAppId = "[Get this from the Nuance app info page]"
        let SKSServerHost = "[Get this from the Nuance app info page]"
        let SKSServerPort = "[Get this from the Nuance app info page]"
        let SKSLanguage = "eng-USA"
        let SKSServerUrl = "nmsps://\(SKSAppId)@\(SKSServerHost):\(SKSServerPort)"

        let session = SKSession(URL: NSURL(string: SKSServerUrl), appToken: SKSAppKey)

        // This starts a transaction that listens for voice input.
        let transaction = session.recognizeWithType(SKTransactionSpeechTypeDictation,
                                                    detection: .Short,
                                                    language: SKSLanguage,
                                                    delegate: self)
        print(transaction)
    }

    // Required delegate methods
    func transactionDidBeginRecording(transaction: SKTransaction!) { }
    func transactionDidFinishRecording(transaction: SKTransaction!) { }

    func transaction(transaction: SKTransaction!, didReceiveRecognition recognition: SKRecognition!) {
        // Take the best result
        let topRecognitionText = recognition.text
        print("Best rec text: \(topRecognitionText)")

        // Or iterate through the NBest list
        let nBest = recognition.details
        for phrase in (nBest as! [SKRecognizedPhrase]) {
            let text = phrase.text
            let confidence = phrase.confidence
            print("\(confidence): \(text)")
        }
    }

    func transaction(transaction: SKTransaction!, didReceiveServiceResponse response: [NSObject : AnyObject]!) { }
    func transaction(transaction: SKTransaction!, didFinishWithSuggestion suggestion: String!) { }
    func transaction(transaction: SKTransaction!, didFailWithError error: NSError!, suggestion: String!) { }
}
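One addition that is not in the original answer: recording audio also requires an NSMicrophoneUsageDescription entry in Info.plist on iOS 10+, and it can help to request microphone permission explicitly before starting the first transaction. A minimal sketch:

import AVFoundation

// Ask for microphone access up front so the first SpeechKit transaction
// doesn't silently record nothing.
AVAudioSession.sharedInstance().requestRecordPermission { granted in
    print("Microphone permission granted: \(granted)")
}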