I'm using Amazon Polly for the text-to-speech functionality of my app. I have succeeded in developing it except for one thing: I can't get the currently spoken word so I can highlight it on the screen. I know that in AVSpeechSynthesizerDelegate we have the method speechSynthesizer(_:willSpeakRangeOfSpeechString:utterance:) for this. Is there any similar function in Amazon Polly?
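For reference, a minimal sketch of how that AVSpeechSynthesizerDelegate callback can drive highlighting; the view controller, text view outlet, and highlight color are placeholders, not part of any Polly API:

import UIKit
import AVFoundation

class SpeechViewController: UIViewController, AVSpeechSynthesizerDelegate {
    @IBOutlet var textView: UITextView!          // placeholder outlet showing the spoken text
    let synthesizer = AVSpeechSynthesizer()

    func speak(_ text: String) {
        synthesizer.delegate = self
        synthesizer.speak(AVSpeechUtterance(string: text))
    }

    // Called just before each word is spoken; characterRange indexes into utterance.speechString.
    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer,
                           willSpeakRangeOfSpeechString characterRange: NSRange,
                           utterance: AVSpeechUtterance) {
        let highlighted = NSMutableAttributedString(string: utterance.speechString)
        highlighted.addAttribute(.backgroundColor, value: UIColor.yellow, range: characterRange)
        textView.attributedText = highlighted
    }
}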
Thanks
I've recently integrated Walmart's API into my iOS app. The app's purpose is for the user to search for items using Walmart's API and then display the queried items to the user. However, I am having a hard time finding any examples of how to use Walmart's API with Xcode and Swift. Could anyone point me in the direction of how to properly use the API from Swift, or does anyone have experience using the Walmart API in Xcode? I feel like it shouldn't be hard to accomplish what I am trying to do, but right now I am a bit lost. All help and advice is appreciated in advance! Thanks.
Whenever an API is created, the author publishes documentation covering its usage, request parameters, etc.
So, go through the documentation of the API you need.
Once the requests are made in-app, the responses generally come back as JSON, XML, etc.
Using this incoming data, you can populate the data in your app.
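As a rough Swift sketch of that request-then-decode flow; the host, path, parameter names, and response fields below are hypothetical placeholders, so substitute the ones from Walmart's API documentation:

import Foundation

// Hypothetical shape of one item in the search response; match it to the real JSON.
struct SearchItem: Decodable {
    let name: String
    let salePrice: Double
}

struct SearchResponse: Decodable {
    let items: [SearchItem]
}

// Placeholder host, path, and parameter names; the real ones come from the API docs.
func searchItems(query: String, apiKey: String,
                 completion: @escaping ([SearchItem]) -> Void) {
    var components = URLComponents(string: "https://api.example.com/v1/search")!
    components.queryItems = [
        URLQueryItem(name: "query", value: query),
        URLQueryItem(name: "apiKey", value: apiKey)
    ]
    URLSession.shared.dataTask(with: components.url!) { data, _, error in
        guard let data = data, error == nil,
              let response = try? JSONDecoder().decode(SearchResponse.self, from: data)
        else {
            completion([])
            return
        }
        completion(response.items)   // e.g. reload a table view with these items
    }.resume()
}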
I'm currently following the hotword example and have created custom commands like turning the screen on/off. How do I disable the voice response "Sorry, I can't help you"?
There are multiple ways to do it. Follow this link for details: google assistant
1 - If you're using this method of project creation, run it, and then you can parse the request/query in event.args['text'], based on which you can perform the activity locally without sending it to Google Assistant. Problem: Google will respond with some voice message in parallel.
2 - Use IFTTT; it's pretty simple to work with. Basic use with webhooks takes a little time, though. This link is useful, and use ngrok for a local webhook URL.
3 - Use API.AI. This is for advanced projects where you depend on Google to assist with question recognition and respond with your answers from webhooks. It's not straightforward to work with; the details and tutorials that are given use Google Cloud Functions, which only works with Node.js as of now. If you're a Python programmer (or use another language), Google has examples on GitHub, which are again not straightforward, I guess.
I'm trying to enable translating some text in my app. I want the user to be able to launch whichever translation tool they use on their device (Google Translate or iTranslate) and see the translation without having to type it. For this, I'm using the URL schemes:
googletranslate://
itranslate://
Now, I need to pass the query to those apps. I know how to do this for iTranslate:
itranslate://translate?from=auto&to=en&text=<encoded_string>
This is cool; now I would like to know how to do the same for Google Translate. It needs to automatically detect the language and translate it to English.
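For iTranslate, a minimal Swift snippet along those lines; the scheme and parameters are just the ones shown above, not from any official documentation:

import UIKit

// The itranslate scheme must be listed under LSApplicationQueriesSchemes in
// Info.plist for canOpenURL(_:) to succeed.
func openInITranslate(_ text: String) {
    guard let encoded = text.addingPercentEncoding(withAllowedCharacters: .urlQueryAllowed),
          let url = URL(string: "itranslate://translate?from=auto&to=en&text=\(encoded)"),
          UIApplication.shared.canOpenURL(url) else { return }
    UIApplication.shared.open(url)
}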
It is not currently possible to prefill UI elements in the Google Translate iOS application when opening it from googletranslate:// URLs. The contents of the URL after googletranslate:// appear to be completely ignored. So the most you can get from using these links at the moment is opening the iOS application.
If this does get implemented at some point in the future, one can test by opening a link like googletranslate://example%20text/?param=value&from=zh_TW. I would strongly recommend that you let your voice be heard on the Google Translate product forum by requesting this feature.
In the meantime, you may want to consider using the Translation API to provide translations within your application. This can be done through its REST API.
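A minimal sketch of such a REST call against the Cloud Translation v2 endpoint, assuming an API key; check the current Translation API documentation for the exact endpoint, parameters, and response shape:

import Foundation

struct TranslateResponse: Decodable {
    struct DataField: Decodable {
        struct Translation: Decodable { let translatedText: String }
        let translations: [Translation]
    }
    let data: DataField
}

// Sends text to the Cloud Translation v2 REST endpoint and returns the first translation.
func translate(_ text: String, to target: String, apiKey: String,
               completion: @escaping (String?) -> Void) {
    var components = URLComponents(string: "https://translation.googleapis.com/language/translate/v2")!
    components.queryItems = [
        URLQueryItem(name: "q", value: text),
        URLQueryItem(name: "target", value: target),
        URLQueryItem(name: "key", value: apiKey)
    ]
    URLSession.shared.dataTask(with: components.url!) { data, _, _ in
        let result = data.flatMap { try? JSONDecoder().decode(TranslateResponse.self, from: $0) }
        completion(result?.data.translations.first?.translatedText)
    }.resume()
}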
If anyone is still looking for the answer, you can use it like this:
googletranslate://?sl=en&tl=tr&text=hello%20world
You can change the parameters:
sl = source language
tl = target (translation) language
text = the text you want to translate
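A small Swift helper that assembles and opens that link; the sl/tl/text parameter names come from the example above, not from any official Google documentation:

import UIKit

// Builds googletranslate://?sl=...&tl=...&text=... and opens the Google Translate app.
func openInGoogleTranslate(_ text: String, from source: String, to target: String) {
    var components = URLComponents(string: "googletranslate://")!
    components.queryItems = [
        URLQueryItem(name: "sl", value: source),
        URLQueryItem(name: "tl", value: target),
        URLQueryItem(name: "text", value: text)
    ]
    if let url = components.url { UIApplication.shared.open(url) }
}

// Example: openInGoogleTranslate("hello world", from: "en", to: "tr")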
How do you implement suggestions when a user starts typing their city/location into an iOS app text field? For example, if you look at the Yelp or Maps app, when you start typing a city it will provide suggestions based on what you've typed so far. Is there a way to do this, like somehow getting a list of all the CLRegions in the CoreLocation framework, or some other way of implementing it?
Thanks for any help.
The best solution for this is to use someone else's (web-based) geocoding database; geocoding is a complex problem that you don't want to solve yourself :-). Geocoding is the act of taking a textual address and turning it into a latitude/longitude. Typeahead for geocoding is a helpful feature, but it requires fairly large databases that likely aren't suitable for mobile apps, hence my recommendation to use one of the web-based geocoding services for this.
Google's Places API is probably a good candidate:
https://developers.google.com/places/documentation/autocomplete?csw=1
Check out this answer for some commentary from Google from 2011, when they first opened up that API:
Google Places API in iOS application
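As a rough Swift sketch of calling the Places Autocomplete web service restricted to cities; the exact parameters, quota rules, and response fields are covered in the documentation linked above, and the API key and completion wiring here are just placeholders:

import Foundation

// Pared-down response model; the real payload contains more fields per prediction.
struct AutocompleteResponse: Decodable {
    struct Prediction: Decodable { let description: String }
    let predictions: [Prediction]
}

func citySuggestions(for input: String, apiKey: String,
                     completion: @escaping ([String]) -> Void) {
    var components = URLComponents(string: "https://maps.googleapis.com/maps/api/place/autocomplete/json")!
    components.queryItems = [
        URLQueryItem(name: "input", value: input),
        URLQueryItem(name: "types", value: "(cities)"),
        URLQueryItem(name: "key", value: apiKey)
    ]
    URLSession.shared.dataTask(with: components.url!) { data, _, _ in
        let suggestions = (try? JSONDecoder().decode(AutocompleteResponse.self, from: data ?? Data()))?
            .predictions.map { $0.description } ?? []
        completion(suggestions)   // feed these into the text field's suggestion list
    }.resume()
}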
I'm currently working on an iOS app using the SoundCloud API, and it's working great so far. Something that I haven't been able to figure out, however, is how to construct a URL to get only the tracks a certain user has pinned as spotlight tracks.
For example, let's say I'm using the following URL:
http://api.soundcloud.com/users/username/tracks.json?client_id=mySecretId
I've carefully gone through the API documentation at developers.soundcloud.com/docs/api/, but it feels like there's a subresource I'm missing.
I'm aware that I can use the created_at filter to show the most recently added tracks, but if I'm understanding the platform correctly, a spotlight track does not necessarily have to be one of the most recently added?
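For reference, a minimal sketch of fetching that public tracks resource; Track is a pared-down placeholder with just a title field, and I can't find anything in this response that marks a track as spotlighted:

import Foundation

// Fetches the public tracks list for a user and decodes only the title of each track.
struct Track: Decodable { let title: String }

func fetchTracks(for username: String, clientId: String,
                 completion: @escaping ([Track]) -> Void) {
    let url = URL(string: "http://api.soundcloud.com/users/\(username)/tracks.json?client_id=\(clientId)")!
    URLSession.shared.dataTask(with: url) { data, _, _ in
        completion((try? JSONDecoder().decode([Track].self, from: data ?? Data())) ?? [])
    }.resume()
}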
I would be ever so grateful for feedback on this subject!
Cheers
/Anders
Spotlight is not available via the public API, sorry.