Is there any API to use TTS (Text-To-Speech) in BlackBerry? This could be on any version of the OS.
This is what RIM has to say about the Text-To-Speech API:
The Text-To-Speech API in the BlackBerry® Java® Development Environment permits a developer to create a BlackBerry device
application that converts information into audio output. The Text-To-Speech API uses the JSR 113 specification (also known as
the Java® Speech API 2.0 specification) to support a speech synthesizer.
The developer can use the Text-To-Speech API and the Accessibility API to create a screen reader application. The Accessibility
API retrieves information from device applications and sends the information to the Text-To-Speech API. The Text-To-Speech
API can use the information that the Accessibility API provides to create audio output. For example, the Oratio™ for BlackBerry®
smartphones application uses the Accessibility API and the Text-To-Speech API to convert information into audio output for
users who are blind or visually impaired.
Currently, RIM limits the use of the Text-To-Speech API.
For more information about the JSR 113 specification, visit http://jcp.org/en/jsr/detail?id=113.
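For orientation, here is a minimal JSAPI 2.0 (JSR 113) synthesizer sketch in plain Java. It assumes a JSR 113 implementation is actually available to your application (which, per the note above, RIM restricts), and it uses the standard JSAPI 2.0 entry points rather than anything BlackBerry-specific; on a device this would live inside your application's own lifecycle rather than a main method:

    import javax.speech.Engine;
    import javax.speech.EngineManager;
    import javax.speech.synthesis.Synthesizer;
    import javax.speech.synthesis.SynthesizerMode;

    public class SpeakDemo {
        public static void main(String[] args) throws Exception {
            // Ask the platform for a default speech synthesizer (JSR 113 / JSAPI 2.0).
            Synthesizer synth =
                (Synthesizer) EngineManager.createEngine(SynthesizerMode.DEFAULT);

            // Acquire engine resources and wait until the engine is ready.
            synth.allocate();
            synth.waitEngineState(Engine.ALLOCATED);
            synth.resume();

            // Queue a string for audio output; null means no SpeakableListener.
            synth.speak("Hello from the Text-To-Speech API", null);

            // Wait for the utterance to finish, then release the engine.
            synth.waitEngineState(Synthesizer.QUEUE_EMPTY);
            synth.deallocate();
        }
    }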
There are other APIs you can try:
http://www.ispeech.org/text.to.speech.tts.saas.api
Yes, there is.
Go to the link below:
Speech-Enable Your BlackBerry
and follow these steps:
First, sign up for a developer's account and get the key; it is FREE.
This gives you two keys: one for the device and another for the simulator.
When you test on the simulator, you must supply the simulator key.
When you sign the application for a device, replace the simulator key with the device key.
Click on Sample Application on that page.
Click on iSpeech BlackBerry Demo to download the sample demo. Extract the zip file and you will get a library file. (If you want to build your own application, add this library to it.)
NOTE: If you do not get the key, the sample demo provided at that link will not work.
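For orientation, here is a plain-Java sketch of calling iSpeech's HTTP API directly with that key, which is what the library file wraps for you. The endpoint and parameter names (action=convert, voice, format) are taken from iSpeech's public REST documentation as I remember it, so treat them as assumptions and verify against the docs; on an actual BlackBerry you would open the connection with Java ME's Connector.open rather than java.net.URL:

    import java.io.FileOutputStream;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.URL;
    import java.net.URLEncoder;

    public class ISpeechTtsDemo {
        public static void main(String[] args) throws Exception {
            String apiKey = "YOUR_DEVELOPER_KEY"; // the free key from the signup step above
            String text = URLEncoder.encode("Hello from iSpeech", "UTF-8");

            // Parameter names per iSpeech's REST docs; double-check them.
            URL url = new URL("http://api.ispeech.org/api/rest"
                    + "?apikey=" + apiKey
                    + "&action=convert"
                    + "&voice=usenglishfemale"
                    + "&format=mp3"
                    + "&text=" + text);

            // Download the synthesized audio and save it to a file.
            try (InputStream in = url.openStream();
                 OutputStream out = new FileOutputStream("speech.mp3")) {
                byte[] buf = new byte[4096];
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);
                }
            }
        }
    }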
I would like to integrate Google Assistant into my app. The idea is that I have an app which provides various press services, like delivering the latest news. I would like to integrate Google Assistant to handle some particular requests. For example, the user may ask, "What did the Lakers do yesterday?" If I search this on Google or ask the Assistant, I get a card with the score of yesterday's game. I would like, from inside my app, to replicate this interaction, that is, send the request to Google Assistant and show the user the answer that Google returns (or at least open Google Assistant with the answer).
Is such a thing possible?
I was looking at the Google Assistant Service SDK (https://developers.google.com/assistant/sdk/guides/service/python/) and it says:
The Google Assistant Service gives you full control over the integration with the Assistant by providing a streaming endpoint. Stream a user audio query to this endpoint to receive a Google Assistant audio response.
Is this possible only with audio interaction? I'm not certain this is the solution I should be looking into.
The Google Assistant SDK Service allows you to send either audio or text to the Assistant, and you'll get back responses including audio, display text, and rich HTML visual content.
For mobile apps, there's less support compared to Python, but it's still doable. For example, there's a version of the SDK for Android Things, which targets IoT devices like the Raspberry Pi. You could go through that project and remove all the IoT references, but it's something you'd need to do yourself.
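If you want to experiment with the text path, here is a hypothetical Java sketch against the google.assistant.embedded.v1alpha2 gRPC surface, which is the same service the Python and Android Things samples talk to. The class names come from the Java bindings generated from the published protos, the OAuth2 credential setup is elided, and you should verify all of the details against the SDK reference:

    import com.google.assistant.embedded.v1alpha2.*;
    import io.grpc.ManagedChannel;
    import io.grpc.ManagedChannelBuilder;
    import io.grpc.stub.StreamObserver;

    public class AssistantTextQuery {
        public static void main(String[] args) {
            // OAuth2 credentials for your Assistant SDK project must be
            // attached to this channel (omitted; see the SDK's auth guide).
            ManagedChannel channel = ManagedChannelBuilder
                    .forAddress("embeddedassistant.googleapis.com", 443)
                    .build();
            EmbeddedAssistantGrpc.EmbeddedAssistantStub stub =
                    EmbeddedAssistantGrpc.newStub(channel);

            AssistRequest request = AssistRequest.newBuilder()
                    .setConfig(AssistConfig.newBuilder()
                            .setTextQuery("What did the Lakers do yesterday?")
                            .setDialogStateIn(DialogStateIn.newBuilder()
                                    .setLanguageCode("en-US"))
                            .setAudioOutConfig(AudioOutConfig.newBuilder()
                                    .setEncoding(AudioOutConfig.Encoding.MP3)
                                    .setSampleRateHertz(16000)))
                    .build();

            // Assist is a bidirectional stream: send one request and print
            // the display text that comes back in the responses.
            StreamObserver<AssistRequest> sink = stub.assist(
                    new StreamObserver<AssistResponse>() {
                        public void onNext(AssistResponse r) {
                            String text = r.getDialogStateOut()
                                           .getSupplementalDisplayText();
                            if (!text.isEmpty()) System.out.println(text);
                        }
                        public void onError(Throwable t) { t.printStackTrace(); }
                        public void onCompleted() { channel.shutdown(); }
                    });
            sink.onNext(request);
            sink.onCompleted();
        }
    }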
We are creating an iOS application on Bluemix and we are trying to link the Speech to Text service. We've bound the service to the application, but now we don't know how to utilize the service within our app.
How do we use the Speech to Text API in our iOS app with our back end hosted on Bluemix?
You have two options:
You make the call to the Watson Speech to Text service directly from your iOS application. You can either invoke the REST API directly from your iOS app using something like RestKit, or you can use the Watson Speech iOS SDK to make that invocation easier.
You can send all the received audio to your app on Bluemix (serving as a mobile back end) and invoke the Speech to Text REST API from there. This will offload computation from the mobile device, but will most likely increase the latency of getting back the audio transcription to your mobile phone.
Additionally, there is now a Watson iOS SDK which includes the Speech to Text service. This seems preferable to using the REST API directly if you plan to do a lot of work with Watson.
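To make the second option concrete, here is a minimal Java sketch of invoking the Speech to Text REST endpoint from a backend. It assumes the Bluemix-era Basic-auth credentials (the username and password from your bound service's VCAP_SERVICES) and a WAV file; newer IBM Cloud instances use IAM API keys and per-instance hosts, so adjust accordingly:

    import java.io.ByteArrayOutputStream;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.Base64;

    public class WatsonSttDemo {
        public static void main(String[] args) throws Exception {
            String username = "YOUR_STT_USERNAME"; // from VCAP_SERVICES
            String password = "YOUR_STT_PASSWORD";
            byte[] audio = Files.readAllBytes(Paths.get("sample.wav"));

            URL url = new URL(
                "https://stream.watsonplatform.net/speech-to-text/api/v1/recognize");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "audio/wav");
            conn.setRequestProperty("Authorization", "Basic "
                + Base64.getEncoder().encodeToString(
                    (username + ":" + password).getBytes("UTF-8")));

            // Send the audio body, then read back the JSON transcription.
            try (OutputStream out = conn.getOutputStream()) {
                out.write(audio);
            }
            try (InputStream in = conn.getInputStream()) {
                ByteArrayOutputStream buf = new ByteArrayOutputStream();
                byte[] chunk = new byte[4096];
                int n;
                while ((n = in.read(chunk)) != -1) buf.write(chunk, 0, n);
                System.out.println(buf.toString("UTF-8"));
            }
        }
    }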
I apologise if the title of my question causes any confusion; I will explain my purpose in detail.
We are currently developing our own WiFi speaker, which is built on a MIPS architecture. The speaker comes with an app that is used to manage it. One of the features we would like to include in the app is accessing Spotify content and playing it on the speaker.
Unfortunately, after going through the iOS SDK documentation and running some tests on Spotify's official Web API Console, I noticed that Spotify does not allow developers to directly get the URL of a song, except for preview purposes. I also wasn't able to find any way to get the raw bytes of the music streamed from the server. Every piece of content comes with a corresponding URI which is used for requests.
For the device (WiFi speaker) part, we recently contacted Spotify to ask for an SDK that could be used for development. However, one problem is that Spotify told us they have SDKs for the x86 and ARM architectures only; they don't have one for MIPS.
Now, here are my questions:
Is there any way for me to push music from the app to the WiFi speaker without having to use an SDK on the device side?
If Spotify can provide an SDK for our device, then how can we integrate the SDK with our platform?
I'll explain my second question for clarity. Android and iOS, for instance, are popular platforms widely used by mobile devices, so when Spotify provides SDKs for those two operating systems, apps can use the default system frameworks to play the content (on iOS, the AVFoundation framework). However, if Spotify were able to provide the SDK that we need, how would we be able to integrate it with our own platform?
I will answer your first question:
You should be able to push music from an app using a buffer that you read with Core Audio and then forward to a device of your choice. I think what you are looking for can be found in CocoaLibSpotify.
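CocoaLibSpotify itself is Objective-C, but the buffer-forwarding idea is simple. Below is a hypothetical Java sketch of the pattern only: decoded PCM arrives from the library's audio-delivery callback (modeled here as an InputStream) and is pushed over a TCP socket to the speaker. None of these names belong to any Spotify SDK:

    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.Socket;

    public class PcmForwarder {
        // Forwards raw PCM frames from a decoder's output (modeled as an
        // InputStream) to a networked speaker. In a real integration the
        // bytes would come from the library's audio-delivery callback.
        public static void forward(InputStream pcm, String speakerHost, int port)
                throws Exception {
            try (Socket socket = new Socket(speakerHost, port);
                 OutputStream out = socket.getOutputStream()) {
                byte[] frame = new byte[4096];
                int n;
                while ((n = pcm.read(frame)) != -1) {
                    out.write(frame, 0, n); // push each chunk to the speaker
                }
                out.flush();
            }
        }
    }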
I see that it is possible to connect smart card readers to an iPad or iPhone.
Does iOS have an API for accessing smart cards, or does it require proprietary SDKs from the smart card reader manufacturers?
Specifically I want to use the certificate stored on the smart card to sign a message.
Yes, it requires a proprietary SDK from the reader manufacturer.
For example, you can easily use:
UniMag, Mobile MagStripe Reader: "a reader that works with various mobile devices ... plugs into the audio headphone jack, no cables."
The SDK is on this page: http://www.idtechproducts.com/products/mobile-readers/112.html
Another reader with its own SDK: http://www.magtek.com/support/software/programming_tools/
I wonder whether there is an open Android API for YouTube TV pairing.
As far as I know, we can control YouTube on a TV or in a browser (www.youtube.com/tv) using the YouTube mobile app or the "YouTube Remote" app from the Play Store.
But to do this, we first have to pair YouTube on the TV (or in the browser) with the mobile device.
I tried to find an API related to pairing, or a YouTube pairing application, but I couldn't.
Please let me know whether it is possible to use a YouTube pairing API on Android,
e.g., an API to generate a pairing code.
If you're looking to build general "pairing" functionality, this can be done using something called "OAuth for Devices". You could add videos to a playlist which is being autocycled in a browser. A demo of this functionality is here, with source code for the sample app in Python here.
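As a starting point, here is a hedged Java sketch of step one of that device flow: requesting a user_code/device_code pair that the viewer confirms on another screen. The endpoint and parameter names follow Google's documentation for OAuth on TVs and limited-input devices as I recall it, so verify them against the current docs:

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.net.URLEncoder;
    import java.util.Scanner;

    public class DeviceFlowDemo {
        public static void main(String[] args) throws Exception {
            String clientId = "YOUR_CLIENT_ID";
            String body = "client_id=" + URLEncoder.encode(clientId, "UTF-8")
                    + "&scope=" + URLEncoder.encode(
                            "https://www.googleapis.com/auth/youtube", "UTF-8");

            // Step 1: ask Google for a device_code plus a short user_code.
            HttpURLConnection conn = (HttpURLConnection) new URL(
                    "https://oauth2.googleapis.com/device/code").openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type",
                    "application/x-www-form-urlencoded");
            try (OutputStream out = conn.getOutputStream()) {
                out.write(body.getBytes("UTF-8"));
            }

            // The JSON response carries user_code, verification_url, and the
            // device_code the app later polls the token endpoint with.
            try (Scanner s = new Scanner(conn.getInputStream(), "UTF-8")) {
                s.useDelimiter("\\A");
                System.out.println(s.hasNext() ? s.next() : "");
            }
        }
    }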