I am making an interactive app for kids that uses human voices for interaction. I need software that can generate audio files of natural-sounding human speech (like TTS), or some other way to produce them. The app will use the audio files for commercial purposes, and I am ready to buy or license the software.
Try iSpeech. It works with mobile apps. It's not free.
NeatSpeech can generate US and UK accents.
You can use TTSEngine.com as a drop-in alternative to Google or other TTS services. They give you thousands of requests for free every month with a default voice.
They also sell custom voices, so if you would like your app to have a unique voice they can record an actor and build a TTS from the actor's voice. They'll give the custom voice exclusively to you for use in your apps.
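If standard synthetic voices are acceptable and you can target iOS 13 or later, Apple's built-in AVSpeechSynthesizer can also render speech straight to an audio file on the device; here is a minimal sketch (the phrase, voice, and output URL are placeholders, and error handling is kept to a minimum):

```swift
import AVFoundation

/// Minimal sketch (iOS 13+): render a phrase to an audio file with the built-in synthesizer.
/// Keep the SpeechFileWriter instance alive until the completion handler fires,
/// otherwise synthesis may be cut short.
final class SpeechFileWriter {
    private let synthesizer = AVSpeechSynthesizer()
    private var audioFile: AVAudioFile?

    func render(_ text: String, to outputURL: URL, completion: @escaping (Error?) -> Void) {
        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = AVSpeechSynthesisVoice(language: "en-US")

        synthesizer.write(utterance) { [weak self] buffer in
            guard let self = self, let pcmBuffer = buffer as? AVAudioPCMBuffer else { return }
            // A zero-length buffer signals that synthesis has finished.
            guard pcmBuffer.frameLength > 0 else {
                completion(nil)
                return
            }
            do {
                if self.audioFile == nil {
                    // Create the file lazily so it matches the synthesizer's output format.
                    self.audioFile = try AVAudioFile(forWriting: outputURL,
                                                     settings: pcmBuffer.format.settings)
                }
                try self.audioFile?.write(from: pcmBuffer)
            } catch {
                completion(error)
            }
        }
    }
}

// Usage (file name is a placeholder):
// let writer = SpeechFileWriter()
// let url = FileManager.default.temporaryDirectory.appendingPathComponent("greeting.caf")
// writer.render("Welcome back!", to: url) { error in print(error ?? "done") }
```

Whether the built-in voices sound natural enough for a kids' app, and whether the license terms cover your commercial use, is something to verify against Apple's current documentation.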
I have a roadside assistance service application. It has some functionality similar to a ride-booking app (e.g. Uber). How far can I leverage iOS 10's SiriKit? Apple may reject it, but I need to know the technical feasibility.
My application's functionality: I am stuck in the middle of the road with a flat tire. I need tow assistance for my car. I give my current location and ask my app to tow the car to my preferred dealer location. I pay for the service and wait for the provider to respond. I receive continuous updates from my service provider about the driver.
First step tried: I am trying to open my app with the statement "Siri, get roadside assistance for my flat tire". I need to open my app and capture FLAT TIRE as a parameter, but I couldn't.
I tried using AppIntentVocabulary.plist, but it did not work. I am missing something, and there are no complete tutorials on the internet. Any help is much appreciated.
Sample Project:
GitHub link to my simple Siri integration:
https://github.com/vivinjeganathan/SiriExample
This documentation contains preliminary information about an API or technology in development. This information is subject to change, and software implemented according to this documentation should be tested with final operating system software.
SiriKit is a way for you to make your content available through Siri. It also lets you add support for your services to the Maps app. To support SiriKit, you use the Intents framework and Intents UI framework to implement one or more extensions that you then include inside your iOS app. When the user requests specific types of services through Siri or Maps, the system uses your extensions to provide those services.
Add SiriKit support only if your app implements one of the following types of services:
Audio or video calling
Messaging
Payments
Searching photos
Workouts
Ride booking
Check this Link: http://airflypan.com/foundation-course/233
You really can't do this in any sensible way for the user. Although your app's use case will map to the ride-hailing intent, the vocabulary will not. Siri gives you almost no options to influence the vocabulary she uses to communicate with your user. If only you could teach your users that requesting a tow is actually requesting a ride... :)
Apple has analyzed the domains it supports and the vocabulary used in those domains, and has "taught" them to Siri in every language/culture Siri supports. This makes total sense from Apple's point of view, because you'd be hard-pressed to do this yourself.
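For reference, this is roughly what plugging into the ride-booking domain looks like; a minimal sketch of an Intents extension handling INRequestRideIntent (the class name and response handling are illustrative, and it assumes the user phrases the tow request as a ride request, which is exactly the vocabulary mismatch described above):

```swift
import Intents

// Minimal sketch of an Intents extension entry point for the ride-booking domain.
// The class name and response handling are illustrative; a real handler would call
// the dispatch backend and report ride status updates.
final class IntentHandler: INExtension, INRequestRideIntentHandling {

    override func handler(for intent: INIntent) -> Any? {
        // Route every intent this extension supports to self.
        return self
    }

    func handle(intent: INRequestRideIntent,
                completion: @escaping (INRequestRideIntentResponse) -> Void) {
        // Hand the request off to your own booking service here.
        completion(INRequestRideIntentResponse(code: .success, userActivity: nil))
    }
}
```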
It seems like most people insist that there is no way to play anything but a maximum 30-second audio file associated with a local notification if your app is not open. Does anyone know, then, how the default alarm clock is able to play any song from the music library?
Being an app written by Apple that comes bundled with the OS, it's not subject to the same restrictions that third-party apps are limited by. It's very likely Apple is using a private API.
Apple's own official applications are not bound by the same restrictions that Apple has imposed on third-party applications such as yours. Therefore, it is highly likely that its own apps are exempt from the maximum 30-second sound rule.
Apple might be using its own private APIs for its applications.
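For completeness, here is roughly what a third-party app is limited to; a minimal sketch using the newer UserNotifications framework (the sound file name, identifier, and trigger are placeholders, and it assumes notification permission has already been granted). The documented rule is that a bundled custom sound longer than 30 seconds is ignored and the default sound plays instead:

```swift
import UserNotifications

// Minimal sketch: attach a bundled custom sound to a local notification.
// If the sound file is longer than 30 seconds, the system plays the default
// sound instead. File name, identifier, and trigger values are placeholders.
func scheduleAlarmNotification() {
    let content = UNMutableNotificationContent()
    content.title = "Wake up"
    content.sound = UNNotificationSound(named: UNNotificationSoundName("alarm.caf"))

    // Fire once, 60 seconds from now.
    let trigger = UNTimeIntervalNotificationTrigger(timeInterval: 60, repeats: false)
    let request = UNNotificationRequest(identifier: "alarm", content: content, trigger: trigger)

    UNUserNotificationCenter.current().add(request) { error in
        if let error = error {
            print("Failed to schedule notification: \(error)")
        }
    }
}
```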
I am about to build an app that initially displays thumbnails of high-quality videos. When users tap a thumbnail, they will go through iOS's in-app purchase system to pay for the video, and once that is complete, the video will open and start playing in QuickTime (the iPhone's native video player).
Can you please suggest where I should host my videos? Does Apple provide video hosting as well, or is there a simple-to-use tool that allows this? I am looking for a service that will let me upload or delete high-quality videos whenever needed, so that non-technical people can administer it too. Then I can easily link those videos into my app.
Thanks in advance
It depends on the format. If it's just progressive-download MP4, you could contract with any of the hosting companies; they start at around $5 to $100 monthly, depending on what you need. The higher-priced ones offer dedicated servers that can run .NET or PHP. You could, for instance, take a service that hosts at $5 a month and write a simple PHP app that handles the security, or get a pre-built one.
If you want to do real streaming using HLS, then you need a server that can support it. One option, though a bit expensive, is Wowza; prices vary, but it is usually not cheap.
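On the app side, linking a hosted MP4 or HLS URL into playback is straightforward; a minimal sketch using AVKit (the URL is a placeholder, and AVPlayerViewController is the modern counterpart of the "QuickTime" player mentioned in the question):

```swift
import UIKit
import AVKit
import AVFoundation

// Minimal sketch: play a remotely hosted MP4 file or HLS stream after purchase.
// The URL is a placeholder for wherever the videos end up being hosted.
func playPurchasedVideo(from presenter: UIViewController) {
    guard let url = URL(string: "https://example.com/videos/episode1.m3u8") else { return }

    let player = AVPlayer(url: url)
    let playerController = AVPlayerViewController()
    playerController.player = player

    presenter.present(playerController, animated: true) {
        player.play()
    }
}
```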
Is it possible that Apple does or will provide an API for Siri? It would be great if I could be sipping my coffee and say:
User: Hey Siri, could you please open Angry Birds, Level 4, and throw the first bird for me. Make sure you at least hit one green pig or it's coming out of your paycheck.
Siri: Yes sure, I will do that for you.
Is this possible? And do you think Apple will provide this to us?
THIS IS NO LONGER ACCURATE:
There is no API and there is no indication of it changing anytime soon. There are private headers that you can look at by decompiling the SDK. This is a great synopsis:
Quora
You can be clever like RTM (Remember The Milk), though; this is as close as it gets:
http://www.rememberthemilk.com/services/siri/
In iOS 10, Apple announced an API for Siri called SiriKit. However, you can only use it through an app extension, and only if your app implements one of the following types of services:
Audio or video calling
Messaging
Payments
Searching photos
Workouts
Ride booking
Climate and radio
SiriKit is a way for you to make your content available through Siri. It also lets you add support for your services to the Maps app. To support SiriKit, you use the Intents framework and Intents UI framework to implement one or more extensions that you then include inside your iOS app. When the user requests specific types of services through Siri or Maps, the system uses your extensions to provide those services.
This means SiriKit cannot be used for the scenario mentioned in the question and in ways that many of us would like.
Source: Apple Docs for SiriKit
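To make the extension point concrete, here is a minimal sketch of an Intents extension for one of the supported domains (messaging); the class name and the resolution/handling logic are illustrative:

```swift
import Intents

// Minimal sketch of a SiriKit Intents extension for a supported domain (messaging).
// Siri owns the conversation; the extension only resolves, confirms, and handles
// the intent. Class name and response handling are illustrative.
final class MessagingIntentHandler: INExtension, INSendMessageIntentHandling {

    override func handler(for intent: INIntent) -> Any? {
        return self
    }

    // Siri calls this so the extension can validate the message text before handling.
    func resolveContent(for intent: INSendMessageIntent,
                        with completion: @escaping (INStringResolutionResult) -> Void) {
        if let text = intent.content, !text.isEmpty {
            completion(.success(with: text))
        } else {
            completion(.needsValue())
        }
    }

    func handle(intent: INSendMessageIntent,
                completion: @escaping (INSendMessageIntentResponse) -> Void) {
        // Hand the message off to the app's own messaging backend here.
        completion(INSendMessageIntentResponse(code: .success, userActivity: nil))
    }
}
```

Anything outside these domains, like launching Angry Birds and playing a level, has no corresponding intent and therefore no way in.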
When the iPhone was first released, there was absolutely no public talk from Apple about custom app development. The delayed release of the SDK gave them plenty of time to get public feedback on the iPhone user experience and make the SDK ready for public use.
It seems likely that they're taking a similar approach with Siri.
Not yet. If you want it, file a feature request at bugreport.apple.com, and briefly describe what you want it for. The more people ask for it, the more likely it is to happen.
I would like to build a simple reader app for the iPad 2 that would allow users to navigate/read via voice controls. The app would allow the user to enter a mode where the microphone was live and listened for predefined keywords like 'down', 'up', 'next', 'back', 'home', etc.
I don't want to reinvent the wheel on this, so I'm wondering, first, whether someone has done this already, and if not, whether there are any good tutorials or SDKs available to help with recording someone's voice and then comparing future input to see if it matches, or with dealing with the microphone in general.
Let's put aside, for the moment, that this is a fairly vaguely worded question.
If you are expecting to allow voice control in your app that somehow works throughout the entire device, it's just not possible. Your app would only work to control itself -- or at least itself and whatever external hooks you can normally get to the rest of the device, like, say, playing a song out of the user's iTunes library.
If you're planning on doing this in a jailbroken environment, then you should find some open-source library that does voice recognition -- if there are any -- and start from there. Be prepared for a very long haul, though.
Dragon Mobile SDK is what you're looking for.
http://dragonmobile.nuancemobiledeveloper.com/
There may be other voice recognition SDKs out there, but this is the only one I can think of off the top of my head.
You can look at a library called CMU Sphinx. There's an iPhone port of it called PocketSphinx. See if it fits your needs.
I would like to build a simple reader app for the iPad 2 that would allow users to navigate/read via voice controls.
The new iOS 13 feature Voice Control fully meets your request, because you can control your device and your app with your voice exactly as you would with touch.
It's also possible, for instance, to define actions for specific words.
The device settings for this feature are well detailed under Accessibility > Voice Control.
If you need elements in your app to respond to dedicated spoken names, use the accessibilityUserInputLabels property to define them.
That's definitely the built-in tool you need to reach your goal: no need for an external library or SDK; everything is provided natively. ;o)
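As a minimal sketch of the property mentioned above (the button and the label strings are placeholders):

```swift
import UIKit

// Minimal sketch (iOS 13+): give a control dedicated spoken names for Voice Control.
// With Voice Control enabled, saying "Tap Next" or "Tap Forward" activates the button.
// The button and the label strings are placeholders.
let nextPageButton = UIButton(type: .system)
nextPageButton.setTitle(">", for: .normal)
nextPageButton.accessibilityUserInputLabels = ["Next", "Forward", "Next page"]
```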