I'm using speech recognition in my app. It's quite important for the user experience, so I want it to be good (and free or cheap).
Right now I'm using Speech Kit from Apple, and it works like a charm, but it's not entirely reliable because there are per-app and per-device limits, and I don't know what those limits are.
Another option is to use OpenEars. It's nowhere near as good as Speech Kit for my purposes, so I'm thinking about silently switching from Speech Kit to OpenEars when Speech Kit is not working (and back when Speech Kit is alive and well).
But is there a way to know that Speech Kit is not working right now before ever using it?
The only way I know of is to try to recognise some audio file before every user session, but that takes time (at least several seconds, and several seconds is a lot), and it's not a very good solution in terms of service usage: it seems too costly to recognise audio just to check whether Speech Kit is working. Also, I don't know how to debug this, because I'm obviously not hitting any limits in my app right now.
What is the best way to solve this?
I also thought about this question not long ago. Here's an answer from the Apple Q & A: "The current rate limit for the number of SFSpeechRecognitionRequest calls a device can make is 1000 requests per hour." There is also an example of the error you receive when the limit is reached, so you can prepare yourself for that :)
Here's the link: Apple Q & A
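For example, here is a minimal sketch of how you might catch that error and fall back to another engine. It assumes Apple's Speech framework on iOS 10+ and that speech authorization has already been granted; the fallback closure is a hypothetical placeholder for an OpenEars path, and since the rate-limit error code isn't documented, any recognition error is simply treated as "Speech Kit is not working right now":

```swift
import Speech

// Sketch of a fallback wrapper: try Apple's Speech framework first, and hand the
// audio file to another engine (e.g. OpenEars) if recognition fails.
// Assumes SFSpeechRecognizer.requestAuthorization has already been granted.
func recognize(fileURL: URL,
               fallback: @escaping (URL) -> Void,      // hypothetical OpenEars path
               completion: @escaping (String) -> Void) {
    guard let recognizer = SFSpeechRecognizer(), recognizer.isAvailable else {
        fallback(fileURL)   // recognizer missing or reporting itself unavailable
        return
    }

    let request = SFSpeechURLRecognitionRequest(url: fileURL)
    recognizer.recognitionTask(with: request) { result, error in
        if let error = error {
            // The rate-limit failure surfaces as an error here; without a
            // documented error code, any failure is treated as "not working".
            print("Speech recognition failed: \(error)")
            fallback(fileURL)
            return
        }
        if let result = result, result.isFinal {
            completion(result.bestTranscription.formattedString)
        }
    }
}
```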
I'm currently looking for a way to track user activity. I'm working on an iOS app in Swift and I need statistics about app usage. Basically, I want to track the apps that are used: which apps were opened, their start times, and their shutdown times. I know that collecting all of these stats probably requires running a background service, but that's a separate problem I plan to solve later. For now I want to know whether it's possible at all, i.e. whether there is some way to get usage stats for other apps. I know that the UIApplication class calls the UIApplicationMain function when an app is launched. Maybe there is a way to access this information from my app? Thanks; I've been reading for a long time, but I really can't see a clear option.
If (as David has interpreted your question in the comments) you are trying to track usage of other apps that aren't yours, he's right; you can only track your own app's usage.
If you need to track events in your own app, there are a good number of analytics frameworks available that do exactly what you're looking for.
Flurry is one I've used in the past with success, and it's one of the better-known solutions. I've also used Google's analytics framework. Both are pretty straightforward to integrate into your app and to track the sort of fine-grained events you're looking to capture. You can't go wrong with either of them.
Here is a (slightly old) list of additional tracking/analytics options beyond Flurry and Google's offerings.
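If it helps, here is a minimal sketch of how such event tracking is usually wired up. The AnalyticsClient protocol and ConsoleAnalytics type are hypothetical placeholders, since each SDK (Flurry, Google Analytics, etc.) has its own logging call that a real adapter would forward to:

```swift
import Foundation

// Hypothetical abstraction over an analytics SDK.
protocol AnalyticsClient {
    func logEvent(_ name: String, parameters: [String: String])
}

// Placeholder implementation that just prints events; a real adapter would
// forward these calls to Flurry, Google Analytics, or whichever SDK you pick.
struct ConsoleAnalytics: AnalyticsClient {
    func logEvent(_ name: String, parameters: [String: String]) {
        print("event: \(name) parameters: \(parameters)")
    }
}

// Example usage from your app's lifecycle hooks:
let analytics: AnalyticsClient = ConsoleAnalytics()
analytics.logEvent("app_launched", parameters: ["source": "cold_start"])
analytics.logEvent("screen_viewed", parameters: ["screen": "settings"])
```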
You can record user sessions, feedback, and bug reports with lookback.io.
I'm a complete noob at Swift (and Xcode); as a matter of fact, the only programming language I (somewhat) know is JavaScript.
I'm trying to make a Swift SpriteKit game, and I would like to access the number of calories burned in HealthKit.
The idea is that my game will provide more points the more calories you burn using other apps like Endomondo. My app does not actually track anything, I would just like to access other data left by other apps in the Health App.
Is this even possible? (I'm running the latest version of everything, from Mac OS X to Xcode)
Certainly. I don't think there is anything technically preventing you from making calls to the HealthKit APIs in your game. In fact, you're fairly free to mix and match the use of any public frameworks provided on iOS.
One thing to keep in mind is privacy and disclosure of the use of health data. The user will have to explicitly grant your app permission to see data.
HealthKit is a really rich API with lots of ways to access many different kinds of data, and right now you're only interested in a small part of it. A quick way to experiment is to create a new Swift SpriteKit game from the new-project template in Xcode, do your research on HealthKit, and see if you can simply log the number of calories burned since some point in time while your app is running. If you can do that, the rest is details (as in, the entire app :-)).
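As a rough sketch of that experiment (read-only access, assuming the HealthKit capability is enabled for your target; the types and queries below are standard HealthKit, with error handling kept minimal):

```swift
import HealthKit

let healthStore = HKHealthStore()

// "Active energy burned" is the HealthKit quantity closest to "calories burned".
if HKHealthStore.isHealthDataAvailable(),
   let energyType = HKQuantityType.quantityType(forIdentifier: .activeEnergyBurned) {

    // Ask the user for read-only access to that one type.
    healthStore.requestAuthorization(toShare: nil, read: [energyType]) { authorized, _ in
        guard authorized else { return }

        // Sum all active energy samples recorded today (written by other apps or devices).
        let start = Calendar.current.startOfDay(for: Date())
        let predicate = HKQuery.predicateForSamples(withStart: start, end: Date(), options: .strictStartDate)
        let query = HKStatisticsQuery(quantityType: energyType,
                                      quantitySamplePredicate: predicate,
                                      options: .cumulativeSum) { _, statistics, _ in
            let kilocalories = statistics?.sumQuantity()?.doubleValue(for: .kilocalorie()) ?? 0
            print("Calories burned today: \(kilocalories) kcal")
        }
        healthStore.execute(query)
    }
}
```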
Here are some links I think might be helpful; good luck with your project!
https://itunes.apple.com/us/book/swift-programming-language/id881256329?mt=11
https://developer.apple.com/library/ios/documentation/HealthKit/Reference/HealthKit_Framework/index.html
You'll also find some good documentation on SpriteKit (references and guides) on the iOS Developer Library site.
I am working on a game for iPhone that is fully usable by providing YES / NO responses.
It would be great to make this game available to blind users, runners, and people driving cars by allowing voice control. This does not require full speech recognition; I am looking to implement keyword spotting.
I can already detect the start and stop of utterances, and have implemented this at https://github.com/fulldecent/FDSoundActivatedRecorder. The next step is to distinguish between YES and NO responses reliably for a wide variety of users.
THE QUESTION: For reasonable performance (distinguish YES / NO / STOP within 0.5 sec after speech stops), is AVAudioRecorder a reasonable choice? Is there a published algorithm that meets these needs?
Your best bet here is OpenEars, a free and open voice recognition platform for iOS.
http://www.politepix.com/openears/
You most likely DO NOT want to get into the algorithmic side of this. It's massive and nasty - there is a reason only a small number of companies do voice recognition from scratch.
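To give a feel for what that looks like in practice, here is a rough sketch of setting up a tiny YES / NO / STOP vocabulary with OpenEars. It follows the OELanguageModelGenerator / OEPocketsphinxController flow from the OpenEars 2.x documentation, but the exact Swift-bridged method names are an assumption on my part, so check the current docs before relying on them:

```swift
// Assumes the OpenEars framework is linked. Class and method names follow the
// OpenEars 2.x docs; the Swift-bridged spellings below are approximate.
let words = ["YES", "NO", "STOP"]
let modelName = "YesNoModel"
let generator = OELanguageModelGenerator()
let acousticModelPath = OEAcousticModel.path(toModel: "AcousticModelEnglish")

// Build a tiny language model and phonetic dictionary for just these keywords.
let error = generator.generateLanguageModel(from: words,
                                            withFilesNamed: modelName,
                                            forAcousticModelAtPath: acousticModelPath)

if error == nil {
    let lmPath = generator.pathToSuccessfullyGeneratedLanguageModel(withRequestedName: modelName)
    let dicPath = generator.pathToSuccessfullyGeneratedDictionary(withRequestedName: modelName)

    // Start listening; hypotheses ("YES", "NO", "STOP") arrive via an
    // OEEventsObserver delegate shortly after each utterance ends.
    OEPocketsphinxController.sharedInstance().startListeningWithLanguageModel(atPath: lmPath,
                                                                              dictionaryAtPath: dicPath,
                                                                              acousticModelAtPath: acousticModelPath,
                                                                              languageModelIsJSGF: false)
}
```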
I've been researching several iOS speech recognition frameworks and have found it hard to accomplish something I would think is pretty straightforward.
I have an app that allows people to record their voices. After a recording is made, they have the option to create a text version.
Looking into the services out there (e.g., Nuance), most require you to use the microphone. OpenEars allows you to do this, but the dictionary is very limited because it is an offline solution (they recommend 300 words or fewer).
There are a few other things going on with the app that would make it very unappealing to switch from the current recording method. For what it is worth, I am using the Amazing Audio Engine framework.
Does anyone have any other suggestions for frameworks? Or is there a way to dig deeper with Nuance to transcribe a recorded file?
Thank you for your time.
For services, there are a few cloud-based hosted speech recognition services you can use. You simply post the audio file to their URL and receive the text back. Most of them don't have any constraint on the vocabulary, and you can of course choose any recording method you like.
See here: Server-side Voice Recognition. Many of them offer a free trial as well.
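To illustrate the post-a-file pattern, here is a minimal Swift sketch using Google's Cloud Speech-to-Text v1 REST endpoint as the example (the endpoint and field names follow Google's documented v1 API, but any of the hosted services works the same way; the API key and audio format are placeholders you would adapt):

```swift
import Foundation

// Minimal sketch: POST a recorded audio file to a hosted recognition service and
// read back the transcript. Uses Google's Cloud Speech-to-Text v1 REST API as the
// example; other hosted services follow the same post-and-parse pattern.
func transcribe(fileURL: URL, apiKey: String) {
    guard let audioData = try? Data(contentsOf: fileURL) else { return }

    // The v1 "recognize" endpoint expects base64-encoded audio plus a config block.
    let body: [String: Any] = [
        "config": [
            "encoding": "LINEAR16",      // adjust to match your recording format
            "sampleRateHertz": 16000,
            "languageCode": "en-US"
        ],
        "audio": ["content": audioData.base64EncodedString()]
    ]

    var request = URLRequest(url: URL(string: "https://speech.googleapis.com/v1/speech:recognize?key=\(apiKey)")!)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try? JSONSerialization.data(withJSONObject: body)

    URLSession.shared.dataTask(with: request) { data, _, _ in
        // The response nests the text as results -> alternatives -> transcript.
        guard let data = data,
              let json = try? JSONSerialization.jsonObject(with: data) as? [String: Any],
              let results = json["results"] as? [[String: Any]],
              let alternatives = results.first?["alternatives"] as? [[String: Any]],
              let transcript = alternatives.first?["transcript"] as? String else { return }
        print("Transcript: \(transcript)")
    }.resume()
}
```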
I have a question about writing a program that can manage playing online games across different codebases. I think the easiest way to explain it is this: I want to build a platform on which PlayStation and Xbox players are allowed to play Modern Warfare 3 online together.
Mathematically it seems possible: in the end you have two different screens that display the same thing. On the platform, Sony and Microsoft players stream their code or screen to the platform and play together. The big problem is that you receive it in two different formats, which you have to translate into one common language in less than 0.001 seconds.
Honestly, I still have to get into this stuff, but I cannot get much further on my own.
Do you have any tips, other forums, or solutions for this problem? Maybe it means writing a new language? (Google technically uses something similar for live translation over the phone.)
Depending on the game this might not be possible even in theory. Many console games use a peer-to-peer lock-step synchronization model for multiplayer. Games that use this approach only send each other the player input from the other consoles and rely on deterministic simulation (the same inputs produce the same outputs) to keep the systems synchronized.
This only works when the exact same compiled code is running on the same CPU for all players. Games with this networking model usually have periodic desynch checks to make sure that the different systems haven't drifted out of sync with each other. A desynch failure is usually considered a fatal error and either a bug in the game or evidence of attempted cheating by one of the players.
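To make the lock-step idea concrete, here is a toy sketch (not taken from any real game) of why identical deterministic simulation matters: each peer exchanges only inputs, advances its own copy of the state, and compares periodic checksums to catch drift.

```swift
// Toy lock-step model: peers exchange only inputs, each runs the same
// deterministic simulation, and a checksum mismatch means they have desynced.
struct GameState {
    var position = 0

    // Deterministic step: the same input always produces the same new state.
    mutating func advance(input: Int) {
        position += input
    }

    // Cheap stand-in for a real whole-state checksum.
    var checksum: Int { position }
}

var peerA = GameState()
var peerB = GameState()

let inputsFromA = [1, 0, 2, 1]
let inputsFromB = [0, 1, 1, 3]

for frame in 0..<inputsFromA.count {
    // Each peer applies BOTH players' inputs in the same order.
    for input in [inputsFromA[frame], inputsFromB[frame]] {
        peerA.advance(input: input)
        peerB.advance(input: input)
    }
    // Periodic desynch check: any divergence (different compiled code, different
    // CPU floating-point behaviour, etc.) would show up as a checksum mismatch.
    if peerA.checksum != peerB.checksum {
        print("Desynch detected at frame \(frame)")
    }
}
print("Final checksums: \(peerA.checksum) vs \(peerB.checksum)")
```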
Other multiplayer games use a client-server model, so it would be possible in theory to allow different consoles to play against each other. Reverse engineering the network protocol would be a formidable technical challenge, however, and getting it to work reliably would be a difficult problem.
Even if you could solve the technical problems, you would likely have even bigger legal issues to overcome. Sony and Microsoft don't want to allow cross-platform play, so even though it would be possible in theory to make this work with a client-server multiplayer game, developers aren't able to implement it. A third party trying to make this work would likely have to deal with legal challenges from Microsoft, Sony, and the game developer.