UISearchBar and dictation support - iOS

I have a user interface with a UISearchBar, and I implement UISearchBarDelegate's searchBarSearchButtonClicked: to perform the search. I do not have a device with dictation support to test this, so I'm going to speculate here...
On devices with dictation support, I would like to perform the search as soon as the dictation ends, without requiring the user to hit the search button manually.
Does this work out-of-the-box?
Or do I need to handle it programmatically?
Since iOS 5.1, there are new methods in the UITextInput protocol, and I could theoretically hook into dictationRecordingDidEnd. Is that the way to go?

Yes, you would want to use the dictationRecordingDidEnd protocol method. Apple's documentation says this about dictationRecordingDidEnd:
Implement this optional method if you want to respond to the completion of the recognition of a dictated phrase.
That said, I have yet to find anything in Apple's Human Interface Guidelines about the expected use of this method.
You may also want to look at dictationRecognitionFailed, as well as the UIDictationPhrase class.
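For illustration, here is a minimal sketch of hooking those callbacks, assuming a UITextField subclass (UITextField adopts UITextInput; note that UISearchBar's internal text field is private, so this may not transfer directly to a search bar). The handler property is illustrative, not an Apple API:

@interface DictationTextField : UITextField
// Illustrative callback property, not part of any Apple API.
@property (nonatomic, copy) void (^dictationDidEndHandler)(void);
@end

@implementation DictationTextField

// Optional UITextInput method (iOS 5.1+), called when recognition of a
// dictated phrase completes.
- (void)dictationRecordingDidEnd
{
    if (self.dictationDidEndHandler) {
        self.dictationDidEndHandler(); // e.g. trigger the search here
    }
}

// Optional UITextInput method, called when dictation recognition fails;
// fall back to requiring the Search button.
- (void)dictationRecognitionFailed
{
    NSLog(@"Dictation recognition failed");
}

@end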

canPerformAction:withSender: method called on tap in iOS 9

One thing I observed in iOS 9.0 is that when I tap on a button or a table view, the canPerformAction:withSender: method is called with a sender of type UIButton. I am using this method to prepare my customized option menu.
I did not observe this in previous iOS versions. Can anyone point me to the API change? I went through the iOS release notes but did not find this behavior mentioned in any change log or change history.
Per Apple's documentation:
iOS 3.0 introduced system capabilities for generating motion events, specifically the motion of shaking the device. The event-handling methods for these kinds of events are motionBegan:withEvent:, motionEnded:withEvent:, and motionCancelled:withEvent:. Additionally for iOS 3.0, the canPerformAction:withSender: method allows responders to validate commands in the user interface while the undoManager property returns the nearest NSUndoManager object in the responder chain.
So all UIResponder subclasses are entitled to receive a callback for canPerformAction:withSender:. You should use the sender parameter to decide how to handle the action, as in the sketch below.
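A minimal sketch, assuming you only want your custom menu to offer copy: and want to ignore the extra calls whose sender is a UIButton:

- (BOOL)canPerformAction:(SEL)action withSender:(id)sender
{
    // On iOS 9 this can be invoked for button/table taps; defer to the
    // default behavior for those senders instead of building the menu.
    if ([sender isKindOfClass:[UIButton class]]) {
        return [super canPerformAction:action withSender:sender];
    }
    // Only expose copy: in the customized option menu (illustrative).
    return action == @selector(copy:);
}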

UITextField begin dictation

I'd like to programmatically put my UITextField input into dictation mode, without requiring the user to bring up the keyboard and select dictation. I've searched the API documentation but can find no solution. Any ideas?
This is currently not possible on iOS.
The only place where it is kind of possible is in an app using WatchKit. In a WKInterfaceController you can use presentTextInputControllerWithSuggestions: with nil as the suggestions parameter, which starts dictation input immediately.
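A minimal sketch on the watch side (the full selector is presentTextInputControllerWithSuggestions:allowedInputMode:completion:):

// Inside a WKInterfaceController subclass: passing nil suggestions
// skips the suggestion list and jumps straight into dictation.
[self presentTextInputControllerWithSuggestions:nil
                               allowedInputMode:WKTextInputModePlain
                                     completion:^(NSArray *results) {
    NSString *text = results.firstObject;
    if (text) {
        // Use the dictated text...
    }
}];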
Yes.
With iOS 10, Apple added SFSpeechRecognizer, which allows starting speech recognition without user interaction.
You have to implement SFSpeechRecognizer yourself, use the Accelerate framework to get the mic sound level as floats, and build an animated view yourself. It will look cooler!
Sorry for not providing the code; I think I lost it. Can't find it on my multiple hard drives :/
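For reference, a minimal sketch of the SFSpeechRecognizer route (iOS 10+), assuming the microphone and speech-recognition usage descriptions are set in Info.plist and authorization has already been granted:

@import Speech;
@import AVFoundation;

// Stream microphone audio into a live recognition request.
SFSpeechRecognizer *recognizer = [[SFSpeechRecognizer alloc] init];
SFSpeechAudioBufferRecognitionRequest *request =
    [[SFSpeechAudioBufferRecognitionRequest alloc] init];
AVAudioEngine *engine = [[AVAudioEngine alloc] init];

AVAudioInputNode *input = engine.inputNode;
[input installTapOnBus:0
            bufferSize:1024
                format:[input outputFormatForBus:0]
                 block:^(AVAudioPCMBuffer *buffer, AVAudioTime *when) {
    [request appendAudioPCMBuffer:buffer];
}];

[engine prepare];
[engine startAndReturnError:NULL];

// Recognition starts immediately; no keyboard interaction required.
[recognizer recognitionTaskWithRequest:request
                         resultHandler:^(SFSpeechRecognitionResult *result,
                                         NSError *error) {
    if (result) {
        NSLog(@"%@", result.bestTranscription.formattedString);
    }
}];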

Macro Recording in iOS

Is it possible to record a set of touch events on iPhone and then play them back?
I have searched a lot but could not find an answer. If it's possible, can anyone explain with an example?
I'm not looking at this for testing purposes. Within my application, instead of creating an animation, I just want to record a set of events and then play them back to explain the app flow to users.
Regards.
Recording is pretty simple. Look at the various "Responding to Touch Events" and "Responding to Motion Events" methods on UIResponder. Just create your own UIView subclass (since UIView inherits from UIResponder) and keep a copy of the data from the events passed into the relevant methods, as in the sketch below.
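A minimal sketch of the recording side; since UITouch objects are reused by the system, it copies out plain data rather than retaining the touches (the class name and dictionary keys are illustrative):

@interface RecordingView : UIView
@property (nonatomic, strong) NSMutableArray *recordedEvents;
@end

@implementation RecordingView

- (instancetype)initWithFrame:(CGRect)frame
{
    if ((self = [super initWithFrame:frame])) {
        _recordedEvents = [NSMutableArray array];
    }
    return self;
}

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    [super touchesBegan:touches withEvent:event];
    // UITouch objects are reused, so copy out the values we need.
    for (UITouch *touch in touches) {
        [self.recordedEvents addObject:@{
            @"point": [NSValue valueWithCGPoint:[touch locationInView:self]],
            @"time":  @(touch.timestamp)
        }];
    }
}

// touchesMoved:withEvent: and touchesEnded:withEvent: would record the
// same data, tagged with the touch phase.

@end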
Playback is a bit more complicated; there's no way to create UITouch or UIEvent objects yourself (so you can't make a fake event and pass it on to -[UIApplication sendEvent:]). But there's nothing stopping you from manually walking an array of recorded event data and handling it on your own (aside from it being somewhat ugly code).
There's no built-in macro capability, but you could certainly build that ability into your application. You'll need to do more than just play back events, though. Touches aren't normally visible, but if you're trying to explain how to use your app, you'll probably want some sort of visual representation for the touches that trigger different responses, similar to the way the iOS Simulator uses white dots to represent multiple touches when you hold down the Option key.
Assuming that you can solve that problem, two strategies for easily recording user actions come to mind:
Use the Undo Manager: NSUndoManager is already set up to "record" undoable events. If you invest some time into making everything in your app undoable, you could (maybe) perform a set of actions, undo them all to move them to the redo stack, and then save the events in the redo stack as your script.
Use Accessibility: The Accessibility framework sends notifications whenever user interface elements are touched. Your app could use those notifications to create a playback script. You'll still need to write the code to play back the events in the script, though.
You could mirror your application with AirServer and use any screen capture software to make the video.

Can I support VoiceOver in my Cocos2D-iPhone Game?

I'm making a game where the player reacts to sounds via motion. Seeing as the visual element isn't needed to play it, and many will play with their eyes closed, it seems a shame not to be fully VoiceOver compatible. I'm currently using Cocos2D-iPhone and CocosDenshion for audio, and am now starting to think about how I'll build my menu system to choose levels and configure controls.
Is it reasonably easy to support VoiceOver in Cocos2D's menu system, or should I look into trying to create my menus in UIKit, which I have no experience using?
I don't know if Cocos' menu system supports VoiceOver, but if it doesn't, you could probably add the functionality you're looking for yourself without having to delve into a lot of UIKit work. All you need to do is create a UIView subclass which gets added to your main window when your app starts up. Then use the UIAccessibilityContainer protocol and UIAccessibilityPostNotification calls to allow users to interact with your game via VoiceOver.
The UIAccessibilityContainer protocol lets you inform VoiceOver what interface elements are currently on the screen, their labels, their traits, etc. VoiceOver then uses this information to let users swipe between elements and get feedback on them.
When your game changes state, you can change what that protocol sends back and then issue a
UIAccessibilityPostNotification(UIAccessibilityLayoutChangedNotification, nil)
...to inform VoiceOver that the screen layout has changed. And to just speak something via VoiceOver, say when your game state has changed, you can send a different notification to speak some text:
UIAccessibilityPostNotification(UIAccessibilityAnnouncementNotification, @"Achievement unlocked!");
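A minimal sketch of the container side, assuming a transparent overlay view added to the window (MenuOverlayView and accessibleElements are illustrative names):

@interface MenuOverlayView : UIView
// Array of UIAccessibilityElement objects mirroring the on-screen menu.
@property (nonatomic, strong) NSArray *accessibleElements;
@end

@implementation MenuOverlayView

// The overlay is a container of elements, not an element itself.
- (BOOL)isAccessibilityElement
{
    return NO;
}

- (NSInteger)accessibilityElementCount
{
    return self.accessibleElements.count;
}

- (id)accessibilityElementAtIndex:(NSInteger)index
{
    return self.accessibleElements[index];
}

- (NSInteger)indexOfAccessibilityElement:(id)element
{
    return [self.accessibleElements indexOfObject:element];
}

@end

Each element would describe one menu item, for example:

UIAccessibilityElement *item =
    [[UIAccessibilityElement alloc] initWithAccessibilityContainer:overlay];
item.accessibilityLabel = @"Start game";
item.accessibilityTraits = UIAccessibilityTraitButton;
item.accessibilityFrame = CGRectMake(20, 100, 280, 44); // screen coordinates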
No need to go with the UIKit framework; you can use Cocos2D's native methods and classes to implement this.
For sound there is SimpleAudioEngine, which can be used. You can distinguish between sounds using the effect ID, which is of type ALuint.
ALuint soundEffectID;

// To start:
soundEffectID = [[SimpleAudioEngine sharedEngine] playEffect:@"my sound"];

// To stop:
[[SimpleAudioEngine sharedEngine] stopEffect:soundEffectID];
You have to manage these effects, and I think your problem will be solved.
