Gesture pattern recognizer in iOS? - ios

I am trying to implement a special login for developers without making any changes to the UI. For example, to log in as a developer, I draw a "D" shape over the UI and it opens a developer mode for me. How can I achieve this functionality? Is there a third-party library that can recognize the shape I am trying to draw, or any other suggestion?

Yeah, you can do that with a subclass of UIGestureRecognizer; the tutorial I am linking shows all the tools you will need to build your own. You want to look at the Custom Gesture Recognizer part of the tutorial towards the bottom.
Basically, you will want to write a gesture recognizer that can evaluate whether the user made a "D" shape over whichever view it is attached to. This can be done by keeping track of the last point and checking whether each new point still fits the gesture. Or, you could record every point the gesture has ever seen and write a function that evaluates whether the recorded points qualify as a "D".
This may get complicated, as there are multiple ways to draw a D. However, you could start with two: one looking for a vertical line followed by a backwards C, the other for a backwards C followed by a vertical line.
Here is a good tutorial:
http://www.raywenderlich.com/6567/uigesturerecognizer-tutorial-in-ios-5-pinches-pans-and-more
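As a starting point, here is a minimal Swift sketch of the record-every-point idea (the DShapeGestureRecognizer name and the looksLikeD heuristic are hypothetical placeholders, not a finished "D" matcher):

import UIKit
import UIKit.UIGestureRecognizerSubclass  // allows a subclass to set `state`

// Hypothetical recognizer that records every touch point and evaluates
// the whole stroke when the finger lifts.
class DShapeGestureRecognizer: UIGestureRecognizer {
    private var points: [CGPoint] = []

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent) {
        super.touchesBegan(touches, with: event)
        points = []
        if let touch = touches.first {
            points.append(touch.location(in: view))
        }
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent) {
        super.touchesMoved(touches, with: event)
        if let touch = touches.first {
            points.append(touch.location(in: view))
        }
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent) {
        super.touchesEnded(touches, with: event)
        // Succeed or fail based on the whole recorded stroke.
        state = looksLikeD(points) ? .recognized : .failed
    }

    override func reset() {
        super.reset()
        points = []
    }

    // Placeholder: a real implementation would segment the stroke into
    // a vertical line plus an arc, in either order, within some tolerance.
    private func looksLikeD(_ points: [CGPoint]) -> Bool {
        return false
    }
}

You would attach it like any other recognizer with addGestureRecognizer and open the developer mode from its action method.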

While searching here and there, I found it a good idea to divide the screen into nine regions and assign each one a digit, just like a mobile phone keypad. When the user pans to any location, take the touch coordinates, check which region they fall in, and append that region's digit to an array.
That sequence of digits then works like a unique PIN.
For example, to check for the letter "L", test whether the array elements are in the order 1->4->7->8->9; to check for "U", test whether the order is 1->4->7->8->9->6->3.
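A minimal Swift sketch of that grid mapping could look like this (the KeypadGrid type and letterPatterns table are hypothetical names):

import UIKit

// Hypothetical helper that maps a touch point to a keypad digit (1-9)
// by dividing a view's bounds into a 3x3 grid.
struct KeypadGrid {
    let bounds: CGRect

    func digit(for point: CGPoint) -> Int? {
        guard bounds.contains(point) else { return nil }
        let col = Int((point.x - bounds.minX) / (bounds.width / 3))
        let row = Int((point.y - bounds.minY) / (bounds.height / 3))
        return min(row, 2) * 3 + min(col, 2) + 1  // 1...9, like a phone keypad
    }
}

// The patterns described above: "L" is 1-4-7-8-9, "U" is 1-4-7-8-9-6-3.
let letterPatterns: [String: [Int]] = [
    "L": [1, 4, 7, 8, 9],
    "U": [1, 4, 7, 8, 9, 6, 3]
]

// Append a digit only when the pan enters a new region, then look the
// finished sequence up.
func letter(for sequence: [Int]) -> String? {
    return letterPatterns.first { $0.value == sequence }?.key
}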
Is there any other way to recognize a character drawn by touch on the phone?

How to get correct screen coordinate from UITapGestureRecognizer while VoiceOver is on

I'm currently working on an interactive view that relies heavily on the user's touch location. I have found that there are a few ways to interact with the UITapGestureRecognizer while VoiceOver is on, but when I tap a point, the values returned are very wrong. I've looked elsewhere, but my use case is outside the norm, so there is not a lot to tell me what is going on. Has anyone experienced this before?
I am aware that I can change accessibilityTrait to UIAccessibilityTraitAllowsDirectInteraction, which will give me the correct screen point when used, but I would like to know what is causing this issue, at the very least for the sake of knowledge. To interact with the UITapGestureRecognizer, I either double tap or do a 3D touch by pressing hard on the screen. The latter method doesn't work for the tap gesture but will work for the pan gesture.
This is the only line I use to get my screen points (my map view is a UIImageView):
CGPoint screenPoint = [tapGesture locationInView:map];
I'm using a map of a building and I try to tap the same corner or landmark for my testing. I know I can't hit the same exact point every time, but I do use a stylus and I can get pretty close.
Without VoiceOver on I would get the result: (35.500, 154.363)
With VoiceOver on and tapping in generally the same spot, I get: (187.500, 197.682)
The point I am using to test is on the left side of the screen and the result from VoiceOver being on is in the middle of the screen. I believe the y-axis value may have changed because of my tool bar's size, but I have no idea what is throwing off the x-axis value. If more information is needed let me know.
UPDATE: Upon further investigation, it turns out that the UITapGestureRecognizer will always return (187.500, 197.682) no matter where I touch in the map view when VoiceOver is on. That point seems to be the middle of the map view. Oddly enough though, the UIPanGestureRecognizer will give me the correct (x,y) for my view if I use the 3D touch while VoiceOver is on.
On a side note not relating to the problem at hand, it seems if I use the accessibility trait UIAccessibilityTraitAllowsDirectInteraction the method UIAccessibilityConvertFrameToScreenCoordinates returns a frame that is higher than my view. It works fine if I do not change the trait.
Your problem may be related to the reference point used when VoiceOver is on.
Verify what your point coordinates refer to: view or screen coordinates?
I suggest you take a look at the following elements :
accessibilityFrame
accessibilityFrameInContainerSpace
UIAccessibilityConvertFrameToScreenCoordinates
Depending on your project, these elements may help you achieve your goal.
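As a quick check, a minimal Swift sketch comparing the two spaces might look like this (logCoordinateSpaces is a hypothetical helper; UIAccessibility.convertToScreenCoordinates is the Swift spelling of UIAccessibilityConvertFrameToScreenCoordinates):

import UIKit

// Hypothetical helper: compare the view's frame in window coordinates
// with its accessibility frame in screen coordinates.
func logCoordinateSpaces(for map: UIImageView) {
    let frameInWindow = map.convert(map.bounds, to: nil)  // nil = the window
    let accessibilityFrame = UIAccessibility.convertToScreenCoordinates(map.bounds, in: map)
    print("frame in window: \(frameInWindow)")
    print("accessibility frame: \(accessibilityFrame)")
}

If the two frames disagree, that tells you which space your gesture points are being expressed in.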

Is it possible to attach a gesture recognizer to a button, so that the user swipes up after/during the button press?

I read through a few similar questions here, but most of them are for much older versions of Swift.
This tutorial shows how to create a gesture recognizer and works pretty well: https://www.ioscreator.com/tutorials/swipe-gesture-ios-tutorial-ios11
What I'd like to accomplish is to add functionality that would allow the user to swipe up or down after pressing a button, while still holding the button, and have my app react to the combination of the specific button being pressed and the upward or downward swipe gesture.
Here's the specific design I'm trying to implement. Basically I'd like the user to press the "A" button and then swipe up or down to get the "#" or "b".
Is this possible? The # & b could be image views or buttons (though if they're buttons, I don't want them to be pressable on their own). If this is a crazy design, I welcome suggestions for improvement.
You want to use a UILongPressGestureRecognizer (probably in conjunction with image views). It has the advantage that first it recognizes a finger held down in one spot (the "A") and then it tracks the movement of that finger (panning up to the sharp or down to the flat). Where the finger is held down — i.e., is it in the "A" or not — will determine whether to recognize in the first place. Then if you do recognize, you watch where the finger goes and decide whether it has entered the sharp or the flat.
I ended up using a Pan Gesture Recognizer, and it worked out really well! I am simply using the y coordinate of the pan gesture to determine if the user is moving his/her finger up to the sharp or down to the flat.
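For anyone landing here later, a minimal Swift sketch of that pan-based approach (the threshold value and the noteSelected handler are hypothetical):

import UIKit

// Hypothetical handler: attach a UIPanGestureRecognizer to the "A"
// button, then read the vertical translation when the pan ends.
class NoteButtonHandler: NSObject {
    enum Accidental { case sharp, natural, flat }

    @objc func handlePan(_ gesture: UIPanGestureRecognizer) {
        guard gesture.state == .ended else { return }
        let translation = gesture.translation(in: gesture.view)
        let threshold: CGFloat = 30  // arbitrary: points of movement required
        if translation.y < -threshold {
            noteSelected(.sharp)     // finger moved up toward the #
        } else if translation.y > threshold {
            noteSelected(.flat)      // finger moved down toward the b
        } else {
            noteSelected(.natural)   // effectively a plain tap on the A
        }
    }

    func noteSelected(_ accidental: Accidental) {
        // hypothetical: play or display A, A#, or Ab here
    }
}

Because the recognizer is attached to the button, the pan can only begin on the "A" itself, which matches the press-then-swipe interaction.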

Is it possible to render a tableview 'skewed'?

I'm coding in Swift 2.0 for devices running iOS7+.
Is it possible to present a tableview in a skewed/diagonal/slanted format as indicated below?
Obviously if the answer is yes, what process would I need to go through to get the result?
Yes it's possible. Views in iOS have a transform property, of type CGAffineTransform. You can use that to make the view appear skewed. I don't know offhand how to create a transform that creates the skewing effect. I suggest doing some google searching.
The next issue you will face is interacting with taps. Changing the transform of a view does not transform the coordinate system applied to taps, so taps will still land on the non-skewed views. That will be much harder to sort out, and without doing a fair amount of research I don't have an answer for you on that one. (It would probably be possible to intercept touch events before they get to your table view and apply the inverse of your skewing transform to them so that you map the taps back to the rectangular coordinate system the table view is expecting.)
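To make that concrete, here is a minimal Swift sketch. The c component of CGAffineTransform shears x as a function of y, which produces the slanted look, and the unskew helper shows the inverse-mapping idea from the previous paragraph (the 0.3 angle is an arbitrary example value):

import UIKit

// Shear the table view: x' = x + tan(angle) * y.
func applySkew(to tableView: UITableView, angle: CGFloat = 0.3) {
    tableView.transform = CGAffineTransform(a: 1, b: 0, c: tan(angle), d: 1, tx: 0, ty: 0)
}

// Map a touch point back into the unskewed coordinate system by
// applying the inverse transform.
func unskew(_ point: CGPoint, angle: CGFloat = 0.3) -> CGPoint {
    let skew = CGAffineTransform(a: 1, b: 0, c: tan(angle), d: 1, tx: 0, ty: 0)
    return point.applying(skew.inverted())
}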

Puzzle swipe gesture

I'm trying to figure out the best way available in iOS to solve the following:
Basically I've built a 4x6 tile matrix of UIButtons, each containing a letter. The buttons are contained within a UIView. (The hidden words are Apple, Fast, and Tree.)
A
P F
P A
L S
E T
T R E E
All UIButtons have userInteractionEnabled set to NO so that the containing view receives the touchesBegan calls. On creation, all UIButtons are placed into an NSMutableArray.
My challenge is how to swipe/drag from a starting letter and move to a destination letter, trying to "find" the complete word. Kind of like the Ruzzle app, but with only horizontal and vertical swipes.
The UIButtons that are being "multi-selected" have to change background color as a visual indication.
I'm receiving the touch location via touchesMoved. Does all of the detection code have to be triggered from touchesMoved?
What would be the best and least processor-intensive approach for this?
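To make the setup concrete, here is a minimal Swift sketch of the touchesMoved approach described above (letterButtons and selected are hypothetical names, assuming the buttons are direct subviews of this view):

import UIKit

// Hypothetical container view; letterButtons and selected stand in for
// the NSMutableArray of buttons from the question.
class PuzzleView: UIView {
    var letterButtons: [UIButton] = []
    var selected: [UIButton] = []

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let point = touches.first?.location(in: self) else { return }
        // Hit-test the grid; trivial work for 24 buttons.
        for button in letterButtons where button.frame.contains(point) {
            if !selected.contains(button) {
                selected.append(button)
                button.backgroundColor = .lightGray  // visual indication
            }
        }
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        // Compare the selected letters against the word list here,
        // then clear the selection.
        selected.forEach { $0.backgroundColor = .clear }
        selected.removeAll()
    }
}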
Instead of using touchesBegan/Moved methods, why don't you look into UIControlEventTouchDragEnter/UIControlEventTouchDragInside events? That will give you better performance as the associated action will only be called when touch enters the button or is dragged inside the button. In these methods you can keep pushing the new buttons in an array as touch enters into their bounds and check for their position with respect to buttons pushed previously. In this approach I think you will have to handle the first button touch using UIControlEventTouchDown event.
I would love to know how you end up implementing it.

How to recognise letters with UIGestureRecognizer

Is there a way to give UIGestureRecognizer a shape (a series of coordinates) which it can use to trigger an action when the user draws that shape with their fingers? I'm thinking of letter shapes, but it could be anything.
EDIT:
I found this https://github.com/chrismiles/CMUnistrokeGestureRecognizer which will probably do what I want.
Unfortunately implementing custom gesture recognisers isn't as simple as providing a UIGestureRecognizer with a shape or series of points. You have to subclass UIGestureRecognizer and write code that tracks the user's interaction through touchesBegan:withEvent: and touchesMoved:withEvent: etc. Then, based on line lengths and angles etc. of the gesture the user draws you determine whether it successfully matched what you were expecting and fire the UIGestureRecognizer callback.
This results in inherent complications as users are not very precise when squiggling gestures with their fingers. You would have to design your gestures with a tolerance as to what was recognised; too strict and it will be useless, too generic and it will report too many false positives.
I suspect that if you were attempting to recognise a large quantity of gestures, like the letters of the alphabet for instance, instead of implementing 26 different gesture recognisers you would be better off writing a generic one that recorded the user's input once and checked whether it matched a selection of gesture definitions you have stored somewhere. Then implement a custom callback that tells the handler which gesture it matched.
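As a rough illustration of that generic approach, here is a minimal Swift sketch that normalises a recorded stroke and compares it against stored templates. The StrokeTemplate type, the sample count, and the threshold are all hypothetical; a real recogniser such as the $1 Unistroke algorithm (which CMUnistrokeGestureRecognizer implements) resamples by arc length and also handles rotation:

import UIKit

// Hypothetical template type: a named, pre-recorded stroke.
struct StrokeTemplate {
    let name: String
    let points: [CGPoint]
}

// Crude normalisation: scale the stroke into a unit box and pick n
// points by index. Real recognisers resample by arc length instead.
func normalise(_ stroke: [CGPoint], count n: Int) -> [CGPoint] {
    guard stroke.count >= 2, n >= 2 else { return stroke }
    let xs = stroke.map { $0.x }, ys = stroke.map { $0.y }
    let minX = xs.min()!, minY = ys.min()!
    let w = max(xs.max()! - minX, 1)
    let h = max(ys.max()! - minY, 1)
    return (0..<n).map { i in
        let p = stroke[i * (stroke.count - 1) / (n - 1)]
        return CGPoint(x: (p.x - minX) / w, y: (p.y - minY) / h)
    }
}

// Average point-to-point distance; smaller means more similar.
func distance(_ a: [CGPoint], _ b: [CGPoint]) -> CGFloat {
    zip(a, b).map { (p, q) in hypot(p.x - q.x, p.y - q.y) }.reduce(0, +) / CGFloat(a.count)
}

// Return the best-matching template name, or nil if nothing is close enough.
func bestMatch(for stroke: [CGPoint], in templates: [StrokeTemplate],
               threshold: CGFloat = 0.25) -> String? {
    guard stroke.count >= 2 else { return nil }
    let candidate = normalise(stroke, count: 32)
    let scored = templates.map { ($0.name, distance(candidate, normalise($0.points, count: 32))) }
    guard let best = scored.min(by: { $0.1 < $1.1 }), best.1 < threshold else { return nil }
    return best.0
}

The threshold is exactly the tolerance trade-off mentioned above: too strict and nothing matches, too loose and you get false positives.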
The very reputable 'Beginning iOS Development: Exploring the iOS SDK' series from Apress dedicates a small portion of a chapter to implementing a custom gesture recogniser. The accompanying source code can be downloaded from the official Apress website here (Source Code/Downloads tab at the bottom).
See pages 627-632 in chapter 17: 'Taps, Touches, and Gestures'.
The Gesture Recognizers chapter of Apple's Event Handling Guide for iOS contains a 'Creating a Custom Gesture Recognizer' section that also has relevant information and examples.
