Is there a way to give UIGestureRecognizer a shape (a series of coordinates) which it can use to trigger an action when the user draws that shape with their finger? I'm thinking of letter shapes, but it could be anything.
EDIT:
I found this https://github.com/chrismiles/CMUnistrokeGestureRecognizer which will probably do what I want.
Unfortunately, implementing custom gesture recognisers isn't as simple as providing a UIGestureRecognizer with a shape or series of points. You have to subclass UIGestureRecognizer and write code that tracks the user's interaction through touchesBegan:withEvent:, touchesMoved:withEvent:, and so on. Then, based on the line lengths, angles, etc. of the gesture the user draws, you determine whether it successfully matched what you were expecting and fire the UIGestureRecognizer callback.
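For illustration, here is a minimal sketch of such a subclass in Swift. The class name, the horizontal-stroke rule, and the tolerance values are all invented for the example; note the extra UIGestureRecognizerSubclass import, which Swift requires before a subclass may assign to state.

import UIKit
// Required before a UIGestureRecognizer subclass may assign to `state` in Swift.
import UIKit.UIGestureRecognizerSubclass

// A minimal sketch: a recogniser for a single, roughly horizontal stroke.
class HorizontalStrokeGestureRecognizer: UIGestureRecognizer {
    private var startPoint = CGPoint.zero
    private let minimumDistance: CGFloat = 100  // how far the finger must travel
    private let verticalTolerance: CGFloat = 40 // how much wobble is acceptable

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent) {
        super.touchesBegan(touches, with: event)
        guard state == .possible, touches.count == 1, let touch = touches.first else {
            state = .failed   // a second finger means this is not our gesture
            return
        }
        startPoint = touch.location(in: view)
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent) {
        super.touchesMoved(touches, with: event)
        guard state == .possible, let touch = touches.first else { return }
        // Fail as soon as the stroke drifts too far vertically.
        if abs(touch.location(in: view).y - startPoint.y) > verticalTolerance {
            state = .failed
        }
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent) {
        super.touchesEnded(touches, with: event)
        guard state == .possible, let touch = touches.first else { return }
        let travelled = touch.location(in: view).x - startPoint.x
        state = abs(travelled) >= minimumDistance ? .recognized : .failed
    }

    override func touchesCancelled(_ touches: Set<UITouch>, with event: UIEvent) {
        super.touchesCancelled(touches, with: event)
        state = .cancelled
    }
}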
This results in inherent complications, as users are not very precise when squiggling gestures with their fingers. You have to design your gestures with a tolerance for what is recognised: too strict and the recogniser will be useless; too generic and it will report too many false positives.
I suspect that if you were attempting to recognise a large number of gestures, like the letters of the alphabet for instance, then instead of implementing 26 different gesture recognisers you would be better off writing a generic one that records the user's input once and checks whether it matches one of a set of gesture definitions you have stored somewhere. Then implement a custom callback that tells the handler which gesture it matched.
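A rough sketch of that template-matching idea, assuming each stroke has already been resampled to a fixed point count and normalised for position and scale (as the unistroke recogniser linked in the question does); the type name, function name, and tolerance are made up for the example:

import UIKit

// A stored gesture definition: a name plus its resampled points.
struct GestureTemplate {
    let name: String
    let points: [CGPoint]
}

// Returns the name of the closest template, or nil if nothing is close enough.
func bestMatch(for stroke: [CGPoint],
               in templates: [GestureTemplate],
               tolerance: CGFloat = 40) -> String? {
    var best: (name: String, score: CGFloat)? = nil
    for template in templates where template.points.count == stroke.count {
        // Score = average distance between corresponding points; lower is better.
        let score = zip(stroke, template.points)
            .map { hypot($0.x - $1.x, $0.y - $1.y) }
            .reduce(0, +) / CGFloat(stroke.count)
        if best == nil || score < best!.score {
            best = (template.name, score)
        }
    }
    // The tolerance trade-off from above: too strict is useless,
    // too generic reports false positives.
    guard let match = best, match.score <= tolerance else { return nil }
    return match.name
}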
The very reputable 'Beginning iOS Development: Exploring the iOS SDK' series from Apress dedicates a small portion of a chapter to implementing a custom gesture recogniser. The accompanying source code can be downloaded from the official Apress website here (Source Code/Downloads tab at the bottom).
See pages 627-632 in chapter 17: 'Taps, Touches, and Gestures'.
The Gesture Recognizers chapter of Apple's Event Handling Guide for iOS contains a 'Creating a Custom Gesture Recognizer' section that also has relevant information and examples.
I've been thinking about this for quite some time now and I haven't found a satisfying answer.
How performant are UIGestureRecognizers in Swift/iOS development?
Let me explain by giving you a theoretical example:
You have an app on the iPad Pro (big screen, much space) and there you have maybe dozens of different views and buttons and so on. For whatever reason you need every one of these views and buttons to be moveable/clickable/resizable/...
What's better?
Adding one (or multiple) UIGestureRecognizer(s) to each view (which results in many active gesture recognizers and many small, specific handling methods [maybe grouped for each type of view])
Adding one single recognizer to the superview (which results in one active gesture recognizer and a big handling method that needs to cycle through the subviews and determine which one has been tapped)
I guess the first one is the simplest, but is it slower than the second? I'm not sure about that. My gut tells me that having that many UIGestureRecognizers can't be a good solution.
But either way, the system has to cycle through everything (in the worst case), be it many recognizers or many subviews. I'm curious about this.
Thank you
Let's look at your question in terms of the gesture-recognition flow: to deliver an event to the right gesture recognizer, the system walks the view tree to find the deepest view that claims the touch, via one specific UIView method:
- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event
Adding one (or multiple) UIGestureRecognizer(s) to each view
This is the way I recommend. Here the system does most of the work for you and prevents mistakes that are very difficult to debug later; trust me. This is the right choice for UI, especially if you have multiple different gestures on different parts of the screen. In my particular case I have a huge video player UI with around 20 gesture recognizers on one screen, and it feels pretty good: no lags or frame drops. This approach is simple and self-describing. I recommend implementing it in a storyboard or xib; you can then refer back to Interface Builder at any time to recall which recognizer to update when changing the behaviour of the UI. The speed of this approach is guaranteed by the system.
Adding one single recognizer to the superview
This approach should only be used with a single simple gesture across many views (more than 20, say). That might be the case if, for example, you are implementing a game where the user picks up and places bricks of different shapes. It is not suitable for common UI tasks. Its speed depends on your implementation, and based on the question itself I do not recommend it. This approach is for specific designs, not for speed.
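As a sketch of what that second approach might look like (the controller and handler names are invented for the example), a single recognizer on the superview can reuse the same hitTest: lookup mentioned above to work out which subview was touched:

import UIKit

class CanvasViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // One recognizer on the superview instead of one per subview.
        let tap = UITapGestureRecognizer(target: self, action: #selector(handleTap(_:)))
        view.addGestureRecognizer(tap)
    }

    @objc private func handleTap(_ recognizer: UITapGestureRecognizer) {
        let point = recognizer.location(in: view)
        // hitTest:withEvent: walks the view tree for us -- the same lookup
        // the system performs when it delivers touches.
        guard let tapped = view.hitTest(point, with: nil) else { return }
        print("Tapped subview:", tapped)
    }
}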
Can anyone clarify the difference between these 2 ways of triggering a function when tapping a view?
1)
myView.addTarget(self, action: #selector(myFunctionToTrigger(_:)), forControlEvents: UIControlEvents.TouchUpInside)
2)
let tapGesture = UITapGestureRecognizer(target: self, action: #selector(myFunctionToTrigger(_:)))
myView.addGestureRecognizer(tapGesture)
These are two completely different ways of implementing user event handling in iOS apps.
1) addTarget() is a method on the UIControl class, which is part of the target-action mechanism. There is more about that in the documentation.
Note that you can't call addTarget() on just any UIView, only on UIControl subclasses.
2) UIGestureRecognizer subclasses are simply a mechanism to detect and distinguish user gestures on a specific view.
The main difference between them is that gesture recognizers can detect more complex events like a swipe, pinch, or zoom, but addTarget is a much more efficient way to detect user activity, and it provides the same interface for all UIControls, such as UISegmentedControl, UISlider, etc.
Hope that I helped you.
These two methods work at two different levels of abstraction:
addTarget:action:forControlEvents: is the lower level, which provides isolated events. Several of these events must be combined and interpreted to detect more complex gestures like swiping or pinching.
addGestureRecognizer works at a higher level, closer to what an app usually needs. It adds a specific gesture recognizer that listens to the low-level events, detects gestures, and delivers specific information about the gesture.
In the case of a tap, the difference is minor. But when it comes to swiping, pinching, and combinations of tapping, swiping, and pinching (e.g. in an image viewer or a map app), one or more gesture recognizers are the way to go.
Here is the difference:
With addGestureRecognizer you can attach recognizers for specific gestures, such as UITapGestureRecognizer, UIPanGestureRecognizer, and many other gestures.
Whereas with UIView's addTarget() you can add a target for specific control events, such as UIControlEvents.TouchUpInside, and many other events.
Pavel's answer is correct: you can only add a target to a UIControl, which is a subclass of UIView. A UIGestureRecognizer can be added to any UIView.
Codo's answer, that a target is lower level than a gesture, is wrong; gestures are the lower-level touch support. A UIControl uses gestures to make addTarget:action:forControlEvents: work.
There are several benefits of addTarget:
It is a built-in function. You don't need to initialize another object to do the same thing.
You can set when to react to the action: "touchUpInside" or "touchDown" (or "valueChanged" for sliders).
You can set different appearances for the button's states (e.g. title text, title color, content image, background image, highlight tint), and the button only displays those states when addTarget is used.
Besides the benefits above, I think it's more of a coding convention for UIControl elements.
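To make the comparison concrete, here is a small side-by-side sketch (written with the modern Swift API names; the question above uses the older Swift 2 spelling, and the class and selector names here are invented for the example):

import UIKit

class DemoViewController: UIViewController {
    let button = UIButton(type: .system)  // a UIControl subclass: target-action works
    let plainView = UIView()              // a plain UIView: needs a recognizer

    override func viewDidLoad() {
        super.viewDidLoad()
        // 1) Target-action, available only on UIControl subclasses.
        button.addTarget(self, action: #selector(buttonTapped), for: .touchUpInside)

        // 2) A gesture recognizer, which can be added to any UIView.
        let tap = UITapGestureRecognizer(target: self, action: #selector(viewTapped(_:)))
        plainView.addGestureRecognizer(tap)
    }

    @objc private func buttonTapped() { print("button tapped") }
    @objc private func viewTapped(_ recognizer: UITapGestureRecognizer) { print("view tapped") }
}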
I am trying to implement a special login for developers without any changes to the UI. For example, to log in as a developer, I would draw a "D" shape over the UI and it would open a developer mode for me. How can I achieve this functionality? Is there a third-party library that can recognize the shape I am trying to draw, or do you have any other suggestions?
Yeah, you can do that with a subclass of UIGestureRecognizer; the tutorial I am linking shows all the tools you will need to build your own. You want to look at the Custom Gesture Recognizer part of the tutorial towards the bottom.
Basically, you will want to write a gesture recognizer that can evaluate whether the user made a "D" shape over whichever view has your gesture recognizer. This can be done by keeping track of the last point and checking whether the current point at any given time fits the gesture. Or you could keep track of every point the gesture has ever recorded and write a function that evaluates whether the recorded points qualify as a "D" in your gesture.
This may get complicated, as there are multiple ways to draw a D. However, you could start with two: one looking for a vertical line followed by a backwards C, the other for a backwards C followed by a vertical line.
Here is a good tutorial:
http://www.raywenderlich.com/6567/uigesturerecognizer-tutorial-in-ios-5-pinches-pans-and-more
While searching here and there, I found a suggestion that seemed like a great idea: divide the screen into 9 areas and assign each one a digit, as on a mobile phone keypad. When the user pans, take the touch coordinates, work out which region they fall in, and append that region's digit to an array.
That sequence of digits then works like a unique pin for you.
For example, to check that the letter is "L", check whether the order of array elements is 1->4->7->8->9; to check for "U", check whether the order is 1->4->7->8->9->6->3.
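A minimal sketch of that zone-mapping scheme (the 3x3 grid maths and the function names are assumptions for the example):

import CoreGraphics

// Map a touch point to a keypad digit (1...9), laid out like a phone keypad.
func zone(for point: CGPoint, in bounds: CGRect) -> Int {
    let col = min(2, max(0, Int(point.x / (bounds.width / 3))))
    let row = min(2, max(0, Int(point.y / (bounds.height / 3))))
    return row * 3 + col + 1
}

var path: [Int] = []

// Call this from the pan handler; only record a digit when the zone changes.
func record(_ point: CGPoint, in bounds: CGRect) {
    let z = zone(for: point, in: bounds)
    if path.last != z { path.append(z) }
}

// Matching is then a simple sequence comparison, e.g. "L" = 1->4->7->8->9.
func isLetterL(_ path: [Int]) -> Bool {
    return path == [1, 4, 7, 8, 9]
}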
Is there any other way to recognize a character drawn by touch on the phone?
I'm new to developing iOS apps,
I've successfully implemented a Swipe Gesture Recognizer,
What I was wondering is whether there is an easy-to-use recognizer, like the swipe gesture, that would let you implement the home-screen page-turning effect, but just on a small view in the view controller.
If you're unclear on what effect I mean: on the iPhone's home screen you can drag your finger and it responds instantly (unlike a swipe), and it also has a spring feel to it. Is this an effect I can use, or do I have to program it manually? If so, is there a tutorial that explains this?
Thanks,
I hope my question makes sense.
Have a look at UIPanGestureRecognizer:
https://developer.apple.com/library/ios/documentation/uikit/reference/UIPanGestureRecognizer_Class/Reference/Reference.html
UIPanGestureRecognizer is a concrete subclass of UIGestureRecognizer that looks for panning (dragging) gestures. The user must be pressing one or more fingers on a view while they pan it. Clients implementing the action method for this gesture recognizer can ask it for the current translation and velocity of the gesture.
A panning gesture is continuous. It begins (UIGestureRecognizerStateBegan) when the minimum number of fingers allowed (minimumNumberOfTouches) has moved enough to be considered a pan. It changes (UIGestureRecognizerStateChanged) when a finger moves while at least the minimum number of fingers are pressed down. It ends (UIGestureRecognizerStateEnded) when all fingers are lifted.
Clients of this class can, in their action methods, query the UIPanGestureRecognizer object for the current translation of the gesture (translationInView:) and the velocity of the translation (velocityInView:). They can specify the view whose coordinate system should be used for the translation and velocity values. Clients may also reset the translation to a desired value.
Edit: The spring feeling part you would need to implement yourself. Since iOS 7 there is UIKit Dynamics, which contains different behaviours; for what you describe you may need UIGravityBehavior and maybe UICollisionBehavior. Look at the WWDC 2013 videos on this topic; I think you will find some examples there.
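As a sketch of the instant-tracking part (the view and method names are invented, and the spring-back below uses a plain UIView spring animation rather than UIKit Dynamics; a UIScrollView with isPagingEnabled set is the usual shortcut for the full home-screen paging effect):

import UIKit

class PagingDemoViewController: UIViewController {
    let cardView = UIView()
    private var startCenter = CGPoint.zero

    override func viewDidLoad() {
        super.viewDidLoad()
        let pan = UIPanGestureRecognizer(target: self, action: #selector(handlePan(_:)))
        cardView.addGestureRecognizer(pan)
    }

    @objc private func handlePan(_ recognizer: UIPanGestureRecognizer) {
        switch recognizer.state {
        case .began:
            startCenter = cardView.center
        case .changed:
            // Track the finger continuously -- unlike a discrete swipe.
            let translation = recognizer.translation(in: view)
            cardView.center = CGPoint(x: startCenter.x + translation.x,
                                      y: startCenter.y)
        case .ended, .cancelled:
            // Spring back to the start (or snap to the next "page"
            // based on recognizer.velocity(in:)).
            UIView.animate(withDuration: 0.5, delay: 0,
                           usingSpringWithDamping: 0.6,
                           initialSpringVelocity: 0.5, options: [],
                           animations: { self.cardView.center = self.startCenter },
                           completion: nil)
        default:
            break
        }
    }
}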
I'd like to implement multitouch, and I was hoping to get some sanity checks from the brilliant folks here. :)
From what I can tell, my strategy to detect and track multitouch is going to be to use the touchesBegan, touchesMoved, and touchesEnded methods and use the allTouches method of the event parameter to get visibility of all relevant touches at any particular time.
I was thinking I'd essentially use previousLocationInView as a way of linking the touches that come in with new events to the currently active touches. That is, if there is a touchesBegan for a touch at x,y = 10,14, then I can use the previous location of a touch in the next message to know which earlier touch this new one is tied to, as a way of keeping track of one finger's continuous motion, and so on. Does this make sense? If it does, is there a better way to do it? I cannot hold onto UITouch or UIEvent pointers as a way of identifying touches with previous touches, so I cannot go that route. All I can think to do is tie them together via their previousLocationInView values (and to know which are 'new' touches).
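As a sketch of that matching idea (the class name is invented, and this ignores edge cases such as two touches reporting the same previous location):

import UIKit

class MultiTouchView: UIView {
    // One entry per finger currently on the screen.
    private var trackedPoints: [CGPoint] = []

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        for touch in touches {
            trackedPoints.append(touch.location(in: self))  // a 'new' touch
        }
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        for touch in touches {
            // The stored point equal to this touch's previous location is the
            // same finger; replace it with the new position.
            let previous = touch.previousLocation(in: self)
            if let index = trackedPoints.firstIndex(of: previous) {
                trackedPoints[index] = touch.location(in: self)
            }
        }
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        for touch in touches {
            let previous = touch.previousLocation(in: self)
            trackedPoints.removeAll { $0 == previous }
        }
    }

    override func touchesCancelled(_ touches: Set<UITouch>, with event: UIEvent?) {
        trackedPoints.removeAll()
    }
}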
You might want to take a look at gesture recognizers. From Apple's docs,
You could implement the touch-event handling code to recognize and handle these gestures, but that code would be complex, possibly buggy, and take some time to write. Alternatively, you could simplify the interpretation and handling of common gestures by using one of the gesture recognizer classes introduced in iOS 3.2. To use a gesture recognizer, you instantiate it, attach it to the view receiving touches, configure it, and assign it an action selector and a target object. When the gesture recognizer recognizes its gesture, it sends an action message to the target, allowing the target to respond to the gesture.
See the article on Gesture Recognizers and specifically the section titled "Creating Custom Gesture Recognizers." You will need an Apple Developer Center account to access this.