I am a novice at iOS programming, obviously a novice at Swift (as it is new), and a dabbler at best in programming overall over the past ~35 years. I have, however, worked for the past 20 years as a manager of multidisciplinary teams that include programmers and as a result I understand a lot of fundamental concepts of software design. I provide this information for context.
I am working on a database app for a class and adding a lot of functionality of my own choosing to enhance my own learning experience. Yesterday I wanted to allow users to tap a UIImageView to choose a new picture for the database entry. I added a Tap Gesture Recognizer to the UIImageView, hooked up the IBAction to the appropriate view controller, and then added a println() to the IBAction to test whether the tap was being recognized. Taps on the UIImageView didn't produce the println() and I was frustrated, so I looked around on the tubes for some hints and found some sample code to recognize the tap programmatically:
// Swift 1-era syntax; later Swift would use #selector(didTap(_:)) instead of Selector("didTap:")
let recognizer = UITapGestureRecognizer(target: self, action: Selector("didTap:"))
recognizer.delegate = self  // only needed if self conforms to UIGestureRecognizerDelegate
view.addGestureRecognizer(recognizer)
This worked a treat, as they say. I was puzzled, however, by the many references to the idea that the code was unnecessary if I was using the storyboard. After a bit of experimentation with a test project, I eventually found that the UIImageView had to have "User Interaction Enabled" checked in the Attributes Inspector (not the default setting) to recognize user interaction, which in hindsight makes sense.
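For side-by-side comparison, here is a minimal sketch of the all-code version in later Swift syntax (the controller and didTap(_:) names are illustrative); the Attributes Inspector checkbox sets exactly what the isUserInteractionEnabled line does:

import UIKit

class EntryViewController: UIViewController {  // hypothetical controller
    @IBOutlet weak var imageView: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()
        imageView.isUserInteractionEnabled = true  // what the storyboard checkbox toggles
        let recognizer = UITapGestureRecognizer(target: self,
                                                action: #selector(didTap(_:)))
        imageView.addGestureRecognizer(recognizer)
    }

    @objc func didTap(_ sender: UITapGestureRecognizer) {
        print("image tapped")  // println() in Swift 1-era code
    }
}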
My question (at last!) is whether the difference between the two approaches is merely stylistic, or whether there is a reason (performance, delegation, or otherwise) to do it programmatically rather than via the storyboard. I can see, for example, that I could wrap the programmatic setup in an if statement. Are there other reasons?
Is this question too theoretical for this format?
IMHO, I always use storyboards unless and until I encounter something that has to be done in code. It's just conceptually easier for me to understand the overall shape of the app if I can see it in large, interconnected chunks. There shouldn't be any noticeable performance differences.
Regarding your particular example, whenever I have an image that has to be tappable, I just put the image in a UIButton, and hook the button up to an IBAction in the controller. This obviates the need for adding a custom gesture recognizer and remembering to make the image tappable.
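For illustration, a minimal sketch of that approach with hypothetical names; the image and the touch-up-inside action are both wired up in Interface Builder, and the button is tappable by default:

import UIKit

class EntryViewController: UIViewController {
    // Custom-type UIButton showing the entry's picture, configured in IB.
    @IBOutlet weak var photoButton: UIButton!

    @IBAction func photoTapped(_ sender: UIButton) {
        // present the image picker here
    }
}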
I've been thinking about this for quite some time now and I haven't found a suitable answer.
How performant are UIGestureRecognizers in Swift/iOS development?
Let me explain by giving you a theoretical example:
You have an app on the iPad Pro (big screen, plenty of space) with maybe dozens of different views and buttons and so on. For whatever reason you need every one of these views and buttons to be movable/clickable/resizable/...
What's better?
Adding one (or multiple) UIGestureRecognizer(s) to each view (which results in many active gesture recognizers and many small, specific handling methods [maybe grouped for each type of view])
Adding one single recognizer to the superview (which results in one active gesture recognizer and a big handling method that needs to cycle through the subviews and determine which one has been tapped)
I guess the first one is the simplest, but is it slower than the second? I'm not sure. My gut tells me that having that many UIGestureRecognizers can't be a good solution.
But either way, the system has to cycle through everything (in the worst case), be it many recognizers or many subviews. I'm curious about this.
Thank you
Let's look at your question in terms of the gesture-recognition flow: to deliver an event to the right gesture recognizer, the system walks down the view tree to find the deepest view that should receive the touch, using one specific UIView method:
- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event
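For readers more at home in Swift, here is a conceptual sketch of what the default implementation does; this is an approximation only, since the real UIKit logic also considers alpha and other details:

import UIKit

class HitTestingView: UIView {  // illustrative subclass
    override func hitTest(_ point: CGPoint, with event: UIEvent?) -> UIView? {
        guard isUserInteractionEnabled, !isHidden,
              self.point(inside: point, with: event) else {
            return nil  // this view and its whole subtree are ruled out
        }
        for subview in subviews.reversed() {  // front-most subviews are checked first
            let converted = subview.convert(point, from: self)
            if let hit = subview.hitTest(converted, with: event) {
                return hit  // a deeper view claimed the touch
            }
        }
        return self  // no subview claimed it; this view is the target
    }
}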
Adding one (or multiple) UIGestureRecognizer(s) to each view
This is the way I recommend. The system does most of the work for you and prevents mistakes that are very difficult to debug later - trust me. This is the right choice for UI, especially if you have multiple different gestures on different parts of the screen. In my particular case I have a huge video-player UI with around 20 gesture recognizers on one screen and it feels fine - no lag or dropped frames. This approach is simple and self-describing. I recommend implementing it in a storyboard or xib, so you can refer back to Interface Builder at any time and see at a glance which recognizer to update when you change the UI's behaviour. The speed of this approach is guaranteed by the system.
Adding one single recognizer to the superview
This approach works only when you have one simple gesture shared across many views (more than 20, say). That could happen if you are implementing a game where the user picks up and places bricks of different shapes, for example. It is not suitable for common UI tasks. The speed depends on your implementation, and based on the question itself I do not recommend it. The choice between these approaches is a design decision, not a performance one.
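If you do go the single-recognizer route, here is a rough sketch of the "big handling method" the question describes (view and method names are illustrative):

import UIKit

class BoardViewController: UIViewController {
    @IBOutlet var boardView: UIView!  // the superview containing all movable subviews

    override func viewDidLoad() {
        super.viewDidLoad()
        let tap = UITapGestureRecognizer(target: self, action: #selector(boardTapped(_:)))
        boardView.addGestureRecognizer(tap)
    }

    @objc func boardTapped(_ recognizer: UITapGestureRecognizer) {
        let point = recognizer.location(in: boardView)
        // Cycle through the subviews (front-most first) to find the hit one.
        guard let hit = boardView.subviews.reversed().first(where: { $0.frame.contains(point) }) else { return }
        print("tapped subview:", hit)
    }
}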
I'm trying to implement a 3D Touch feature that presents a summary of information (like Peek), but I don't want it to pop. I just want to preview the information the way the Contacts app does with contacts:
It only presents a UIView and doesn't deal with the two levels of force (peek and pop).
How can I do something like this?
P.S.: I don't want to deal with a long-press gesture.
Introduction
Hello
I know this is probably a bit too late, but in case someone else stumbles upon it: I certainly believe it is possible, and I don't think it's native behavior exclusive to Contacts. It would not be as simple as the UIKit API for peek-and-pop views, though. You would need to:
Steps
Subclass UIGestureRecognizer (it may also work with UITapGestureRecognizer), register the UITouches, and use their force property (see the sketch after this list).
Set up a UIViewController with a transparent but blurred background around the edges (together with a modalPresentationStyle of .overCurrentContext, if I recall correctly), with your desired content in the middle (much like the peek view). Then add a UIPanGestureRecognizer to the center view for dismissal/sliding up the buttons.
Then create a custom animated transition for that UIViewController, triggered once the force property of the UITouches registered by the subclassed UIGestureRecognizer is high enough, and reversed once the force property drops low enough.
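As a sketch of step 1 (the class and property names here are made up, not framework API): a continuous recognizer that republishes the touch force so the transition logic from step 3 can react to it.

import UIKit
import UIKit.UIGestureRecognizerSubclass  // required to set `state` from a subclass

// Continuously reports the normalized force (0...1) of the first touch.
class ForceGestureRecognizer: UIGestureRecognizer {
    private(set) var normalizedForce: CGFloat = 0

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent) {
        state = .began
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent) {
        if let touch = touches.first, touch.maximumPossibleForce > 0 {
            normalizedForce = touch.force / touch.maximumPossibleForce
        }
        state = .changed  // fires the action on every force update
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent) {
        normalizedForce = 0
        state = .ended
    }

    override func touchesCancelled(_ touches: Set<UITouch>, with event: UIEvent) {
        normalizedForce = 0
        state = .cancelled
    }
}

The action method would then drive the custom transition: present the preview controller once normalizedForce crosses a threshold, and reverse the animation when it falls back below it.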
Concluding notes
I believe this is a bit of a tedious task, and there might be a simpler way - for example, using a third-party library for long-press gestures (one that registers the size of the touch) - but it would not give the same feel.
Brent Simmons wrote in a blog post that tap gesture recognizers, presumably on a UIView, are less accessible than UIButtons. I'm trying to learn my way around making my app accessible, and I was curious if anyone could clarify what makes that less accessible than a UIButton, and what makes an element "accessible" to begin with?
For more customizability I was planning to build a button comprised of a UIView and tap gesture recognizers with some subviews, but now I'm not so sure. Is it possible to make a UIView as accessible as a UIButton?
Accessible in this context most likely refers to UI elements that can be used with Apple's accessibility features, such as VoiceOver (see the example below).
For example, a visually impaired person will not be able to see your view or subviews, or buttons for that matter; but the VoiceOver accessibility software built into every iOS device will read to them the kind of object and its title, something like "Button: Continue" (if the button title is "Continue").
You can see that a plain tap gesture recognizer will most likely not be announced by VoiceOver, making the view less "accessible".
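To answer the second question directly: you can usually close most of the gap by opting the view into accessibility yourself. A minimal sketch, assuming a custom view that already has a tap recognizer attached:

import UIKit

let cardView = UIView()  // hypothetical tap-enabled "button" view
cardView.isAccessibilityElement = true    // expose the view to VoiceOver at all
cardView.accessibilityLabel = "Continue"  // what VoiceOver speaks
cardView.accessibilityTraits = .button    // announced as "Button", like a real UIButton

A real UIButton gives you all of this for free, which is why the usual advice is to prefer it whenever the element is conceptually a button.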
I've got a UICollectionView, and I'd like to be able to touch-and-drag items up and out of the View, and thus delete them. (Very much along the same lines as how the Dock works on OS X: drag something off and let go, and it is removed).
I've done some research, but almost everything I find is about collection views that use drag-and-drop to reorder. I don't need to reorder (I'm happy to just remove the item at the given index from the source array and then reload); I just need to detect when an item is moved outside of the view and released.
So I suppose my questions are these:
1) Is that possible with the built-in CollectionView, some kind of itemWasDraggedOutsideViewFromIndex: method or something?
2) If not, is it something that can be done with a subclass (and specifically is it possible for a CollectionView beginner)?
3) Are there any code samples or tutorials you can recommend that do this?
Here is a helper class that I've been working on that does just that: https://github.com/Ice3SteveFortune/i3-dragndrop. Hope it helps. There are examples of how to use it in the TestApp.
UPDATE
About a year on, this is now a full-on drag-and-drop framework. Hope this proves useful: https://github.com/ice3-software/between-kit
There is no built-in method like you're suggesting. What you want can be done, but you'll have to handle it with a gesture recognizer and appropriate code for the drag/drop operation.
I tried using a subclass to do this and finally went back to putting it in my view controller. In my case, though, I was dragging stuff in/out of the collection view as well as two other views on the screen.
I don't know if you have the book, but the most helpful thing I found was Erica Sadun's The Core iOS 6 Developer's Cookbook, which has excellent code on drag/drop within collection views. I don't think it specifically addresses dragging outside of the collection view, but for me the solution was to put the gesture recognizer on the common superview and always use its coordinates rather than the subview's coordinates.
One problem I hit was I wanted to be able to select cells with a tap as well as drag, and there is no way (despite Apple's docs to the contrary) to require the single tap gesture to fail on the collection view. As a result, I ended up having to use the long press gesture to perform the entire operation, and there is no translationInView for long press (there is locationInView) so that required some additional work:
iOS - Gesture Recognizer translationInView
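Sketched, the workaround looks something like this: since UILongPressGestureRecognizer has no translationInView, record the starting location yourself and compute the delta on each update (names are illustrative):

import UIKit

class DragHandler: NSObject {
    private var startPoint: CGPoint = .zero

    @objc func handleLongPress(_ recognizer: UILongPressGestureRecognizer) {
        // The recognizer is on the common superview, so its coordinate
        // space is valid for every subview involved in the drag.
        let location = recognizer.location(in: recognizer.view)
        switch recognizer.state {
        case .began:
            startPoint = location
        case .changed:
            // The hand-rolled equivalent of translationInView.
            let translation = CGPoint(x: location.x - startPoint.x,
                                      y: location.y - startPoint.y)
            print("translation:", translation)
        default:
            break
        }
    }
}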
Another thing that will make it harder or easier is the number of possible drop targets you have. I had many, in many different types of views (straight UIView, collectionview, and scrollViews). I found it necessary to maintain a list of "drop targets" and to test for intersections with targets as the dragged object was moved. Somehow, you have to be able to determine whether the view you're intersecting is a place where a drop can occur.
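A sketch of that intersection test; converting every frame into one shared coordinate space (the window's, say) is what makes views from different superviews comparable:

import UIKit

func dropTarget(for draggedView: UIView, among dropTargets: [UIView],
                in window: UIWindow) -> UIView? {
    let draggedFrame = draggedView.convert(draggedView.bounds, to: window)
    // The first target whose on-screen rect intersects the dragged view.
    return dropTargets.first {
        $0.convert($0.bounds, to: window).intersects(draggedFrame)
    }
}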
If you are addressing the specific situation of dragging something out of a view to delete it (like dragging to a trash can view) and that's it, this should not be complicated. You have to remember that when you do a transform your frame becomes meaningless, but the center is still good; so you end up using the center for everything that you would normally use the frame for.
Here is the closest thing I found online that was helpful; I didn't end up using this class though as I thought it would be too complicated to implement in my app.
http://www.ancientprogramming.com/2012/04/05/drag-and-drop-between-multiple-uiviews-in-ios/
Hope this has been some help.
Yes, there is.
1 - Conform your view to UIDropInteractionDelegate.
2 - Then add this line to your viewDidLoad or init.
For a view controller, add to viewDidLoad:
self.view.addInteraction(UIDropInteraction(delegate: self))
Or, for a UIView, add to init:
self.addInteraction(UIDropInteraction(delegate: self))
3 - Then get the location of the item being dragged here and have fun with it:
func dropInteraction(_ interaction: UIDropInteraction, sessionDidUpdate session: UIDropSession) -> UIDropProposal {
    // self here is the UIView hosting the interaction; from a view
    // controller, use session.location(in: view) instead
    print(session.location(in: self))
    return UIDropProposal(operation: .move)
}
I know what I want to do, but I'm stumped as to how to do it: I want to implement something like the iOS multitasking gestures. That is, I want to "steal" touches from any view inside my view hierarchy if the number of touches is greater than, say, two. Of course, the gestures are not meant to control multitasking, it's just the transparent touch-stealing I'm after.
Since this is a fairly complex app (which makes extensive use of view controller containment), I want this to be transparent to the views that it happens to (i.e. I want to be able to display arbitrary views and hierarchies, including UIScrollViews, MKMapViews, UIWebViews, etc., without having to change their implementation to play nice with my gestures).
Just adding a gestureRecognizer to the common superview doesn't work, as subviews that are interaction enabled eat all the touches that fall on them.
Adding a visually transparent UI-enabled view as a sibling (but in front) of the main view hierarchy also doesn't work, since now this view eats all the touches. I've experimented with reimplementing touchesBegan: etc. in the touchView, but forwarding the touches to nextResponder doesn't work, because that'll be the common superview, in effect funnelling the touches right around the views that are supposed to be receiving them when the touchView gives them up.
I am sure I'm not the only one looking for a solution for this, and I'm sure there are smarter people than me that have this already figured out. I even suspect it might not actually be very hard, and just maybe my brain won't see the forest for the trees today. I'm thankful for any helpful answers anyway :)
I would suggest trying method swizzling, reimplementing touchesBegan on UIView. I think the best way is to store the number of touches in a static shared variable (so that each view can increment/decrement this value). It's just a very simple idea, take it with a grain of salt.
Hope this helps.
Ciao! :)
A possible, but potentially dangerous (if you aren't careful), approach is to subclass your application's UIWindow and override the sendEvent: method.
As this method is called for each touch event received by the app, you can inspect the event and then decide to call [super sendEvent:] (if the touch is not filtered), not call it (if the touch is filtered), or defer the call if you are still recognizing the touch.
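A minimal sketch of the idea, assuming a three-finger threshold; note that a careful implementation would also send touchesCancelled to views that already received the filtered touches, which is where the "dangerous" part comes in:

import UIKit

class GestureStealingWindow: UIWindow {
    override func sendEvent(_ event: UIEvent) {
        if event.type == .touches,
           let touches = event.allTouches,
           touches.count > 2 {
            // Filtered: handle the "stolen" touches here instead of
            // dispatching them down to the view hierarchy.
            return
        }
        super.sendEvent(event)  // unfiltered events proceed as normal
    }
}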
Another possibility is to play with the hitTest:withEvent: method, but this would require your stealing view to be placed properly in the view hierarchy, and I think it doesn't fit well when you have many view controllers. I believe the previous solution is more general-purpose.
Actually, adding a gesture recognizer on the common superview is the right way to do this. But it sounds like you may need to set either delaysTouchesBegan or cancelsTouchesInView (or both) to ensure that the gesture recognizer handles everything before letting it through to the child views.
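A sketch of that setup, with a pan recognizer standing in for the multitasking-style gesture (the three-touch minimum and the names are illustrative):

import UIKit

class ContainerViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        let recognizer = UIPanGestureRecognizer(target: self, action: #selector(handleSteal(_:)))
        recognizer.minimumNumberOfTouches = 3  // only multi-finger gestures are "stolen"
        recognizer.delaysTouchesBegan = true   // withhold touches from subviews until recognition resolves
        recognizer.cancelsTouchesInView = true // cancel touches already delivered to subviews
        view.addGestureRecognizer(recognizer)
    }

    @objc func handleSteal(_ recognizer: UIPanGestureRecognizer) {
        // multitasking-style gesture handling goes here
    }
}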