I have a map and I want to turn different regions of it into clickable elements. I know I could slice up the map in Photoshop and turn each region into a button individually, but that feels a bit hacky to me, and I don't know whether the aspect ratio of everything would stay the same from device to device when I piece the puzzle back together. What is the best way to take a single image and divide it into several complexly shaped clickable areas?
The most general-purpose solution is probably to make the entire image view clickable by attaching a tap gesture recognizer to it and then interpreting the location of the tap.
I'd suggest creating a custom subclass of UIView that has an image view inside it, attaches a tap gesture recognizer, and responds to the messages from the tap gesture recognizer to figure out which region was tapped.
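A minimal sketch of that approach, assuming each region can be described by a UIBezierPath in unit coordinates. The region names and outlines below are hypothetical placeholders; you would replace them with paths traced around your own map regions:

```swift
import UIKit

class MapRegionView: UIView {

    private let imageView = UIImageView(image: UIImage(named: "map"))

    // Regions defined in unit coordinates (0...1) so the hit areas
    // scale with the view on any device or aspect ratio.
    private let regions: [String: UIBezierPath] = [
        "north": UIBezierPath(rect: CGRect(x: 0.0, y: 0.0, width: 1.0, height: 0.4)),
        "south": UIBezierPath(rect: CGRect(x: 0.0, y: 0.4, width: 1.0, height: 0.6))
    ]

    override init(frame: CGRect) {
        super.init(frame: frame)
        imageView.frame = bounds
        imageView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        addSubview(imageView)
        addGestureRecognizer(UITapGestureRecognizer(target: self,
                                                    action: #selector(handleTap(_:))))
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    @objc private func handleTap(_ recognizer: UITapGestureRecognizer) {
        let point = recognizer.location(in: self)
        // Normalize the tap into unit coordinates before hit-testing the paths.
        let unitPoint = CGPoint(x: point.x / bounds.width, y: point.y / bounds.height)
        for (name, path) in regions where path.contains(unitPoint) {
            print("Tapped region: \(name)")  // navigate, highlight, etc.
        }
    }
}
```

Because the paths live in the 0...1 unit square and each tap is normalized against the view's bounds, the hit areas stretch with the view, matching an image drawn with a scale-to-fill content mode, which addresses the device-to-device aspect ratio concern.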
I am playing with the pinch/drag/rotate gestures (working with gesture recognizers, not raw touch events). It works nicely, but I have noticed one problem: for pinching and rotating, both fingers need to be inside the view. As long as the view is large enough, that's not a problem, but as soon as the view becomes very small (relative to its parent, which is a scroll view), it becomes a bit of a struggle to get the view "back". I have been checking some other apps, and WhatsApp does this best. When you add an emoticon to an image inside WhatsApp, you can perform all the gestures without needing both fingers on the view. For example, I made the emoticon really small, put two fingers down so that the emoticon lay between them, and the gesture recognizer applied my movement to the emoticon (but not to the scroll view, which is the superview). When I do the same inside my app, the gestures are applied to the scroll view unless I place both fingers on the view I want to transform.
Is there some setting on the gesture recognizers that enables this, or do I have to work with raw touch events and do the calculations myself?
Thanks
I'm trying to divide one image into more than one clickable part. For example, if the image is of a body and I tap the head, it should take me to HeadViewController, but if I tap the left hand, it should take me to a different view controller.
Any idea how to do that?
Easy method:
Add UIButtons with a clear background color on top of the image. If you position them with Auto Layout, the tappable areas keep the correct proportions as the image scales up and down.
Hard method:
Add a UITapGestureRecognizer to the UIImageView and work out which region contains the CGPoint where the tap was received. This is more complicated and must be calculated correctly.
For your case, I suggest the first method.
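A minimal sketch of the easy method, assuming a view controller with an outlet to the body image view. The 0.3/0.2 multipliers and the "showHead" segue identifier are hypothetical placeholders:

```swift
import UIKit

class BodyButtonsViewController: UIViewController {

    @IBOutlet var bodyImageView: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()

        let headButton = UIButton(type: .custom)
        headButton.backgroundColor = .clear  // invisible, but still tappable
        headButton.translatesAutoresizingMaskIntoConstraints = false
        headButton.addTarget(self, action: #selector(headTapped), for: .touchUpInside)
        view.addSubview(headButton)

        // Constrain the button proportionally to the image view so the
        // tappable area scales with the image on every device.
        NSLayoutConstraint.activate([
            headButton.centerXAnchor.constraint(equalTo: bodyImageView.centerXAnchor),
            headButton.topAnchor.constraint(equalTo: bodyImageView.topAnchor),
            headButton.widthAnchor.constraint(equalTo: bodyImageView.widthAnchor, multiplier: 0.3),
            headButton.heightAnchor.constraint(equalTo: bodyImageView.heightAnchor, multiplier: 0.2)
        ])
    }

    @objc private func headTapped() {
        performSegue(withIdentifier: "showHead", sender: self)  // to HeadViewController
    }
}
```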
Attach a tap gesture recognizer to your image view. Set user interaction enabled to true.
In the handler for the tap gesture, fetch the coordinates of the user's tap and write custom code that figures out which "hot box" the user tapped in.
Alternatively, you could create a custom subclass of UIGestureRecognizer that has multiple tap regions.
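A short sketch of the "hot box" idea, assuming the hot boxes are plain CGRects in the image view's coordinate space. The rects and segue identifiers here are hypothetical placeholders:

```swift
import UIKit

class BodyHotBoxViewController: UIViewController {

    @IBOutlet var bodyImageView: UIImageView!

    // Each hot box pairs a rect (in the image view's coordinates)
    // with the segue to perform when it is tapped.
    private let hotBoxes: [(rect: CGRect, segue: String)] = [
        (CGRect(x: 100, y: 0, width: 120, height: 120), "showHead"),
        (CGRect(x: 0, y: 150, width: 80, height: 200), "showLeftHand")
    ]

    override func viewDidLoad() {
        super.viewDidLoad()
        bodyImageView.isUserInteractionEnabled = true
        bodyImageView.addGestureRecognizer(
            UITapGestureRecognizer(target: self, action: #selector(imageTapped(_:))))
    }

    @objc private func imageTapped(_ recognizer: UITapGestureRecognizer) {
        // Fetch the tap's coordinates and find the first hot box containing them.
        let point = recognizer.location(in: bodyImageView)
        if let hit = hotBoxes.first(where: { $0.rect.contains(point) }) {
            performSegue(withIdentifier: hit.segue, sender: self)
        }
    }
}
```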
I have a UITableView with rows.
Each row has a small UIImageView aligned to the right (a "bookmark" icon).
The UIImageView has a UITapGestureRecognizer attached:
cell.favoritedImageView.userInteractionEnabled = true
cell.favoritedImageView.addGestureRecognizer(gestureRecognizer)
The problem is that to actually tap it with a finger (on a real device), you have to use the very tip of the finger and be quite accurate, because the image is small.
If you miss the imageView, the cell is tapped instead (didSelectRowAtIndexPath) and you end up performing a show segue to another view, so you have to go back and try again (not cool).
Question: what is the best way to solve this? I want it to be easy to be tapped.
I have some ideas:
Create a larger image with transparent surroundings (i.e. crop it out with a transparent background) -- the downside is that I also use this image in other views, where it is not tappable, so I'd have to create two versions of the image
Put the image inside a UIView and make the UIView big and tappable instead of the UIImageView
Add padding to the UIImageView (will this work, or will the padding not be recognized by the UITapGestureRecognizer?)
Per your own suggestion, you should create a much larger transparent view, attach the UITapGestureRecognizer to that view, and then nest your smaller image within it. That way appearances stay the same, but you get a much larger area in which the tap is recognized instead of it selecting the cell.
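A minimal sketch of that container approach, to be called when configuring the cell; the 44x44 and 20x20 point sizes are hypothetical placeholders:

```swift
import UIKit

// Builds a generous 44x44pt tap target around a small 20x20pt icon.
func makeBookmarkTapTarget(image: UIImage?, target: Any, action: Selector) -> UIView {
    let container = UIView(frame: CGRect(x: 0, y: 0, width: 44, height: 44))
    container.backgroundColor = .clear

    let imageView = UIImageView(image: image)
    imageView.frame = CGRect(x: 12, y: 12, width: 20, height: 20)
    container.addSubview(imageView)

    // The recognizer lives on the large container, not the small icon,
    // so near-misses are caught here instead of selecting the cell.
    container.addGestureRecognizer(UITapGestureRecognizer(target: target, action: action))
    return container
}
```

Because the tap gesture recognizer sits on the 44-point container and cancels touches in its view by default, taps that land near the icon never fall through to didSelectRowAtIndexPath.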
Here's the scenario I am trying to implement:
I already have a view that lets the user draw doodles by touching directly inside the view (like a doodle canvas). This view implements the touchesBegan, touchesMoved, and touchesEnded handlers to draw lines from the touch event parameters.
Now, instead of that, I want the user to be able to drag and move another UIView on this canvas view and still draw lines just as if they were touching it directly. For example, the user could drag a pen image view across the canvas to draw pen-style lines.
In this case, how can I transfer the movement of this pen image view to the canvas so that the canvas recognizes it? One more question: if I want this canvas view to recognize only the movements of dragged views rather than direct touches, what should I do?
(Sorry that this question is a little general; I just want some pointers.) Thanks!
A better way to look at the problem is: how can I transfer movement on the canvas into the location of a pen image view?
That's easy. You already have all the code that keeps track of movement on the canvas (touchesBegan, touchesMoved, touchesEnded), so all you need to do is change the center property of the pen view to track the movement on the canvas. (Obviously, you'll need to apply small X and Y offsets to put the center of the pen view at the correct location.)
The only non-obvious detail that you need to be aware of is that the pen view must have userInteractionEnabled set to NO. That way, the pen view won't interfere with touches reaching the canvas view.
Note that UIImageView has user interaction disabled by default, so you don't need to do anything if the pen view is a UIImageView. However, if you're using a generic UIView to display the pen, then you need to disable user interaction in storyboard under the Attributes inspector or disable it in code, e.g. in viewDidLoad.
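A minimal sketch of both points together, assuming your canvas already draws inside the touch handlers. The penTipOffset constant is a hypothetical placeholder that depends on your pen artwork:

```swift
import UIKit

class DoodleCanvasView: UIView {

    // UIImageView disables user interaction by default, so the pen
    // never swallows touches meant for the canvas underneath it.
    let penImageView = UIImageView(image: UIImage(named: "pen"))

    // Offset from the touch point to the view's center so the pen's
    // tip, not its middle, sits on the drawn line.
    private let penTipOffset = CGPoint(x: 12, y: -20)

    override init(frame: CGRect) {
        super.init(frame: frame)
        addSubview(penImageView)
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let location = touches.first?.location(in: self) else { return }
        // ... your existing line-drawing code using `location` goes here ...
        // Then move the pen so it appears to be drawing the line.
        penImageView.center = CGPoint(x: location.x + penTipOffset.x,
                                      y: location.y + penTipOffset.y)
    }
}
```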
Essentially what I'm trying to do is use this gesture functionality as demonstrated below
https://www.youtube.com/watch?v=tG3lzBDMRQQ
http://www.vogella.com/articles/AndroidGestures/article.html
Except instead of just setting a color, I want to be able to add a variety of visual effects to the lines drawn during a gesture motion.
i.e., pulsating thickness / color changing / particle effects like a sparkler-stick firework, etc.
Where would one start in attempting such a venture?
Edit: one method I'm considering is to set the gesture color to transparent but have a separate listener for touches, as in some paint-type apps, so that it simultaneously creates the gesture and draws the proper image over top of it. Would this work? Can the screen listen for input from two views at once?
GestureOverlayView is a normal view that has the capability to draw on screen. You can simply extend GestureOverlayView and add your custom effects: set your own paint style, or override dispatchTouchEvent() to add your own effects while drawing.