Handling tap actions on specific parts of a UIImageView in iOS

I have an image view whose image contains various small child images. The catch is that I receive this entire image (the parent image with its child images) as one single image from a web service at run time.
How can I handle taps on these child images (pasted or attached onto the main image) after the parent image is received at run time?
I know I could add a UIView over each of those small images and then handle their taps. But the problem is that I don't know how to extract the coordinates of the child images within the parent image, so I can't place UIViews on the child images...
Is the solution to this problem even possible? I am quite stuck.
Any help will be greatly appreciated.
Sorry in advance if the question is vague. I'm new to iOS development.

Get the point of the gesture:
CGPoint loc = [tap locationOfTouch:0 inView:imageView];
Then you can check loc against the coordinates of the child images, that is, if you know the child images' coordinates within the parent image.
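For example, here is a minimal Swift sketch of that idea. It assumes you already know (or can obtain, e.g. from the same web service) the child images' frames in the image view's coordinate space; ParentImageViewController and the childFrames entries are purely illustrative names.

import UIKit

class ParentImageViewController: UIViewController {

    @IBOutlet var imageView: UIImageView!

    // Hypothetical child-image frames in the image view's coordinate space.
    // In practice these would have to come from somewhere, e.g. returned by
    // the web service alongside the parent image.
    let childFrames: [String: CGRect] = [
        "logo":  CGRect(x: 20,  y: 40,  width: 60, height: 60),
        "badge": CGRect(x: 120, y: 200, width: 44, height: 44)
    ]

    override func viewDidLoad() {
        super.viewDidLoad()
        imageView.isUserInteractionEnabled = true   // image views ignore touches by default
        let tap = UITapGestureRecognizer(target: self, action: #selector(handleTap(_:)))
        imageView.addGestureRecognizer(tap)
    }

    @objc func handleTap(_ tap: UITapGestureRecognizer) {
        let loc = tap.location(in: imageView)
        // Find the first child rect that contains the tap point.
        if let match = childFrames.first(where: { $0.value.contains(loc) }) {
            print("Tapped child image: \(match.key)")
        }
    }
}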

Related

Swift: Drawing a UIBezierPath based on touch in a UIView

I've been looking at this thread as I'm trying to implement the same thing. However, I see that the Canvas class is implemented as a subclass of UIImageView. I'm trying to do the same thing except in a UIView. How will using a UIView rather than UIImageView affect the implementation of this solution? I see self.image used a couple times, but I don't know how I'd change that since I don't think that is available in a generic UIView.
Yes, you can implement this as a UIView subclass. Your model should hold the locations of the touch events (or the paths constructed from those locations), and then the drawRect of the view can render those paths. Alternatively, you can create CAShapeLayer objects associated with those paths. Both approaches work fine.
Note, there is some merit to the approach of making snapshots (saved as UIImage objects) that you either show in a UIImageView or draw manually in the drawRect of your UIView subclass. As your drawings get more and more complicated, you'll start to suffer performance issues if your drawRect has to redraw all of the path segments (it can become thousands of points surprisingly quickly, because a single screen gesture generates a lot of touches) upon every touch.
IMHO, the other answer you reference goes too far by making a new snapshot upon every touchesMoved. When you consider a full-resolution image on a Retina iPad or an iPhone 6 Plus, that's a large image snapshot to create upon every touch event. I personally adopt a hybrid approach: my drawRect or CAShapeLayer renders the current path associated with the current gesture (i.e. the collection of touchesMoved events between touchesBegan and touchesEnded), and when the gesture finishes, it creates a new snapshot.
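As a rough illustration of the model-plus-drawRect idea (a sketch, not this answer's exact code), a UIView subclass in Swift could collect the touch locations into bezier paths and redraw them; CanvasView is just an illustrative name.

import UIKit

class CanvasView: UIView {

    // The "model": paths already drawn, plus the path for the gesture in progress.
    private var finishedPaths: [UIBezierPath] = []
    private var currentPath: UIBezierPath?

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let point = touches.first?.location(in: self) else { return }
        let path = UIBezierPath()
        path.lineWidth = 3
        path.move(to: point)
        currentPath = path
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let point = touches.first?.location(in: self) else { return }
        currentPath?.addLine(to: point)
        setNeedsDisplay()                      // ask UIKit to call draw(_:) again
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        if let path = currentPath { finishedPaths.append(path) }
        currentPath = nil
        setNeedsDisplay()
    }

    override func draw(_ rect: CGRect) {
        UIColor.black.setStroke()
        for path in finishedPaths { path.stroke() }
        currentPath?.stroke()
    }
}

This is the naive version; the snapshot or hybrid optimizations described above would replace finishedPaths with a cached UIImage once a gesture ends.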
In the answer to that question, self.image is drawn into the drawing context first, then drawing is applied on top, then finally the image is updated to be the old image with new content drawn on top.
Since you just want to add a UIBezierPath, I'd create a CAShapeLayer into which you place your bezier path, and place it on top of your view's backing layer (self.view.layer). There's no need to do anything in drawRect.
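A minimal Swift sketch of that CAShapeLayer route (the helper name is illustrative):

import UIKit

// Render a UIBezierPath via a CAShapeLayer placed on the view's backing layer.
func addShape(path: UIBezierPath, to view: UIView) -> CAShapeLayer {
    let shapeLayer = CAShapeLayer()
    shapeLayer.path = path.cgPath
    shapeLayer.strokeColor = UIColor.black.cgColor
    shapeLayer.fillColor = nil                  // stroke only, no fill
    shapeLayer.lineWidth = 3
    view.layer.addSublayer(shapeLayer)          // sits on top of the view's own content
    return shapeLayer
}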

How to detect taps on small, closely spaced buttons in iOS?

I need to build an app that shows an image. On the image there are many points where the user can tap, and we need to take input depending on the location of the tap. The tap locations are fixed.
The user can zoom the image, and I need to detect multiple kinds of taps (single tap, double tap, etc.).
The biggest problem we are facing is that there are too many points close to each other, so when we tap on one point another point registers the tap.
Below is the image I need to work with.
I need to detect taps on all the red dots and make a decision based on which one was tapped. The red dots will not be visible to the user.
Here is what I have tried:
Placed buttons on the image, as shown. But the problem is that when the user taps a button, either the button's tap event isn't called, or it isn't the button the user intended to tap.
What I am thinking of doing now:
Put the image in a scroll view, detect taps on the scroll view, and then determine which point was tapped based on the coordinates (a sketch of this appears after the answer below).
Is there an easier way to detect these taps?
Your requirement is a pretty complex one.
Here you can take the help of Core Image. You need to process the image and extract its key details. "Morphological operations" will also help you detect objects in the image. Take a look at these links:
Core image processing
Morphological Operations
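For the coordinate-based route described in the question (put the image in a scroll view, take the tap location, and decide by coordinates), a nearest-dot hit test avoids the "wrong point" problem because only the closest dot within a tolerance wins. This Swift sketch assumes the red-dot positions are known in the image view's coordinate space; DotHitTester and the tolerance value are illustrative.

import UIKit

class DotHitTester {

    // Fixed dot positions in the image view's coordinate space.
    let dotCenters: [CGPoint]
    // How close a tap must be to count as hitting a dot.
    let tolerance: CGFloat

    init(dotCenters: [CGPoint], tolerance: CGFloat = 15) {
        self.dotCenters = dotCenters
        self.tolerance = tolerance
    }

    // Returns the index of the nearest dot within the tolerance, or nil.
    func dotIndex(forTapAt point: CGPoint) -> Int? {
        var best: (index: Int, distance: CGFloat)?
        for (index, center) in dotCenters.enumerated() {
            let distance = hypot(center.x - point.x, center.y - point.y)
            if distance <= tolerance && distance < (best?.distance ?? .greatestFiniteMagnitude) {
                best = (index, distance)
            }
        }
        return best?.index
    }
}

In the tap handler you would call something like dotHitTester.dotIndex(forTapAt: tap.location(in: imageView)); because the location is taken in the image view that the scroll view zooms, the same dot coordinates should keep working at any zoom level.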

Simulate 360 degree object rotation using multiple images on iOS

I would like to show a swipe-able 360 degree view of a product along a single axis by using multiple images stitched together to make it animated.
I'm new to iOS development, and am hoping to get pointed in the right direction to find libraries or built-in methods that could help me achieve this. I'm guessing this is a fairly common task, but I'm not even sure of the correct terminology. (I'm dabbling in RubyMotion as well, so that would be a bonus if it could work using that approach.)
Here is how I might do it (a rough sketch in code follows the steps):
Get an image showing up in the UI, running on the phone (the base case) :)
Make a 'ThreeSixtyImageView' (a subclass of UIView) that contains a big UIImageView.
Keep an NSArray of UIImages in your ThreeSixtyImageView class; load all your UIImages into that array.
Keep a number that's an index into that array. When it changes, set the UIImageView's image to the UIImage at that array index! Hook up a button that increments the index (and shows that image) to make sure that works.
Add a UIPanGestureRecognizer to track touch state.
When the pan gesture begins, remember which image you're on and where the touch started (as an anchor point).
When the pan gesture updates, subtract the anchor and divide by something that feels nice to get 'how much the user wants the images to rotate'. This gives you a new image index.
Update your UIImageView with the image at that new index (from your array).
If there's a step here you don't understand, look at the examples included in the Xcode/iOS documentation and copy their code! The sample code is pretty good, and it helped me a lot with editing XIB documents and learning about gesture recognizers.
good luck!
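A rough Swift sketch of the steps above; ThreeSixtyImageView and the pointsPerFrame value are illustrative choices, not a definitive implementation.

import UIKit

class ThreeSixtyImageView: UIView {

    private let imageView = UIImageView()
    private var images: [UIImage] = []
    private var currentIndex = 0
    private var indexAtPanStart = 0
    private let pointsPerFrame: CGFloat = 10    // "divide by something that feels nice"

    init(frame: CGRect, images: [UIImage]) {
        super.init(frame: frame)
        self.images = images
        imageView.frame = bounds
        imageView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        imageView.contentMode = .scaleAspectFit
        imageView.image = images.first
        addSubview(imageView)

        let pan = UIPanGestureRecognizer(target: self, action: #selector(handlePan(_:)))
        addGestureRecognizer(pan)
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    @objc private func handlePan(_ pan: UIPanGestureRecognizer) {
        guard !images.isEmpty else { return }
        switch pan.state {
        case .began:
            indexAtPanStart = currentIndex                  // remember where we started
        case .changed:
            let dx = pan.translation(in: self).x            // distance from the anchor point
            let offset = Int(dx / pointsPerFrame)           // how many frames to advance
            let count = images.count
            currentIndex = ((indexAtPanStart + offset) % count + count) % count  // wrap around
            imageView.image = images[currentIndex]
        default:
            break
        }
    }
}

translation(in:) is already measured from where the pan began, so it plays the role of "subtract the anchor" from the steps above.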

Open view controller with custom animation

I have a scroll view with many images. When an image is touched, I need to open a new view controller showing that image along with images related to it.
Currently I use pushViewController.
So the question is: is it possible to open the new view controller with a zoom animation?
i.e. after the user touches an image, that image zooms into the center of the screen (and at that point it is already the new view controller).
If it's possible, please let me know how I can achieve it.
Thanks
I have good news and bad news for you. The good news is yes, it can be done! The bad news is that it can be complex.
The way I ended up doing it was as follows (a rough sketch is below):
When the user taps the image, you know which image to zoom, and you grab a reference to it and its frame.
Overlay a new view over the current view containing the scroll view, then add a NEW UIImageView containing another UIImage of the one the user tapped (it could even be a higher-resolution version).
Animate that view to fill the screen (this image can be in a zoomable scroll view too; that's for future work!).
When you want to dismiss, animate the frame back down to exactly what's in the scroll view, then remove the overlay view.
Now the user is more or less back to where they were before the tap.
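A rough Swift sketch of that overlay-and-animate idea; presentZoomed and showDetail are illustrative names, and the hand-off to the real detail view controller is left to the caller.

import UIKit

extension UIViewController {

    func presentZoomed(_ image: UIImage,
                       from sourceImageView: UIImageView,
                       then showDetail: @escaping () -> Void) {
        guard let window = view.window else { return }

        // 1. Put a new image view exactly on top of the tapped one.
        let startFrame = sourceImageView.convert(sourceImageView.bounds, to: window)
        let zoomingView = UIImageView(image: image)
        zoomingView.contentMode = .scaleAspectFill
        zoomingView.clipsToBounds = true
        zoomingView.frame = startFrame
        window.addSubview(zoomingView)

        // 2. Animate it to fill the screen, then let the caller push/present the
        //    detail controller (without animation, since the zoom was the transition)
        //    and remove the overlay.
        UIView.animate(withDuration: 0.3, animations: {
            zoomingView.frame = window.bounds
        }, completion: { _ in
            showDetail()
            zoomingView.removeFromSuperview()
        })
    }
}

Dismissal is the same dance in reverse: overlay an image view over the detail screen, animate its frame back to the image's frame in the scroll view, then remove it.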

Doodles... OpenGL?

I am making an app where one feature is the ability to place pre-made doodles on the page. Users will be able to zoom the doodles in and out and place them wherever they wish. This is all for the iPad. Do I need to use OpenGL for this, or is there a better/easier way? (I'm new to iOS programming.)
You should be able to achieve this using CALayers. You will need to add the QuartzCore framework for this to work. The idea is to represent each doodle as a single CALayer. If your doodles are images, you can use the contents property to assign the doodle to the layer. You will need to assign a CGImageRef, which you can easily retrieve from the CGImage property of a UIImage object.
You will need a view to act as your drawing board. Since you want to be able to move the doodles and alter their sizes, you will have to attach a UIPanGestureRecognizer for moving the layers and a UIPinchGestureRecognizer for zooming the doodles in and out. Since the recognizers can only be attached to a view, not to layers, the non-trivial part when the gesture handlers are called is identifying which sublayer of the view they are manipulating. You can get the touch location using locationInView: for the pan gesture and locationOfTouch:inView: for the pinch gesture, with the view argument being the view the gesture is being performed on, which you can retrieve via gesture.view. Once you identify the layer in focus, you can use translationInView: of the pan gesture to move the layer, and the scale property of the pinch gesture to transform it.
While CALayer objects are lightweight, you could face problems when there are simply too many of them, so stress-test your application. Another roadblock is that images are usually memory hogs, so you might not be able to fit in a lot of doodles.
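To make this concrete, here is a hedged Swift sketch of a CALayer-based doodle board along these lines. DoodleBoardView is an illustrative name, and the layer lookup is simplified to a frame check rather than a full hit test.

import UIKit

class DoodleBoardView: UIView {

    private var activeLayer: CALayer?

    override init(frame: CGRect) {
        super.init(frame: frame)
        addGestureRecognizer(UIPanGestureRecognizer(target: self, action: #selector(handlePan(_:))))
        addGestureRecognizer(UIPinchGestureRecognizer(target: self, action: #selector(handlePinch(_:))))
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    // Each doodle is a single CALayer whose contents is the doodle's CGImage.
    func addDoodle(_ image: UIImage, at center: CGPoint) {
        let doodle = CALayer()
        doodle.contents = image.cgImage
        doodle.bounds = CGRect(origin: .zero, size: image.size)
        doodle.position = center
        layer.addSublayer(doodle)
    }

    // Identify which sublayer the gesture is manipulating (topmost first).
    private func doodleLayer(at point: CGPoint) -> CALayer? {
        return layer.sublayers?.reversed().first { $0.frame.contains(point) }
    }

    @objc private func handlePan(_ pan: UIPanGestureRecognizer) {
        if pan.state == .began {
            activeLayer = doodleLayer(at: pan.location(in: self))
        }
        guard let doodle = activeLayer else { return }
        let translation = pan.translation(in: self)
        CATransaction.begin()
        CATransaction.setDisableActions(true)       // move without implicit animation
        doodle.position = CGPoint(x: doodle.position.x + translation.x,
                                  y: doodle.position.y + translation.y)
        CATransaction.commit()
        pan.setTranslation(.zero, in: self)
    }

    @objc private func handlePinch(_ pinch: UIPinchGestureRecognizer) {
        if pinch.state == .began {
            activeLayer = doodleLayer(at: pinch.location(in: self))
        }
        guard let doodle = activeLayer else { return }
        CATransaction.begin()
        CATransaction.setDisableActions(true)
        doodle.transform = CATransform3DScale(doodle.transform, pinch.scale, pinch.scale, 1)
        CATransaction.commit()
        pinch.scale = 1                             // apply the scale incrementally
    }
}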
