How to split an image into polygons - iOS

Similar to some coloring apps that fill a region with the selected color when tapped, I would like to convert a PNG image into polygons that can be tapped in order to perform a certain action. An example picture is posted below.
For this example, I would like to implement the logic to divide the image into regions 1, 2, 3, and 4 (not necessarily in this order), so that when the user taps the upper-left rectangle action1 is executed, for the upper-right rectangle action2, for the ellipse its own action, and for the rest action3.
Does anyone know how to do it by using SpriteKit?

You don't need to split the image into regions. Attach a tap gesture recognizer to your image view.
In the action of the tap gesture recognizer, take the coordinates of the tap and figure out which region it falls into.
Rectangular regions are really easy. You just see if the coordinates fall within the x/y bounds of the rectangle.
For more complex shapes, you can create UIBezierPath shapes and use the UIBezierPath contains(_:) method to see if the tap point falls in a particular path.
The simplest way to structure your code might be an array of structs, each containing a UIBezierPath and a closure to invoke if a tap lands in that path; when a tap lands inside one of the paths, invoke its closure (a sketch of this approach follows below).
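Here is a minimal sketch of that idea, assuming a UIImageView outlet; the region rectangles, the ellipse, and the printed action names are hypothetical placeholders for the example image:

    import UIKit

    // One struct per tappable region: the path that bounds it and the action to run.
    struct TappableRegion {
        let path: UIBezierPath
        let action: () -> Void
    }

    class RegionsViewController: UIViewController {
        @IBOutlet var imageView: UIImageView!

        // Paths are defined in the image view's coordinate space.
        private lazy var regions: [TappableRegion] = [
            TappableRegion(path: UIBezierPath(rect: CGRect(x: 0, y: 0, width: 160, height: 120)),
                           action: { print("action1: upper-left rectangle") }),
            TappableRegion(path: UIBezierPath(rect: CGRect(x: 160, y: 0, width: 160, height: 120)),
                           action: { print("action2: upper-right rectangle") }),
            TappableRegion(path: UIBezierPath(ovalIn: CGRect(x: 80, y: 140, width: 160, height: 100)),
                           action: { print("ellipse action") })
        ]

        override func viewDidLoad() {
            super.viewDidLoad()
            imageView.isUserInteractionEnabled = true
            let tap = UITapGestureRecognizer(target: self, action: #selector(handleTap(_:)))
            imageView.addGestureRecognizer(tap)
        }

        @objc private func handleTap(_ recognizer: UITapGestureRecognizer) {
            let point = recognizer.location(in: imageView)
            // The first path containing the point wins; anything else is "the rest".
            if let region = regions.first(where: { $0.path.contains(point) }) {
                region.action()
            } else {
                print("action3: everything else")
            }
        }
    }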

Related

iOS MapKit - drawing shapes on map

I'd like to allow users to mark some area on the map. This area should be any closed shape. For instance, the user could draw something like this:
In the next step I'd like to calculate the region of this shape. How can I achieve this?
You can use MKPolygon.
First of all, you'll need to disable user interaction on the map view so that it won't move around while you're trying to draw on it. Next, use the UIResponder methods touchesBegan, touchesMoved, and touchesEnded to record the points the user touches as the finger moves. Finally, create an MKPolygon from the array of recorded points (see the sketch below).
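A rough sketch of that flow, assuming the map view is a subview of the view controller's root view so the touches reach the controller once the map's own interaction is disabled; the renderer colors are arbitrary:

    import UIKit
    import MapKit

    class DrawShapeViewController: UIViewController, MKMapViewDelegate {
        @IBOutlet var mapView: MKMapView!
        private var touchPoints: [CGPoint] = []

        override func viewDidLoad() {
            super.viewDidLoad()
            mapView.delegate = self
            mapView.isUserInteractionEnabled = false   // keep the map still while drawing
        }

        override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
            touchPoints.removeAll()
            if let point = touches.first?.location(in: mapView) { touchPoints.append(point) }
        }

        override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
            if let point = touches.first?.location(in: mapView) { touchPoints.append(point) }
        }

        override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
            // Convert the recorded screen points to map coordinates and build the polygon.
            let coordinates = touchPoints.map { mapView.convert($0, toCoordinateFrom: mapView) }
            let polygon = MKPolygon(coordinates: coordinates, count: coordinates.count)
            mapView.addOverlay(polygon)
        }

        // Give the polygon a visible fill and outline.
        func mapView(_ mapView: MKMapView, rendererFor overlay: MKOverlay) -> MKOverlayRenderer {
            let renderer = MKPolygonRenderer(overlay: overlay)
            renderer.fillColor = UIColor.systemBlue.withAlphaComponent(0.3)
            renderer.strokeColor = .systemBlue
            renderer.lineWidth = 2
            return renderer
        }
    }

From the finished polygon you can then work out the region of the shape, for instance starting from its boundingMapRect.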

iOS Circular Slider

I want to create a circular slider like below.
But I want two additional features:
1) I want to be able to start the slider from any point, but in the figure it starts from 0.
2) I want to include multiple sliders in a single circular black plot.
I'm sharing the link of this project:
https://www.cocoacontrols.com/controls/circularsliderdemo
Can anyone help me implement these features?
Thanks in advance.
Take a look at CAShapeLayer. You could create a path that is a full circle and use the strokeStart and strokeEnd properties to draw only part of the circle (a short sketch follows below). You could use Core Animation to animate between the beginning and the end.
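A small sketch of that idea; the colors, line width, and the 0.25/0.75 stroke fractions are only illustrative:

    import UIKit

    // Build a full-circle shape layer and show only part of it via strokeStart/strokeEnd.
    func makeArcLayer(in bounds: CGRect) -> CAShapeLayer {
        let layer = CAShapeLayer()
        let center = CGPoint(x: bounds.midX, y: bounds.midY)
        let radius = min(bounds.width, bounds.height) / 2 - 10

        // Full circle starting at 12 o'clock.
        layer.path = UIBezierPath(arcCenter: center,
                                  radius: radius,
                                  startAngle: -.pi / 2,
                                  endAngle: 3 * .pi / 2,
                                  clockwise: true).cgPath
        layer.fillColor = UIColor.clear.cgColor
        layer.strokeColor = UIColor.systemOrange.cgColor
        layer.lineWidth = 8

        // Show only a portion of the circle: the slider does not have to start at 0.
        layer.strokeStart = 0.25
        layer.strokeEnd = 0.75
        return layer
    }

Stacking several such layers on the same circular track would give you multiple sliders in one circular plot.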
There is an open source one-finger rotation gesture recognizer on GitHub that would be a good start for detecting and responding to the twirl gesture such a control needs. EDIT: It's called KTOneFingerRotationGestureRecognizer (link)
Those are some ideas to help get you started.
I have a project on GitHub called iOS-CAAnimation-group-demo that includes a "clock wipe" animation. The clock wipe works by setting up a shape layer as the mask layer for an image view, installing a full-circle arc that is wide enough to completely fill a rectangular area, and then animating the shape layer's strokeEnd property to reveal or hide the image view. The clock wipe is much more complex than what you need, but it would give you the seed of what you want: you'd use a shape layer with a much thinner line width, and you would use it as a content layer, not as a mask (see the animation sketch below).
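For the animation part, a short sketch of animating strokeEnd on such a content layer (using the arc layer from the sketch above; the duration and target value are arbitrary):

    import UIKit

    // Animate the visible sweep of a stroked shape layer by animating strokeEnd.
    func animateSweep(of arcLayer: CAShapeLayer, to value: CGFloat, duration: CFTimeInterval = 1.0) {
        let sweep = CABasicAnimation(keyPath: "strokeEnd")
        sweep.fromValue = arcLayer.strokeEnd
        sweep.toValue = value
        sweep.duration = duration
        arcLayer.strokeEnd = value          // update the model value so the change sticks
        arcLayer.add(sweep, forKey: "sweep")
    }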

iOS Triangular Image view

I'm making a game in which the game ends when the player (a triangular rocket) hits an object flying at it (a rock). I have everything working well, but my problem is that the rocket is a triangle yet the image view it's in is a rectangle. So if the edge of the image view touches the rock, the game ends even though the actual rocket didn't touch the object. How can I make the rock image view not recognize the parts of the rocket image view that are empty, i.e. effectively a triangular image view?
Thank you for your help. Let me know if you need more info or want to see the collision code I have.
You can represent the triangle analytically with 3 points and the rock with a center and radius, then implement a hit-test algorithm between those two shapes (a sketch follows below). Alternatively, draw the two shapes into a graphics context using appropriate blending and check for overlapping pixels: for instance, draw one shape in red and the other in green and look for a pixel that is both red and green. You could even do that with two image views in those colors at 0.5 alpha added on top of a third, invisible view, but you would need to get the image from the view and iterate through all of its pixels. In either case, do this check only after the corresponding view frames overlap.
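Here is a hedged sketch of the analytic variant: a triangle given by three points against a circle given by a center and radius. The function names are hypothetical.

    import UIKit

    // The shapes intersect if the circle's center is inside the triangle,
    // or if any triangle edge passes within `radius` of the center
    // (the second test also covers a triangle vertex lying inside the circle).
    func triangleIntersectsCircle(_ a: CGPoint, _ b: CGPoint, _ c: CGPoint,
                                  center: CGPoint, radius: CGFloat) -> Bool {
        if pointInTriangle(center, a, b, c) { return true }
        return distance(from: center, toSegment: (a, b)) <= radius
            || distance(from: center, toSegment: (b, c)) <= radius
            || distance(from: center, toSegment: (c, a)) <= radius
    }

    // Same-side test using the signs of the three sub-triangle areas.
    private func pointInTriangle(_ p: CGPoint, _ a: CGPoint, _ b: CGPoint, _ c: CGPoint) -> Bool {
        func sign(_ p1: CGPoint, _ p2: CGPoint, _ p3: CGPoint) -> CGFloat {
            (p1.x - p3.x) * (p2.y - p3.y) - (p2.x - p3.x) * (p1.y - p3.y)
        }
        let d1 = sign(p, a, b), d2 = sign(p, b, c), d3 = sign(p, c, a)
        let hasNegative = d1 < 0 || d2 < 0 || d3 < 0
        let hasPositive = d1 > 0 || d2 > 0 || d3 > 0
        return !(hasNegative && hasPositive)
    }

    // Distance from a point to a line segment.
    private func distance(from p: CGPoint, toSegment segment: (CGPoint, CGPoint)) -> CGFloat {
        let (a, b) = segment
        let ab = CGPoint(x: b.x - a.x, y: b.y - a.y)
        let ap = CGPoint(x: p.x - a.x, y: p.y - a.y)
        let lengthSquared = ab.x * ab.x + ab.y * ab.y
        // Project p onto the segment and clamp the parameter to [0, 1].
        let t = lengthSquared == 0 ? 0 : max(0, min(1, (ap.x * ab.x + ap.y * ab.y) / lengthSquared))
        let closest = CGPoint(x: a.x + t * ab.x, y: a.y + t * ab.y)
        let dx = p.x - closest.x, dy = p.y - closest.y
        return (dx * dx + dy * dy).squareRoot()
    }

In the game you would call triangleIntersectsCircle only once the two view frames overlap, as suggested above.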

iOS: Segmented touchmap over image

I've been trying to find a way to solve this problem, and haven't been able to find anything useful, so forgive me if this is a duplicate of something I couldn't find.
I have, essentially, a large complicated image in the style of a stained glass window in a scroll view so that I can pan and zoom around it. Each of the individual segments of the window has some information associated with it. What I need to be able to do is tap on any of the segments and determine which segment was tapped so that I can display the information. What I'm not sure of is how to do the mapping between touch points and segments. Most of the segments aren't even regular polygon shapes let alone orthogonal squares, so I can't think of a straightforward way to determine which segment I've tapped.
If anybody has any ideas as to how I might go about implementing this, it would be most appreciated!
Cheers
Put each individual segment in a different layer. Then you can hit-test which layer was tapped. Design the test so that if a layer was tapped in a transparent area (i.e. outside its segment), the test falls through to the next layer behind it. The test succeeds when it finds a layer whose non-transparent region is under the tap; since there is one segment per layer, that layer identifies the tapped segment (see the sketch below).
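A rough sketch of that hit test, assuming each segment image sits in its own CALayer under a common superlayer; the 0.01 alpha threshold and the names are arbitrary:

    import UIKit

    // Return the topmost segment layer whose content is non-transparent at `point`
    // (point is in the coordinate space of the layers' common superlayer).
    func segmentLayer(at point: CGPoint, in segmentLayers: [CALayer]) -> CALayer? {
        for layer in segmentLayers.reversed() {                 // topmost first
            let localPoint = layer.convert(point, from: layer.superlayer)
            guard layer.bounds.contains(localPoint) else { continue }
            if alpha(of: layer, at: localPoint) > 0.01 {
                return layer                                    // opaque pixel: this segment was tapped
            }
            // Transparent here, so fall through to the next layer behind it.
        }
        return nil
    }

    // Render just the pixel of interest into a 1x1 bitmap and read its alpha.
    private func alpha(of layer: CALayer, at point: CGPoint) -> CGFloat {
        var pixel: [UInt8] = [0, 0, 0, 0]
        return pixel.withUnsafeMutableBytes { buffer -> CGFloat in
            guard let context = CGContext(data: buffer.baseAddress, width: 1, height: 1,
                                          bitsPerComponent: 8, bytesPerRow: 4,
                                          space: CGColorSpaceCreateDeviceRGB(),
                                          bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
            else { return 0 }
            // Shift the layer so that `point` lands in the context's single pixel.
            context.translateBy(x: -point.x, y: -point.y)
            layer.render(in: context)
            return CGFloat(buffer[3]) / 255.0
        }
    }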

Possible to ignore pan gestures on transparent parts of UIImageViews?

I'm working on an app that lets the user stack graphics on top of each other.
Each graphic is instantiated as a UIImageView and is transparent outside of the actual graphic. I'm also using pan gestures to let the user drag them around the screen.
So when you have a bunch of graphics of different sizes and shapes on top of one another, you may have the illusion that you are touching a sub-indexed view, but you're actually touching the top one, because some transparent part of it is hovering over your touch point.
I was wondering if anyone had ideas on how to listen for the pan gesture ONLY on the solid part of the image view, or anything else that would tighten up the user experience so that whatever the user touches is what gets selected. Thanks
Create your own subclass of UIImageView. In your subclass, override the pointInside:withEvent: method to return NO if the point is in a transparent part of the image (a minimal sketch of such a subclass follows below).
Of course, you need to determine if a point is in a transparent part. :)
If you happen to have a CGPath or UIBezierPath that outlines the opaque parts of your image, you can do it easily using CGPathContainsPoint or -[UIBezierPath containsPoint:].
If you don't have a handy path, you will have to examine the image's pixel data. There are many answers on stackoverflow.com already that explain how to do that. Search for get pixel CGImage or get pixel UIImage.
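A minimal sketch of the path-based case described above; opaquePath is a hypothetical property you would fill with the outline of the opaque region, and without it you would sample the image's pixel alpha instead, as discussed:

    import UIKit

    class OpaqueHitImageView: UIImageView {

        /// Outline of the non-transparent part of the image, in the view's coordinate space.
        var opaquePath: UIBezierPath?

        override func point(inside point: CGPoint, with event: UIEvent?) -> Bool {
            guard super.point(inside: point, with: event) else { return false }
            guard let path = opaquePath else { return true }   // no path: behave like a normal image view
            return path.contains(point)
        }
    }

Because the view reports "not inside" for transparent areas, its pan gesture recognizer never starts there and the touch goes to the view underneath.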
