Circular dial using small buttons in Swift - iOS

I want to create a circular dial in Swift. The circle is divided into segments, each represented by an image, and I want to arrange those images to form the circle. Each image should be tappable so that the image placed at the centre of the arc can be changed.
So I need to create a circle using buttons in Swift. Please note that I don't want to create a circular button; I want to create a circle out of buttons, something like what is done in the link below using labels.
Draw text along circular path in Swift for iOS

Suppose we have n images to arrange around the circumference of a circle of radius r. A full circle spans 2π radians, so use polar coordinates from the center of the circle to find the points on the circumference at angles that are multiples of 2π/n, then center the images (buttons) at those points.
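A minimal sketch of that layout in Swift (the function and parameter names are invented for illustration):

    import UIKit

    // Arrange `count` buttons around a circle of radius `radius` centered at
    // `center` inside a container view.
    func layoutDialButtons(count: Int,
                           radius: CGFloat,
                           center: CGPoint,
                           buttonSize: CGSize,
                           in container: UIView) -> [UIButton] {
        var buttons: [UIButton] = []
        for i in 0..<count {
            // Angles are multiples of 2π/n, starting from the top of the dial.
            let angle = -CGFloat.pi / 2 + CGFloat(i) * (2 * CGFloat.pi / CGFloat(count))
            // Polar -> Cartesian conversion from the circle's center.
            let position = CGPoint(x: center.x + radius * cos(angle),
                                   y: center.y + radius * sin(angle))
            let button = UIButton(type: .custom)
            button.bounds = CGRect(origin: .zero, size: buttonSize)
            button.center = position   // center the button on the circumference point
            button.tag = i             // lets the tap handler identify the segment
            container.addSubview(button)
            buttons.append(button)
        }
        return buttons
    }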

Related

In iOS, is there a way to make a button move along an arc or circle?

I have a custom control like the picture below, and it is too complicated to draw all the images using Core Graphics. So I use the images the designer gave me, but the problem is how to make the small circle (which is actually a button) on the big white circle (which is a UIImageView) move around the big circle. Thanks in advance!
You need to calculate all the points on the circumference. Set the centre of your image to each point you calculate, and your image will move around the circle.
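A minimal sketch of that idea, assuming the big circle is a UIImageView and the knob is a UIButton (all names are invented for illustration):

    import UIKit

    class DialViewController: UIViewController {
        let bigCircleView = UIImageView()         // the big white circle
        let knobButton = UIButton(type: .custom)  // the small circle to move
        var angle: CGFloat = 0

        override func viewDidLoad() {
            super.viewDidLoad()
            view.addGestureRecognizer(
                UIPanGestureRecognizer(target: self, action: #selector(handlePan)))
        }

        // Place the knob on the big circle's circumference at the current angle.
        func positionKnob() {
            let center = bigCircleView.center
            let radius = bigCircleView.bounds.width / 2
            knobButton.center = CGPoint(x: center.x + radius * cos(angle),
                                        y: center.y + radius * sin(angle))
        }

        // Derive the angle from the touch position so the knob follows the finger.
        @objc func handlePan(_ gesture: UIPanGestureRecognizer) {
            let touch = gesture.location(in: view)
            let center = bigCircleView.center
            angle = atan2(touch.y - center.y, touch.x - center.x)
            positionKnob()
        }
    }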

Set corners of UIView (iOS)

I have the following problem: I'm scanning a QR code with AVFoundation. This works quite well, and I can also create a border around the code by adding a subview and setting its frame with subview.frame = qrCodeObject.bounds. The only problem is that the border is a plain rectangle and ignores the perspective of the QR code.
I know that the qrCodeObject has a property corners which contains the top right, top left, bottom right and bottom left points of the detected QR code.
My question now is: how can I apply those corner points to the "border" view so that the border has the same perspective as the QR code? Or in other words: how do I "transform" the view according to the corner points?
Thanks a lot in advance!
UPDATE:
Here you can see the problem: the red box is a UIView whose frame property is set to the QR code's bounds property. This misses the perspective. I would like to transform the UIView (the red box) to follow the corners property of the QR code, which contains the top right, top left, bottom right and bottom left points (CGPoint) of the QR code. It is important that this works on a UIView, because I later want to apply it to an image view. A mask is also not usable, as it just hides part of the view but does not stretch or transform the view's content.
I found a solution: AGGeometryKit did the trick: https://github.com/hfossli/AGGeometryKit/
Thanks everybody for helping!
You can't transform a CGRect that way, as far as I know. (At least I'm unaware of any framework that can do that kind of image processing.)
What you can do is to draw a polygon using the points of the qrCodeObject.
In drawRect of your UIView, use CGContext and CGPath to draw the path you'd like.
You want your drawing UIView to be the same size as the one showing the QR code so that you don't have to translate the points onto a second coordinate space.
This answer has directions if you need more guidance on how to do that.
Ok, the problem you are facing is that a CGRect can only represent a rectangle that is not tilted or distorted. What you are dealing with is an image that has different kinds of perspective distortion.
I haven't tried to do this, but it sounds like AVFoundation gives you 4 CGPoint objects for a reason. You need to draw those 4 CGPoints using a UIBezierPath rather than trying to draw a CGRect. Simply create a bezier path that moves to the first point, then draws lines to each subsequent point, and finally, back to the first point. That will give you a quadrilateral that takes into account the distortion of your QR code.
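A minimal sketch of such an overlay view (class and property names invented for illustration); it should be transparent and laid out at the same size as the preview so the points need no conversion:

    import UIKit

    class QRCodeBorderView: UIView {
        // The four CGPoints from qrCodeObject.corners, in drawing order.
        var corners: [CGPoint] = [] {
            didSet { setNeedsDisplay() }   // redraw whenever new corners arrive
        }

        override func draw(_ rect: CGRect) {
            guard corners.count == 4 else { return }
            let path = UIBezierPath()
            path.move(to: corners[0])
            for point in corners.dropFirst() {
                path.addLine(to: point)
            }
            path.close()   // line back to the first point -> quadrilateral
            UIColor.red.setStroke()
            path.lineWidth = 2
            path.stroke()
        }
    }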
CATransform3DRotate could be your friend here.
https://guides.codepath.com/ios/Using-Perspective-Transforms might be a good starting point.
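For the transform route, the usual trick is to put a small perspective term into the layer transform before rotating. A minimal sketch (the angle is arbitrary; matching four exact corner points needs the full homography approach from the link above, or AGGeometryKit mentioned earlier):

    import UIKit

    // Give a view's layer a perspective tilt around the y-axis.
    func applyPerspectiveTilt(to view: UIView, angle: CGFloat) {
        var transform = CATransform3DIdentity
        transform.m34 = -1.0 / 500.0   // small m34 term produces foreshortening
        transform = CATransform3DRotate(transform, angle, 0, 1, 0)
        view.layer.transform = transform
    }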

How to let the user define a region?

I'd like to let the user define a region on a map. The region will be represented by a circle defined by a center and its radius.
Here is a picture of the result I'd like to have:
radius: I want the radius to be defined by a small circle that the user drags to set the radius size (and I'd like to draw a dashed line between the center and the circle, as well as show the distance).
center: I want to set the center of the map by letting the user drag the map behind it, moving the red pin to the desired new center.
I know how to get the blue circle, how to add a gesture to get the new radius, and how to redraw the circle at run time. What I don't know is how to draw the dashed line and the small black circle, and how to display and update text inside the MKMapView.
Also, I'm completely lost when it comes to setting the center. How can I let the user drag the map around while keeping all these objects (blue circle, red center pin, dashed line, ...) fixed in place?
Should I completely forget about MKMapView, draw everything above it in a UIView, and then get coordinates by mapping fake touches onto the MKMapView?
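For the dashed-line part specifically, one common approach is an MKPolyline overlay rendered with a dash pattern. A minimal sketch, assuming center and edge are the circle's center and the radius handle's coordinates (all names illustrative):

    import MapKit

    // Add a line between the region's center and the radius handle.
    func addDashedRadiusLine(from center: CLLocationCoordinate2D,
                             to edge: CLLocationCoordinate2D,
                             on mapView: MKMapView) {
        let coords = [center, edge]
        mapView.addOverlay(MKPolyline(coordinates: coords, count: coords.count))
    }

    // In the MKMapViewDelegate, render the polyline with a dash pattern.
    func mapView(_ mapView: MKMapView, rendererFor overlay: MKOverlay) -> MKOverlayRenderer {
        if let polyline = overlay as? MKPolyline {
            let renderer = MKPolylineRenderer(polyline: polyline)
            renderer.strokeColor = .black
            renderer.lineWidth = 2
            renderer.lineDashPattern = [6, 4]   // 6pt dash, 4pt gap
            return renderer
        }
        return MKOverlayRenderer(overlay: overlay)
    }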

iOS Triangular Image view

So I'm making a game where, pretty much, the player (a triangular-shaped rocket) hits an object flying at it (a rock) and the game ends. I have everything working well, but my problem is that the rocket is a triangle while the image view it sits in is a rectangle. So if the edge of the image view touches the rock, the game ends even though the actual rocket didn't touch the object. How can I make the rock's image view ignore the parts of the rocket's image view that are empty? Basically, I need a triangular-shaped image view.
Thank you for your help. Let me know if you need more info or want to see the code I have for them to collide.
You can analytically represent the triangle with 3 points and the rock with a center and radius, then implement an algorithm that hit-tests those two shapes against each other. Or you can draw the two shapes onto a graphics context with appropriate blending and check for overlapping pixels: for instance, draw one shape in red and the other in green, and look for a pixel that is both red and green. You could even do that with two image views of those colors at 0.5 alpha added to a third invisible view, but you would need to render that view into an image and iterate through all its pixels. In either case, run this check only after the corresponding view frames overlap.
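A minimal sketch of the analytic variant, with all names invented for illustration (the rocket is a triangle of three CGPoints, the rock is approximated by a circle):

    import CoreGraphics

    // Distance from point p to the line segment a-b.
    func distanceToSegment(_ p: CGPoint, _ a: CGPoint, _ b: CGPoint) -> CGFloat {
        let ab = CGPoint(x: b.x - a.x, y: b.y - a.y)
        let ap = CGPoint(x: p.x - a.x, y: p.y - a.y)
        let lengthSquared = ab.x * ab.x + ab.y * ab.y
        let t = lengthSquared == 0 ? 0
            : max(0, min(1, (ap.x * ab.x + ap.y * ab.y) / lengthSquared))
        let closest = CGPoint(x: a.x + t * ab.x, y: a.y + t * ab.y)
        return hypot(p.x - closest.x, p.y - closest.y)
    }

    // Sign of the cross product, used for the point-in-triangle test.
    func side(_ p: CGPoint, _ a: CGPoint, _ b: CGPoint) -> CGFloat {
        return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x)
    }

    func triangleHitsCircle(_ t: (a: CGPoint, b: CGPoint, c: CGPoint),
                            center: CGPoint, radius: CGFloat) -> Bool {
        // Hit if the circle's center lies inside the triangle...
        let s1 = side(center, t.a, t.b)
        let s2 = side(center, t.b, t.c)
        let s3 = side(center, t.c, t.a)
        if (s1 >= 0 && s2 >= 0 && s3 >= 0) || (s1 <= 0 && s2 <= 0 && s3 <= 0) {
            return true
        }
        // ...or if any triangle edge passes within `radius` of the center.
        return distanceToSegment(center, t.a, t.b) <= radius
            || distanceToSegment(center, t.b, t.c) <= radius
            || distanceToSegment(center, t.c, t.a) <= radius
    }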

Xcode, iOS - Image line/shape recognition

I want to identify squares/rectangles inside my UIImageView (or UIImage).
I looked at "Very simple image recognition on iOS", but that's not quite what I'm looking for.
At the moment I have a UIImageView which is given a UIImage from time to time.
Most of the UIImages contain black squares/rectangles laid out in a horizontal row.
But the corners may (or may not) have rounded edges.
How can I identify the first black square/rectangle's size?
The end result would be to resize my UIImageView so that the first black square in the UIImage fills the screen.
If your images will always be sharp black squares in a horizontal row, you could use corner detection to identify the rectangles, then pick out the four leftmost corners. I have three variants of corner detectors in my open source GPUImage framework based on the Harris, Noble, and Shi-Tomasi corner detection methods.
Running a GPUImageHarrisCornerDetectionFilter against your boxes with a threshold of 0.4 and sensitivity of 4.0 yields the following result:
They're a little hard to see, but red crosshairs mark where the detector found the corners of your boxes. Again, you just need to take the leftmost four points to find your target rectangle, and then simply scale your image or view so that this rectangle now fills your view.
An example of how to run such feature detection can be found in either the FilterShowcase or FeatureExtractionTest example within my framework. I describe the process by which I do this in this answer over at Signal Processing.
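In Swift, using the filter looks roughly like this. This is a sketch of the GPUImage 1.x Objective-C API bridged into Swift; treat the exact property names and block signature as assumptions to verify against your version of the framework (boxesImage stands in for your UIImage):

    import GPUImage

    let sourceImage = GPUImagePicture(image: boxesImage)
    let cornerFilter = GPUImageHarrisCornerDetectionFilter()
    cornerFilter.threshold = 0.4
    cornerFilter.sensitivity = 4.0
    cornerFilter.cornersDetectedBlock = { cornerArray, cornersDetected, frameTime in
        // cornerArray holds normalized (0-1) x,y pairs; collect them and keep
        // the four with the smallest x to bound the leftmost rectangle.
    }
    sourceImage?.addTarget(cornerFilter)
    sourceImage?.processImage()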
It seems the easiest solution would be:
1. Sum all the pixels in each column up into the top-most row (like totaling columns in an Excel table).
2. The columns with the smallest/biggest sums are your "gap" regions.
3. Each rectangle's width can be derived from (2).
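A minimal sketch of that projection, assuming the image has already been extracted into a grayscale byte buffer (names are illustrative):

    // `pixels` is a grayscale buffer of size width * height, one byte per pixel.
    func columnSums(pixels: [UInt8], width: Int, height: Int) -> [Int] {
        var sums = [Int](repeating: 0, count: width)
        for y in 0..<height {
            for x in 0..<width {
                sums[x] += Int(pixels[y * width + x])   // project each column onto one row
            }
        }
        return sums
    }
    // Bright (high-sum) columns are the gaps between the black boxes; the
    // distance between consecutive gaps gives each rectangle's width.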
From what I understood of your question, you need to implement the Canny edge detection algorithm to detect the edges of the black borders in your image.
For this you should use the image processing framework available at the following links:
Google
Github
Use the ImageWrapper *Image::cannyEdgeExtract(float tlow, float thigh) function from the Image.m file.
