How to ensure a UIView with a transform fills a CGRect? - ios

I have a UIView that contains a UIImageView (the blue square in the included image) centered within it (the gray square in the included image). The main view also has a CALayer added as a border (a window or mask, the yellow rectangle in the included image) through which the image can be seen. The user can pan (translate), pinch (scale), and rotate the image view's transform with gesture recognizers, but I'd like to give them the option of transforming the image so that it completely fills the yellow border (covering all four corners), much the way the iOS crop tool does (see the attached gif). I've tried to work out how to calculate the correct scale factor, but now I'm wondering: is there some simple formula or function that can solve this problem?
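One way to approach this, assuming the image view stays centred on the yellow border layer and only a uniform scale is added on top of whatever rotation is already applied: project the border rect into the image view's rotated axes and take the larger of the two required-size ratios. A minimal Swift sketch (the function and parameter names are illustrative, not from the question):

import UIKit

// Smallest uniform scale at which content of contentSize, rotated by rotationAngle
// and centred on the crop rect, still covers the whole crop rect.
// Assumes the two centres coincide; panning away from centre needs a larger scale.
func minimumScaleToFill(cropSize: CGSize, contentSize: CGSize, rotationAngle: CGFloat) -> CGFloat {
    let absCos = CGFloat(abs(cos(Double(rotationAngle))))
    let absSin = CGFloat(abs(sin(Double(rotationAngle))))

    // Size the unrotated content must reach so that, once rotated, it spans the crop rect.
    let requiredWidth  = cropSize.width * absCos + cropSize.height * absSin
    let requiredHeight = cropSize.width * absSin + cropSize.height * absCos

    return max(requiredWidth / contentSize.width,
               requiredHeight / contentSize.height)
}

Passing the image view's current on-screen size as contentSize, a result greater than 1 means the current transform needs to be scaled up by that factor; a result of 1 or less means the border is already covered, which is essentially what the iOS crop tool checks when it zooms the photo back out over the crop box.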

Related

IOS: Dynamically draw shape with wave divider in Swift (Bezier Curve)

Initially I have 2 background images (for simplicity I use 2 rectangles here, a green one and a grey one). The rectangles' heights are dynamically adjustable, and I need to draw the black wave divider (also a custom image with a gradient) between them, as you can see in the image. I've tried simply cropping the backgrounds with the divider and merging them, but because the heights are adjustable (the first rectangle can have a different height than the second), that messes up the wave divider. It is not enough to draw the divider; the rectangles also have to be cropped to match the divider's shape. Everything should be done programmatically in Swift, and a PDF document will be generated at the end. I can only think of a Bezier curve (CGMutablePath, UIBezierPath), but I don't know exactly how to achieve it. Thanks a lot!
[Images: the shape I need to obtain, the grey rectangle, and the green rectangle]
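A sketch of the Bezier-curve route, assuming a single up/down wave drawn with two quadratic segments; the divider height and amplitude are illustrative parameters, and because the wave's y position is a parameter it adapts when the rectangle heights change:

import UIKit

// The divider itself: one crest and one trough across the full width.
func waveDividerPath(in rect: CGRect, dividerY: CGFloat, amplitude: CGFloat) -> UIBezierPath {
    let path = UIBezierPath()
    path.move(to: CGPoint(x: rect.minX, y: dividerY))
    path.addQuadCurve(to: CGPoint(x: rect.midX, y: dividerY),
                      controlPoint: CGPoint(x: rect.width * 0.25, y: dividerY - amplitude))
    path.addQuadCurve(to: CGPoint(x: rect.maxX, y: dividerY),
                      controlPoint: CGPoint(x: rect.width * 0.75, y: dividerY + amplitude))
    return path
}

// The top (green) region: the same wave along its bottom edge, closed over the top corners.
func topRegionPath(in rect: CGRect, dividerY: CGFloat, amplitude: CGFloat) -> UIBezierPath {
    let path = waveDividerPath(in: rect, dividerY: dividerY, amplitude: amplitude)
    path.addLine(to: CGPoint(x: rect.maxX, y: rect.minY))
    path.addLine(to: CGPoint(x: rect.minX, y: rect.minY))
    path.close()
    return path
}

The bottom (grey) region is built the same way but closed over the bottom corners, and the gradient divider image can be clipped to a slightly offset copy of the same wave path, so all three pieces always meet exactly. Filling these paths into a UIGraphicsPDFRenderer context works the same way as drawing on screen.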

How to determine the scale of an Image

I have an image that I'm using which is 960x1280. I have coordinates for a rectangle of x:391 y:772, x:574 y:772, x:574 y:870, x:391 y:870 which allows me to put the rectangle in the proper spot IF the image is still 960x1280. Of course, when I'm in Xcode, the screen size is 375x667.
When drawing the rectangle with the above coordinates, the rectangle is no longer visible. If I just use the screen scale of 3 (UIScreen.main.scale), it's not accurate either.
I create a UIImageView with constraints of 0 on all four sides and a content mode of aspect .fit or .fill. How do I determine the proper scale of the image so the rectangle is drawn in the right spot?
Thanks
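With .scaleAspectFit the effective scale is the smaller of the two width/height ratios, and the fitted image is centred in the view, so rectangle coordinates from the original 960x1280 image can be mapped like this (a sketch; the names are illustrative):

import UIKit

// Maps a rect given in the original image's coordinates (e.g. x: 391, y: 772, 183 x 98)
// into the coordinate space of a UIImageView using .scaleAspectFit.
func convertToViewCoordinates(_ imageRect: CGRect, image: UIImage, imageView: UIImageView) -> CGRect {
    let viewSize = imageView.bounds.size
    // UIImage.size is in points; multiply by image.scale if your coordinates are
    // in pixels and the image was loaded at a scale other than 1.
    let imageSize = image.size

    // Aspect fit uses the smaller ratio so the whole image stays visible.
    let scale = min(viewSize.width / imageSize.width,
                    viewSize.height / imageSize.height)

    // Whatever space is left over becomes letterboxing, split evenly on both sides.
    let offsetX = (viewSize.width  - imageSize.width  * scale) / 2
    let offsetY = (viewSize.height - imageSize.height * scale) / 2

    return CGRect(x: imageRect.origin.x * scale + offsetX,
                  y: imageRect.origin.y * scale + offsetY,
                  width: imageRect.width * scale,
                  height: imageRect.height * scale)
}

For .scaleAspectFill, swap min for max; part of the mapped rect may then fall outside the visible bounds. Run the conversion after Auto Layout has settled (for example in viewDidLayoutSubviews) so imageView.bounds has its final size.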

UIImageView remove white area after rotation applied

I have a problem rotating a UIImageView. As you can see in the attached screenshot, the image is rotated but the frame of the UIImageView has some unwanted empty area. The blue line around the image is the frame of the UIImageView.
How can I remove that empty area and give the view only as much frame as the image requires?
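Worth noting that a frame is always axis-aligned, so once a rotation is applied it becomes the bounding box of the rotated content and the corner gaps are geometrically unavoidable. If the goal is a border that hugs the rotated image, one common workaround is to put the border on the image view's own layer so it rotates with the content. A small sketch (the asset name is a placeholder):

import UIKit

let imageView = UIImageView(image: UIImage(named: "photo"))   // "photo" is a placeholder asset

// A border on the layer rotates with the image, unlike a border drawn around the frame.
imageView.layer.borderColor = UIColor.blue.cgColor
imageView.layer.borderWidth = 2

imageView.transform = CGAffineTransform(rotationAngle: .pi / 6)

// The frame now reports the enlarged, axis-aligned bounding box of the rotated view.
let boundingBox = imageView.bounds.applying(imageView.transform)
print(boundingBox.size)   // larger than imageView.bounds.size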

How to slice an image and make it stretchable with image assets

I want to stretch the left and right sides of the image while the centre arrow stays as it is.
I tried this, but I couldn't get the centre down-arrow's position set properly.
You can't do this with sliced images.
The area between the edges will stretch or repeat in order to fill the space, and you also can't preserve the centre section of an image this way.
What you might be better off doing is creating a custom drawn view using either CALayer or drawRect, as sketched below.
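A sketch of that custom-drawn-view idea: the body of the shape resizes with the view while the down arrow is redrawn at the horizontal centre each time, so nothing stretches. The corner radius, arrow size, and colour are illustrative:

import UIKit

class BubbleView: UIView {
    override func draw(_ rect: CGRect) {
        let arrowWidth: CGFloat = 20
        let arrowHeight: CGFloat = 10
        let body = CGRect(x: 0, y: 0, width: rect.width, height: rect.height - arrowHeight)

        // Rounded body that grows and shrinks with the view's bounds.
        let path = UIBezierPath(roundedRect: body, cornerRadius: 8)

        // Down arrow, always anchored to the horizontal centre of the current bounds.
        path.move(to: CGPoint(x: rect.midX - arrowWidth / 2, y: body.maxY))
        path.addLine(to: CGPoint(x: rect.midX, y: body.maxY + arrowHeight))
        path.addLine(to: CGPoint(x: rect.midX + arrowWidth / 2, y: body.maxY))
        path.close()

        UIColor.blue.setFill()
        path.fill()
    }
}

Set contentMode = .redraw on the view so draw(_:) runs again whenever the bounds change.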

iOS - Get framing of Visible part of UIImage from UIImageView

I am trying to make a transition like the Tinder app.
Detail:
On screen one there is a vertical, rectangular UIImageView with contentMode = .scaleAspectFill, so it hides some portion of the image to preserve the aspect ratio.
On screen two (the detail screen) the same image has to be passed after the transition, but the image view on the second screen is square.
I want a morphing kind of transition, so the user feels that the image view from screen one becomes the square one on screen two without the image being stretched. What should I do?
Currently I am trying to get the frame of the part of the UIImage that lies in the visible area of the UIImageView, so I can do the rest of the logic from there. Can anyone help me get the frame of the visible portion of the UIImage?
EDIT
Please see the attached image for clarification.
I think there's a little ambiguity in the question: a frame must be specified in a coordinate system. But I think you're looking for a rect relative to the original, unclipped image.
If that's right, then the rect can be computed as follows. Say the image is called image, and the image view is imageView. The size of the rect is the size of the image view:
imageView.bounds.size
And, since aspect fill will center the oversized dimension, its origin is:
CGPointMake((image.size.width - imageView.bounds.size.width) / 2.0, 0.0);
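A sketch that generalizes this, handling whichever dimension overflows and returning the rect in the image's own coordinate space (the names are illustrative):

import UIKit

// The portion of the image (in image coordinates) that is actually visible
// in an image view using .scaleAspectFill.
func visibleImageRect(image: UIImage, imageView: UIImageView) -> CGRect {
    let viewSize = imageView.bounds.size
    let imageSize = image.size

    // Aspect fill scales by the larger ratio, so one dimension overflows and is clipped.
    let scale = max(viewSize.width / imageSize.width,
                    viewSize.height / imageSize.height)

    // Size of the visible window, expressed back in image coordinates.
    let visibleSize = CGSize(width: viewSize.width / scale,
                             height: viewSize.height / scale)

    // Aspect fill centres the image, so the clipped margin is split evenly.
    return CGRect(x: (imageSize.width - visibleSize.width) / 2,
                  y: (imageSize.height - visibleSize.height) / 2,
                  width: visibleSize.width,
                  height: visibleSize.height)
}

Cropping the UIImage to this rect gives exactly the pixels the user currently sees, which can then be morphed into the square image view on the detail screen without stretching.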
