Position a UIImageView over another with scaling - iOS

Is there a method on UIImageView that tells me the position of its image within its bounds? Say I have an image of a car, like this:
This image is 600x243, and, where the rear wheel should be, there's a hole which is 118,144,74,74 (x,y,w,h).
I want to let the user see different rear wheel options, and I have a few wheel images to choose from (all square, so they are easily scaled to match the hole in the car).
I wanted to place the car image in a UIImageView whose size is arbitrary based on layout, and I wanted to see the whole car at the natural aspect ratio. So I set the image view's content mode to UIViewContentModeScaleAspectFit, and that worked great.
For example, here's the car in an imageView that is 267x200:
I think doing this scaled the image from w=600 to w=267, i.e., by a factor of 267/600=0.445, and (I think) that means the height changed from 243 to 243*0.445≈108. And I think it's true that the hole was scaled by that factor, too.
But I want to add a UIImageView subview to show the wheel, and this is where I get confused. I know the image size, I know the image view size, and I know the hole frame in terms of the original image size. How do I get the hole frame after the image is scaled?
I've tried something like this:
determine the position of the car image in its UIImageView. That's something like:
CGFloat ratio = carImageView.frame.size.width / carImage.size.width; // 267/600 = 0.445
CGFloat yPos = (carImageView.frame.size.height - ratio * carImage.size.height) / 2; // centers the scaled image vertically -- there should be a method for this?
determine the scaled frame of the hole:
CGFloat holeX = ratio*118;
CGFloat holeY = yPos + ratio*144;
CGFloat holeEdge = ratio*74;
CGRect holeRect = CGRectMake(holeX,holeY,holeEdge,holeEdge);
But there must be a better way. These calculations (if they are right) only work for an image view that is proportionally taller than the car; the code would need to be different if the view is proportionally wider.
I think I can work out the logic for a wider view, but it still might be wrong. For example, that yPos calculation: do the docs say that, for content mode = AspectFit, the image is centered along the larger dimension? I don't see that anywhere.
Please tell me there's a better way, or, if not, is it proven that my idea here will work for arbitrary size images, image views, holes?
Thanks.

The easiest solution (by far) is to simply use the same sizes for both the car image and the wheel option images.
Just give the wheel options a transparent padding (easy to do in nearly every graphics editing program), and overlay them over the car with the same frame.
You may increase your asset sizes by a minuscule amount, but it'll save you one hell of a headache trying to work out positions and sizes, especially as you're scaling the car image.
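If you do want the math route, there's no need to special-case taller vs. wider views: AVMakeRectWithAspectRatioInsideRect (used in an answer further down) returns the rect the image actually occupies under aspect fit. A minimal Swift sketch using the names and numbers from the question; the helper itself is hypothetical:

import AVFoundation
import UIKit

// Returns where a rect given in the image's own coordinates lands
// inside an aspect-fit image view. AVMakeRect handles both the
// taller-than and wider-than cases, including the centering offset.
func scaledHoleRect(_ hole: CGRect, image: UIImage, in imageView: UIImageView) -> CGRect {
    let fitted = AVMakeRect(aspectRatio: image.size, insideRect: imageView.bounds)
    let ratio = fitted.width / image.size.width // 267/600 = 0.445 in the example
    return CGRect(x: fitted.minX + hole.minX * ratio,
                  y: fitted.minY + hole.minY * ratio,
                  width: hole.width * ratio,
                  height: hole.height * ratio)
}

// let hole = CGRect(x: 118, y: 144, width: 74, height: 74)
// wheelImageView.frame = scaledHoleRect(hole, image: carImage, in: carImageView)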

Related

What is the correct approach to show images without distortion in a collection view

I have a very fundamental question, hence I'm not posting any code.
OverView
I have a collection view that uses a custom layout I have written. All this custom layout does is apply some math to work out the optimal number of cells to place on screen, based on the screen width, and round off the extra pixels using padding. As a result, most of my cell sizes vary when the collection view changes orientation, especially on a device like the iPad Pro. For example, a cell that is 300 x 300 in portrait might become something like 320 x 300 in landscape (not exact values, just to give the idea).
Every cell has an image view in it, and the image needs to be downloaded from a server. I am using SDWebImage. The downloaded images are much bigger than my cells, and I don't want to load such big images into memory, so I have decided to resize each image once it is downloaded, before putting it into the SDWebImage cache.
Issue
Now, since my cell sizes change with device orientation, making each image pixel-perfect would mean resizing every cell's image each time the device rotates. But that will lead to bad performance.
Solutions I have figured out
1) Cache two images, one per orientation, in the SDWebImage cache; but that again increases the app's memory footprint.
2) Find the biggest size a cell will reach across orientations, resize the image to that size, and then use that same image for all the smaller cell sizes with the image view's content mode set to aspect fit.
Please suggest which is the correct approach. I don't need code, just the idea.
Let me know if this is a bad idea, but here are some potential options.
You could:
1) Get the original image size. Save this.
2) Determine whether the picture's width > height.
3) If width > height, then you know it is going to be a wider picture. Decide how wide you would like it; let's say you want the wider pictures at about 1/10 the width of a cell.
4) Determine the height of the image accordingly: (cellWidth / 10) * imageHeight / imageWidth.
5) If width > height: imageView.frame = CGRect(x: 0, y: 0, width: cellWidth / 10, height: scaledHeight)
6) imageView.contentMode = .scaleAspectFit; imageView.image = UIImage(named: "awesome")
7) Possibly save imageView.frame.size to reuse in case of another rotation.
This allows you to change just the imageView instead of saving the image data, hopefully removing the white space. Comment and let me know what you think.
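For what it's worth, option 2 from the question might look like this in Swift. This is a sketch only, assuming largestCellSize comes from the custom layout, and using AVMakeRect (AVFoundation) plus UIGraphicsImageRenderer:

import AVFoundation
import UIKit

// Downscale the downloaded image once, to the largest size any cell
// will need across orientations; aspect-fit then absorbs the small
// per-orientation differences without another resize.
func resizedForCache(_ image: UIImage, largestCellSize: CGSize) -> UIImage {
    let fitted = AVMakeRect(aspectRatio: image.size,
                            insideRect: CGRect(origin: .zero, size: largestCellSize))
    let renderer = UIGraphicsImageRenderer(size: fitted.size)
    return renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: fitted.size))
    }
}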

UIImageView - any way to use 2 content modes at the same time?

So in my scenario, I have a square that is (for understanding's sake) 100x100 and need to display an image that is 300x800 inside of it.
What I want to do is be able to have the image scale just as it would with UIViewContentMode.ScaleAspectFill so that the width scales properly to 100.
However, after that, I would like to "move" the image up to the top of the image view instead of leaving it centered, which is basically what UIViewContentMode.Top does. However, that mode doesn't scale the image first.
Is there any way to get this type of behavior with the built-in tools? Any way to combine multiple content modes?
I already had a helper function that scales an image to a specific size passed in, so I wrote a function that calculates the scaled size that would fill the smaller square (the same size AspectFill would produce), and then cropped the result to the rectangle I needed, anchored at (0,0).
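A sketch of how that could look in Swift (not the poster's actual helper; the function name and the use of UIGraphicsImageRenderer are assumptions):

import UIKit

// Scale the image so its width fills the target (what aspect-fill does
// for a tall image in a square), then keep the top rather than the
// center: drawing from y = 0 into a target-sized context clips
// everything below targetSize.height.
func topAlignedAspectFill(_ image: UIImage, targetSize: CGSize) -> UIImage {
    let scale = targetSize.width / image.size.width
    let scaledHeight = image.size.height * scale
    let renderer = UIGraphicsImageRenderer(size: targetSize)
    return renderer.image { _ in
        image.draw(in: CGRect(x: 0, y: 0, width: targetSize.width, height: scaledHeight))
    }
}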

How to calculate relative tap location in UIImageView

I have a UIImageView which loads images with different aspect ratios. To display the full image properly I am using UIViewContentModeScaleAspectFit.
I need to be able to allow users to tap on the image and tag friends. How do I calculate the X and Y percentages of the tap location relative to the actual image content? The UIImageView contains padding at the top and bottom, or on the sides, to satisfy UIViewContentModeScaleAspectFit, and I need to factor that padding out of the percentage calculation.
Also, the inverse needs to be done when the UIImageView is rendered with the image and tags.
The easy way is with the built-in function AVMakeRectWithAspectRatioInsideRect (declared in AVFoundation).
[imageView setFrame:AVMakeRectWithAspectRatioInsideRect(image.size, imageView.frame)]; // shrink the view to the rect the image actually occupies, removing the padding
You can, of course, reinvent the wheel and do a bunch of calculations by hand.
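Or keep the view as-is and factor the padding out of the tap math directly. A Swift sketch (the helper name is an assumption):

import AVFoundation
import UIKit

// Converts a tap in the image view's coordinates into x/y fractions of
// the actual image content, or nil if the tap hit the letterbox padding.
func relativeTapLocation(_ tap: CGPoint, in imageView: UIImageView) -> CGPoint? {
    guard let image = imageView.image else { return nil }
    let content = AVMakeRect(aspectRatio: image.size, insideRect: imageView.bounds)
    guard content.contains(tap) else { return nil }
    return CGPoint(x: (tap.x - content.minX) / content.width,
                   y: (tap.y - content.minY) / content.height)
}

// The inverse, for rendering saved tags back onto the view:
// CGPoint(x: content.minX + fraction.x * content.width,
//         y: content.minY + fraction.y * content.height)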

iOS - Get framing of Visible part of UIImage from UIImageView

I am trying to make a transition like the Tinder app.
Detail:
In Screen One there is a vertical, rectangular UIImageView with contentMode = Aspect Fill, so it hides some portion of the image to preserve the aspect ratio.
In Screen Two (the detail screen) the same image has to be passed after the transition, but the image view in the second screen is a square one.
I want to make a morphing kind of transition in which the user thinks the same image view from Screen One becomes the square one in Screen Two, without stretching the image. So what should I do?
Currently I am trying to get the frame of the UIImage that is in the visible area of the UIImageView, so that I can do some logic to achieve this. Can anyone help me get the frame of the visible portion of the UIImage?
EDIT
Please see the attached image for clarification.
I think there's a little ambiguity in the question: a frame must be specified in a coordinate system. But I think you're looking for a rect relative to the original, unclipped image.
If that's right, then the rect can be computed as follows. Say the image is called image, and the image view is imageView. Aspect fill scales the image by the larger of the two width/height ratios, so in the image's own coordinates the visible rect is the view's size divided by that scale:
CGFloat scale = MAX(imageView.bounds.size.width / image.size.width, imageView.bounds.size.height / image.size.height);
CGSize visibleSize = CGSizeMake(imageView.bounds.size.width / scale, imageView.bounds.size.height / scale);
And, since aspect fill centers the oversized dimension, its origin is:
CGPointMake((image.size.width - visibleSize.width) / 2.0, (image.size.height - visibleSize.height) / 2.0);
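Put together as a Swift helper (a sketch; the function name is an assumption):

import UIKit

// Visible portion of an aspect-filled image, in the image's own
// coordinates: the view size divided by the fill scale, centered in
// whichever dimension overflows.
func visibleImageRect(of image: UIImage, in imageView: UIImageView) -> CGRect {
    let viewSize = imageView.bounds.size
    let scale = max(viewSize.width / image.size.width,
                    viewSize.height / image.size.height)
    let visible = CGSize(width: viewSize.width / scale,
                         height: viewSize.height / scale)
    return CGRect(x: (image.size.width - visible.width) / 2,
                  y: (image.size.height - visible.height) / 2,
                  width: visible.width,
                  height: visible.height)
}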

Editing an already edited UIImage again in an iOS app

In my iOS app, I am placing several UIImages on one background UIImage and then saving the composite, with all the subimages added to it, by programmatically taking a screenshot.
Now I want to change the positions of the subview UIImages in that saved image, so I want to know how to detect the subview images' positions, given that I captured the whole thing as a screenshot.
Record their frames as converted to window coordinates. The image pixels will correspond one-to-one with frame points on a non-Retina screen, or two-to-one on Retina. The screenshot is of the whole screen, so its dimensions are equivalent to the window frame. UIView has some convenience methods to convert arbitrary view frames to another view's (or the window's) coordinates.
EDIT: to deal with aspect fit, you have to do the math yourself. You know the frame of the imageView, and you can ask the image for its size. Comparing the aspect ratios of the two tells you in which dimension the image completely fills the view; you can then compute the other dimension, which will be smaller than the corresponding imageView dimension. Divide the difference (view dimension minus displayed image dimension) by two, and that gives you the offset of the image inside the view. Now you can save the frame of the image as it is displayed in the view.
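That aspect-fit math is what AVMakeRect(aspectRatio:insideRect:) from AVFoundation computes; combined with the window conversion above, a Swift sketch (the helper name is an assumption):

import AVFoundation
import UIKit

// The rect the image actually occupies under aspect fit, converted to
// window coordinates so it can be recorded alongside the screenshot.
func displayedImageFrameInWindow(of imageView: UIImageView) -> CGRect? {
    guard let image = imageView.image, let window = imageView.window else { return nil }
    let displayed = AVMakeRect(aspectRatio: image.size, insideRect: imageView.bounds)
    return imageView.convert(displayed, to: window)
}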
