How to calculate relative tap location in UIImageView - ios

I have a UIImageView which loads images with different aspect ratios. To display the full image properly I am using UIViewContentModeScaleAspectFit.
I need to let users tap on the image to tag friends. How do I calculate the X and Y percentages of the tap location relative to the actual image content? Since the UIImageView adds padding at the top and bottom or at the sides to satisfy UIViewContentModeScaleAspectFit, I need to exclude that padding from the percentage calculation.
Also, the inverse calculation is needed when the UIImageView is rendered with the image and tags.

The easy way is with the built-in AVFoundation function AVMakeRectWithAspectRatioInsideRect:
[imageView setFrame:AVMakeRectWithAspectRatioInsideRect(image.size, imageView.frame)];
You can, of course, reinvent the wheel and do a bunch of calculations by hand.
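A minimal sketch of how that rect feeds the percentage math, assuming a tap gesture recognizer attached to the image view; recognizer, xPercent, yPercent, and tagCenter are illustrative names, not from the original:

#import <AVFoundation/AVFoundation.h>

// Rect the image actually occupies inside the image view under aspect-fit.
// Use bounds (not frame) since taps are converted to the view's own coordinates.
CGRect imageRect = AVMakeRectWithAspectRatioInsideRect(image.size, imageView.bounds);

// Convert a tap to percentages of the image content, padding excluded.
CGPoint tapPoint = [recognizer locationInView:imageView];
CGFloat xPercent = (tapPoint.x - CGRectGetMinX(imageRect)) / CGRectGetWidth(imageRect);
CGFloat yPercent = (tapPoint.y - CGRectGetMinY(imageRect)) / CGRectGetHeight(imageRect);

// Inverse: position a tag stored as (xPercent, yPercent) back over the image.
CGPoint tagCenter = CGPointMake(CGRectGetMinX(imageRect) + xPercent * CGRectGetWidth(imageRect),
                                CGRectGetMinY(imageRect) + yPercent * CGRectGetHeight(imageRect));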

Related

UIImageView - any way to use 2 content modes at the same time?

So in my scenario, I have a square that is (for understanding's sake) 100x100 and need to display an image that is 300x800 inside of it.
What I want to do is be able to have the image scale just as it would with UIViewContentMode.ScaleAspectFill so that the width scales properly to 100.
However, after that, I would like to "move" the image up to the top of the image view instead of leaving it centered, which is basically what UIViewContentMode.Top does. However, that mode doesn't scale the image first.
Is there any way to get this behavior with the built-in tools? Any way to combine multiple contentModes?
I already had a helper function that scales an image to a given size, so I wrote a function that computes the scaled size AspectFill would produce for the smaller square, and then cropped the result to the rectangle I needed at (0,0), as sketched below.
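A sketch of that scale-then-crop approach, using the 100x100 and 300x800 sizes from the question; the drawing code is an illustrative stand-in for the poster's own helper, not the actual code:

CGSize target = CGSizeMake(100, 100);
// AspectFill scale factor: the larger of the two ratios, here 100/300.
CGFloat scale = MAX(target.width / image.size.width, target.height / image.size.height);
CGSize scaled = CGSizeMake(image.size.width * scale, image.size.height * scale);

UIGraphicsBeginImageContextWithOptions(target, NO, 0);
// Drawing at y = 0 keeps the top edge; the extra height falls below the crop.
[image drawInRect:CGRectMake((target.width - scaled.width) / 2, 0, scaled.width, scaled.height)];
UIImage *topAlignedFill = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();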

Position a UIImageView over another with scaling

Is there a method on UIImageView that tells me the position of its image within its bounds? Say I have an image of a car, like this:
This image is 600x243, and, where the rear wheel should be, there's a hole which is 118,144,74,74 (x,y,w,h).
I want to let the user see different rear wheel options, and I have a few wheel images to choose from (all square, so they are easily scaled to match the hole in the car).
I wanted to place the car image in a UIImageView whose size is arbitrary based on layout, and I wanted to see the whole car at the natural aspect ratio. So I set the image view's content mode to UIViewContentModeScaleAspectFit, and that worked great.
For example, here's the car in an imageView that is 267x200:
I think doing this scaled the image from w=600 to w=267, i.e. by a factor of 267/600 ≈ 0.445, which means the height changed from 243 to 243 * 0.445 ≈ 108. And I think the hole was scaled by that factor, too.
But I want to add a UIImageView subview to show the wheel, this is where I get confused. I know the image size, I know the imageView size, and I know the hole frame in terms of the original image size. How do I get the hole frame after the image is scaled?
I've tried something like this:
determine the position of the car image in its UIImageView. That's something like:
CGFloat ratio = carImageView.frame.size.width / carImage.size.width; // 267/600 ≈ 0.445
CGFloat yPos = (carImageView.frame.size.height - carImage.size.height * ratio) / 2; // there should be a method for this?
determine the scaled frame of the hole:
CGFloat holeX = ratio*118;
CGFloat holeY = yPos + ratio*144;
CGFloat holeEdge = ratio*74;
CGRect holeRect = CGRectMake(holeX,holeY,holeEdge,holeEdge);
But there must be a better way. These calculations (if they're even right) only hold for an image view that is proportionally taller than the car image; the code needs to be different if the image view is wider.
I think I can work out the logic for the wider case, but it still might be wrong. For example, that yPos calculation: do the docs say that, for content mode AspectFit, the image is centered along the larger dimension? I don't see that anywhere.
Please tell me there's a better way, or, if not, is it proven that my idea here will work for arbitrary size images, image views, holes?
Thanks.
The easiest solution (by far) is to simply use the same sizes for both the car image and the wheel option images.
Just give the wheel options transparent padding (easy to do in nearly every graphics editing program), and overlay them over the car with the same frame.
This may increase your asset sizes by a minuscule amount, but it'll save you one hell of a headache trying to work out positions and sizes, especially as the car image gets scaled.
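That said, if you do want the math, AVMakeRectWithAspectRatioInsideRect from the first answer handles both the taller and wider cases in one shot. A hedged sketch, with wheelImageView as an assumed subview name:

#import <AVFoundation/AVFoundation.h>

// Where the car image actually sits inside the aspect-fit image view.
CGRect imageRect = AVMakeRectWithAspectRatioInsideRect(carImage.size, carImageView.bounds);
CGFloat ratio = CGRectGetWidth(imageRect) / carImage.size.width; // ≈ 0.445 in the example

// Scale and offset the hole (118,144,74,74 in original image coordinates).
wheelImageView.frame = CGRectMake(CGRectGetMinX(imageRect) + 118 * ratio,
                                  CGRectGetMinY(imageRect) + 144 * ratio,
                                  74 * ratio, 74 * ratio);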

iOS Image Resizing / Dealing With Blank Space

I have simply dragged a UIImageView into the storyboard and made it square. I added a pink background to show the effects of the leftover space in the image view. In each case I added either a taller image (first image) or a wider image (second image), as well as a text label. Here are my results.
So the obvious question is: how can I get rid of this extra (pink) space and keep the integrity of the photo (that is, without stretching it or losing part of it)? If I wanted to scroll through photos, it would be nice to have them all span the same width so they look neat and orderly (if they were portrait), and if I wanted text under each, I'd want the text close to the image rather than separated by all the blank (pink) space (if they were landscape). And obviously, different image sizes will leave different amounts of blank space.
So I'm thinking that, before displaying the image, I could get its size, pick a designated distance from either the label or the edge of the screen depending on the picture's orientation, and then create or resize the UIImageView with a bit of math using the image dimensions before inserting the picture. Is this possible? Is there another method I can't quite figure out?
Just look at any decent photo app: the photos are neatly organized and displayed despite having different sizes, orientations, etc., and I'm wondering how to pull this off. I obviously haven't gotten too deep into using images beyond simply showing them in a predetermined image view.
Thanks for the help/suggestions!
Try this: set your UIImageView to AspectFit (not AspectFill, since that would lose some of the image) and, using constraints, do the following:
centre the UIImageView in the container both horizontally and vertically
set the UILabel to float below the UIImageView by whatever distance you desire ("standard" is usually good)
set the left, right, and top constraints on the UIImageView to be >= whatever distance you desire
set the bottom constraint on the UILabel to be (once again) >= whatever distance you desire
The effect of this should be that the UIImageView will properly resize itself to its intrinsic size and the constraints should properly position it and the label.
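For reference, a sketch of the same constraints expressed in code, assuming imageView and label are subviews of container and using an 8-point constant in place of "whatever distance you desire":

imageView.translatesAutoresizingMaskIntoConstraints = NO;
label.translatesAutoresizingMaskIntoConstraints = NO;
[NSLayoutConstraint activateConstraints:@[
    // Centre the image view both horizontally and vertically.
    [imageView.centerXAnchor constraintEqualToAnchor:container.centerXAnchor],
    [imageView.centerYAnchor constraintEqualToAnchor:container.centerYAnchor],
    // Keep it at least 8 points from the container's left, right, and top edges.
    [imageView.leadingAnchor constraintGreaterThanOrEqualToAnchor:container.leadingAnchor constant:8],
    [container.trailingAnchor constraintGreaterThanOrEqualToAnchor:imageView.trailingAnchor constant:8],
    [imageView.topAnchor constraintGreaterThanOrEqualToAnchor:container.topAnchor constant:8],
    // Float the label below the image view, at least 8 points from the bottom.
    [label.topAnchor constraintEqualToAnchor:imageView.bottomAnchor constant:8],
    [label.centerXAnchor constraintEqualToAnchor:container.centerXAnchor],
    [container.bottomAnchor constraintGreaterThanOrEqualToAnchor:label.bottomAnchor constant:8]
]];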

iOS swipe filters over static image

I'm looking for a way to swipe image filters over top of a still image.
Mainly the base image stays in place, and the filters slide in over top when you swipe left or right.
Right now I have a base UIImageView and a Collection View over top of it which in theory would hold the filters (texture and gradient images).
I've read that UIImageViews and UIViews can't be live-composited on top of each other, and that you must compose the image before displaying it. So if I pre-make the image in code, can I then wipe-reveal the filter image to get the same effect? Using masks?
Code examples are nice, but a high level description on how to approach this would be helpful.
The app Spark has this functionality for videos; I'm looking to do something similar for photos.
So if I pre-make the image in code, can I then wipe-reveal the filter image to get the same effect? Using masks?
Yes, but no need for a mask. Pre-make the filtered image and put it in an image view. Let's say this filtered effect is to be swiped in from the left. Then make the image view's content mode be Left, and put it at the left of the real image, with width zero. As the swipe happens, animate the width of the image view to the width of the image. This will cause the filtered image to be revealed from the left side.
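A minimal sketch of that reveal, assuming filteredImageView sits over the base image view at the same origin and its image matches the base view's point size; clipsToBounds is an added detail so the left-pinned image doesn't spill past the zero-width frame:

filteredImageView.contentMode = UIViewContentModeLeft; // pin image to the left, no scaling
filteredImageView.clipsToBounds = YES;                 // hide whatever isn't revealed yet

// Start collapsed at the left edge of the base image.
CGRect start = baseImageView.frame;
start.size.width = 0;
filteredImageView.frame = start;

[UIView animateWithDuration:0.3 animations:^{
    // Growing the width reveals the filtered image from the left side.
    filteredImageView.frame = baseImageView.frame;
}];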

Zoom image until it fits ImageView Completely

I have a simple UIImageView and an image. I want to fit the image in that UIImageView, but I don't want the image's aspect ratio to change and I also don't want any dead space (black bars on the sides, etc.). I don't care if the image is zoomed in all the way, as long as those two rules are applied.
Is there a built-in setting for that? I tried Scale To Fill, Aspect Fill, and the rest, but couldn't find what I'm looking for.
For example: UIImageView is 300x300
image is 200x250. The image will zoom in until all the areas of the UIImageView are filled.
For a UIImageView you can use Aspect Fill in the properties to do this.
But you may have to tick the "Clip Subviews" box, otherwise the image will spill outside the image view's frame.
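In code, the same two settings are one line each. For the 300x300 view and 200x250 image above, Aspect Fill scales by max(300/200, 300/250) = 1.5, giving a 300x375 image and clipping 37.5 points off the top and bottom:

imageView.contentMode = UIViewContentModeScaleAspectFill;
imageView.clipsToBounds = YES; // the "Clip Subviews" checkbox in Interface Builder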
