Where on the screen is my UIImage? - ios

I have a UIImage contained in a UIImageView. It's set to use the UIViewContentModeScaleAspectFit contentMode. The UIImageView is the size of the screen; the image is not, hence the aspect-fit mode. What I can't figure out is: where on the screen is the UIImage? What's its frame? Where does the top left of the image appear on the screen? I can see it on the screen, but not in code. This should be simple, but I can't figure it out.

Try this in your UIImageView subclass:
It will compute the frame of the image, assuming you are using UIViewContentModeScaleAspectFit.
- (CGRect)imageFrame
{
    float horizontalScale = self.frame.size.width / self.image.size.width;
    float verticalScale = self.frame.size.height / self.image.size.height;
    float scale = MIN(horizontalScale, verticalScale);
    float xOffset = (self.frame.size.width - self.image.size.width * scale) / 2;
    float yOffset = (self.frame.size.height - self.image.size.height * scale) / 2;
    return CGRectMake(xOffset,
                      yOffset,
                      self.image.size.width * scale,
                      self.image.size.height * scale);
}
What it does is work out how much you need to shrink or enlarge the UIImage to fit it in the UIImageView in each dimension, and then pick the smaller scaling factor, to ensure that the image fits in the allotted space.
With that you know the size of the scaled UIImage, and it's easy to calculate the x and y offsets with respect to the containing frame.
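As a quick check you can log the result after layout. Note the returned rect is in the image view's own coordinate space; convert it if you want window coordinates. A minimal sketch, assuming self.imageView is an instance of the subclass containing the method above (property name is illustrative):
- (void)viewDidLayoutSubviews
{
    [super viewDidLayoutSubviews];
    // frame of the visible image within the image view's coordinate space
    CGRect visibleFrame = [self.imageView imageFrame];
    // convert to window coordinates to learn where it sits on screen
    CGRect onScreen = [self.imageView convertRect:visibleFrame toView:nil];
    NSLog(@"image frame in view: %@, on screen: %@",
          NSStringFromCGRect(visibleFrame), NSStringFromCGRect(onScreen));
}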

Related

How can I set max width and height on UIImageView while maintaining 1:1 ratio?

I am trying to do this in storyboard but can't seem to figure it out. I have a QR code (square UIImageView) centered (vertically and horizontally) in another UIView that I want to expand a bit depending on phone size, but I don't want it to go over 150x150 otherwise it looks odd. Here is what I tried. Any help greatly appreciated.
You want to:
Center it in its superview
Keep 1:1 aspect ratio
Keep it less-than-or-equal-to 150x150
So, centering is obvious (yay, Auto Layout), and the 1:1 ratio is also easy. The trick is:
Pin all four sides to the superview's sides at >= 0.
Set the Width <= 150.
Then set another Width constraint, this time Width = 150, but with a priority of less than 1000 (Required). I used 999 and it did the job.
Pinning the sides and top/bottom at >= 0 gets the view to shrink when the superview is smaller than 150 in either direction.
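For reference, a sketch of the same constraints set up in code with layout anchors; qrImageView and container are placeholder names:
qrImageView.translatesAutoresizingMaskIntoConstraints = NO;

// center in the superview
[qrImageView.centerXAnchor constraintEqualToAnchor:container.centerXAnchor].active = YES;
[qrImageView.centerYAnchor constraintEqualToAnchor:container.centerYAnchor].active = YES;

// 1:1 aspect ratio
[qrImageView.widthAnchor constraintEqualToAnchor:qrImageView.heightAnchor].active = YES;

// never exceed 150x150
[qrImageView.widthAnchor constraintLessThanOrEqualToConstant:150].active = YES;

// prefer 150 wide, but allow shrinking: priority 999 < required (1000)
NSLayoutConstraint *preferredWidth = [qrImageView.widthAnchor constraintEqualToConstant:150];
preferredWidth.priority = 999;
preferredWidth.active = YES;

// keep all four edges inside the superview (>= 0 insets)
[qrImageView.topAnchor constraintGreaterThanOrEqualToAnchor:container.topAnchor].active = YES;
[qrImageView.leadingAnchor constraintGreaterThanOrEqualToAnchor:container.leadingAnchor].active = YES;
[container.bottomAnchor constraintGreaterThanOrEqualToAnchor:qrImageView.bottomAnchor].active = YES;
[container.trailingAnchor constraintGreaterThanOrEqualToAnchor:qrImageView.trailingAnchor].active = YES;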
I have been using this for a long time:
- (UIImage *)scaleImage:(UIImage *)image toSize:(CGSize)newSize {
    CGSize scaledSize = newSize;
    float scaleFactor = 1.0;
    if (image.size.width > image.size.height) {
        // landscape: pin the width, derive the height from the aspect ratio
        scaleFactor = image.size.width / image.size.height;
        scaledSize.width = newSize.width;
        scaledSize.height = newSize.width / scaleFactor;
    }
    else {
        // portrait or square: pin the height, derive the width
        scaleFactor = image.size.height / image.size.width;
        scaledSize.height = newSize.height;
        scaledSize.width = newSize.height / scaleFactor;
    }
    UIGraphicsBeginImageContextWithOptions(scaledSize, NO, 0.0);
    CGRect scaledImageRect = CGRectMake(0.0, 0.0, scaledSize.width, scaledSize.height);
    [image drawInRect:scaledImageRect];
    UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return scaledImage;
}
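Usage is then a one-liner; for example, to fit a captured photo into a 150x150 box (property names are illustrative):
UIImage *thumb = [self scaleImage:self.capturedImage toSize:CGSizeMake(150, 150)];
self.qrImageView.image = thumb;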

Scale image inside UIView without changing bounds?

Is it possible to scale an image inside a UIView without changing the UIView's bounds? (That is, while still clipping the image to the UIView's bounds, even as the image scales larger than the UIView.)
I found some code on a different SO post that scales the image in a UIView:
view.transform = CGAffineTransformScale(CGAffineTransformIdentity, _scale, _scale);
However, this seems to affect the view's bounds -- making them larger -- so that the UIView's drawing now stomps over other nearby UIViews as its contents grow larger. Can I make its contents scale larger, while keeping the clipping bounds the same?
The easiest way to scale an image is to use a UIImageView and set its contentMode property (e.g. UIViewContentModeScaleAspectFit).
If you have to use a plain UIView to show the image, you can redraw the image in the UIView:
1. Subclass UIView.
2. Draw your image in drawRect:.
// the following code draws the image at its original size
- (void)drawRect:(CGRect)rect
{
    [_yourImage drawAtPoint:CGPointMake(0, 0)];
}
// to draw the image scaled to the view while preserving its aspect ratio,
// calculate the rect the image should draw into
- (void)drawRect:(CGRect)rect
{
    [_yourImage drawInRect:_rectToDraw];
}
- (void)setYourImage:(UIImage *)yourImage
{
    _yourImage = yourImage;
    CGFloat imageWidth = yourImage.size.width;
    CGFloat imageHeight = yourImage.size.height;
    CGFloat scaleW = imageWidth / self.bounds.size.width;
    CGFloat scaleH = imageHeight / self.bounds.size.height;
    CGFloat maxScale = scaleW > scaleH ? scaleW : scaleH;
    // divide by the larger shrink factor so the whole image fits the bounds
    _rectToDraw = CGRectMake(0, 0, imageWidth / maxScale, imageHeight / maxScale);
    [self setNeedsDisplay]; // redraw with the new image
}
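A minimal usage sketch, assuming the subclass above is named ImageDrawingView (a placeholder) and declares yourImage as a property:
ImageDrawingView *drawingView = [[ImageDrawingView alloc] initWithFrame:CGRectMake(0, 0, 200, 200)];
drawingView.yourImage = [UIImage imageNamed:@"photo"]; // "photo" is a placeholder asset name
[self.view addSubview:drawingView];
Because drawRect: renders into the view's own backing store, the drawing is clipped to the view's bounds automatically, and the bounds never change.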

Resize image maintaining aspect ratio

I am developing an app in which users take photos and upload them. There is a button in my app; tapping it opens the camera. The user can take a photo and see a preview. In the preview the image looks good and occupies the whole screen. Later I have to display this image in a UIImageView of width 110 and height 111. When I display it there, the image gets distorted and cropped at the edges. My main objective is to maintain the aspect ratio.
I tried doing this:
- (void)displayCapturedImage
{
    //[self.imageView setImage:self.capturedImage];
    CGSize smallSize = CGSizeMake(110, 111);
    [self.imageView setImage:[self imageWithImage:self.capturedImage scaledToSize:smallSize]];
}
- (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize
{
    UIGraphicsBeginImageContext(newSize);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
but this didn't work for me; the image is still distorted.
You need to calculate the scale ratio yourself: determine how much smaller the image needs to be and apply that factor to both the width and height.
Something like this (target size taken from your example):
CGFloat width = image.size.width;
CGFloat height = image.size.height;
CGFloat targetWidth = 110;
CGFloat targetHeight = 111;
CGFloat scaleFactor;
CGPoint thumbnailPoint = CGPointZero;

CGFloat widthFactor = targetWidth / width;
CGFloat heightFactor = targetHeight / height;
if (widthFactor < heightFactor)
    scaleFactor = widthFactor;
else
    scaleFactor = heightFactor;
CGFloat scaledWidth = width * scaleFactor;
CGFloat scaledHeight = height * scaleFactor;
// center along the dimension that has leftover space
if (widthFactor < heightFactor)
    thumbnailPoint.y = (targetHeight - scaledHeight) * 0.5;
else if (widthFactor > heightFactor)
    thumbnailPoint.x = (targetWidth - scaledWidth) * 0.5;
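Folding that math into your imageWithImage:scaledToSize: helper might look like the sketch below. It aspect-fits the image inside newSize and letterboxes the leftover space with transparency; the method name scaledToFitSize: is illustrative, not from the original answer.
- (UIImage *)imageWithImage:(UIImage *)image scaledToFitSize:(CGSize)newSize
{
    // pick the smaller ratio so the whole image fits inside newSize
    CGFloat scaleFactor = MIN(newSize.width / image.size.width,
                              newSize.height / image.size.height);
    CGFloat scaledWidth = image.size.width * scaleFactor;
    CGFloat scaledHeight = image.size.height * scaleFactor;
    // center the scaled image inside the target canvas
    CGRect drawRect = CGRectMake((newSize.width - scaledWidth) / 2,
                                 (newSize.height - scaledHeight) / 2,
                                 scaledWidth, scaledHeight);
    // NO = transparent background; 0.0 = use the device's screen scale
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [image drawInRect:drawRect];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
Alternatively, simply setting self.imageView.contentMode = UIViewContentModeScaleAspectFit avoids resizing the bitmap at all.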

Translating CIDetector (face detection) results into UIImageView coordinates

I've been struggling to translate the CIDetector (face detection) results into coordinates relative to the UIImageView displaying the image so I can draw the coordinates using CGPaths.
I've looked at all questions here and all the tutorials I could find and most of them use small images that are not scaled when displayed in a UIImageView (example). The problem I am having is with using large images which are scaled using aspectFit when displayed in a UIImageView and determining the correct scale + translation values.
I am getting inconsistent results when testing with images of different sizes/aspect ratios, so I think my routine is flawed. I've been struggling with this for a while, so if anyone has some tips or can spot what I am doing wrong, that would be a great help.
What I am doing:
get the face coordinates
use the frameForImage routine below (found here on SO) to get the scale and bounds of the UIImageView image
create transform for scale + translation
apply transform to the CIDetector result
// my routine for determining the transform values
NSDictionary *data = [self frameForImage:self.imageView.image inImageViewAspectFit:self.imageView];
CGRect scaledImageBounds = CGRectFromString([data objectForKey:@"bounds"]);
float scale = [[data objectForKey:@"scale"] floatValue];
CGAffineTransform transform = CGAffineTransformMakeScale(scale, -scale);
transform = CGAffineTransformTranslate(transform,
                                       scaledImageBounds.origin.x / scale,
                                       -(scaledImageBounds.origin.y / scale + scaledImageBounds.size.height / scale));
The CIDetector results are then transformed using:
mouthPosition = CGPointApplyAffineTransform(mouthPosition, transform);
// example of a bad result: the scale seems incorrect
// the routine below (found here on SO) determines the bounds of an image scaled in a UIImageView using aspectFit
- (NSDictionary *)frameForImage:(UIImage *)image inImageViewAspectFit:(UIImageView *)myImageView
{
    float imageRatio = image.size.width / image.size.height;
    float viewRatio = myImageView.frame.size.width / myImageView.frame.size.height;
    float scale;
    CGRect boundingRect;
    if (imageRatio < viewRatio)
    {
        scale = myImageView.frame.size.height / image.size.height;
        float width = scale * image.size.width;
        float topLeftX = (myImageView.frame.size.width - width) * 0.5;
        boundingRect = CGRectMake(topLeftX, 0, width, myImageView.frame.size.height);
    }
    else
    {
        scale = myImageView.frame.size.width / image.size.width;
        float height = scale * image.size.height;
        float topLeftY = (myImageView.frame.size.height - height) * 0.5;
        boundingRect = CGRectMake(0, topLeftY, myImageView.frame.size.width, height);
    }
    NSDictionary *data = [NSDictionary dictionaryWithObjectsAndKeys:
                          [NSNumber numberWithFloat:scale], @"scale",
                          NSStringFromCGRect(boundingRect), @"bounds",
                          nil];
    return data;
}
I completely understand what you are trying to do, but let me offer you a different way to achieve it.
You have an oversized image.
You know the size of the imageView.
Ask the image for its CGImage.
Determine a 'scale' factor, which is the imageView width divided by the image width.
Multiply this value by the image height, then subtract the result from the imageView height to get the 'empty' height in the imageView; let's call this 'fillHeight'.
Divide 'fillHeight' by 2 and round to get the 'offset' value used below.
Using a context provided by UIGraphicsBeginImageContextWithOptions(imageView.bounds.size, NO, 0), paint the background whatever color you want, then draw your CGImage:
CGContextDrawImage(context, CGRectMake(0, offset, imageView.bounds.size.width, rintf(image.size.height * scale)), [image CGImage]);
Get this new image using:
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
Then set the image: imageView.image = image;
Now you can map exactly back to your image, because you know the EXACT scaling ratio and offsets.
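Put together, the steps above might look roughly like this sketch. Names are illustrative; it assumes, as the steps do, that the width is the fitting dimension, and it uses UIKit's drawInRect: so the Core Graphics coordinate flip is not a concern.
- (UIImage *)paddedImage:(UIImage *)image forImageView:(UIImageView *)imageView
{
    CGSize viewSize = imageView.bounds.size;
    CGFloat scale = viewSize.width / image.size.width;
    CGFloat fillHeight = viewSize.height - image.size.height * scale;
    CGFloat offset = rintf(fillHeight / 2);

    UIGraphicsBeginImageContextWithOptions(viewSize, NO, 0);
    // paint the background, then draw the scaled image between the padding
    [[UIColor whiteColor] setFill];
    UIRectFill(CGRectMake(0, 0, viewSize.width, viewSize.height));
    [image drawInRect:CGRectMake(0, offset, viewSize.width,
                                 rintf(image.size.height * scale))];
    UIImage *padded = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return padded;
}
With imageView.image set to the result, the on-screen scale is exactly 'scale' and the vertical offset is exactly 'offset', so mapping CIDetector coordinates back becomes trivial.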
This might be the simple answer you are looking for. If your x and y coordinates are inverted, you can mirror them yourself. In the snippet below I'm looping through the returned features, inverting the y coordinate, and also inverting the x coordinate when the image comes from the front-facing camera:
for (CIFaceFeature *f in features) {
    float newy = -f.bounds.origin.y + self.frame.size.height - f.bounds.size.height;
    float newx = f.bounds.origin.x;
    if (isMirrored) {
        newx = -f.bounds.origin.x + self.frame.size.width - f.bounds.size.width;
    }
    [[soups objectAtIndex:rnd] drawInRect:CGRectMake(newx, newy, f.bounds.size.width, f.bounds.size.height)];
}

Is it possible to display an image in Core Graphics with an Aspect Fit resize?

A CALayer can do it, and a UIImageView can do it. Can I directly display an image with aspect fit using Core Graphics? UIImage's drawInRect: does not let me set a resize mode.
If you're already linking AVFoundation, an aspect-fit function is provided in that framework:
CGRect AVMakeRectWithAspectRatioInsideRect(CGSize aspectRatio, CGRect boundingRect);
For instance, to scale an image to fit:
#import <AVFoundation/AVFoundation.h> // provides AVMakeRectWithAspectRatioInsideRect

UIImage *image = …;
CGRect targetBounds = self.layer.bounds;
// fit the image, preserving its aspect ratio, into our target bounds
CGRect imageRect = AVMakeRectWithAspectRatioInsideRect(image.size, targetBounds);
// draw the image
CGContextDrawImage(context, imageRect, image.CGImage);
You need to do the math yourself. For example:
// assumes this runs inside drawRect: or another scope with a current context
CGContextRef context = UIGraphicsGetCurrentContext();
UIImage *image = self.imageToDraw;
// desired x/y coords, with the maximum width/height for the image
CGRect imageRect = CGRectMake(10, 10, 42, 42);
// calculate the resize ratio, and apply it to the rect
CGFloat ratio = MIN(imageRect.size.width / image.size.width,
                    imageRect.size.height / image.size.height);
imageRect.size.width = imageRect.size.width * ratio;
imageRect.size.height = imageRect.size.height * ratio;
// draw the image
CGContextDrawImage(context, imageRect, image.CGImage);
Alternatively, you can embed a UIImageView as a subview of your view, which gives you easy to use options for this. For similar ease of use but better performance, you can embed a layer containing the image in your view's layer. Either of these approaches would be worthy of a separate question, if you choose to go down that route.
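For example, the layer-based variant is only a few lines (a sketch; the view and image variables are placeholders, and QuartzCore must be linked):
CALayer *imageLayer = [CALayer layer];
imageLayer.frame = self.view.bounds;
imageLayer.contents = (__bridge id)image.CGImage;
imageLayer.contentsGravity = kCAGravityResizeAspect; // aspect-fit the contents
imageLayer.masksToBounds = YES; // clip anything outside the layer
[self.view.layer addSublayer:imageLayer];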
Of course you can. It'll draw the image in whatever rect you pass. So just pass an aspect-fitted rect. Sure, you have to do a little bit of math yourself, but that's pretty easy.
Here's the solution:
CGSize imageSize = image.size;
CGSize viewSize = CGSizeMake(450, 340); // size in which you want to draw
float hfactor = imageSize.width / viewSize.width;
float vfactor = imageSize.height / viewSize.height;
// divide the size by the greater of the vertical or horizontal shrinkage factor
float factor = fmax(hfactor, vfactor);
float newWidth = imageSize.width / factor;
float newHeight = imageSize.height / factor;
// center the result inside the target area
float xOffset = (viewSize.width - newWidth) / 2;
float yOffset = (viewSize.height - newHeight) / 2;
CGRect newRect = CGRectMake(xOffset, yOffset, newWidth, newHeight);
[image drawInRect:newRect];
-- courtesy https://stackoverflow.com/a/1703210
