UIImageView: Change size to image size? - ios

I have a UIImageView with content mode Aspect Fit and size 220x155. I'm dynamically inserting images at different resolutions, all larger than the UIImageView. Since the content mode is Aspect Fit, each image is scaled proportionally to fit the UIImageView.
My problem is that if the image inside the UIImageView is scaled to, say, 220x100, I would like the UIImageView to shrink from a height of 155 to 100 as well, to avoid empty space between my elements.
How can I do this?

If I understood you correctly, it would be something like this. Get the image size:
UIImage *img = [UIImage imageNamed:@"someImage.png"];
CGSize imgSize = img.size;
Calculate the scale ratio based on width:
float ratio = yourImageView.frame.size.width / imgSize.width;
Check the scaled height (using the same ratio to keep the aspect):
float scaledHeight = imgSize.height * ratio;
if (scaledHeight < yourImageView.frame.size.height)
{
    // update the height of your image view's frame with scaledHeight
    CGRect frame = yourImageView.frame;
    frame.size.height = scaledHeight;
    yourImageView.frame = frame;
}

Based on Michael's answer, here's a complete method:
+ (CGSize)makeSize:(CGSize)originalSize fitInSize:(CGSize)boxSize
{
    float widthScale = boxSize.width / originalSize.width;
    float heightScale = boxSize.height / originalSize.height;
    float scale = MIN(widthScale, heightScale);
    CGSize newSize = CGSizeMake(originalSize.width * scale, originalSize.height * scale);
    return newSize;
}
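A minimal usage sketch (my own; it assumes the method lives on a helper class, here called SizeUtils, and that imageView is your outlet):
UIImage *img = imageView.image;
CGSize fitted = [SizeUtils makeSize:img.size fitInSize:imageView.frame.size];
// shrink the image view to the aspect-fitted size
CGRect frame = imageView.frame;
frame.size = fitted;
imageView.frame = frame;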

Edit for Steven Stefanik's answer: this method breaks when originalSize is {0, 0}. Maybe consider these changes:
- (CGSize)makeSize:(CGSize)originalSize fitInSize:(CGSize)boxSize
{
    if (originalSize.height == 0) {
        originalSize.height = boxSize.height;
    }
    if (originalSize.width == 0) {
        originalSize.width = boxSize.width;
    }
    float widthScale = boxSize.width / originalSize.width;
    float heightScale = boxSize.height / originalSize.height;
    float scale = MIN(widthScale, heightScale);
    CGSize newSize = CGSizeMake(originalSize.width * scale, originalSize.height * scale);
    return newSize;
}

Last edit:
It seems I misunderstood the question.
To get the actual size you could try:
CGSize imageSize = img.size;
CGSize viewSize = imageView.frame.size;
CGSize actualSize;
if (imageSize.width > imageSize.height) {
    actualSize.width = imageSize.width > viewSize.width ? viewSize.width : imageSize.width;
    actualSize.height = imageSize.height * actualSize.width / imageSize.width;
}
else {
    actualSize.height = imageSize.height > viewSize.height ? viewSize.height : imageSize.height;
    actualSize.width = imageSize.width * actualSize.height / imageSize.height;
}
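As an aside (my own suggestion, not part of the original answers): AVFoundation ships a helper that computes the aspect-fit rect directly, which avoids hand-rolling the ratio math:
#import <AVFoundation/AVFoundation.h>
// the rect the image occupies inside an aspect-fit image view
CGRect fitRect = AVMakeRectWithAspectRatioInsideRect(imageView.image.size, imageView.bounds);
CGSize actualSize = fitRect.size;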

Related

iOS Image Resize

I want to compress a 900x900 picture to 600x600 on an iPhone 6 Plus.
If I use:
UIGraphicsBeginImageContextWithOptions(CGSizeMake(600, 600), NO, 0)
the generated UIImage becomes (1800 * 1800).
If I use:
UIGraphicsBeginImageContextWithOptions(CGSizeMake(200, 200), NO, 0)
or
UIGraphicsBeginImageContextWithOptions(CGSizeMake(600, 600), NO, 1)
the resulting image will be blurred, which is obviously wrong.
Is there a good way to solve this problem?
This is the code:
- (UIImage *)imageByScalingForSize:(CGSize)targetSize withSourceImage:(UIImage *)sourceImage
{
    UIImage *newImage = nil;
    // scaledWidth/scaledHeight calculation omitted
    CGFloat scaledWidth = targetSize.width;
    CGFloat scaledHeight = targetSize.height;
    //UIGraphicsBeginImageContext(targetSize);
    UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0);
    CGRect thumbnailRect = CGRectZero;
    thumbnailRect.origin = CGPointMake(0, 0);
    thumbnailRect.size.width = scaledWidth;
    thumbnailRect.size.height = scaledHeight;
    [sourceImage drawInRect:thumbnailRect];
    newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
Try this lib!
UIImage+Zoom.h
#import <UIKit/UIKit.h>

enum {
    enSvResizeScale,      // image scaled to fill
    enSvResizeAspectFit,  // image scaled to fit with fixed aspect; the remainder is transparent
    enSvResizeAspectFill, // image scaled to fill with fixed aspect; some of the content may be clipped
};
typedef NSInteger SvResizeMode;

@interface UIImage (Zoom)

- (UIImage *)resizeImageToSize:(CGSize)newSize resizeMode:(SvResizeMode)resizeMode;

@end
UIImage+Zoom.m
#import "UIImage+Zoom.h"
#implementation UIImage (Zoom)
/*
* #brief resizeImage
* #param newsize the dimensions(pixel) of the output image
*/
- (UIImage*)resizeImageToSize:(CGSize)newSize resizeMode:(SvResizeMode)resizeMode
{
CGRect drawRect = [self caculateDrawRect:newSize resizeMode:resizeMode];
UIGraphicsBeginImageContext(newSize);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextClearRect(context, CGRectMake(0, 0, newSize.width, newSize.height));
CGContextSetInterpolationQuality(context, 0.8);
[self drawInRect:drawRect blendMode:kCGBlendModeNormal alpha:1];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
// calculate the draw rect with respect to the specific resize mode
- (CGRect)calculateDrawRect:(CGSize)newSize resizeMode:(SvResizeMode)resizeMode
{
    CGRect drawRect = CGRectMake(0, 0, newSize.width, newSize.height);
    CGFloat imageRatio = self.size.width / self.size.height;
    CGFloat newSizeRatio = newSize.width / newSize.height;
    switch (resizeMode) {
        case enSvResizeScale:
        {
            // scale to fill: draw into the full rect
            break;
        }
        case enSvResizeAspectFit: // any remaining area is transparent
        {
            CGFloat newHeight = 0;
            CGFloat newWidth = 0;
            if (newSizeRatio >= imageRatio) { // max height is newSize.height
                newHeight = newSize.height;
                newWidth = newHeight * imageRatio;
            }
            else {
                newWidth = newSize.width;
                newHeight = newWidth / imageRatio;
            }
            drawRect.size.width = newWidth;
            drawRect.size.height = newHeight;
            drawRect.origin.x = newSize.width / 2 - newWidth / 2;
            drawRect.origin.y = newSize.height / 2 - newHeight / 2;
            break;
        }
        case enSvResizeAspectFill:
        {
            CGFloat newHeight = 0;
            CGFloat newWidth = 0;
            if (newSizeRatio >= imageRatio) { // max width is newSize.width
                newWidth = newSize.width;
                newHeight = newWidth / imageRatio;
            }
            else {
                newHeight = newSize.height;
                newWidth = newHeight * imageRatio;
            }
            drawRect.size.width = newWidth;
            drawRect.size.height = newHeight;
            drawRect.origin.x = newSize.width / 2 - newWidth / 2;
            drawRect.origin.y = newSize.height / 2 - newHeight / 2;
            break;
        }
        default:
            break;
    }
    return drawRect;
}

@end
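A minimal usage sketch of the category (my own; assumes an image named "photo" exists in the bundle):
UIImage *source = [UIImage imageNamed:@"photo"];
// aspect-fit letterboxes the image into a 220x155 canvas
UIImage *thumb = [source resizeImageToSize:CGSizeMake(220, 155) resizeMode:enSvResizeAspectFit];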
If I use:
UIGraphicsBeginImageContextWithOptions(CGSizeMake(600, 600), NO, 0)
the generated UIImage becomes (1800 * 1800).
That is not quite true. It's true that the bitmap underlying the image is 1800 by 1800, but that is not the only thing about the image that is important; the UIImage also has a scale, and that scale is 3. Thus the UIImage is in fact treated (with respect to the screen geometry) as 600 by 600, and it is triple-resolution to match the triple-resolution of the Plus screen, which is exactly what you want.
(The scaling will work best, of course, if you also start with the triple-resolution version of your original image. But that's a different matter.)
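To see the point/pixel relationship for yourself, a quick check along these lines helps (my own illustration; sourceImage stands for whatever image you are drawing):
UIGraphicsBeginImageContextWithOptions(CGSizeMake(600, 600), NO, 0); // scale 0 = device scale
[sourceImage drawInRect:CGRectMake(0, 0, 600, 600)];
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// On an iPhone 6 Plus: result.size is {600, 600} points and result.scale is 3.0,
// so the underlying bitmap is 1800 x 1800 pixels.
NSLog(@"size %@ scale %f", NSStringFromCGSize(result.size), result.scale);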

Uploading images to AWS taking too long -compression?

So I set my S3 bucket to US Standard, but it takes about 10-15 seconds for images to upload to the bucket. Now that I think about it, I'm guessing it's because I'm not compressing the images; I'm just storing the image taken with the camera at a file path and then uploading it. To compress the image, would I use UIImageJPEGRepresentation, write the NSData to a file, and then upload that file to S3? Or is there a better way? And if the slow uploading isn't caused by the lack of compression, could it be the region chosen for the bucket? I'm not entirely sure compressing the images will speed up the upload, but it could be a big part of the latency.
Compress images to reduce file size.
Use the method below to compress an image. Call it like this, setting your target size first:
CGSize newSize = CGSizeMake(200, 200);
UIImage *profileImage = [AppManager resizeImageToSize:newSize image:croppedImage];
#pragma mark - resize image to size
+ (UIImage *)resizeImageToSize:(CGSize)targetSize image:(UIImage *)captureImage
{
    UIImage *sourceImage = captureImage;
    UIImage *newImage = nil;
    CGSize imageSize = sourceImage.size;
    CGFloat width = imageSize.width;
    CGFloat height = imageSize.height;
    CGFloat targetWidth = targetSize.width;
    CGFloat targetHeight = targetSize.height;
    CGFloat scaleFactor = 0.0;
    CGFloat scaledWidth = targetWidth;
    CGFloat scaledHeight = targetHeight;
    CGPoint thumbnailPoint = CGPointMake(0.0, 0.0);
    if (CGSizeEqualToSize(imageSize, targetSize) == NO) {
        CGFloat widthFactor = targetWidth / width;
        CGFloat heightFactor = targetHeight / height;
        if (widthFactor < heightFactor)
            scaleFactor = widthFactor;
        else
            scaleFactor = heightFactor;
        scaledWidth = width * scaleFactor;
        scaledHeight = height * scaleFactor;
        // make the image center aligned
        if (widthFactor < heightFactor)
        {
            thumbnailPoint.y = (targetHeight - scaledHeight) * 0.5;
        }
        else if (widthFactor > heightFactor)
        {
            thumbnailPoint.x = (targetWidth - scaledWidth) * 0.5;
        }
    }
    UIGraphicsBeginImageContext(targetSize);
    CGRect thumbnailRect = CGRectZero;
    thumbnailRect.origin = thumbnailPoint;
    thumbnailRect.size.width = scaledWidth;
    thumbnailRect.size.height = scaledHeight;
    [sourceImage drawInRect:thumbnailRect];
    newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    if (newImage == nil)
        CCLogs(@"could not scale image");
    return newImage;
}
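Resizing alone doesn't shrink the upload payload until the image is re-encoded. A minimal sketch of the JPEG step (my own; the 0.7 quality and file name are illustrative):
NSData *jpegData = UIImageJPEGRepresentation(profileImage, 0.7); // 0.0 = smallest file, 1.0 = best quality
NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"upload.jpg"];
[jpegData writeToFile:path atomically:YES];
// hand this file (or the NSData) to the S3 upload API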

UIImageView aspect fit to height

I need to set the UIImageView size to aspect fit according to height.
That is, the height of the image should be fixed, but the width should change to match. I know that UIViewContentModeScaleAspectFit handles width and height automatically, but what if I need a fixed height and a dynamic width, i.e. aspect fit according to height?
UIImage *originalImage = [UIImage imageNamed:@"image.png"];
double width = originalImage.size.width;
double height = originalImage.size.height;
double aspect = width / height;
// fixed height of 320 points; the width follows from the aspect ratio
double nWidth = 320.f * aspect;
self.img.frame = CGRectMake(0, 0, nWidth, 320.f);
self.img.center = self.view.center;
self.img.image = originalImage;
- (CGSize)resizeImage:(CGSize)imageSize toTargetedHeight:(float)height {
    NSLog(@"actualImageSize : %@", NSStringFromCGSize(imageSize));
    float scaleFactor = height / imageSize.height;
    float newHeight = imageSize.height * scaleFactor; // equals the target height
    float newWidth = imageSize.width * scaleFactor;
    CGSize newSize = CGSizeMake(newWidth, newHeight);
    NSLog(@"convertedImageSize : %@", NSStringFromCGSize(newSize));
    return newSize;
}
This will size your image to a specific height while maintaining its aspect ratio.
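A minimal usage sketch (my own; assumes a self.imageView outlet and a fixed height of 320 points):
CGSize fitted = [self resizeImage:self.imageView.image.size toTargetedHeight:320.0f];
CGRect frame = self.imageView.frame;
frame.size = fitted;
self.imageView.frame = frame;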

Instagram iOS hooks with non-square images

Now that the Instagram app can handle and post non-square images from within that app, I was hoping that I could send non-square images to the Instagram app from my app using the same provided iPhone hooks I've been using (https://instagram.com/developer/iphone-hooks/?hl=en). However, it still seems to be cropping my images to square and not giving me the option to expand them to their non-square size (unlike when I load a non-square photo from the library from directly within the Instagram app and it lets me expand it to its non-square original dimensions). Anyone had any luck sending non-square images? I'm hoping there's some tweak that will make it work.
I too was hoping that they would accept non-square photos with the update, but you're stuck with the old solution for posting non-square photos: matte them to square with white.
https://github.com/ShareKit/ShareKit/blob/master/Classes/ShareKit/Sharers/Services/Instagram/SHKInstagram.m
- (UIImage *)imageByScalingImage:(UIImage *)image proportionallyToSize:(CGSize)targetSize {
    UIImage *sourceImage = image;
    UIImage *newImage = nil;
    CGSize imageSize = sourceImage.size;
    CGFloat width = imageSize.width;
    CGFloat height = imageSize.height;
    CGFloat targetWidth = targetSize.width;
    CGFloat targetHeight = targetSize.height;
    CGFloat scaleFactor = 0.0;
    CGFloat scaledWidth = targetWidth;
    CGFloat scaledHeight = targetHeight;
    CGPoint thumbnailPoint = CGPointMake(0.0, 0.0);
    if (CGSizeEqualToSize(imageSize, targetSize) == NO) {
        CGFloat widthFactor = targetWidth / width;
        CGFloat heightFactor = targetHeight / height;
        if (widthFactor < heightFactor)
            scaleFactor = widthFactor;
        else
            scaleFactor = heightFactor;
        scaledWidth = width * scaleFactor;
        scaledHeight = height * scaleFactor;
        // center the image
        if (widthFactor < heightFactor) {
            thumbnailPoint.y = (targetHeight - scaledHeight) * 0.5;
        } else if (widthFactor > heightFactor) {
            thumbnailPoint.x = (targetWidth - scaledWidth) * 0.5;
        }
    }
    // this is actually the interesting part: fill the square with the letterbox color first
    UIGraphicsBeginImageContext(targetSize);
    [(UIColor *)SHKCONFIG(instagramLetterBoxColor) set];
    CGContextFillRect(UIGraphicsGetCurrentContext(), CGRectMake(0, 0, targetSize.width, targetSize.height));
    CGRect thumbnailRect = CGRectZero;
    thumbnailRect.origin = thumbnailPoint;
    thumbnailRect.size.width = scaledWidth;
    thumbnailRect.size.height = scaledHeight;
    [sourceImage drawInRect:thumbnailRect];
    newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    if (newImage == nil) NSLog(@"could not scale image");
    return newImage;
}
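A minimal usage sketch (my own; photo stands for a hypothetical non-square image, and SHKCONFIG(instagramLetterBoxColor) comes from the ShareKit configuration; outside ShareKit you would substitute a plain color such as [UIColor whiteColor]):
// matte a 1200x800 photo onto a 1080x1080 square for Instagram
UIImage *squared = [self imageByScalingImage:photo proportionallyToSize:CGSizeMake(1080, 1080)];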

Finding the biggest centered square from a landscape or a portrait UIImage and scale it to a size

I need to find the biggest centered square in a portrait or landscape image and scale it to a given size.
E.g. given an image of size 1200x800, I need to crop out the centered square and scale it down to 300x300.
I found an answer to this question on Stack Overflow that has been widely copied. However, that answer is incorrect, so I want to post the correct one, which is as follows:
+ (UIImage *)cropBiggestCenteredSquareImageFromImage:(UIImage *)image withSide:(CGFloat)side
{
    // Get the size of the current image
    CGSize size = [image size];
    if (size.width == size.height && size.width == side) {
        return image;
    }
    CGSize newSize = CGSizeMake(side, side);
    double ratio;
    double delta;
    CGPoint offset;
    // make a new square size, that is the resized image's width
    CGSize sz = CGSizeMake(newSize.width, newSize.width);
    // figure out if the picture is landscape or portrait, then
    // calculate the scale factor and offset
    if (image.size.width > image.size.height) {
        ratio = newSize.height / image.size.height;
        delta = ratio * (image.size.width - image.size.height);
        offset = CGPointMake(delta / 2, 0);
    } else {
        ratio = newSize.width / image.size.width;
        delta = ratio * (image.size.height - image.size.width);
        offset = CGPointMake(0, delta / 2);
    }
    // make the final clipping rect based on the calculated values
    CGRect clipRect = CGRectMake(-offset.x, -offset.y,
                                 (ratio * image.size.width),
                                 (ratio * image.size.height));
    // start a new context with scale factor 0.0 so retina displays get
    // a high quality image
    if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)]) {
        UIGraphicsBeginImageContextWithOptions(sz, YES, 0.0);
    } else {
        UIGraphicsBeginImageContext(sz);
    }
    UIRectClip(clipRect);
    [image drawInRect:clipRect];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
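Applied to the example from the question (my own usage sketch; the method is assumed to live on a helper class, here called ImageUtils, and photo stands for a 1200x800 image):
// yields the centered 800x800 region of the photo, scaled down to 300x300
UIImage *square = [ImageUtils cropBiggestCenteredSquareImageFromImage:photo withSide:300.0];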
The incorrect answer I found earlier is as follows:
+ (UIImage *)cropBiggestCenteredSquareImageFromImage:(UIImage *)image withSide:(CGFloat)side
{
    // Get the size of the current image
    CGSize size = [image size];
    if (size.width == size.height && size.width == side) {
        return image;
    }
    CGSize newSize = CGSizeMake(side, side);
    double ratio;
    double delta;
    CGPoint offset;
    // make a new square size, that is the resized image's width
    CGSize sz = CGSizeMake(newSize.width, newSize.width);
    // figure out if the picture is landscape or portrait, then
    // calculate the scale factor and offset
    if (image.size.width > image.size.height) {
        ratio = newSize.width / image.size.width;
        delta = (ratio * image.size.width - ratio * image.size.height);
        offset = CGPointMake(delta / 2, 0);
    } else {
        ratio = newSize.width / image.size.height;
        delta = (ratio * image.size.height - ratio * image.size.width);
        offset = CGPointMake(0, delta / 2);
    }
    // make the final clipping rect based on the calculated values
    CGRect clipRect = CGRectMake(-offset.x, -offset.y,
                                 (ratio * image.size.width) + delta,
                                 (ratio * image.size.height) + delta);
    // start a new context with scale factor 0.0 so retina displays get
    // a high quality image
    if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)]) {
        UIGraphicsBeginImageContextWithOptions(sz, YES, 0.0);
    } else {
        UIGraphicsBeginImageContext(sz);
    }
    UIRectClip(clipRect);
    [image drawInRect:clipRect];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
The problem with this code is that it does not crop correctly.
Both versions can be tried on the following image:
https://s3.amazonaws.com/anandprakash/ImageWithPixelGrid.jpg
The correct algorithm generates the following image from the base image above:
https://s3.amazonaws.com/anandprakash/ScreenshotCorrectAlgo.png
The wrong algorithm generates the following image from the base image above; notice the extra 50px of width on each side.
https://s3.amazonaws.com/anandprakash/ScreenshotWrongAlgo.png
The same answer as a Swift extension on UIImage:
private extension UIImage {
    func cropBiggestCenteredSquareImage(withSide side: CGFloat) -> UIImage {
        if self.size.height == side && self.size.width == side {
            return self
        }
        let newSize = CGSizeMake(side, side)
        let ratio: CGFloat
        let delta: CGFloat
        let offset: CGPoint
        if self.size.width > self.size.height {
            ratio = newSize.height / self.size.height
            delta = ratio * (self.size.width - self.size.height)
            offset = CGPointMake(delta / 2, 0)
        }
        else {
            ratio = newSize.width / self.size.width
            delta = ratio * (self.size.height - self.size.width)
            offset = CGPointMake(0, delta / 2)
        }
        let clipRect = CGRectMake(-offset.x, -offset.y, ratio * self.size.width, ratio * self.size.height)
        if UIScreen.mainScreen().respondsToSelector(#selector(NSDecimalNumberBehaviors.scale)) {
            UIGraphicsBeginImageContextWithOptions(newSize, true, 0.0)
        } else {
            UIGraphicsBeginImageContext(newSize)
        }
        UIRectClip(clipRect)
        self.drawInRect(clipRect)
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage
    }
}
