I set my S3 bucket to US Standard, but it takes about 10-15 seconds for images to upload. Now that I think about it, I suspect it's because I'm not compressing the images: I'm just writing the image taken with the camera to a file path and uploading that file as-is. To compress the image, would I use UIImageJPEGRepresentation, write the NSData to the file, and then upload that file to S3? Or is there a better/different way? Or, if the slow uploading isn't caused by the lack of compression, could it be the region chosen for the bucket? I'm not entirely sure that compressing the images will speed up the upload, but it could be a big contributor to the latency.
Compress the images to reduce their file size.
Use this method to resize the image.
Call the method like this:
First, set your target image size:
CGSize newSize = CGSizeMake(200, 200);
UIImage *profileImage = [AppManager resizeImageToSize:newSize image:croppedImage];
#pragma mark - resize image to size
+ (UIImage *)resizeImageToSize:(CGSize)targetSize image:(UIImage *)captureImage
{
    UIImage *sourceImage = captureImage;
    UIImage *newImage = nil;
    CGSize imageSize = sourceImage.size;
    CGFloat width = imageSize.width;
    CGFloat height = imageSize.height;
    CGFloat targetWidth = targetSize.width;
    CGFloat targetHeight = targetSize.height;
    CGFloat scaleFactor = 0.0;
    CGFloat scaledWidth = targetWidth;
    CGFloat scaledHeight = targetHeight;
    CGPoint thumbnailPoint = CGPointMake(0.0, 0.0);

    if (CGSizeEqualToSize(imageSize, targetSize) == NO) {
        CGFloat widthFactor = targetWidth / width;
        CGFloat heightFactor = targetHeight / height;
        // use the smaller factor so the whole image fits (aspect fit)
        if (widthFactor < heightFactor)
            scaleFactor = widthFactor;
        else
            scaleFactor = heightFactor;
        scaledWidth = width * scaleFactor;
        scaledHeight = height * scaleFactor;
        // make image center aligned
        if (widthFactor < heightFactor)
        {
            thumbnailPoint.y = (targetHeight - scaledHeight) * 0.5;
        }
        else if (widthFactor > heightFactor)
        {
            thumbnailPoint.x = (targetWidth - scaledWidth) * 0.5;
        }
    }

    UIGraphicsBeginImageContext(targetSize);
    CGRect thumbnailRect = CGRectZero;
    thumbnailRect.origin = thumbnailPoint;
    thumbnailRect.size.width = scaledWidth;
    thumbnailRect.size.height = scaledHeight;
    [sourceImage drawInRect:thumbnailRect];
    newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    if (newImage == nil)
        NSLog(@"could not scale image");
    return newImage;
}
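Once the image is resized, the UIImageJPEGRepresentation step the question asks about is straightforward. A minimal sketch, assuming an originalImage variable and an arbitrary 0.7 quality (both illustrative, not anything S3 requires):

UIImage *resized = [AppManager resizeImageToSize:CGSizeMake(800, 800) image:originalImage];
// Encode as JPEG; lower quality means a smaller file and a faster upload.
NSData *jpegData = UIImageJPEGRepresentation(resized, 0.7);
// Write the data to a file and hand that path to the S3 upload call.
NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"upload.jpg"];
[jpegData writeToFile:path atomically:YES];

A smaller file should cut the upload time roughly in proportion to the size reduction; if it's still slow after that, the bucket region is worth checking.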
I want to scale a (900 * 900) picture down to (600 * 600) on an iPhone 6 Plus.
If I use:
UIGraphicsBeginImageContextWithOptions(CGSizeMake(600, 600), NO, 0)
the generated UIImage becomes (1800 * 1800).
If I use:
UIGraphicsBeginImageContextWithOptions(CGSizeMake(200, 200), NO, 0)
or
UIGraphicsBeginImageContextWithOptions(CGSizeMake(600, 600), NO, 1)
the resulting image will be blurred, which is obviously wrong.
Is there a good way to solve this problem?
This is the code:
- (UIImage *)imageByScalingForSize:(CGSize)targetSize withSourceImage:(UIImage *)sourceImage
{
    UIImage *newImage = nil;
    // omit scaledWidth/scaledHeight calculation
    CGFloat scaledWidth = targetSize.width;
    CGFloat scaledHeight = targetSize.height;
    //UIGraphicsBeginImageContext(targetSize);
    UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0);
    CGRect thumbnailRect = CGRectZero;
    thumbnailRect.origin = CGPointMake(0, 0);
    thumbnailRect.size.width = scaledWidth;
    thumbnailRect.size.height = scaledHeight;
    [sourceImage drawInRect:thumbnailRect];
    newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
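A quick usage sketch (original is a placeholder for whatever image you start with):

UIImage *scaled = [self imageByScalingForSize:CGSizeMake(600, 600) withSourceImage:original];
// With scale 0 on a 3x device, scaled.size is {600, 600} points and scaled.scale is 3.0.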
Try this lib!
UIImage+Zoom.h
#import <UIKit/UIKit.h>

enum {
    enSvResizeScale,      // image scaled to fill
    enSvResizeAspectFit,  // image scaled to fit with fixed aspect; remainder is transparent
    enSvResizeAspectFill, // image scaled to fill with fixed aspect; some content may be clipped
};
typedef NSInteger SvResizeMode;

@interface UIImage (Zoom)
- (UIImage *)resizeImageToSize:(CGSize)newSize resizeMode:(SvResizeMode)resizeMode;
@end
UIImage+Zoom.m
#import "UIImage+Zoom.h"
#implementation UIImage (Zoom)
/*
* #brief resizeImage
* #param newsize the dimensions(pixel) of the output image
*/
- (UIImage*)resizeImageToSize:(CGSize)newSize resizeMode:(SvResizeMode)resizeMode
{
CGRect drawRect = [self caculateDrawRect:newSize resizeMode:resizeMode];
UIGraphicsBeginImageContext(newSize);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextClearRect(context, CGRectMake(0, 0, newSize.width, newSize.height));
CGContextSetInterpolationQuality(context, 0.8);
[self drawInRect:drawRect blendMode:kCGBlendModeNormal alpha:1];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
// caculate drawrect respect to specific resize mode
- (CGRect)caculateDrawRect:(CGSize)newSize resizeMode:(SvResizeMode)resizeMode
{
CGRect drawRect = CGRectMake(0, 0, newSize.width, newSize.height);
CGFloat imageRatio = self.size.width / self.size.height;
CGFloat newSizeRatio = newSize.width / newSize.height;
switch (resizeMode) {
case enSvResizeScale:
{
// scale to fill
break;
}
case enSvResizeAspectFit: // any remain area is white
{
CGFloat newHeight = 0;
CGFloat newWidth = 0;
if (newSizeRatio >= imageRatio) { // max height is newSize.height
newHeight = newSize.height;
newWidth = newHeight * imageRatio;
}
else {
newWidth = newSize.width;
newHeight = newWidth / imageRatio;
}
drawRect.size.width = newWidth;
drawRect.size.height = newHeight;
drawRect.origin.x = newSize.width / 2 - newWidth / 2;
drawRect.origin.y = newSize.height / 2 - newHeight / 2;
break;
}
case enSvResizeAspectFill:
{
CGFloat newHeight = 0;
CGFloat newWidth = 0;
if (newSizeRatio >= imageRatio) { // max height is newSize.height
newWidth = newSize.width;
newHeight = newWidth / imageRatio;
}
else {
newHeight = newSize.height;
newWidth = newHeight * imageRatio;
}
drawRect.size.width = newWidth;
drawRect.size.height = newHeight;
drawRect.origin.x = newSize.width / 2 - newWidth / 2;
drawRect.origin.y = newSize.height / 2 - newHeight / 2;
break;
}
default:
break;
}
return drawRect;
}
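A usage sketch for the category (the image name is illustrative):

UIImage *photo = [UIImage imageNamed:@"photo"];
// Aspect-fit into a 200 x 200 canvas; the unused area stays transparent.
UIImage *thumb = [photo resizeImageToSize:CGSizeMake(200, 200) resizeMode:enSvResizeAspectFit];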
If I use:
UIGraphicsBeginImageContextWithOptions(CGSizeMake(600, 600), NO, 0)
the generated UIImage becomes (1800 * 1800).
That is not quite true. It's true that the bitmap underlying the image is 1800 by 1800, but that is not the only thing about the image that is important; the UIImage also has a scale, and that scale is 3. Thus the UIImage is in fact treated (with respect to the screen geometry) as 600 by 600, plus it is triple-resolution to match the triple-resolution Plus screen, which is exactly what you want.
(The scaling will work best, of course, if you also start with the triple-resolution version of your original image. But that's a different matter.)
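A quick way to verify this, assuming result holds the image produced above:

NSLog(@"points: %@ scale: %.0f pixels: %zu x %zu",
      NSStringFromCGSize(result.size), result.scale,
      CGImageGetWidth(result.CGImage), CGImageGetHeight(result.CGImage));
// On an iPhone 6 Plus this logs: points: {600, 600} scale: 3 pixels: 1800 x 1800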
Now that the Instagram app can handle and post non-square images from within that app, I was hoping I could send non-square images to Instagram from my app using the same iPhone hooks I've been using (https://instagram.com/developer/iphone-hooks/?hl=en). However, it still seems to crop my images to a square and doesn't offer the option to expand them to their non-square size (unlike when I load a non-square photo from the library directly within the Instagram app, which lets me expand it to its original dimensions). Has anyone had any luck sending non-square images? I'm hoping there's some tweak that will make it work.
I too was hoping they would accept non-square photos with the update, but you're stuck with the old solution for posting non-square photos: matte them to square with white.
https://github.com/ShareKit/ShareKit/blob/master/Classes/ShareKit/Sharers/Services/Instagram/SHKInstagram.m
- (UIImage *)imageByScalingImage:(UIImage *)image proportionallyToSize:(CGSize)targetSize {
    UIImage *sourceImage = image;
    UIImage *newImage = nil;
    CGSize imageSize = sourceImage.size;
    CGFloat width = imageSize.width;
    CGFloat height = imageSize.height;
    CGFloat targetWidth = targetSize.width;
    CGFloat targetHeight = targetSize.height;
    CGFloat scaleFactor = 0.0;
    CGFloat scaledWidth = targetWidth;
    CGFloat scaledHeight = targetHeight;
    CGPoint thumbnailPoint = CGPointMake(0.0, 0.0);

    if (CGSizeEqualToSize(imageSize, targetSize) == NO) {
        CGFloat widthFactor = targetWidth / width;
        CGFloat heightFactor = targetHeight / height;
        if (widthFactor < heightFactor)
            scaleFactor = widthFactor;
        else
            scaleFactor = heightFactor;
        scaledWidth = width * scaleFactor;
        scaledHeight = height * scaleFactor;
        // center the image
        if (widthFactor < heightFactor) {
            thumbnailPoint.y = (targetHeight - scaledHeight) * 0.5;
        } else if (widthFactor > heightFactor) {
            thumbnailPoint.x = (targetWidth - scaledWidth) * 0.5;
        }
    }

    // this is actually the interesting part: fill the square with the letterbox color first
    UIGraphicsBeginImageContext(targetSize);
    [(UIColor *)SHKCONFIG(instagramLetterBoxColor) set];
    CGContextFillRect(UIGraphicsGetCurrentContext(), CGRectMake(0, 0, targetSize.width, targetSize.height));
    CGRect thumbnailRect = CGRectZero;
    thumbnailRect.origin = thumbnailPoint;
    thumbnailRect.size.width = scaledWidth;
    thumbnailRect.size.height = scaledHeight;
    [sourceImage drawInRect:thumbnailRect];
    newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    if (newImage == nil) NSLog(@"could not scale image");
    return newImage;
}
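A sketch of how you might call this before handing the image to Instagram (photo is a placeholder; the side uses the longer dimension so the original is never upscaled):

CGFloat side = MAX(photo.size.width, photo.size.height);
UIImage *squared = [self imageByScalingImage:photo proportionallyToSize:CGSizeMake(side, side)];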
I am very new to iOS, and the first task given to me is image cropping. If I'm using an image as a banner image and the given frame is larger or smaller than the image, my code should automatically resize the image while respecting its aspect ratio and then set it in that frame.
I did a lot of R&D and then wrote this code:
-(UIImage *)MyScaleNEwMethodwithImage:(UIImage *)image andframe:(CGRect)frame {
    float bmHeight = image.size.height;
    float bmWidth = image.size.width;
    UIImage *RecievedImage = image;
    if (bmHeight > bmWidth) {
        float ratio = frame.size.height / frame.size.width;
        float newbmHeight = ratio * bmWidth;
        float cropedHeight = (bmHeight - newbmHeight) / 2;
        if (cropedHeight < 0) {
            float ratio1 = frame.size.width / frame.size.height;
            float newbmHeight1 = (ratio1 * bmHeight);
            float cropedimageHeight1 = (bmWidth - newbmHeight1) / 2;
            CGRect cliprect = CGRectMake(cropedimageHeight1, 0, bmWidth - cropedimageHeight1, bmHeight);
            CGImageRef imref = CGImageCreateWithImageInRect([image CGImage], cliprect);
            UIImage *newSubImage = [UIImage imageWithCGImage:imref];
            CGImageRelease(imref); // release only after the UIImage has been created
            return newSubImage;
        }
        else
        {
            CGRect cliprect = CGRectMake(0, cropedHeight, bmWidth, bmHeight - cropedHeight);
            CGImageRef imref = CGImageCreateWithImageInRect([image CGImage], cliprect);
            UIImage *newSubImage = [UIImage imageWithCGImage:imref];
            CGImageRelease(imref);
            return newSubImage;
        }
    }
    else
    {
        float ratio = frame.size.height / frame.size.width;
        float newbmHeight = ratio * bmHeight;
        float cropedHeight = (bmHeight - newbmHeight) / 4;
        if (cropedHeight < 0) {
            float ratio1 = frame.size.width / frame.size.height;
            float newbmHeight1 = (ratio1 * bmWidth);
            float cropedimageHeight1 = (bmWidth - newbmHeight1) / 2;
            UIImageView *DummyImage = [[UIImageView alloc] initWithFrame:CGRectMake(0, cropedimageHeight1, bmWidth, (bmHeight - cropedimageHeight1))];
            [DummyImage setImage:RecievedImage];
            CGImageRef imageRef = CGImageCreateWithImageInRect([DummyImage.image CGImage], CGRectMake(0, cropedimageHeight1 / 2, bmWidth / 2, (bmHeight - cropedimageHeight1) / 2));
            // create the UIImage before releasing the CGImage, or use the UIImage wherever you like
            UIImage *ScaledImage = [UIImage imageWithCGImage:imageRef];
            [DummyImage setImage:ScaledImage];
            CGImageRelease(imageRef);
            return ScaledImage;
        } else {
            UIImageView *DummyImage = [[UIImageView alloc] initWithFrame:CGRectMake(cropedHeight, 0, bmWidth - cropedHeight, bmHeight)];
            [DummyImage setImage:RecievedImage];
            CGImageRef imageRef = CGImageCreateWithImageInRect([DummyImage.image CGImage], CGRectMake(cropedHeight, 2 * cropedHeight, (bmWidth - cropedHeight), bmHeight / 2));
            UIImage *ScaledImage = [UIImage imageWithCGImage:imageRef];
            [DummyImage setImage:ScaledImage];
            CGImageRelease(imageRef);
            return ScaledImage;
        }
    }
}
In my frame I'm getting the required image, but when the screen changes I can see the full image. I want to cut off the unwanted part of the image.
This piece of code may help you out:
-(CGRect)cropImage:(CGRect)frame
{
    NSAssert(self.contentMode == UIViewContentModeScaleAspectFit, @"content mode should be aspect fit");
    CGFloat wScale = self.bounds.size.width / self.image.size.width;
    CGFloat hScale = self.bounds.size.height / self.image.size.height;
    float x, y, w, h, offset;
    if (wScale < hScale) {
        offset = (self.bounds.size.height - (self.image.size.height * wScale)) / 2;
        x = frame.origin.x / wScale;
        y = (frame.origin.y - offset) / wScale;
        w = frame.size.width / wScale;
        h = frame.size.height / wScale;
    } else {
        offset = (self.bounds.size.width - (self.image.size.width * hScale)) / 2;
        x = (frame.origin.x - offset) / hScale;
        y = frame.origin.y / hScale;
        w = frame.size.width / hScale;
        h = frame.size.height / hScale;
    }
    return CGRectMake(x, y, w, h);
}
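Since the method returns a rect in image coordinates, usage from the owning image view might look like this (a sketch; the method reads as a UIImageView category, and imageView and the selection values are illustrative):

CGRect selection = CGRectMake(20, 30, 100, 100); // rect in the view's coordinate space
CGRect imageRect = [imageView cropImage:selection];
CGImageRef croppedRef = CGImageCreateWithImageInRect(imageView.image.CGImage, imageRect);
UIImage *cropped = [UIImage imageWithCGImage:croppedRef];
CGImageRelease(croppedRef);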
The code I referred to for cropping the image to an aspect ratio is:
typedef enum {
    MGImageResizeCrop,
    MGImageResizeCropStart,
    MGImageResizeCropEnd,
    MGImageResizeScale
} MGImageResizingMethod;
- (UIImage *)imageToFitSize:(CGSize)fitSize method:(MGImageResizingMethod)resizeMethod
{
    float imageScaleFactor = 1.0;
#if __IPHONE_OS_VERSION_MAX_ALLOWED >= 40000
    if ([self respondsToSelector:@selector(scale)]) {
        imageScaleFactor = [self scale];
    }
#endif

    float sourceWidth = [self size].width * imageScaleFactor;
    float sourceHeight = [self size].height * imageScaleFactor;
    float targetWidth = fitSize.width;
    float targetHeight = fitSize.height;
    BOOL cropping = !(resizeMethod == MGImageResizeScale);

    // Calculate aspect ratios
    float sourceRatio = sourceWidth / sourceHeight;
    float targetRatio = targetWidth / targetHeight;

    // Determine what side of the source image to use for proportional scaling
    BOOL scaleWidth = (sourceRatio <= targetRatio);

    // Deal with the case of just scaling proportionally to fit, without cropping
    scaleWidth = (cropping) ? scaleWidth : !scaleWidth;

    // Proportionally scale source image
    float scalingFactor, scaledWidth, scaledHeight;
    if (scaleWidth) {
        scalingFactor = 1.0 / sourceRatio;
        scaledWidth = targetWidth;
        scaledHeight = round(targetWidth * scalingFactor);
    } else {
        scalingFactor = sourceRatio;
        scaledWidth = round(targetHeight * scalingFactor);
        scaledHeight = targetHeight;
    }
    float scaleFactor = scaledHeight / sourceHeight;

    // Calculate compositing rectangles
    CGRect sourceRect, destRect;
    if (cropping) {
        destRect = CGRectMake(0, 0, targetWidth, targetHeight);
        float destX, destY;
        if (resizeMethod == MGImageResizeCrop) {
            // Crop center
            destX = round((scaledWidth - targetWidth) / 2.0);
            destY = round((scaledHeight - targetHeight) / 2.0);
        } else if (resizeMethod == MGImageResizeCropStart) {
            // Crop top or left (prefer top)
            if (scaleWidth) {
                // Crop top
                destX = 0.0;
                destY = 0.0;
            } else {
                // Crop left
                destX = 0.0;
                destY = round((scaledHeight - targetHeight) / 2.0);
            }
        } else if (resizeMethod == MGImageResizeCropEnd) {
            // Crop bottom or right
            if (scaleWidth) {
                // Crop bottom
                destX = round((scaledWidth - targetWidth) / 2.0);
                destY = round(scaledHeight - targetHeight);
            } else {
                // Crop right
                destX = round(scaledWidth - targetWidth);
                destY = round((scaledHeight - targetHeight) / 2.0);
            }
        }
        sourceRect = CGRectMake(destX / scaleFactor, destY / scaleFactor,
                                targetWidth / scaleFactor, targetHeight / scaleFactor);
    } else {
        sourceRect = CGRectMake(0, 0, sourceWidth, sourceHeight);
        destRect = CGRectMake(0, 0, scaledWidth, scaledHeight);
    }

    // Create appropriately modified image.
    UIImage *image = nil;
#if __IPHONE_OS_VERSION_MAX_ALLOWED >= 40000
    if ([[[UIDevice currentDevice] systemVersion] floatValue] >= 4.0) {
        UIGraphicsBeginImageContextWithOptions(destRect.size, NO, 0.0); // 0.0 for scale means "correct scale for device's main screen".
        CGImageRef sourceImg = CGImageCreateWithImageInRect([self CGImage], sourceRect); // cropping happens here.
        image = [UIImage imageWithCGImage:sourceImg scale:0.0 orientation:self.imageOrientation]; // create cropped UIImage.
        [image drawInRect:destRect]; // the actual scaling happens here, and orientation is taken care of automatically.
        CGImageRelease(sourceImg);
        image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
    }
#endif
    if (!image) {
        // Try older method.
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(NULL, fitSize.width, fitSize.height, 8, (fitSize.width * 4),
                                                     colorSpace, kCGImageAlphaPremultipliedLast);
        CGImageRef sourceImg = CGImageCreateWithImageInRect([self CGImage], sourceRect);
        CGContextDrawImage(context, destRect, sourceImg);
        CGImageRelease(sourceImg);
        CGImageRef finalImage = CGBitmapContextCreateImage(context);
        CGContextRelease(context);
        CGColorSpaceRelease(colorSpace);
        image = [UIImage imageWithCGImage:finalImage];
        CGImageRelease(finalImage);
    }
    return image;
}
Check if this helps you.
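A usage sketch (this reads as a UIImage category method; photo and the banner size are illustrative):

// Center-crop the photo so it exactly fills a 320 x 120 banner.
UIImage *banner = [photo imageToFitSize:CGSizeMake(320, 120) method:MGImageResizeCrop];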
I need to find the biggest centered square of a portrait or landscape image and scale it to a given size.
E.g. from an image of size 1200x800, I need to get the centered 800x800 square and scale it down to 300x300.
I found an answer to this question on Stack Overflow that has been widely copied. However, that answer is incorrect, so I want to post the correct answer, which is as follows:
+ (UIImage *)cropBiggestCenteredSquareImageFromImage:(UIImage *)image withSide:(CGFloat)side
{
    // Get size of current image
    CGSize size = [image size];
    if (size.width == size.height && size.width == side) {
        return image;
    }

    CGSize newSize = CGSizeMake(side, side);
    double ratio;
    double delta;
    CGPoint offset;

    // make a new square size, that is the resized image's width
    CGSize sz = CGSizeMake(newSize.width, newSize.width);

    // figure out if the picture is landscape or portrait, then
    // calculate scale factor and offset
    if (image.size.width > image.size.height) {
        ratio = newSize.height / image.size.height;
        delta = ratio * (image.size.width - image.size.height);
        offset = CGPointMake(delta / 2, 0);
    } else {
        ratio = newSize.width / image.size.width;
        delta = ratio * (image.size.height - image.size.width);
        offset = CGPointMake(0, delta / 2);
    }

    // make the final clipping rect based on the calculated values
    CGRect clipRect = CGRectMake(-offset.x, -offset.y,
                                 (ratio * image.size.width),
                                 (ratio * image.size.height));

    // start a new context, with scale factor 0.0 so retina displays get
    // high quality image
    if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)]) {
        UIGraphicsBeginImageContextWithOptions(sz, YES, 0.0);
    } else {
        UIGraphicsBeginImageContext(sz);
    }
    UIRectClip(clipRect);
    [image drawInRect:clipRect];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
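For the 1200x800 example from the question, the call would look like this (ImageUtils is a hypothetical class holding the method):

UIImage *square = [ImageUtils cropBiggestCenteredSquareImageFromImage:photo withSide:300];
// For a 1200 x 800 photo: ratio = 300/800, so clipRect is (-75, 0, 450, 300)
// and the central 800 x 800 region lands exactly in the 300 x 300 context.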
The incorrect answer I found earlier is as follows:
+ (UIImage *)cropBiggestCenteredSquareImageFromImage:(UIImage *)image withSide:(CGFloat)side
{
    // Get size of current image
    CGSize size = [image size];
    if (size.width == size.height && size.width == side) {
        return image;
    }

    CGSize newSize = CGSizeMake(side, side);
    double ratio;
    double delta;
    CGPoint offset;

    // make a new square size, that is the resized image's width
    CGSize sz = CGSizeMake(newSize.width, newSize.width);

    // figure out if the picture is landscape or portrait, then
    // calculate scale factor and offset
    if (image.size.width > image.size.height) {
        ratio = newSize.width / image.size.width;
        delta = (ratio * image.size.width - ratio * image.size.height);
        offset = CGPointMake(delta / 2, 0);
    } else {
        ratio = newSize.width / image.size.height;
        delta = (ratio * image.size.height - ratio * image.size.width);
        offset = CGPointMake(0, delta / 2);
    }

    // make the final clipping rect based on the calculated values
    CGRect clipRect = CGRectMake(-offset.x, -offset.y,
                                 (ratio * image.size.width) + delta,
                                 (ratio * image.size.height) + delta);

    // start a new context, with scale factor 0.0 so retina displays get
    // high quality image
    if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)]) {
        UIGraphicsBeginImageContextWithOptions(sz, YES, 0.0);
    } else {
        UIGraphicsBeginImageContext(sz);
    }
    UIRectClip(clipRect);
    [image drawInRect:clipRect];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
The problem with this code is that it does not crop correctly.
Both versions can be tried on the following image:
https://s3.amazonaws.com/anandprakash/ImageWithPixelGrid.jpg
The correct algorithm produces the following image from the base image above:
https://s3.amazonaws.com/anandprakash/ScreenshotCorrectAlgo.png
The wrong algorithm produces the following image from the base image above; notice the extra 50px of width on each side:
https://s3.amazonaws.com/anandprakash/ScreenshotWrongAlgo.png
Same answer above as a Swift extension on UIImage:
private extension UIImage {
    func cropBiggestCenteredSquareImage(withSide side: CGFloat) -> UIImage {
        if self.size.height == side && self.size.width == side {
            return self
        }

        let newSize = CGSizeMake(side, side)
        let ratio: CGFloat
        let delta: CGFloat
        let offset: CGPoint
        if self.size.width > self.size.height {
            ratio = newSize.height / self.size.height
            delta = ratio * (self.size.width - self.size.height)
            offset = CGPointMake(delta / 2, 0)
        }
        else {
            ratio = newSize.width / self.size.width
            delta = ratio * (self.size.height - self.size.width)
            offset = CGPointMake(0, delta / 2)
        }

        let clipRect = CGRectMake(-offset.x, -offset.y, ratio * self.size.width, ratio * self.size.height)
        if UIScreen.mainScreen().respondsToSelector(#selector(NSDecimalNumberBehaviors.scale)) {
            UIGraphicsBeginImageContextWithOptions(newSize, true, 0.0)
        } else {
            UIGraphicsBeginImageContext(newSize)
        }
        UIRectClip(clipRect)
        self.drawInRect(clipRect)
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage
    }
}
I have a UIImageView with content mode Aspect Fit and size 220x155. I'm dynamically inserting different images at different resolutions, all larger than the UIImageView. Since the content mode is set to Aspect Fit, the image is scaled proportionally to fit the UIImageView.
My problem is that if, for instance, the image inside the UIImageView is scaled to 220x100, I would like the UIImageView to shrink from a height of 155 to 100 as well, to avoid space between my elements.
How can I do this?
If I understood you correctly, it would be something like this: get the image size:
UIImage *img = [UIImage imageNamed:@"someImage.png"];
CGSize imgSize = img.size;
Calculate the scale ratio based on the width:
float ratio = yourImageView.frame.size.width / imgSize.width;
Check the scaled height (using the same ratio to keep the aspect):
float scaledHeight = imgSize.height * ratio;
if (scaledHeight < yourImageView.frame.size.height)
{
    // update the height of your imageView frame with scaledHeight
}
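Filling in that last comment, the frame update could be (a sketch using the variables from above):

CGRect newFrame = yourImageView.frame;
newFrame.size.height = scaledHeight;
yourImageView.frame = newFrame;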
Based on Michael's answer, here's a complete method
+ (CGSize)makeSize:(CGSize)originalSize fitInSize:(CGSize)boxSize
{
    float widthScale = boxSize.width / originalSize.width;
    float heightScale = boxSize.height / originalSize.height;
    float scale = MIN(widthScale, heightScale);
    CGSize newSize = CGSizeMake(originalSize.width * scale, originalSize.height * scale);
    return newSize;
}
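For example (SizeHelper is a hypothetical home for the method):

// A 1000 x 500 image in a 220 x 155 box: scale = MIN(0.22, 0.31) = 0.22,
// so the result is {220, 110}.
CGSize fitted = [SizeHelper makeSize:CGSizeMake(1000, 500) fitInSize:CGSizeMake(220, 155)];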
Edit for Steven Stefanik's answer: this method breaks when originalSize is {0, 0}. Maybe consider these changes:
-(CGSize)makeSize:(CGSize)originalSize fitInSize:(CGSize)boxSize
{
    // guard against a zero dimension to avoid dividing by zero
    if (originalSize.height == 0) {
        originalSize.height = boxSize.height;
    }
    if (originalSize.width == 0) {
        originalSize.width = boxSize.width;
    }
    float widthScale = boxSize.width / originalSize.width;
    float heightScale = boxSize.height / originalSize.height;
    float scale = MIN(widthScale, heightScale);
    CGSize newSize = CGSizeMake(originalSize.width * scale, originalSize.height * scale);
    return newSize;
}
Last edit:
It seems I misunderstood the question.
To get the actual size you could try:
CGSize imageSize = img.size;
CGSize viewSize = imageView.frame.size;
CGSize actualSize;
if (imageSize.width > imageSize.height) {
    actualSize.width = imageSize.width > viewSize.width ? viewSize.width : imageSize.width;
    actualSize.height = imageSize.height * actualSize.width / imageSize.width;
}
else {
    actualSize.height = imageSize.height > viewSize.height ? viewSize.height : imageSize.height;
    actualSize.width = imageSize.width * actualSize.height / imageSize.height;
}