Resize UIImage with aspect ratio? - ios

I'm using this code to resize an image on the iPhone:
CGRect screenRect = CGRectMake(0, 0, 320.0, 480.0);
UIGraphicsBeginImageContext(screenRect.size);
[value drawInRect:screenRect blendMode:kCGBlendModePlusDarker alpha:1];
UIImage *tmpValue = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Which is working great, as long as the aspect ratio of the image matches that of the new resized image. I'd like to modify this so that it keeps the correct aspect ratio and just puts a black background anywhere the image doesn't show up. So I would still end up with a 320x480 image but with black on the top and bottom or sides, depending on the original image size.
Is there an easy way to do this similar to what I'm doing? Thanks!

After you set your screen rect, do something like the following to decide what rect to draw the image in:
float hfactor = value.size.width / screenRect.size.width;
float vfactor = value.size.height / screenRect.size.height;
float factor = fmax(hfactor, vfactor);
// Divide the size by the greater of the vertical or horizontal shrinkage factor
float newWidth = value.size.width / factor;
float newHeight = value.size.height / factor;
// Then figure out if you need to offset it to center vertically or horizontally
float leftOffset = (screenRect.size.width - newWidth) / 2;
float topOffset = (screenRect.size.height - newHeight) / 2;
CGRect newRect = CGRectMake(leftOffset, topOffset, newWidth, newHeight);
If you don't want to enlarge images smaller than the screenRect, make sure factor is greater than or equal to one (e.g. factor = fmax(factor, 1)).
To get the black background, you would probably just want to set the context color to black and call fillRect before drawing the image.
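Putting it together, a minimal sketch of the whole draw (assuming value is the source UIImage, as in the question, and newRect is the aspect-correct rect computed above) might look like this:
CGRect screenRect = CGRectMake(0, 0, 320.0, 480.0);
UIGraphicsBeginImageContext(screenRect.size);
CGContextRef context = UIGraphicsGetCurrentContext();
// Fill the whole context with black so the letterbox/pillarbox areas come out black
CGContextSetFillColorWithColor(context, [UIColor blackColor].CGColor);
CGContextFillRect(context, screenRect);
// Draw the image into the centered, aspect-correct rect
[value drawInRect:newRect];
UIImage *tmpValue = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();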

I know this is very old, but thanks for that post -- it redirected me from trying to scale the image to simply drawing it instead. In case it is of benefit to anyone, I made an extension class that I'll throw in here. It allows you to resize an image like this:
UIImage imgNew = img.Fit(40.0f, 40.0f);
I don't need a fill option, but it could easily be extended to support Fill as well.
using CoreGraphics;
using System;
using UIKit;

namespace SomeApp.iOS.Extensions
{
    public static class UIImageExtensions
    {
        public static CGSize Fit(this CGSize sizeImage,
            CGSize sizeTarget)
        {
            CGSize ret;
            float fw;
            float fh;
            float f;
            fw = (float) (sizeTarget.Width / sizeImage.Width);
            fh = (float) (sizeTarget.Height / sizeImage.Height);
            f = Math.Min(fw, fh);
            ret = new CGSize
            {
                Width = sizeImage.Width * f,
                Height = sizeImage.Height * f
            };
            return ret;
        }

        public static UIImage Fit(this UIImage image,
            float width,
            float height,
            bool opaque = false,
            float scale = 1.0f)
        {
            UIImage ret;
            ret = image.Fit(new CGSize(width, height),
                opaque,
                scale);
            return ret;
        }

        public static UIImage Fit(this UIImage image,
            CGSize sizeTarget,
            bool opaque = false,
            float scale = 1.0f)
        {
            CGSize sizeNewImage;
            CGSize size;
            UIImage ret;
            size = image.Size;
            sizeNewImage = size.Fit(sizeTarget);
            // Use the caller-supplied scale for the bitmap context
            UIGraphics.BeginImageContextWithOptions(sizeNewImage,
                opaque,
                scale);
            using (CGContext context = UIGraphics.GetCurrentContext())
            {
                // Core Graphics draws bottom-up by default; flip the context so the image is right side up
                context.ScaleCTM(1, -1);
                context.TranslateCTM(0, -sizeNewImage.Height);
                context.DrawImage(new CGRect(CGPoint.Empty, sizeNewImage),
                    image.CGImage);
                ret = UIGraphics.GetImageFromCurrentImageContext();
            }
            UIGraphics.EndImageContext();
            return ret;
        }
    }
}
As in the post above, it starts a new image context, figures out the aspect-correct size for the image, and then paints into the new image. If you haven't spent any Swift/Xcode dev time, UIGraphics is a bit backwards compared to most systems I work with, but not bad. One issue is that bitmaps by default paint bottom to top. To get around that,
context.ScaleCTM(1, -1);
context.TranslateCTM(0, -sizeNewImage.Height);
This changes the orientation of drawing to the more common top-left to bottom-right, but then you need to move the origin as well, hence the TranslateCTM.
Hopefully, it saves someone some time.
Cheers

Related

How to resize and layout image/attachment in TextKit?

I'm building a magazine app with TextKit; here is a TextKit demo (check out the developer branch). It loads an NSAttributedString from an rtfd file as the text storage object, all the pages have the same size with a custom NSTextContainer object, and the pagination feature is done.
When I tried to add an image to the source rtfd file, the image attachment showed in the UITextView directly without any additional code, which is great! However, some big images get clipped by default by the text view frame. I tried all kinds of delegate methods and overrides to resize and re-lay it out, but failed in the end.
- (void)setAttachmentSize:(CGSize)attachmentSize forGlyphRange:(NSRange)glyphRange
- (CGSize)attachmentSizeForGlyphAtIndex:(NSUInteger)glyphIndex;
Judging from the call stack, the setter method is called during the glyph layout process, and the getter is called during the glyph drawing process.
- (BOOL)layoutManager:(NSLayoutManager *)layoutManager shouldSetLineFragmentRect:(inout CGRect *)lineFragmentRect lineFragmentUsedRect:(inout CGRect *)lineFragmentUsedRect baselineOffset:(inout CGFloat *)baselineOffset inTextContainer:(NSTextContainer *)textContainer forGlyphRange:(NSRange)glyphRange
{
    NSTextAttachment *attachment = ...;
    NSUInteger characterIndex = [layoutManager characterIndexForGlyphAtIndex:glyphRange.location];
    UIImage *image = [attachment imageForBounds:*lineFragmentRect textContainer:textContainer characterIndex:characterIndex];
    CGSize imageSize = GetScaledToFitSize(image.size, self.textContainerSize);
    CGFloat ratio = imageSize.width / imageSize.height;
    CGRect rect = *lineFragmentRect, usedRect = *lineFragmentUsedRect;
    CGFloat dy = *baselineOffset - imageSize.height;
    if (dy > 0) {
        *baselineOffset -= dy;
        usedRect.size.height -= dy;
        usedRect.size.width = ratio * usedRect.size.height;
    }
    if (!CGRectContainsRect(usedRect, rect)) {
        if (rect.size.height > usedRect.size.height) {
            *baselineOffset -= rect.size.height - usedRect.size.height;
            rect.size.height = usedRect.size.height;
            rect.size.width = ratio * usedRect.size.height;
        }
        if (rect.size.width > usedRect.size.width) {
            //...
        }
    }
    *lineFragmentRect = rect;
    *lineFragmentUsedRect = usedRect;
    return YES;
}
This delegate method can resize the layout rect, but it does not affect the final width or image scale. I tried several solutions with no luck. There don't seem to be many threads about images in TextKit on SO, nor much Apple example code.
I have done similar work to auto-resize image attachments. How about handling it just after you get the attributed string?
That is, enumerate the original string for NSAttachmentAttributeName and replace each attachment with a subclass of NSTextAttachment that implements the NSTextAttachmentContainer protocol (a sketch of this replacement step follows the code below).
- (CGRect)attachmentBoundsForTextContainer:(nullable NSTextContainer *)textContainer proposedLineFragment:(CGRect)lineFrag glyphPosition:(CGPoint)position characterIndex:(NSUInteger)charIndex {
    CGFloat lineWidth = CGRectGetWidth(lineFrag);
    CGSize size = self.bounds.size;
    size.height *= (size.width > 0) ? (lineWidth / size.width) : 0;
    size.width = lineWidth;
    return CGRectMake(0, 0, size.width, size.height);
}
The code above resizes the attachment to fit the line width; you don't need to resize the image itself, since it is automatically scaled to those bounds when it is drawn.
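For the replacement step mentioned earlier, a minimal sketch could look like the following, where FitWidthAttachment is a hypothetical NSTextAttachment subclass containing the attachmentBoundsForTextContainer: override shown above, and attributedString is the string loaded from the rtfd file:
NSMutableAttributedString *text = [attributedString mutableCopy];
[attributedString enumerateAttribute:NSAttachmentAttributeName
                             inRange:NSMakeRange(0, attributedString.length)
                             options:0
                          usingBlock:^(NSTextAttachment *attachment, NSRange range, BOOL *stop) {
    if (![attachment isKindOfClass:[NSTextAttachment class]]) return;
    // Carry the original attachment's content over to the resizing subclass
    FitWidthAttachment *replacement = [[FitWidthAttachment alloc] init];
    replacement.fileWrapper = attachment.fileWrapper;
    replacement.image = attachment.image;
    replacement.bounds = attachment.bounds;
    [text addAttribute:NSAttachmentAttributeName value:replacement range:range];
}];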
Hope this helps.

Load an image in UIImageView and resize the container - swift

I have an S3 link from which I need to load an image into a UIImageView. I don't have the dimensions of the image. I have defined mainImageView (with leading and trailing constraints) in the storyboard.
In the code, I am loading the image using:
mainImageView.setImageWith(URL(string: ("https:" + (content?.imagePath)!)), placeholderImage: nil)
Is there any way of resizing the container once the image loads? I want the container to take on the dimensions (height and width) of the image. I heard that Android has something called wrap_content to achieve this, but I am unable to find an equivalent in iOS.
If you are using Auto Layout, you don't actually have to do much, as UIImageView has an intrinsic content size that makes it take the width and height of its image. In your .xib or .storyboard you just need to position the image view so that it can resolve its position (horizontal and vertical). For the size you can provide a default image (otherwise Auto Layout will show an error).
When you change the image at runtime, the image view will take the size of its image.
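In other words, once the constraints pin only the position, assigning the new image should be enough. A minimal sketch, assuming a download completion handler that hands you downloadedImage:
// mainImageView has only position constraints; its intrinsic content size supplies the width and height
self.mainImageView.image = downloadedImage;
[self.view setNeedsLayout];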
You can achieve it using:
imgView.frame = [self frameForImage:self.image inImageViewAspectFit:imgView];
function implementation:
-(CGRect)frameForImage:(UIImage*)image inImageViewAspectFit:(UIImageView*)imageView
{
    float imageRatio = image.size.width / image.size.height;
    float viewRatio = imageView.frame.size.width / imageView.frame.size.height;
    if (imageRatio < viewRatio) {
        float scale = imageView.frame.size.height / image.size.height;
        float width = scale * image.size.width;
        float topLeftX = (imageView.frame.size.width - width) * 0.5;
        return CGRectMake(topLeftX, 0, width, imageView.frame.size.height);
    } else {
        float scale = imageView.frame.size.width / image.size.width;
        float height = scale * image.size.height;
        float topLeftY = (imageView.frame.size.height - height) * 0.5;
        return CGRectMake(0, topLeftY, imageView.frame.size.width, height);
    }
}
You can resize the image view by setting its frame to the image's size:
mainImageView.frame = CGRect(origin: mainImageView.frame.origin, size: mainImageView.image!.size)
If you are using Auto Layout, you need to modify the height and width constraints accordingly, or let the layout engine do its job.
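If you go the explicit-constraint route, a minimal Objective-C sketch might look like this (imageWidthConstraint and imageHeightConstraint are hypothetical outlets to width and height constraints on the image view):
UIImage *loadedImage = self.mainImageView.image;
if (loadedImage) {
    self.imageWidthConstraint.constant = loadedImage.size.width;
    self.imageHeightConstraint.constant = loadedImage.size.height;
    [self.view layoutIfNeeded]; // apply the new size
}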

Zoom a rotated image inside scroll view to fit (fill) frame of overlay rect

Through this question and answer I've now got a working means of detecting when an arbitrarily rotated image no longer completely covers a cropping rect.
The next step is to figure out how to correctly adjust its containing scroll view's zoom to ensure that there are no empty spaces inside the cropping rect. To clarify, I want to enlarge (zoom in) the image; the crop rect should remain un-transformed.
The layout hierarchy looks like this:
containing UIScrollView
UIImageView (this gets arbitrarily rotated)
crop rect overlay view
... where the UIImageView can also be zoomed and panned inside the scrollView.
There are 4 gesture events that occur that need to be accounted for:
Pan gesture (done): accomplished by detecting when it's been panned incorrectly and resetting the contentOffset.
Rotation CGAffineTransform
Scroll view zoom
Adjustment of the cropping rect overlay frame
As far as I can tell, I should be able to use the same logic for 2, 3, and 4 to adjust the zoomScale of the scroll view to make the image fit properly.
How do I properly calculate the zoom ratio necessary to make the rotated image fit perfectly inside the crop rect?
To better illustrate what I'm trying to accomplish, here's an example of the incorrect size:
I need to calculate the zoom ratio necessary to make it look like this:
Here's the code I've got so far using Oluseyi's solution below. It works when the rotation angle is minor (e.g. less than 1 radian), but anything over that and it goes really wonky.
CGRect visibleRect = [_scrollView convertRect:_scrollView.bounds toView:_imageView];
CGRect cropRect = _cropRectView.frame;
CGFloat rotationAngle = fabs(self.rotationAngle);
CGFloat a = visibleRect.size.height * sinf(rotationAngle);
CGFloat b = visibleRect.size.width * cosf(rotationAngle);
CGFloat c = visibleRect.size.height * cosf(rotationAngle);
CGFloat d = visibleRect.size.width * sinf(rotationAngle);
CGFloat zoomDiff = MAX(cropRect.size.width / (a + b), cropRect.size.height / (c + d));
CGFloat newZoomScale = (zoomDiff > 1) ? zoomDiff : 1.0 / zoomDiff;
[UIView animateWithDuration:0.2
                      delay:0.05
                    options:0
                 animations:^{
                     [self centerToCropRect:[self convertRect:cropRect toView:self.zoomingView]];
                     _scrollView.zoomScale = _scrollView.zoomScale * newZoomScale;
                 } completion:^(BOOL finished) {
                     if (![self rotatedView:_imageView containsViewCompletely:_cropRectView])
                     {
                         // Damn, it's still broken - this happens a lot
                     }
                     else
                     {
                         // Woo! Fixed
                     }
                     _didDetectBadRotation = NO;
                 }];
Note I'm using AutoLayout which makes frames and bounds goofy.
Assume your image rectangle (blue in the diagram) and crop rectangle (red) have the same aspect ratio and center. When rotated, the image rectangle now has a bounding rectangle (green) which is what you want your crop scaled to (effectively, by scaling down the image).
To scale effectively, you need to know the dimensions of the new bounding rectangle and use a scale factor that fits the crop rect into it. The dimensions of the bounding rectangle are rather obviously
(a + b) x (c + d)
Notice that each segment a, b, c, d is either the adjacent or opposite side of a right triangle formed by the bounding rect and the rotated image rect.
a = image_rect_height * sin(rotation_angle)
b = image_rect_width * cos(rotation_angle)
c = image_rect_width * sin(rotation_angle)
d = image_rect_height * cos(rotation_angle)
Your scale factor is simply
MAX(crop_rect_width / (a + b), crop_rect_height / (c + d))
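In code, the calculation described above might look something like this (a sketch; imageRect, cropRect, and rotationAngle are assumed to be defined as in the question):
CGFloat angle = fabs(rotationAngle);
CGFloat boundingWidth = imageRect.size.height * sinf(angle) + imageRect.size.width * cosf(angle);  // a + b
CGFloat boundingHeight = imageRect.size.width * sinf(angle) + imageRect.size.height * cosf(angle); // c + d
CGFloat scaleFactor = MAX(cropRect.size.width / boundingWidth, cropRect.size.height / boundingHeight);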
Here's a reference diagram:
Fill frame of overlay rect:
For a square crop you need to know the new bounds of the rotated image that will fill the crop view.
Let's take a look at the reference diagram:
You need to find the altitude of the right triangle (image number 2 in the diagram). Both altitudes are equal.
CGFloat sinAlpha = sin(alpha);
CGFloat cosAlpha = cos(alpha);
CGFloat hypotenuse = /* calculate */;
CGFloat altitude = hypotenuse * sinAlpha * cosAlpha;
Then you need to calculate the new width for the rotated image and the desired scale factor as follows:
CGFloat newWidth = previousWidth + altitude * 2;
CGFloat scale = newWidth / previousWidth;
I have implemented this method here.
I will answer with sample code, but basically this problem becomes really easy if you think in the rotated view's coordinate system.
UIView* container = [[UIView alloc] initWithFrame:CGRectMake(80, 200, 100, 100)];
container.backgroundColor = [UIColor blueColor];
UIView* content2 = [[UIView alloc] initWithFrame:CGRectMake(-50, -50, 150, 150)];
content2.backgroundColor = [[UIColor greenColor] colorWithAlphaComponent:0.5];
[container addSubview:content2];
[self.view setBackgroundColor:[UIColor blackColor]];
[self.view addSubview:container];
[container.layer setSublayerTransform:CATransform3DMakeRotation(M_PI / 8.0, 0, 0, 1)];
//And now the calculations
CGRect containerFrameInContentCoordinates = [content2 convertRect:container.bounds fromView:container];
CGRect unionBounds = CGRectUnion(content2.bounds, containerFrameInContentCoordinates);
CGFloat midX = CGRectGetMidX(content2.bounds);
CGFloat midY = CGRectGetMidY(content2.bounds);
CGFloat scaleX1 = (-1 * CGRectGetMinX(unionBounds) + midX) / midX;
CGFloat scaleX2 = (CGRectGetMaxX(unionBounds) - midX) / midX;
CGFloat scaleY1 = (-1 * CGRectGetMinY(unionBounds) + midY) / midY;
CGFloat scaleY2 = (CGRectGetMaxY(unionBounds) - midY) / midY;
CGFloat scaleX = MAX(scaleX1, scaleX2);
CGFloat scaleY = MAX(scaleY1, scaleY2);
CGFloat scale = MAX(scaleX, scaleY);
content2.transform = CGAffineTransformScale(content2.transform, scale, scale);

Table View UIImage rendering issue

I'm running into a rendering issue with my tableView UIImages and was wondering if anyone has encountered the same problem and knows how to fix it.
Here is my cellForRowAtIndexPath
-(UITableViewCell *)tableView:(UITableView *)tableView
cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
cell.textLabel.text = exerciseDisplayName;
cell.textLabel.numberOfLines = 0;
cell.textLabel.lineBreakMode = NSLineBreakByWordWrapping;
[tableView setSeparatorInset:UIEdgeInsetsZero];
UtilityMethods *commonMethods = [[UtilityMethods alloc]init];
UIImage *rowImage = [commonMethods imageForRow:tempPlaceholder.bodyPart];
cell.imageView.image = rowImage;
return cell;
}
Here is my height for row.
-(CGFloat)tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath
{
return 96;
}
There are lots of lines and squiggles in the images in the table. I was wondering if anyone knows of any UIImage properties that I might need to apply to fix the problem. Increasing the row height fixes the problem at the expense of making the table rows taller; the value that seems to work is 128 in heightForRow, and with 128 the squiggles are much less noticeable. I'm pretty sure this has something to do with how iOS is rendering the image. I've taken an image and resized it to 76x76 using Microsoft Paint just to see if I would get the same problem, and those images appear just fine without all the squiggles. The images are in .png format; the original size is 1024x1024, and I've just resized them down as needed. If anyone has any tips or advice on how to fix this, I'd really appreciate it.
You are going to need to resample the image to the size you need. Viewing a large image in a small space looks rather bad on iOS devices (on most devices, really), but if you use the built-in functions to create a new UIImage of the proper size, everything looks much better. Scaling down a UIImage at display time will always look worse than creating a new image of the proper size and displaying that. The way to do this is as follows (taken from here):
- (UIImage*)imageByScalingAndCroppingForSize:(CGSize)targetSize
{
    UIImage *sourceImage = self;
    UIImage *newImage = nil;
    CGSize imageSize = sourceImage.size;
    CGFloat width = imageSize.width;
    CGFloat height = imageSize.height;
    CGFloat targetWidth = targetSize.width;
    CGFloat targetHeight = targetSize.height;
    CGFloat scaleFactor = 0.0;
    CGFloat scaledWidth = targetWidth;
    CGFloat scaledHeight = targetHeight;
    CGPoint thumbnailPoint = CGPointMake(0.0, 0.0);
    if (CGSizeEqualToSize(imageSize, targetSize) == NO)
    {
        CGFloat widthFactor = targetWidth / width;
        CGFloat heightFactor = targetHeight / height;
        if (widthFactor > heightFactor)
        {
            scaleFactor = widthFactor; // scale to fit height
        }
        else
        {
            scaleFactor = heightFactor; // scale to fit width
        }
        scaledWidth = width * scaleFactor;
        scaledHeight = height * scaleFactor;
        // center the image
        if (widthFactor > heightFactor)
        {
            thumbnailPoint.y = (targetHeight - scaledHeight) * 0.5;
        }
        else
        {
            if (widthFactor < heightFactor)
            {
                thumbnailPoint.x = (targetWidth - scaledWidth) * 0.5;
            }
        }
    }
    UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0); // this will crop; scale 0 uses the screen scale
    CGRect thumbnailRect = CGRectZero;
    thumbnailRect.origin = thumbnailPoint;
    thumbnailRect.size.width = scaledWidth;
    thumbnailRect.size.height = scaledHeight;
    [sourceImage drawInRect:thumbnailRect];
    newImage = UIGraphicsGetImageFromCurrentImageContext();
    if (newImage == nil)
    {
        NSLog(@"could not scale image");
    }
    // pop the context to get back to the default
    UIGraphicsEndImageContext();
    return newImage;
}
That function does a bit more than you are looking for, but you should be able to cut it down to only what you need.
Make sure to use the UIGraphicsBeginImageContextWithOptions function instead of UIGraphicsBeginImageContext so that you handle Retina displays properly; otherwise your images will be blurrier than they should be and you will have a second problem to deal with.
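For example (a minimal call; passing 0 for the scale tells UIKit to use the device's screen scale):
UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0.0);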

Performance issues when cropping UIImage (CoreGraphics, iOS)

The basic idea of what we are trying to do is that we have a large UIImage, and we want to slice it into several pieces. The user of the function can pass in a number of rows and a number of columns, and the image will be cropped accordingly (i.e. 3 rows and 3 columns slice the image into 9 pieces). The problem is, we're having performance issues when trying to accomplish this with Core Graphics. The largest grid we require is 5x5, and it takes several seconds for the operation to complete (which registers as lag to the user). This is of course far from optimal.
My colleague and I have spent quite a while on this, and have searched the web for answers unsuccessfully. Neither of us are extremely experienced with Core Graphics, so I'm hoping there's some silly mistake in the code that will fix our problems. It's left to you, SO users, to please help us figure it out!
We used the tutorial at http://www.hive05.com/2008/11/crop-an-image-using-the-iphone-sdk/ to base revisions of our code on.
The function below:
-(void) getImagesFromImage:(UIImage*)image withRow:(NSInteger)rows withColumn:(NSInteger)columns
{
    CGSize imageSize = image.size;
    CGFloat xPos = 0.0;
    CGFloat yPos = 0.0;
    CGFloat width = imageSize.width / columns;
    CGFloat height = imageSize.height / rows;
    int imageCounter = 0;
    //create a context to do our clipping in
    UIGraphicsBeginImageContext(CGSizeMake(width, height));
    CGContextRef currentContext = UIGraphicsGetCurrentContext();
    CGRect clippedRect = CGRectMake(0, 0, width, height);
    CGContextClipToRect(currentContext, clippedRect);
    for (int i = 0; i < rows; i++)
    {
        xPos = 0.0;
        for (int j = 0; j < columns; j++)
        {
            //create a rect with the size we want to crop the image to
            //the X and Y here are zero so we start at the beginning of our
            //newly created context
            CGRect rect = CGRectMake(xPos, yPos, width, height);
            //create a rect equivalent to the full size of the image
            //offset the rect by the X and Y we want to start the crop
            //from in order to cut off anything before them
            CGRect drawRect = CGRectMake(rect.origin.x * -1,
                                         rect.origin.y * -1,
                                         image.size.width,
                                         image.size.height);
            //draw the image to our clipped context using our offset rect
            CGContextDrawImage(currentContext, drawRect, image.CGImage);
            //pull the image from our cropped context
            UIImage* croppedImg = UIGraphicsGetImageFromCurrentImageContext();
            //PuzzlePiece is a UIView subclass
            PuzzlePiece* newPP = [[PuzzlePiece alloc] initWithImageAndFrameAndID:croppedImg :rect :imageCounter];
            [slicedImages addObject:newPP];
            imageCounter++;
            xPos += (width);
        }
        yPos += (height);
    }
    //pop the context to get back to the default
    UIGraphicsEndImageContext();
}
ANY advice greatly appreciated!!
originalImageView is an IBOutlet UIImageView; its image will be cropped.
#import <QuartzCore/QuartzCore.h>
QuartzCore is needed for the white border around each slice, which makes the result easier to see.
-(UIImage*)getCropImage:(CGRect)cropRect
{
    CGImageRef image = CGImageCreateWithImageInRect([originalImageView.image CGImage], cropRect);
    UIImage *croppedImage = [UIImage imageWithCGImage:image];
    CGImageRelease(image);
    return croppedImage;
}

-(void)prepareSlices:(uint)row :(uint)col
{
    float flagX = originalImageView.image.size.width / originalImageView.frame.size.width;
    float flagY = originalImageView.image.size.height / originalImageView.frame.size.height;
    float _width = originalImageView.frame.size.width / col;
    float _height = originalImageView.frame.size.height / row;
    float _posX = 0.0;
    float _posY = 0.0;
    for (int i = 1; i <= row * col; i++) {
        UIImageView *croppedImageView = [[UIImageView alloc] initWithFrame:CGRectMake(_posX, _posY, _width, _height)];
        UIImage *img = [self getCropImage:CGRectMake(_posX * flagX, _posY * flagY, _width * flagX, _height * flagY)];
        croppedImageView.image = img;
        croppedImageView.layer.borderColor = [[UIColor whiteColor] CGColor];
        croppedImageView.layer.borderWidth = 1.0f;
        [self.view addSubview:croppedImageView];
        [croppedImageView release];
        _posX += _width;
        if (i % col == 0) {
            _posX = 0;
            _posY += _height;
        }
    }
    originalImageView.alpha = 0.0;
}
With originalImageView.alpha = 0.0, you won't see the originalImageView any more.
Call it like this:
[self prepareSlices:4 :4];
It should add 16 slices as subviews of self.view. We have a puzzle app; this is working code from there.
