I have been trying to crop my images in iOS for a while now. The code I have works well but isn't fast enough: when I supply it with around 20-25 images, it takes 7-10 seconds to process them. I have tried every possible way to fix this but haven't been successful. I am not sure what I am missing.
- (UIImage *)squareImageWithImage:(UIImage *)image scaledToSize:(CGSize)targetSize {
    UIImage *sourceImage = image;
    UIImage *newImage = nil;
    CGSize imageSize = sourceImage.size;
    CGFloat width = imageSize.width;
    CGFloat height = imageSize.height;
    CGFloat targetWidth = targetSize.width;
    CGFloat targetHeight = targetSize.height;
    CGFloat scaleFactor = 0.0;
    CGFloat scaledWidth = targetWidth;
    CGFloat scaledHeight = targetHeight;
    CGPoint thumbnailPoint = CGPointMake(0.0, 0.0);

    if (CGSizeEqualToSize(imageSize, targetSize) == NO)
    {
        CGFloat widthFactor = targetWidth / width;
        CGFloat heightFactor = targetHeight / height;

        if (widthFactor > heightFactor)
        {
            scaleFactor = widthFactor; // scale to fit height
        }
        else
        {
            scaleFactor = heightFactor; // scale to fit width
        }

        scaledWidth = width * scaleFactor;
        scaledHeight = height * scaleFactor;

        // center the image
        if (widthFactor > heightFactor)
        {
            thumbnailPoint.y = (targetHeight - scaledHeight) * 0.5;
        }
        else if (widthFactor < heightFactor)
        {
            thumbnailPoint.x = (targetWidth - scaledWidth) * 0.5;
        }
    }

    UIGraphicsBeginImageContext(targetSize); // this will crop

    CGRect thumbnailRect = CGRectZero;
    thumbnailRect.origin = thumbnailPoint;
    thumbnailRect.size.width = scaledWidth;
    thumbnailRect.size.height = scaledHeight;

    [sourceImage drawInRect:thumbnailRect];

    newImage = UIGraphicsGetImageFromCurrentImageContext();
    if (newImage == nil)
    {
        NSLog(@"could not scale image");
    }

    // pop the context to get back to the default
    UIGraphicsEndImageContext();
    return newImage;
}
Your original question did not state the problem correctly, but it now does: the reason these operations take so long is the number of CPU cycles needed to scale an image (not to crop it, which is much simpler and faster). When scaling, the system has to blend some number of surrounding pixels for each output pixel, which consumes lots of CPU cycles. You can make this go faster by using a combination of techniques, but there is no single answer.
1) Use blocks and dispatch these image operations on a concurrent dispatch queue, to get parallelism. I believe the latest iPad has 4 cores that you can put to use that way. [UIGraphicsBeginImageContext is thread safe.] (See the GCD sketch after this list.)
2) Get the ContextRef pointer, and set the interpolation setting to the lowest setting:
UIGraphicsBeginImageContext(targetSize);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetInterpolationQuality(context, kCGInterpolationLow);
...
3) Cheat - don't scale except by powers of two. With this technique, you would determine the "best" power of two to shrink the image by, then expand the width and height to fit your target size. If you can use a power of two, you can take the CGImageRef from the UIImage, get the pixel pointer, copy every other pixel of every other row, and create a smaller image really quickly (using CGImageCreate). It may not be as high quality as what you would get by letting the system scale your image, but it will be faster. This is obviously a fair amount of code, but you can make the operations really fast this way. (See the downsampling sketch after this list.)
4) Redefine your task. Instead of trying to resize a group of images, change your app so that you only show one or two resized images at a time, and while the user is looking at them, do the other image operations on a background queue. This is for completeness, I assume you already thought of this.
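For point 1, a minimal sketch, assuming images is an NSArray of the UIImages to process and reusing the squareImageWithImage:scaledToSize: method from the question (the array names and the 100x100 target size are placeholders):

dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_group_t group = dispatch_group_create();
NSMutableArray *thumbnails = [NSMutableArray arrayWithCapacity:[images count]];
for (NSUInteger i = 0; i < [images count]; i++) {
    [thumbnails addObject:[NSNull null]]; // placeholders keep the results in source order
}
for (NSUInteger i = 0; i < [images count]; i++) {
    UIImage *sourceImage = [images objectAtIndex:i];
    dispatch_group_async(group, queue, ^{
        UIImage *thumb = [self squareImageWithImage:sourceImage scaledToSize:CGSizeMake(100, 100)];
        @synchronized (thumbnails) {
            [thumbnails replaceObjectAtIndex:i withObject:thumb]; // serialize writes to the shared array
        }
    });
}
dispatch_group_notify(group, dispatch_get_main_queue(), ^{
    // all thumbnails are ready; update the UI here
});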
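And for point 3, a rough sketch that halves an image by copying every other pixel of every other row; the method name and the RGBA byte layout are my own choices, it renders through a bitmap context rather than reading the CGImage's data provider directly (which keeps the byte layout predictable), and you would repeat the halving until you are within a factor of two of the target size:

- (UIImage *)halfSizeImageOfImage:(UIImage *)image {
    CGImageRef source = image.CGImage;
    size_t width = CGImageGetWidth(source);
    size_t height = CGImageGetHeight(source);
    size_t halfWidth = width / 2;
    size_t halfHeight = height / 2;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // Draw the source into a bitmap with a known byte layout (RGBA, 8 bits per component).
    uint32_t *srcPixels = calloc(width * height, sizeof(uint32_t));
    CGContextRef srcContext = CGBitmapContextCreate(srcPixels, width, height, 8, width * 4,
        colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextDrawImage(srcContext, CGRectMake(0, 0, width, height), source);

    // Copy every other pixel of every other row into the destination buffer.
    uint32_t *dstPixels = calloc(halfWidth * halfHeight, sizeof(uint32_t));
    for (size_t y = 0; y < halfHeight; y++) {
        for (size_t x = 0; x < halfWidth; x++) {
            dstPixels[y * halfWidth + x] = srcPixels[(2 * y) * width + (2 * x)];
        }
    }

    CGContextRef dstContext = CGBitmapContextCreate(dstPixels, halfWidth, halfHeight, 8, halfWidth * 4,
        colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGImageRef halfImage = CGBitmapContextCreateImage(dstContext);
    UIImage *result = [UIImage imageWithCGImage:halfImage];

    CGImageRelease(halfImage);
    CGContextRelease(srcContext);
    CGContextRelease(dstContext);
    CGColorSpaceRelease(colorSpace);
    free(srcPixels);
    free(dstPixels);
    return result;
}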
PS: if this works for you, no need for the bounty, help someone else instead.
Using drawInRect is slow. You can try CGImageCreateWithImageInRect, which will be faster (at least 10x faster than drawInRect):
CGImageRef imageRef = CGImageCreateWithImageInRect([self CGImage], theRect); // e.g. theRect = {{100, 100}, {200, 200}}
UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef); // release the CGImageRef you created
Note that CGImageCreateWithImageInRect works in pixel coordinates, so on a retina image you must multiply the rect by the image's scale.
Related
I am using the code below, first to create an image thumb (using a category) and then to tailor the thumb to the VC in question, for example, make it round.
Somehow, the aspect ratio of the images is not being preserved: some get squashed vertically, so a face looks like a sideways oval, while others get squashed horizontally, so a round ball looks like an upright football. In the code for individual VCs, I am using UIViewContentModeScaleAspectFill and setting clipsToBounds to YES, but to no avail. I also tried checking these in the Storyboard, but still no luck.
Can anyone see what might be wrong with the code below?
//code in viewDidLoad
UIImage *thumbnail = [selectedImage createThumbnailToFillSize:CGSizeMake(side, side)];
//see createThumbnail method below
self.contactImage.image = thumbnail;

//image has been selected and trimmed to thumb. Now format it
CGSize itemSize = CGSizeMake(64, 64);
UIGraphicsBeginImageContextWithOptions(itemSize, NO, UIScreen.mainScreen.scale);
CGRect imageRect = CGRectMake(0.0, 0.0, itemSize.width, itemSize.height);
self.contactImage.contentMode = UIViewContentModeScaleAspectFill;
self.contactImage.clipsToBounds = YES;
[self.contactImage.image drawInRect:imageRect];
self.contactImage.image = UIGraphicsGetImageFromCurrentImageContext();
self.contactImage.layer.cornerRadius = 60.0;
UIGraphicsEndImageContext();
//Generic category to create thumb
- (UIImage *)createThumbnailToFillSize:(CGSize)size
{
    CGSize mainImageSize = size;
    UIImage *thumb;
    CGFloat widthScaler = size.width / mainImageSize.width;
    CGFloat heightScaler = size.height / mainImageSize.height;
    CGSize repositionedMainImageSize = mainImageSize;
    CGFloat scaleFactor;

    // Determine if we should shrink based on width or height
    if (widthScaler > heightScaler)
    {
        // calculate based on width scaler
        scaleFactor = widthScaler;
        repositionedMainImageSize.height = ceil(size.height / scaleFactor);
    }
    else
    {
        // calculate based on height scaler
        scaleFactor = heightScaler;
        repositionedMainImageSize.width = ceil(size.width / scaleFactor);
    }

    UIGraphicsBeginImageContext(size);
    CGFloat xInc = ((repositionedMainImageSize.width - mainImageSize.width) / 2.f) * scaleFactor;
    CGFloat yInc = ((repositionedMainImageSize.height - mainImageSize.height) / 2.f) * scaleFactor;
    [self drawInRect:CGRectMake(xInc, yInc, mainImageSize.width * scaleFactor, mainImageSize.height * scaleFactor)];
    thumb = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return thumb;
}
I'm trying to add a video player icon on top of a thumbnail of a video.
I get the image from the YouTube API, then crop it to be square, then resize it to be the proper size. I then add my player icon image on top of it.
The problem lies in the fact that the player icon is much smaller than it should be on the thumbnail (the asset is 28x28pt, but on screen it renders much smaller). See the image below, where I added the icon to the cell to show the size it should be, versus its size on the thumbnail:
I crop it to a square with this method:
/**
 * Given a UIImage, return it with a square aspect ratio (via cropping, not smushing).
 */
- (UIImage *)createSquareVersionOfImage:(UIImage *)image {
    CGFloat originalWidth = image.size.width;
    CGFloat originalHeight = image.size.height;
    float smallestDimension = fminf(originalWidth, originalHeight);

    // Determine the offset needed to crop the center of the image out.
    CGFloat xOffsetToBeCentered = (originalWidth - smallestDimension) / 2;
    CGFloat yOffsetToBeCentered = (originalHeight - smallestDimension) / 2;

    // Create the square, making sure the position and dimensions are set appropriately for retina displays.
    CGRect square = CGRectMake(xOffsetToBeCentered * image.scale, yOffsetToBeCentered * image.scale, smallestDimension * image.scale, smallestDimension * image.scale);

    CGImageRef squareImageRef = CGImageCreateWithImageInRect([image CGImage], square);
    UIImage *squareImage = [UIImage imageWithCGImage:squareImageRef scale:image.scale orientation:image.imageOrientation];
    CGImageRelease(squareImageRef);
    return squareImage;
}
Resize it with this method:
/**
 * Resize the given UIImage to a new size and return the newly resized image.
 */
- (UIImage *)resizeImage:(UIImage *)image toSize:(CGSize)newSize {
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
And add it on top of the other image with this method:
/**
 * Adds a UIImage on top of another UIImage and returns the result. The top image is centered.
 */
- (UIImage *)addImage:(UIImage *)additionalImage toImage:(UIImage *)backgroundImage {
    UIGraphicsBeginImageContext(backgroundImage.size);
    [backgroundImage drawInRect:CGRectMake(0, 0, backgroundImage.size.width, backgroundImage.size.height)];
    [additionalImage drawInRect:CGRectMake((backgroundImage.size.width - additionalImage.size.width) / 2,
                                           (backgroundImage.size.height - additionalImage.size.height) / 2,
                                           additionalImage.size.width,
                                           additionalImage.size.height)];
    UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return resultingImage;
}
And this is how it is implemented:
UIImage *squareThumbnail = [self resizeImage:[self createSquareVersionOfImage:responseObject] toSize:CGSizeMake(110.0, 110.0)];
UIImage *playerIcon = [UIImage imageNamed:@"video-thumbnail-overlay"];
UIImage *squareThumbnailWithPlayerIcon = [self addImage:playerIcon toImage:squareThumbnail];
But in the end, the icon is always too small. Sizing confuses me when working with images, as I'm used to retina-screen details being figured out automatically. For example, in the above code block I'm not sure why I set the size to 110.0 x 110.0, as it's a 55x55 UIImageView and I thought it scaled automatically (but if I put 55 it's stretched terribly).
The reason you have to pass 110 in your resizeImage call is that you are creating a Core Graphics context with a scale of 1.0. The graphics contexts for views in a view hierarchy on retina displays have a scale of 2.0 (provided you did nothing to scale anything else).
I believe the new UIImage that you create is then a "normal" (1x) image. It is not an @2x image, so the size you get when you ask for it will not be scaled for @2x.
Note this answer:
UIGraphicsGetImageFromCurrentImageContext retina resolutions?
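As a minimal version of that fix, assuming you just want addImage:toImage: above to produce a retina-resolution result, you can create its context at the screen's scale instead:

// a scale of 0.0 means "use the scale of the device's main screen" (2.0 on retina)
UIGraphicsBeginImageContextWithOptions(backgroundImage.size, NO, 0.0);

The fuller rewrite below avoids the intermediate resize step entirely.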
I haven't tested this, but it should work. If it doesn't it should at least be more straightforward to debug.
//images should be passed in with their original scales
- (UIImage *)compositedImageWithSize:(CGSize)newSize bg:(UIImage *)backgroundImage fgImage:(UIImage *)foregroundImage {
    //match the scale of the screen
    CGFloat scale = [[UIScreen mainScreen] scale];
    UIGraphicsBeginImageContextWithOptions(newSize, NO, scale);

    //instead of resizing the image ahead of time, we just draw it into the context at the appropriate size. The context will clip the image.
    CGRect aspectFillRect = CGRectZero;
    if (newSize.width / newSize.height > backgroundImage.size.width / backgroundImage.size.height) {
        //the target is relatively wider than the image: fill the width, crop top and bottom
        aspectFillRect.size.width = newSize.width;
        CGFloat scaledHeight = (newSize.width / backgroundImage.size.width) * backgroundImage.size.height;
        aspectFillRect.origin.y = (newSize.height - scaledHeight) / 2.0;
        aspectFillRect.size.height = scaledHeight;
    } else {
        //the target is relatively taller than the image: fill the height, crop the sides
        aspectFillRect.size.height = newSize.height;
        CGFloat scaledWidth = (newSize.height / backgroundImage.size.height) * backgroundImage.size.width;
        aspectFillRect.origin.x = (newSize.width - scaledWidth) / 2.0;
        aspectFillRect.size.width = scaledWidth;
    }
    [backgroundImage drawInRect:aspectFillRect];

    //pass in the 2x image for the fg image so it provides a better resolution
    [foregroundImage drawInRect:CGRectMake((newSize.width - foregroundImage.size.width) / 2,
                                           (newSize.height - foregroundImage.size.height) / 2,
                                           foregroundImage.size.width,
                                           foregroundImage.size.height)];

    UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return resultingImage;
}
You would skip all those methods you were calling before and do:
UIImage *playerIcon = [UIImage imageNamed:@"video-thumbnail-overlay"];
//pass in the non-retina size of the image
UIImage *result = [self compositedImageWithSize:CGSizeMake(55.0, 55.0)
                                             bg:responseObject
                                        fgImage:playerIcon];
Hope this helps!
I have a drawing app of sorts. I would like to create a snapshot of the canvas UIView (both on and off screen) and then scale it down. The code I have for doing that takes bloody forever on an iPad 3; in the Simulator there is no delay. The canvas is 2048x2048.
Is there another way I should be doing this? Or is there something amiss in the code?
Thank you!
-(UIImage *)createScreenShotThumbnailWithWidth:(CGFloat)width {
    // Size of our View
    CGSize size = editorContentView.bounds.size;

    //First grab our screenshot at full resolution
    UIGraphicsBeginImageContext(size);
    [editorContentView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *screenShot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    //Calculate the scale ratio of the image with the width supplied
    CGFloat ratio = 0;
    if (size.width > size.height) {
        ratio = width / size.width;
    } else {
        ratio = width / size.height;
    }
    //Set up our rect to draw the screenshot into
    CGSize newSize = CGSizeMake(ratio * size.width, ratio * size.height);

    //Send back our screenshot
    return [self imageWithImage:screenShot scaledToSize:newSize];
}
Did you use the "Time Profiler" Instrument ("Product" Menu -> "Profile") to check where in your code you spend the most of your time? (use it with your Device of course, not the Simulator, to have realistic profiling). I'd guess it is not in the image capture portion you quoted in your question, but in your rescaling method imageWithImage:scaledToSize: method.
Instead of rendering the image at its whole size in a context, then rescaling the image to the final size, you should render the layer in the context directly at the expected size by applying some affine transform to the context.
So simply use CGContextConcatCTM(someScalingAffineTransform); on UIGraphicsGetCurrentContext() right after your line UIGraphicsBeginImageContext(size);, to apply an scaling affine transform that will make the layer be rendered at a different scale/size.
This way it will be directly rendered as the expected size which will be much faster, instead of being rendered at 100% and then having you to rescale it afterwards in a time-consuming way
Thank you AliSoftware! Here is the code I ended up using:
-(UIImage *)createScreenShotThumbnailWithWidth:(CGFloat)width {
    if (IoUIDebug & IoUIDebugSelectorNames) {
        NSLog(@"%@ - %@", INTERFACENAME, NSStringFromSelector(_cmd));
    }
    // Size of our View
    CGSize size = editorContentView.bounds.size;

    //Calculate the scale ratio of the image with the width supplied
    CGFloat ratio = 0;
    if (size.width > size.height) {
        ratio = width / size.width;
    } else {
        ratio = width / size.height;
    }
    CGSize newSize = CGSizeMake(ratio * size.width, ratio * size.height);

    //Create a graphics context with our new size
    UIGraphicsBeginImageContext(newSize);
    //Create a transform to scale down the context
    CGAffineTransform transform = CGAffineTransformIdentity;
    transform = CGAffineTransformScale(transform, ratio, ratio);
    //Apply the transform to the context
    CGContextConcatCTM(UIGraphicsGetCurrentContext(), transform);
    //Render our image into the scaled graphics context
    [editorContentView.layer renderInContext:UIGraphicsGetCurrentContext()];
    //Save a copy of the image from the graphics context
    UIImage *screenShot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return screenShot;
}
I've been struggling to translate CIDetector (face detection) results into coordinates relative to the UIImageView displaying the image, so I can draw the coordinates using CGPaths.
I've looked at all the questions here and all the tutorials I could find, and most of them use small images that are not scaled when displayed in a UIImageView (example). The problem I am having is with large images, which are scaled using aspectFit when displayed in a UIImageView, and determining the correct scale + translation values.
I am getting inconsistent results when testing with images of different sizes/aspect ratios, so I think my routine is flawed. I've been struggling with this for a while, so if anyone has some tips or can x-ray what I am doing wrong, that would be a great help.
What I am doing:
1) get the face coordinates
2) use the frameForImage routine below (found here on SO) to get the scale and bounds of the UIImageView image
3) create a transform for scale + translation
4) apply the transform to the CIDetector result
// my routine for determining transform values
NSDictionary *data = [self frameForImage:self.imageView.image inImageViewAspectFit:self.imageView];
CGRect scaledImageBounds = CGRectFromString([data objectForKey:@"bounds"]);
float scale = [[data objectForKey:@"scale"] floatValue];

CGAffineTransform transform = CGAffineTransformMakeScale(scale, -scale);
transform = CGAffineTransformTranslate(transform,
                                       scaledImageBounds.origin.x / scale,
                                       -(scaledImageBounds.origin.y / scale + scaledImageBounds.size.height / scale));
The CIDetector results are transformed using:
mouthPosition = CGPointApplyAffineTransform(mouthPosition, transform);
// example of bad result: the scale seems incorrect
// routine below (found here on SO) for determining the bounds of an image scaled in a UIImageView using aspectFit
- (NSDictionary *)frameForImage:(UIImage *)image inImageViewAspectFit:(UIImageView *)myImageView
{
    float imageRatio = image.size.width / image.size.height;
    float viewRatio = myImageView.frame.size.width / myImageView.frame.size.height;
    float scale;
    CGRect boundingRect;

    if (imageRatio < viewRatio)
    {
        scale = myImageView.frame.size.height / image.size.height;
        float width = scale * image.size.width;
        float topLeftX = (myImageView.frame.size.width - width) * 0.5;
        boundingRect = CGRectMake(topLeftX, 0, width, myImageView.frame.size.height);
    }
    else
    {
        scale = myImageView.frame.size.width / image.size.width;
        float height = scale * image.size.height;
        float topLeftY = (myImageView.frame.size.height - height) * 0.5;
        boundingRect = CGRectMake(0, topLeftY, myImageView.frame.size.width, height);
    }

    NSDictionary *data = [NSDictionary dictionaryWithObjectsAndKeys:
                          [NSNumber numberWithFloat:scale], @"scale",
                          NSStringFromCGRect(boundingRect), @"bounds",
                          nil];
    return data;
}
I completely understand what you are trying to do, but let me offer you a different way to achieve what you want. You have an oversized image and you know the size of the imageView. So:
1) ask the image for its CGImage
2) determine a 'scale' factor, which is the imageView width divided by the image width
3) multiply this value by your image height and subtract the result from the imageView height, to get the "empty" height in the imageView; let's call this 'fillHeight'
4) divide 'fillHeight' by 2 and round to get the 'offset' value used below
5) using the context provided by UIGraphicsBeginImageContextWithOptions(imageView.size, NO, 0), paint the background whatever color you want, then draw your CGImage:
CGContextDrawImage(context, CGRectMake(0, offset, imageView.size.width, rintf(image.size.height * scale)), [image CGImage]);
6) get this new image using:
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
7) set the image: imageView.image = image;
Now you can map back to your image exactly, as you know the EXACT scaling ratio and offsets (see the consolidated sketch below).
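Pulling those steps together into a minimal sketch (the names are illustrative; it uses UIKit's drawInRect: rather than CGContextDrawImage to sidestep Core Graphics' flipped coordinate system, and it assumes the case described above, where the scaled image leaves empty height in the view):

- (UIImage *)aspectFitImage:(UIImage *)image
               inViewOfSize:(CGSize)viewSize
                   outScale:(CGFloat *)outScale
                  outOffset:(CGFloat *)outOffset
{
    // scale factor: imageView width divided by image width
    CGFloat scale = viewSize.width / image.size.width;
    // "empty" height left in the view, split between top and bottom
    CGFloat fillHeight = viewSize.height - rintf(image.size.height * scale);
    CGFloat offset = rintf(fillHeight / 2.0f);

    UIGraphicsBeginImageContextWithOptions(viewSize, NO, 0);
    // paint the background whatever color you want
    [[UIColor blackColor] setFill];
    UIRectFill(CGRectMake(0, 0, viewSize.width, viewSize.height));
    // draw the image, centered vertically
    [image drawInRect:CGRectMake(0, offset, viewSize.width, rintf(image.size.height * scale))];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // hand back the EXACT scaling ratio and offset for mapping detector coordinates later
    if (outScale) { *outScale = scale; }
    if (outOffset) { *outOffset = offset; }
    return result;
}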
This might be the simple answer you are looking for. If your x and y coordinates are inverted, you can mirror them yourself. In the snippet below, I'm looping through my returned features, inverting the y coordinate, and also inverting the x coordinate if it's the front-facing camera:
for (CIFaceFeature *f in features) {
    float newy = -f.bounds.origin.y + self.frame.size.height - f.bounds.size.height;
    float newx = f.bounds.origin.x;
    if (isMirrored) {
        newx = -f.bounds.origin.x + self.frame.size.width - f.bounds.size.width;
    }
    [[soups objectAtIndex:rnd] drawInRect:CGRectMake(newx, newy, f.bounds.size.width, f.bounds.size.height)];
}
A CALayer can do it, and a UIImageView can do it. Can I directly display an image with aspect fit using Core Graphics? UIImage's drawInRect: does not allow me to set the resize mechanism.
If you're already linking AVFoundation, an aspect-fit function is provided in that framework:
CGRect AVMakeRectWithAspectRatioInsideRect(CGSize aspectRatio, CGRect boundingRect);
For instance, to scale an image to fit:
UIImage *image = …;
CGRect targetBounds = self.layer.bounds;
// fit the image, preserving its aspect ratio, into our target bounds
CGRect imageRect = AVMakeRectWithAspectRatioInsideRect(image.size, targetBounds);
// draw the image
CGContextDrawImage(context, imageRect, image.CGImage);
You need to do the math yourself. For example:
UIImage *image = self.imageToDraw;
CGRect imageRect = CGRectMake(10, 10, 42, 42); // desired x/y coords, with maximum width/height
// calculate resize ratio, and apply to rect
CGFloat ratio = MIN(imageRect.size.width / image.size.width, imageRect.size.height / image.size.height);
imageRect.size.width = imageRect.size.width * ratio;
imageRect.size.height = imageRect.size.height * ratio;
// draw the image
CGContextDrawImage(context, imageRect, image.CGImage);
Alternatively, you can embed a UIImageView as a subview of your view, which gives you easy to use options for this. For similar ease of use but better performance, you can embed a layer containing the image in your view's layer. Either of these approaches would be worthy of a separate question, if you choose to go down that route.
Of course you can. It'll draw the image in whatever rect you pass. So just pass an aspect-fitted rect. Sure, you have to do a little bit of math yourself, but that's pretty easy.
Here's the solution:
CGSize imageSize = yourImage.size;
CGSize viewSize = CGSizeMake(450, 340); // size in which you want to draw

float hfactor = imageSize.width / viewSize.width;
float vfactor = imageSize.height / viewSize.height;
float factor = fmax(hfactor, vfactor);

// Divide the size by the greater of the vertical or horizontal shrinkage factor
float newWidth = imageSize.width / factor;
float newHeight = imageSize.height / factor;

// xOffset and yOffset are the desired origin of the drawn image
CGRect newRect = CGRectMake(xOffset, yOffset, newWidth, newHeight);
[yourImage drawInRect:newRect];
-- courtesy https://stackoverflow.com/a/1703210