Capture a Screenshot of a UIView - Slow Performance - iOS

I have a drawing app of sorts, and I would like to create a snapshot of the canvas UIView (both its on-screen and off-screen portions) and then scale it down. The code I have for doing that takes forever on an iPad 3; in the Simulator there is no delay. The canvas is 2048x2048.
Is there another way I should be doing this, or is there something I'm missing in the code?
Thank you!
-(UIImage *)createScreenShotThumbnailWithWidth:(CGFloat)width {
    // Size of our view
    CGSize size = editorContentView.bounds.size;
    // First grab our screenshot at full resolution
    UIGraphicsBeginImageContext(size);
    [editorContentView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *screenShot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // Calculate the scale ratio of the image with the width supplied.
    CGFloat ratio = 0;
    if (size.width > size.height) {
        ratio = width / size.width;
    } else {
        ratio = width / size.height;
    }
    // Set up our rect to draw the screenshot into
    CGSize newSize = CGSizeMake(ratio * size.width, ratio * size.height);
    // Send back our screenshot
    return [self imageWithImage:screenShot scaledToSize:newSize];
}

Did you use the "Time Profiler" instrument ("Product" menu -> "Profile") to check where your code spends most of its time? (Use it with your device, of course, not the Simulator, to get realistic profiling.) I'd guess it is not in the image-capture portion you quoted in your question, but in your rescaling method imageWithImage:scaledToSize:.
Instead of rendering the image at its full size in a context and then rescaling the image to the final size, you should render the layer into the context directly at the expected size by applying a scaling affine transform to the context.
So simply call CGContextConcatCTM(UIGraphicsGetCurrentContext(), someScalingAffineTransform); right after your UIGraphicsBeginImageContext(size); line, to apply a scaling affine transform that will make the layer render at a different scale/size.
This way the layer is rendered directly at the expected size, which is much faster than rendering it at 100% and then rescaling it afterwards in a time-consuming step.

Thank you AliSoftware, here is the code I ended up using:
-(UIImage *)createScreenShotThumbnailWithWidth:(CGFloat)width {
    if (IoUIDebug & IoUIDebugSelectorNames) {
        NSLog(@"%@ - %@", INTERFACENAME, NSStringFromSelector(_cmd));
    }
    // Size of our view
    CGSize size = editorContentView.bounds.size;
    // Calculate the scale ratio of the image with the width supplied.
    CGFloat ratio = 0;
    if (size.width > size.height) {
        ratio = width / size.width;
    } else {
        ratio = width / size.height;
    }
    CGSize newSize = CGSizeMake(ratio * size.width, ratio * size.height);
    // Create a graphics context with our new size
    UIGraphicsBeginImageContext(newSize);
    // Create a transform to scale down the context
    CGAffineTransform transform = CGAffineTransformIdentity;
    transform = CGAffineTransformScale(transform, ratio, ratio);
    // Apply the transform to the context
    CGContextConcatCTM(UIGraphicsGetCurrentContext(), transform);
    // Render our layer into the scaled graphics context
    [editorContentView.layer renderInContext:UIGraphicsGetCurrentContext()];
    // Save a copy of the image from the graphics context
    UIImage *screenShot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return screenShot;
}

Related

iOS, Thumbnail from image, Big Nerd Ranch, chapter 19

I am going through the 19th chapter of the Big Nerd Ranch iOS textbook and cannot understand several parts of the function that takes in a big image and creates a thumbnail out of it. Have a look:
- (void)setThumbnailFromImage:(UIImage *)image
{
    CGSize origImageSize = image.size;
    // The rectangle of the thumbnail
    CGRect newRect = CGRectMake(0, 0, 40, 40);
    // Figure out a scaling ratio to make sure we maintain the same aspect ratio
    float ratio = MAX(newRect.size.width / origImageSize.width,
                      newRect.size.height / origImageSize.height);
    // Create a transparent bitmap context with a scaling factor
    // equal to that of the screen
    UIGraphicsBeginImageContextWithOptions(newRect.size, NO, 0.0);
    // Create a path that is a rounded rectangle
    UIBezierPath *path = [UIBezierPath bezierPathWithRoundedRect:newRect
                                                     cornerRadius:5.0];
    // Make all subsequent drawing clip to this rounded rectangle
    [path addClip];
    // Center the image in the thumbnail rectangle
    CGRect projectRect;
    projectRect.size.width = ratio * origImageSize.width;
    projectRect.size.height = ratio * origImageSize.height;
    projectRect.origin.x = (newRect.size.width - projectRect.size.width) / 2.0;
    projectRect.origin.y = (newRect.size.height - projectRect.size.height) / 2.0;
    [image drawInRect:projectRect];
    // Get the image from the image context; keep it as our thumbnail
    UIImage *smallImage = UIGraphicsGetImageFromCurrentImageContext();
    self.thumbnail = smallImage;
    // Cleanup image context resources; we're done
    UIGraphicsEndImageContext();
}
From my understanding, we take the MAX of the two ratios, which makes the smaller edge of the original image equal to newRect's edge (40 in our case); the other edge should then stick out of newRect, since it will be larger than newRect's edge, when we call UIGraphicsGetImageFromCurrentImageContext(). That's my vague 'understanding'.
Could anyone please explain in detail what this code is doing, especially the centering part? If you know of some relevant tutorials, that would also be great.
I just took the previous comments, added to them, and tried to explain each part more clearly. You seem to have the basic idea, so I hope this helps solidify everything.
- (void)setThumbnailFromImage:(UIImage *)image
{
    CGSize origImageSize = image.size;
    // Create a new rectangle of your desired size
    CGRect newRect = CGRectMake(0, 0, 40, 40);
    // Divide the new width and height by the width and height of the original image to get the two candidate ratios.
    // Take whichever one is greater so that the converted image isn't distorted through incorrect scaling.
    float ratio = MAX(newRect.size.width / origImageSize.width,
                      newRect.size.height / origImageSize.height);
    // Create a transparent bitmap context with a scaling factor
    // equal to that of the screen
    // Basically everything within this builds the image
    UIGraphicsBeginImageContextWithOptions(newRect.size, NO, 0.0);
    // Create a path that is a rounded rectangle -- essentially a frame for the new image
    UIBezierPath *path = [UIBezierPath bezierPathWithRoundedRect:newRect
                                                     cornerRadius:5.0];
    // Make all subsequent drawing clip to the rounded rectangle
    [path addClip];
    // Center the image in the thumbnail rectangle
    CGRect projectRect;
    // Scale the image with the previously determined ratio
    projectRect.size.width = ratio * origImageSize.width;
    projectRect.size.height = ratio * origImageSize.height;
    // Subtracting the scaled size from the thumbnail size and halving the difference
    // gives an origin that centers the (possibly overflowing) image inside the thumbnail
    projectRect.origin.x = (newRect.size.width - projectRect.size.width) / 2.0;
    projectRect.origin.y = (newRect.size.height - projectRect.size.height) / 2.0;
    // Draw the scaled image; anything outside the clip path is trimmed away
    [image drawInRect:projectRect];
    // Retrieve the image that has been created and keep it as the thumbnail
    UIImage *smallImage = UIGraphicsGetImageFromCurrentImageContext();
    self.thumbnail = smallImage;
    // Cleanup image context resources; we're done
    UIGraphicsEndImageContext();
}
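To make the centering concrete, here is a small worked example with numbers plugged in. The 400x300 source size is my own illustrative assumption, not from the book:
// Hypothetical numbers: a 400x300 source image going into the 40x40 thumbnail above.
CGSize origImageSize = CGSizeMake(400, 300);
CGRect newRect = CGRectMake(0, 0, 40, 40);
float ratio = MAX(newRect.size.width / origImageSize.width,    // 40/400 = 0.10
                  newRect.size.height / origImageSize.height); // 40/300 = 0.133 -> MAX = 0.133
CGRect projectRect;
projectRect.size.width  = ratio * origImageSize.width;   // 53.3 -- wider than the 40pt thumbnail
projectRect.size.height = ratio * origImageSize.height;  // 40.0 -- exactly fills the height
projectRect.origin.x = (newRect.size.width - projectRect.size.width) / 2.0;   // -6.67
projectRect.origin.y = (newRect.size.height - projectRect.size.height) / 2.0; // 0
// The draw rect hangs 6.67pt off the left and right edges; the rounded-rect clip
// added earlier trims that overflow, leaving a centered, aspect-filled 40x40 thumbnail.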

ios 7: How to show images from the photo album without stretching when placing them in a larger UIImageView

My task is to get images from the photo album and show them in a ViewController that has one full-screen UIImageView. The problem I am facing is that small photos captured with the camera get stretched when they are shown in the UIImageView.
I have tried all the aspect modes to keep the image looking normal, but no luck.
Thanks in advance!!
If you want the image selected from the photo album shown with its proper proportions, then:
self.photoImgView.contentMode = UIViewContentModeScaleAspectFit;  // best: shows the whole photo, preserving its aspect ratio
self.photoImgView.contentMode = UIViewContentModeScaleAspectFill; // if you want the photo to fill the large image view (edges may be cropped)
self.photoImgView.contentMode = UIViewContentModeScaleToFill;     // fills the image view but ignores the aspect ratio, so it may distort
Otherwise, you can try the solution below for resizing the image before assigning it to the image view.
UIImage *uiImage = [info valueForKey:@"UIImagePickerControllerOriginalImage"]; // or UIImagePickerControllerEditedImage
CGSize size;
NSData *imageData;
imageData = UIImageJPEGRepresentation(uiImage, 1.0);
UIImage *galleryImage = [self squareImageWithImage:[UIImage imageWithData:imageData] scaledToSize:CGSizeMake(320, 320)];
photoImgView.image = galleryImage;
-(UIImage *)squareImageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
    double ratio;
    double delta;
    CGPoint offset;
    // Make a new square size, that is the resized image's width
    CGSize sz = CGSizeMake(newSize.width, newSize.width);
    // Figure out if the picture is landscape or portrait, then
    // calculate the scale factor and offset
    if (image.size.width > image.size.height) {
        ratio = newSize.width / image.size.width;
        delta = (ratio * image.size.width - ratio * image.size.height);
        offset = CGPointMake(delta / 2, 0);
    } else {
        ratio = newSize.width / image.size.height;
        delta = (ratio * image.size.height - ratio * image.size.width);
        offset = CGPointMake(0, delta / 2);
    }
    // Make the final clipping rect based on the calculated values
    CGRect clipRect = CGRectMake(-offset.x, -offset.y,
                                 (ratio * image.size.width) + delta,
                                 (ratio * image.size.height) + delta);
    // Start a new context with scale factor 0.0 so retina displays get
    // a high-quality image
    if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)]) {
        UIGraphicsBeginImageContextWithOptions(sz, YES, 0.0);
    } else {
        UIGraphicsBeginImageContext(sz);
    }
    UIRectClip(clipRect);
    [image drawInRect:clipRect];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
Hope it helps you..
Set the content mode of your image view. Try this:
imageView.contentMode = UIViewContentModeScaleAspectFit;
Hope this helps.. :)
I assume when you say
"I have tried all the aspect modes to keep the image looking normal, but no luck"
you mean you used the contentMode of UIImageView?
e.g.
self.myImageView.contentMode = UIViewContentModeScaleAspectFit;
or one of the others. AspectFill is another common one; Fit letterboxes the image, I think, and Fill scales it past the edges until the entire view is filled.
But if your image is too small, it is going to look stretched even if the aspect ratio is correct, as there just won't be enough pixel data for a full-size image.
I would use:
UIImageView *someImage = [UIImageView new];
someImage.contentMode = UIViewContentModeScaleAspectFill;
someImage.clipsToBounds = YES;

Why, when I add a UIImage on top of another UIImage to make a new image, does the added image shrink?

I'm trying to add a video player icon on top of a thumbnail of a video.
I get the image from the YouTube API, then crop it to be square, then resize it to be the proper size. I then add my player icon image on top of it.
The problem is that the player icon ends up much smaller than it should be on the thumbnail (it's 28x28pt, but on screen it appears much smaller). See the image below, where I added the icon to the cell to show the size it should be versus the size it gets on the thumbnail:
I crop it to a square with this method:
/**
 * Given a UIImage, return it with a square aspect ratio (via cropping, not smushing).
 */
- (UIImage *)createSquareVersionOfImage:(UIImage *)image {
    CGFloat originalWidth = image.size.width;
    CGFloat originalHeight = image.size.height;
    float smallestDimension = fminf(originalWidth, originalHeight);
    // Determine the offset needed to crop the center of the image out.
    CGFloat xOffsetToBeCentered = (originalWidth - smallestDimension) / 2;
    CGFloat yOffsetToBeCentered = (originalHeight - smallestDimension) / 2;
    // Create the square, making sure the position and dimensions are set appropriately for retina displays.
    CGRect square = CGRectMake(xOffsetToBeCentered * image.scale, yOffsetToBeCentered * image.scale, smallestDimension * image.scale, smallestDimension * image.scale);
    CGImageRef squareImageRef = CGImageCreateWithImageInRect([image CGImage], square);
    UIImage *squareImage = [UIImage imageWithCGImage:squareImageRef scale:image.scale orientation:image.imageOrientation];
    CGImageRelease(squareImageRef);
    return squareImage;
}
Resize it with this method:
/**
 * Resize the given UIImage to a new size and return the newly resized image.
 */
- (UIImage *)resizeImage:(UIImage *)image toSize:(CGSize)newSize {
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
And add it on top of the other image with this method:
/**
 * Adds a UIImage on top of another UIImage and returns the result. The top image is centered.
 */
- (UIImage *)addImage:(UIImage *)additionalImage toImage:(UIImage *)backgroundImage {
    UIGraphicsBeginImageContext(backgroundImage.size);
    [backgroundImage drawInRect:CGRectMake(0, 0, backgroundImage.size.width, backgroundImage.size.height)];
    [additionalImage drawInRect:CGRectMake((backgroundImage.size.width - additionalImage.size.width) / 2,
                                           (backgroundImage.size.height - additionalImage.size.height) / 2,
                                           additionalImage.size.width,
                                           additionalImage.size.height)];
    UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return resultingImage;
}
And this is how it is implemented:
UIImage *squareThumbnail = [self resizeImage:[self createSquareVersionOfImage:responseObject] toSize:CGSizeMake(110.0, 110.0)];
UIImage *playerIcon = [UIImage imageNamed:@"video-thumbnail-overlay"];
UIImage *squareThumbnailWithPlayerIcon = [self addImage:playerIcon toImage:squareThumbnail];
But in the end, the icon is always too small. Sizing confuses me when working with images, as I'm used to retina-related scaling being handled automatically. For example, in the code block above I'm not sure why I had to pass 110.0 x 110.0 when the UIImageView is 55x55 and I thought it scaled automatically (but if I pass 55, the result is stretched terribly).
The reason you have to pass 110 in your resizeImage call is that you are creating a Core Graphics context with a scale of 1.0. The graphics contexts for views in a view hierarchy on retina displays have a scale of 2.0 (provided you did nothing else to change the scale).
I believe the new UIImage that you create is now a "normal" image (sorry, I can't remember the technical term). It is not an @2x image, so the size you get when you ask for its size will not be scaled for @2x.
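To illustrate that point, here is a small check of my own (not from the answer) showing how the two context-creation calls differ on a retina device:
// On a retina screen, UIGraphicsBeginImageContext always produces a scale-1.0 image,
// while passing 0.0 to UIGraphicsBeginImageContextWithOptions matches the screen's scale.
UIGraphicsBeginImageContext(CGSizeMake(55, 55));
UIImage *plain = UIGraphicsGetImageFromCurrentImageContext();   // plain.scale == 1.0 -> 55x55 pixels
UIGraphicsEndImageContext();

UIGraphicsBeginImageContextWithOptions(CGSizeMake(55, 55), NO, 0.0);
UIImage *retina = UIGraphicsGetImageFromCurrentImageContext();  // retina.scale == 2.0 -> 110x110 pixels
UIGraphicsEndImageContext();

NSLog(@"plain %.0fpt @%.0fx vs retina %.0fpt @%.0fx",
      plain.size.width, plain.scale, retina.size.width, retina.scale);
// Both report 55 points, but only the second has the pixel density a 55x55 image view expects.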
Note this answer:
UIGraphicsGetImageFromCurrentImageContext retina resolutions?
I haven't tested this, but it should work. If it doesn't, it should at least be more straightforward to debug.
// Images should be passed in with their original scales
- (UIImage *)compositedImageWithSize:(CGSize)newSize bg:(UIImage *)backgroundImage fgImage:(UIImage *)foregroundImage {
    // Match the scale of the screen.
    CGFloat scale = [[UIScreen mainScreen] scale];
    UIGraphicsBeginImageContextWithOptions(newSize, NO, scale);
    // Instead of resizing the image ahead of time, we just draw it into the context
    // at the appropriate aspect-fill size. The context will clip the overflow.
    CGRect aspectFillRect = CGRectZero;
    if (newSize.width / newSize.height > backgroundImage.size.width / backgroundImage.size.height) {
        // Target is relatively wider: match widths, let the height overflow and be clipped.
        aspectFillRect.origin.x = 0;
        aspectFillRect.size.width = newSize.width;
        CGFloat scaledHeight = (newSize.width / backgroundImage.size.width) * backgroundImage.size.height;
        aspectFillRect.origin.y = (newSize.height - scaledHeight) / 2.0;
        aspectFillRect.size.height = scaledHeight;
    } else {
        // Target is relatively taller: match heights, let the width overflow and be clipped.
        aspectFillRect.origin.y = 0;
        aspectFillRect.size.height = newSize.height;
        CGFloat scaledWidth = (newSize.height / backgroundImage.size.height) * backgroundImage.size.width;
        aspectFillRect.origin.x = (newSize.width - scaledWidth) / 2.0;
        aspectFillRect.size.width = scaledWidth;
    }
    [backgroundImage drawInRect:aspectFillRect];
    // Pass in the @2x image for the foreground image so it provides better resolution
    [foregroundImage drawInRect:CGRectMake((newSize.width - foregroundImage.size.width) / 2,
                                           (newSize.height - foregroundImage.size.height) / 2,
                                           foregroundImage.size.width,
                                           foregroundImage.size.height)];
    UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return resultingImage;
}
You would skip all those methods you were calling before and do:
UIImage *playerIcon = [UIImage imageNamed:@"video-thumbnail-overlay"];
// Pass in the non-retina (point) size
UIImage *result = [self compositedImageWithSize:CGSizeMake(55.0, 55.0)
                                             bg:responseObject
                                        fgImage:playerIcon];
Hope this helps!

Is it possible to display an image in Core Graphics with an Aspect Fit resize?

A CALayer can do it, and a UIImageView can do it. Can I directly display an image with aspect fit using Core Graphics? UIImage's drawInRect: does not allow me to set a resize mechanism.
If you're already linking AVFoundation, an aspect-fit function is provided in that framework:
CGRect AVMakeRectWithAspectRatioInsideRect(CGSize aspectRatio, CGRect boundingRect);
For instance, to scale an image to fit:
UIImage *image = …;
CGRect targetBounds = self.layer.bounds;
// fit the image, preserving its aspect ratio, into our target bounds
CGRect imageRect = AVMakeRectWithAspectRatioInsideRect(image.size,
                                                       targetBounds);
// draw the image
CGContextDrawImage(context, imageRect, image.CGImage);
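One caveat worth noting (my addition, not part of the answer): if context comes from UIKit, for example UIGraphicsGetCurrentContext() inside drawRect:, CGContextDrawImage uses Core Graphics' bottom-left origin and will typically render the CGImage vertically flipped. Drawing through UIImage instead lets UIKit handle the coordinate flip and the image's own orientation:
// Same aspect-fit rect as above, drawn via UIImage so the flip and orientation are handled for you.
[image drawInRect:imageRect];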
You need to do the math yourself. For example:
UIImage *image = self.imageToDraw;
// desired x/y coords, with the maximum width/height for your image
CGRect imageRect = CGRectMake(10, 10, 42, 42);
// calculate the aspect-fit ratio, and apply it to the image's size
CGFloat ratio = MIN(imageRect.size.width / image.size.width, imageRect.size.height / image.size.height);
imageRect.size.width = image.size.width * ratio;
imageRect.size.height = image.size.height * ratio;
// draw the image
CGContextDrawImage(context, imageRect, image.CGImage);
Alternatively, you can embed a UIImageView as a subview of your view, which gives you easy to use options for this. For similar ease of use but better performance, you can embed a layer containing the image in your view's layer. Either of these approaches would be worthy of a separate question, if you choose to go down that route.
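For the layer-based route, here is a minimal sketch of my own (the host view and the image variable are assumptions, not from the answer):
// Assumes ARC, #import <QuartzCore/QuartzCore.h>, self is a UIView subclass,
// and `image` is the UIImage to display.
CALayer *imageLayer = [CALayer layer];
imageLayer.frame = self.bounds;
imageLayer.contents = (__bridge id)image.CGImage;
// kCAGravityResizeAspect behaves like UIViewContentModeScaleAspectFit.
imageLayer.contentsGravity = kCAGravityResizeAspect;
imageLayer.masksToBounds = YES;
[self.layer addSublayer:imageLayer];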
Of course you can. It'll draw the image in whatever rect you pass. So just pass an aspect-fitted rect. Sure, you have to do a little bit of math yourself, but that's pretty easy.
Here's the solution:
CGSize imageSize = yourImage.size;
CGSize viewSize = CGSizeMake(450, 340); // size in which you want to draw
float hfactor = imageSize.width / viewSize.width;
float vfactor = imageSize.height / viewSize.height;
float factor = fmax(hfactor, vfactor);
// Divide the size by the greater of the vertical or horizontal shrinkage factor
float newWidth = imageSize.width / factor;
float newHeight = imageSize.height / factor;
// xOffset and yOffset are wherever you want the drawing origin to be
CGRect newRect = CGRectMake(xOffset, yOffset, newWidth, newHeight);
[yourImage drawInRect:newRect];
-- courtesy https://stackoverflow.com/a/1703210

Rotate an image (not UIImageView) by one degree each time a button is clicked

There are several posts that I've found, but none of them are useful for me.
I want to rotate an image (either clockwise or anti-clockwise, one degree at a time). I've done this with the following code, but when I assign the rotated image to the image view, the image becomes smaller after every click.
I've debugged and found that on every rotation (clockwise or anti-clockwise) the image size increases. I know the image size increases a little when an image is rotated, but here it grows much more than expected.
// code for image rotation
- (UIImage *)imageRotatedByDegrees:(UIImage *)oldImage deg:(CGFloat)degrees {
    // Calculate the size of the rotated view's containing box for our drawing space
    UIView *rotatedViewBox = [[UIView alloc] initWithFrame:CGRectMake(0, 0, oldImage.size.width, oldImage.size.height)];
    CGAffineTransform t = CGAffineTransformMakeRotation(degrees * M_PI / 180);
    rotatedViewBox.transform = t;
    CGSize rotatedSize = rotatedViewBox.frame.size;
    // Create the bitmap context
    UIGraphicsBeginImageContext(rotatedSize);
    CGContextRef bitmap = UIGraphicsGetCurrentContext();
    // Move the origin to the middle of the image so we will rotate and scale around the center.
    CGContextTranslateCTM(bitmap, rotatedSize.width / 2, rotatedSize.height / 2);
    // Rotate the image context
    CGContextRotateCTM(bitmap, (degrees * M_PI / 180));
    // Now, draw the rotated/scaled image into the context
    CGContextScaleCTM(bitmap, 1.0, -1.0);
    CGContextDrawImage(bitmap, CGRectMake(-oldImage.size.width / 2, -oldImage.size.height / 2, oldImage.size.width, oldImage.size.height), [oldImage CGImage]);
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
- (IBAction)btnRotateImageTapped:(UIButton *)sender {
    static NSInteger degree = 0;
    if (sender.tag == 471) {        // rotate left btn tag
        degree += 1;
    } else if (sender.tag == 472) { // rotate right btn tag
        degree += -1;
    }
    UIImage *img = [self imageRotatedByDegrees:self.imgViewTeethTemplate.image deg:degree];
    self.imgViewTeethTemplate.image = img;
}
I don't know whether this is the right way or not. If not, can anyone help me out of this? Any help is much appreciated.
I think it is because you are recalculating the containing size at each rotation, starting from an image that has already grown. You should copy your image once into a new one that is 1/cos(45°) larger (that is, sqrt(2) times larger), and then do the rotations of this new image as required. You do not need to worry about losing part of your original image, as it will be contained in the larger one whatever the rotation is.
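A related sketch of my own (not the answerer's exact suggestion, which pads the image once into a larger canvas): keep the untouched original around and always rotate it by the accumulated angle, so the output size is derived from the original each time rather than from an already-enlarged image. The originalTeethTemplate property is a hypothetical name for wherever you store the never-rotated source.
// Assumes self.originalTeethTemplate (hypothetical property) holds the never-rotated source image
// and imageRotatedByDegrees:deg: is the method shown above.
- (IBAction)btnRotateImageTapped:(UIButton *)sender {
    static NSInteger degree = 0;
    degree += (sender.tag == 471) ? 1 : -1;   // 471 = rotate left, 472 = rotate right
    // Rotate the pristine original by the total angle instead of re-rotating
    // the already-rotated (and already-grown) image view contents.
    self.imgViewTeethTemplate.image = [self imageRotatedByDegrees:self.originalTeethTemplate
                                                              deg:degree];
}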
