Optimizing Image Drawing for iPad 3 (iOS)

I am trying to find the most efficient way to draw images on the iPad 3. I am generating a reflection for a third-party CoverFlow-style control that I am implementing in my app. The reflection is created on an NSOperationQueue and then added via UIImageView on the main thread. Because the CoverFlow part is already using resources for the animations as you scroll through the images, each newly added image causes a visible "pop" in the scrolling, which makes the app feel laggy and glitchy. On the iPad 1 and 2 the animation is perfectly smooth and looks great.
How can I further optimize the drawing to avoid this? Any ideas are appreciated. I have been looking into "tiling" the reflection so that it presents a little of the reflection at a time, but I'm not sure what the best approach is.
Here is the drawing code:
UIImage *mask = [UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"3.0-Carousel-Ref-Mask.jpg" ofType:nil]];
UIImage *image = [UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:self.name ofType:nil]];

// Draw the image vertically flipped so the reflection reads top-to-bottom.
UIGraphicsBeginImageContextWithOptions(mask.size, NO, [[UIScreen mainScreen] scale]);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(ctx, 0.0, mask.size.height);
CGContextScaleCTM(ctx, 1.f, -1.f);
[image drawInRect:CGRectMake(0.f, -mask.size.height, image.size.width, image.size.height)];
UIImage *flippedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// Build a mask from the gradient image and apply it to the flipped copy.
CGImageRef maskRef = mask.CGImage;
CGImageRef maskCreate = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                          CGImageGetHeight(maskRef),
                                          CGImageGetBitsPerComponent(maskRef),
                                          CGImageGetBitsPerPixel(maskRef),
                                          CGImageGetBytesPerRow(maskRef),
                                          CGImageGetDataProvider(maskRef), NULL, false);
CGImageRef masked = CGImageCreateWithMask([flippedImage CGImage], maskCreate);
CGImageRelease(maskCreate);
UIImage *maskedImage = [UIImage imageWithCGImage:masked scale:[[UIScreen mainScreen] scale] orientation:UIImageOrientationUp];
CGImageRelease(masked);

// Hand the finished reflection back to the main thread.
if (maskedImage) {
    [mainView performSelectorOnMainThread:@selector(imageDidLoad:)
                               withObject:[NSArray arrayWithObjects:maskedImage, endView, nil]
                            waitUntilDone:YES];
} else {
    NSLog(@"Unable to find sample image: %@", self.name);
}
The mask is just a gradient PNG that I am using to mask the image. Also, if I just draw this offscreen but never add it, there is hardly any lag; the lag comes from actually adding the image view on the main thread.

So, after spending a great deal of time researching this issue and trying different approaches (and a good while with the Time Profiler in Instruments), I found that the lag came from the image being decoded on the main thread at the moment it was displayed. By decoding on the background thread with Core Graphics calls, I was able to cut the time in half. That still wasn't good enough.
I further found that the reflection created by my code was taking a long time to display because of its transparent (alpha) pixels. I therefore drew it into a context filled with solid black, and made the view itself transparent instead of the image. This reduced the time spent on the main thread by 83%. Mission accomplished.
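For reference, a minimal sketch of that final approach. The helper name, the GCD queue, and the completion block are my own assumptions, not from the original code; the points taken from the answer are the opaque context, the solid black fill, and forcing the decode off the main thread:
// Hypothetical helper: force-decodes `image` into an opaque, black-backed
// bitmap on a background queue so the main thread never has to decode it.
- (void)decodeReflection:(UIImage *)image completion:(void (^)(UIImage *decoded))completion
{
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // Opaque context (YES): the reflection carries no alpha of its own,
        // so compositing it later is a straight copy instead of a blend.
        UIGraphicsBeginImageContextWithOptions(image.size, YES, image.scale);
        CGContextRef ctx = UIGraphicsGetCurrentContext();
        // Fill with solid black; masked-out areas become black rather than
        // transparent, and the *view's* alpha supplies the fade instead.
        CGContextSetFillColorWithColor(ctx, [UIColor blackColor].CGColor);
        CGContextFillRect(ctx, CGRectMake(0, 0, image.size.width, image.size.height));
        // Drawing forces the compressed image data to decode right here,
        // on this background thread, not later on the main thread.
        [image drawAtPoint:CGPointZero];
        UIImage *decoded = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        dispatch_async(dispatch_get_main_queue(), ^{
            completion(decoded);
        });
    });
}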

Related

Fastest way to take a screenshot of a UIView

I've searched a lot but have found only two methods for taking a screenshot of a UIView.
First, renderInContext:, which I've used this way:
// createBitmapContextOfSize: is a custom helper that returns a CGBitmapContext.
CGContextRef context = [self createBitmapContextOfSize:CGSizeMake(nImageWidth, nImageHeight)];
// Flip the context vertically so the layer renders right side up.
CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, nImageHeight);
CGContextConcatCTM(context, flipVertical);
[self.layer setBackgroundColor:[UIColor clearColor].CGColor];
[self.layer renderInContext:context];
CGImageRef cgImage = CGBitmapContextCreateImage(context);
UIImage *background = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
Second, drawViewHierarchyInRect:afterScreenUpdates:, which I've used as:
UIImage *background = nil;
UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, self.window.screen.scale);
if ([self respondsToSelector:@selector(drawViewHierarchyInRect:afterScreenUpdates:)])
{
    [self drawViewHierarchyInRect:self.bounds afterScreenUpdates:YES];
}
background = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
I know that the second one is faster than the first, and it works for me on the iPhone because the view is small. But when capturing on the iPad, the video becomes jerky.
Can anybody tell me a faster way of taking a screenshot?
Any help would be highly appreciated.
Regarding performance, the Apple Docs state the following:
In addition to -drawViewHierarchyInRect:afterScreenUpdates:, UIView now provides another two snapshot-related methods, -snapshotViewAfterScreenUpdates: and -resizableSnapshotViewFromRect:afterScreenUpdates:withCapInsets:. UIScreen also has -snapshotViewAfterScreenUpdates:. Unlike UIView's -drawViewHierarchyInRect:afterScreenUpdates:, these methods return a UIView object. If you are looking for a new snapshot view, use one of these methods. It will be more efficient than calling -drawViewHierarchyInRect:afterScreenUpdates: to render the view contents into a bitmap image yourself. You can use the returned view as a visual stand-in for the current view/screen in your app. For example, you might use a snapshot view for animations where updating a large view hierarchy might be expensive.
There is a third method for taking a snapshot that is much quicker than either of these, but it returns a UIView:
- (UIView *)snapshotViewAfterScreenUpdates:(BOOL)afterUpdates
If you are just using the snapshot as a background "image", etc., then I'd use this instead.
However, this is only available from iOS 7 onward.
To use it just do...
UIView *snapshotView = [someView snapshotViewAfterScreenUpdates:YES];
This method will return a snapshot image of a particular view:
- (UIImage *)createSnapShotImageFromUIView:(UIView *)view
{
    UIGraphicsBeginImageContext(view.bounds.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Render the view's layer tree into the current bitmap context.
    [view.layer renderInContext:context];
    UIImage *img_screenShot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img_screenShot;
}

iOS: Adding an outline/stroke to an image context with a transparent background

The images that go through here are PNGs of different shapes with transparent backgrounds. In addition to merging them (which works fine), I'd like to give the new image an outline a couple of pixels thick, but I can't seem to manage that.
(So, just to clarify, I'm after an outline around the actual shapes in the context, not a rectangle around the entire image.)
+ (UIImage *)mergeBackgroundImage:(UIImage *)backgroundImage withOverlayingImage:(UIImage *)overlayImage
{
    UIGraphicsBeginImageContextWithOptions(backgroundImage.size, NO, backgroundImage.scale);
    [backgroundImage drawInRect:CGRectMake(0, 0, backgroundImage.size.width, backgroundImage.size.height)];
    [overlayImage drawInRect:CGRectMake(backgroundImage.size.width - overlayImage.size.width,
                                        backgroundImage.size.height - overlayImage.size.height,
                                        overlayImage.size.width,
                                        overlayImage.size.height)];
    // Add stroke here.
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
Thanks for your time!
Markus
If you make a CALayer whose contents are set to a CGImage of your image, you can then use it as a masking layer for the layer that requires an outline. Once you've done that, you can render that layer into another context and then get another UIImage from it.
// edit: Something like what's described in this answer.
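For a concrete starting point, here is a sketch of a different, simpler technique than the layer masking described above (the helper name and the whole approach are my own assumptions, not from the answer): tint a copy of the shape in the outline color, stamp it at small offsets around the original, then draw the original on top.
// Hypothetical helper: outlines the opaque pixels of `image` by stamping a
// tinted copy at eight offsets around it. Assumes thickness > 0.
+ (UIImage *)imageByOutliningImage:(UIImage *)image
                             color:(UIColor *)color
                         thickness:(CGFloat)thickness
{
    CGRect rect = CGRectMake(0, 0, image.size.width, image.size.height);

    // 1. Make a solid-color version of the shape: draw the image, then
    //    flood-fill with kCGBlendModeSourceIn so only the pixels that are
    //    already opaque take on the outline color.
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawInRect:rect];
    [color setFill];
    UIRectFillUsingBlendMode(rect, kCGBlendModeSourceIn);
    UIImage *tinted = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // 2. Stamp the tinted shape around the original; the fringes that stick
    //    out beyond the original silhouette form the outline.
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    for (CGFloat dx = -thickness; dx <= thickness; dx += thickness) {
        for (CGFloat dy = -thickness; dy <= thickness; dy += thickness) {
            if (dx != 0.0f || dy != 0.0f) {
                [tinted drawInRect:CGRectOffset(rect, dx, dy)];
            }
        }
    }
    [image drawInRect:rect];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
Note that the canvas here is the same size as the source image, so an outline that touches the image's edge will be clipped; growing the canvas by the thickness on each side avoids that.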

Core Graphics and OpenGL Drawing

I have a drawing app where I'm using OpenGL paint code to draw the strokes, but I want to transfer each stroke to another image after it is complete and then clear the OpenGL view; for that, I'm using Core Graphics. I'm running into a problem, however, where the OpenGL view is cleared before the image has been transferred via Core Graphics (even though I clear it afterwards).
(I want it the other way around, i.e. the image drawn first and then the painting image erased, to avoid any kind of flickering.)
(paintingView is the OpenGL view.)
Here is the code:
// Save the previous line drawn to the "main image".
UIImage *paintingViewImage = [_paintingView snapshot];
UIGraphicsBeginImageContext(self.mainImage.frame.size);
[self.mainImage.image drawInRect:CGRectMake(0, 0, self.mainImage.frame.size.width, self.mainImage.frame.size.height) blendMode:kCGBlendModeNormal alpha:1.0];
// Composite the image from the painting view on top.
[paintingViewImage drawInRect:CGRectMake(0, 0, self.mainImage.frame.size.width, self.mainImage.frame.size.height) blendMode:kCGBlendModeNormal alpha:1.0];
self.mainImage.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[self.paintingView erase];
So the paintingView is being erased before mainImage.image is set from the current image context.
I'm only a beginner with these APIs, so any thoughts are helpful.
Thanks
You're probably better off using FBOs (OpenGL framebuffer objects). You draw into one FBO, then switch drawing to a new FBO while you save off the previous one. You can ping-pong back and forth between the two FBOs. Here are the docs for using FBOs on iOS.
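A minimal sketch of that ping-pong setup in OpenGL ES 2.0 (all names here, plus the width/height variables, are assumptions rather than anything from the question's code):
// Two texture-backed FBOs; `width` and `height` are assumed to be the
// drawable size in pixels.
GLuint fbo[2], tex[2];
int current = 0;

glGenFramebuffers(2, fbo);
glGenTextures(2, tex);
for (int i = 0; i < 2; i++) {
    glBindTexture(GL_TEXTURE_2D, tex[i]);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo[i]);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex[i], 0);
    NSCAssert(glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE,
              @"FBO %d incomplete", i);
}

// When a stroke completes: switch painting to the other FBO, so the
// finished one can be read back (e.g. via glReadPixels) and handed to
// Core Graphics at leisure, and only erased once that transfer is done.
int finished = current;
current = 1 - current;
glBindFramebuffer(GL_FRAMEBUFFER, fbo[current]);
// ... keep painting into fbo[current];
// ... read pixels out of fbo[finished], then clear it when safe.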

convert jpg UIImage to bitmap UIImage?

I'm trying to let a user pan/zoom through a static image, with a selection rectangle on the main image and a separate UIView showing the "magnified" image.
The "magnified" UIView implements drawRect: as follows:
// Rotate selectionRect if the image isn't portrait internally.
CGRect tmpRect = selectionRect;
if (image.imageOrientation == UIImageOrientationLeft ||
    image.imageOrientation == UIImageOrientationLeftMirrored ||
    image.imageOrientation == UIImageOrientationRight ||
    image.imageOrientation == UIImageOrientationRightMirrored) {
    tmpRect = CGRectMake(selectionRect.origin.y,
                         image.size.width - selectionRect.origin.x - selectionRect.size.width,
                         selectionRect.size.height,
                         selectionRect.size.width);
}
// Crop and draw.
CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], tmpRect);
[[UIImage imageWithCGImage:imageRef scale:image.scale orientation:image.imageOrientation] drawInRect:rect];
CGImageRelease(imageRef);
The performance of this is atrocious: it spends 92% of its time in [UIImage drawInRect:].
Digging deeper, that breaks down as 84.5% in ripc_AcquireImage and 7.5% in ripc_RenderImage, and ripc_AcquireImage is 51% decoding the JPEG and 30% upsampling.
So... I guess my question is: what's the best way to avoid this? One option is to not start from a JPEG at all, and that is a real solution for some cases [à la captureStillImageAsynchronouslyFromConnection without a JPEG intermediary]. But if I'm getting a UIImage off the camera roll, say, is there a clean way to convert the JPEG-backed UIImage? Is converting it even the right thing to do? (Is there a "cacheAsBitmap" flag somewhere that would essentially do that?)
Apparently this isn't just a JPEG issue; I think it affects anything that isn't backed by the "ideal native representation".
I'm having good success (significant performance improvements) with just the following:
UIGraphicsBeginImageContext(image.size);
[image drawAtPoint:CGPointZero];
image = [UIGraphicsGetImageFromCurrentImageContext() retain]; // MRC; drop the -retain under ARC
UIGraphicsEndImageContext();
This works both for images taken from the camera roll and for those straight from the camera.
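One caveat worth noting: plain UIGraphicsBeginImageContext creates a 1x context, so on a Retina device this redraw quietly halves the resolution. A variant of the same trick that preserves scale (assuming image is the JPEG-backed UIImage from above) might look like:
UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale); // keep Retina scale
[image drawAtPoint:CGPointZero]; // forces the decode, exactly as above
image = [UIGraphicsGetImageFromCurrentImageContext() retain]; // MRC, as above
UIGraphicsEndImageContext();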

How to get a CGImageRef from Context-Drawn Images?

OK, using Core Graphics, I'm building up an image that will later be used in a CGContextClipToMask operation. It looks something like the following:
UIImage *eyes = [UIImage imageNamed:@"eyes"];
UIImage *mouth = [UIImage imageNamed:@"mouth"];
CGRect bounds = CGRectMake(0, 0, 150, 150);
UIGraphicsBeginImageContext(bounds.size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetRGBFillColor(context, 0, 0, 0, 1);
CGContextFillRect(context, bounds);
[eyes drawInRect:bounds blendMode:kCGBlendModeMultiply alpha:1];
[mouth drawInRect:bounds blendMode:kCGBlendModeMultiply alpha:1];
// How can I now get a CGImageRef here to use in a masking operation?
UIGraphicsEndImageContext();
Now, as you can see from the comment, I'm wondering how I'm actually going to USE the image I've built up. The reason I'm using Core Graphics here, and not just building up a UIImage, is that the transparency I'm creating is very important. If I just grab a UIImage from the context, when it's used as a mask it will just apply to everything... Further to the point, will I have any problems using a partially transparent mask with this method?
// Call this *before* UIGraphicsEndImageContext, while the context is still current:
CGImageRef result = CGBitmapContextCreateImage(UIGraphicsGetCurrentContext());
You can call the UIGraphicsGetImageFromCurrentImageContext function, which will return a UIImage object. You can hold onto and use the UIImage, or ask it for its CGImage.
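Putting the two answers together, a sketch of the whole flow (the destination context destCtx and its rect are assumptions of mine, not from the question):
// Build the mask image, grabbing the CGImage while the context is current.
UIGraphicsBeginImageContext(CGSizeMake(150, 150));
// ... the fill and drawInRect: calls from the question go here ...
CGImageRef maskImage = CGBitmapContextCreateImage(UIGraphicsGetCurrentContext());
UIGraphicsEndImageContext();

// Later, clip some destination context with it. Note that when the mask is
// a plain image (not a CGImageMask), CGContextClipToMask expects grayscale
// data without alpha, so an RGBA bitmap may need converting first.
CGContextSaveGState(destCtx);
CGContextClipToMask(destCtx, CGRectMake(0, 0, 150, 150), maskImage);
// ... draw the content to be masked ...
CGContextRestoreGState(destCtx);
CGImageRelease(maskImage); // CGBitmapContextCreateImage follows the Create rule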
