iOS 8 scale glitch when calling drawViewHierarchyInRect with afterScreenUpdates:YES

I was converting a project from iOS 7 to iOS 8. It uses custom transitions and needs to capture the modal after it finishes loading (afterScreenUpdates:YES), and I was seeing the entire screen scale up for a second and then scale back down. I also see this happening in the Flickr app for iOS when switching between sections, and in the Yelp app when transitioning to a photo on iOS 8.
UIGraphicsBeginImageContextWithOptions(self.view.frame.size, YES, 22.0);
[self.view drawViewHierarchyInRect:self.view.frame afterScreenUpdates:YES];
UIGraphicsEndImageContext();
Adding a larger scale factor helps emphasize the glitch more... but I'm just calling this on a button press in the example.
EDIT: This appears to happen on the iPhone 6 and 6 Plus, but not on the 5.
Sample project on GitHub

Do you NEED it to draw after the screen updates? Because I'm using:
[view drawViewHierarchyInRect:view.bounds afterScreenUpdates:NO];
and it seems to work fine on iOS 7 and iOS 8. I imagine this isn't a great solution for capturing images regularly (multiple times a second), but it seems to work for a once-off blur.
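For completeness, here is a minimal sketch of that once-off capture, assuming it lives in a view controller and you just want a bitmap to feed into your blur (the method name is a placeholder):
- (UIImage *)snapshotForBlur
{
    // Capture without waiting for pending screen updates; this avoids the transient scale glitch
    UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, [UIScreen mainScreen].scale);
    [self.view drawViewHierarchyInRect:self.view.bounds afterScreenUpdates:NO];
    UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return snapshot;
}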

You have to provide the proper launch images for the iPhone 6 and 6 Plus: 750x1334 (@2x) for the 6 and 1242x2208 (@3x) for the 6 Plus. Without them, the app runs in scaled compatibility mode.

Even though it looks like a bug in the API, you can call drawViewHierarchyInRect: with afterScreenUpdates: set to NO and still build the snapshot AFTER the screen updates, if you use a construction like this:
typedef void (^CompletionHandlerWithId)(id result);

- (void)imageContaining:(CGRect)rect afterScreenUpdates:(BOOL)afterScreenUpdates opaque:(BOOL)opaque completion:(CompletionHandlerWithId)completion
{
    __block BOOL success = NO;
    __block UIImage *snapshotImage = nil;
    void (^block)(void) = ^{
        // Create the image
        UIGraphicsBeginImageContextWithOptions(self.bounds.size, opaque, [[UIScreen mainScreen] scale]);
        success = [self drawViewHierarchyInRect:self.bounds afterScreenUpdates:NO];
        if (success)
        {
            snapshotImage = UIGraphicsGetImageFromCurrentImageContext();
            // Crop to the requested rect; CGImage coordinates are in pixels, hence the scale factor
            CGImageRef imageRef = CGImageCreateWithImageInRect(
                [snapshotImage CGImage],
                CGRectMake(snapshotImage.scale * rect.origin.x,
                           snapshotImage.scale * rect.origin.y,
                           snapshotImage.scale * rect.size.width,
                           snapshotImage.scale * rect.size.height));
            // or use the UIImage wherever you like
            snapshotImage = [UIImage imageWithCGImage:imageRef scale:snapshotImage.scale orientation:UIImageOrientationUp];
            CGImageRelease(imageRef);
        }
        UIGraphicsEndImageContext();
        if (completion)
        {
            if (!success)
            {
                NSLog(@"Error: [UIView drawViewHierarchyInRect] failed on %@", self);
                completion(nil);
            }
            else
            {
                NSLog(@"Success: [UIView drawViewHierarchyInRect] on %@", self);
                completion(snapshotImage);
            }
        }
    };
    if (afterScreenUpdates)
    {
        // Defer the snapshot until the current CATransaction (and its pending screen update) completes
        [CATransaction setCompletionBlock:^{
            block();
        }];
    }
    else
    {
        block();
    }
}
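Since the method calls drawViewHierarchyInRect: on self, it reads like a UIView category; a call site would look roughly like this (the view and what you do with the snapshot are placeholders):
[someView imageContaining:someView.bounds
       afterScreenUpdates:YES
                   opaque:YES
               completion:^(id result) {
    UIImage *snapshot = result;
    if (snapshot) {
        // e.g. feed the snapshot into your blur or transition code
    }
}];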

This bug also exists when you run on an iPad 2 running iOS 7.
Fix: set afterScreenUpdates: to NO.
My app has some moving UIButtons, so I don't allow the blur transition until after the movement has stopped. As far as I have found so far, there is no difference between YES and NO.

Appears to be fixed in iOS 9 / Xcode 7 builds.

I found a solution that works for me.
I added @3x launch images to my project and set the Launch Screen File to "Main".
This makes the app run at the original resolution (smaller bounds on the iPhone 6), but without the glitch when calling drawViewHierarchyInRect.
Then I scale my view to full screen in viewDidLoad:
- (void)viewDidLoad
{
    [super viewDidLoad];
    UIScreen *mainScreen = [UIScreen mainScreen];
    CGRect tempFrame = mainScreen.bounds;
    // 320 is the base (unscaled) layout width; scale the view up to fill the real screen
    CGFloat aspect = tempFrame.size.width / 320.0;
    self.view.transform = CGAffineTransformScale(CGAffineTransformIdentity, aspect, aspect);
}
Hope this helps :)

Related

UIPickerView gets darker on screenshot

I had to alter the navigation under certain circumstances, and due to the complexity of the transitions I take a screenshot and display it until the transition is finished. In most cases that works pretty well, but there is one point that bothers me. I have a view controller with two picker views:
But the screenshot is not working well on this VC. I get this:
The code that takes the screenshot is the following in both cases:
- (UIImage *)takeScreenshot {
    CALayer *layer = [[UIApplication sharedApplication] keyWindow].layer;
    UIGraphicsBeginImageContext(layer.frame.size);
    [layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext(); // end the context before returning
    return screenshot;
}
Does anyone know how this could happen?
You could try a different method for the screenshot. Apple introduced some methods for fast view snapshotting in iOS 7.
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 0);
[self.view drawViewHierarchyInRect:self.view.bounds afterScreenUpdates:YES];
UIImage *im = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Here is an answer from Apple that provides more info on how the two methods work. While that user encountered some problems and was advised to use the old way of snapshotting the view, I never had any problem with it. Maybe they have fixed it since then.
UIGraphicsBeginImageContext(self.window.bounds.size);
[self.window.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData *data = UIImagePNGRepresentation(image);
// writeToFile: needs an absolute path; the temporary directory is used here as an example
[data writeToFile:[NSTemporaryDirectory() stringByAppendingPathComponent:@"image.png"] atomically:YES];
If you have a retina display, replace the first line with the code below:
if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)])
    UIGraphicsBeginImageContextWithOptions(self.window.bounds.size, NO, [UIScreen mainScreen].scale);
else
    UIGraphicsBeginImageContext(self.window.bounds.size);
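If you prefer, the retina check and the snapshot can be folded into one helper; this is just a sketch of the same approach, not a different technique:
- (UIImage *)windowSnapshot
{
    UIWindow *window = [[UIApplication sharedApplication] keyWindow];
    // Passing 0.0 as the scale means "use the device's main screen scale"
    UIGraphicsBeginImageContextWithOptions(window.bounds.size, NO, 0.0);
    [window.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}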

VM: CG raster data memory keeps growing

I am trying to make an app that lets the user change the color of a UIImage. For that, I am using this method that I found:
- (UIImage *)imageWithTintColor:(UIColor *)color fraction:(CGFloat)fraction
{
if (color)
{
UIImage *image;
if ([UIScreen instancesRespondToSelector:@selector(scale)])
{
UIGraphicsBeginImageContextWithOptions([self size], NO, 0.f);
}
else
{
UIGraphicsBeginImageContext([self size]);
}
CGRect rect = CGRectZero;
rect.size = [self size];
[color set];
UIRectFill(rect);
[self drawInRect:rect blendMode:kCGBlendModeDestinationIn alpha:1.0];
if (fraction > 0.0)
{
[self drawInRect:rect blendMode:kCGBlendModeSourceAtop alpha:fraction];
}
image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
}
return self;
}
Everything works, but the CG raster data keeps growing in memory.
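For context, the method reads like a UIImage category, so calling it would look something like this (the asset name and color are placeholders):
UIImage *original = [UIImage imageNamed:@"icon"]; // placeholder asset name
// fraction 0.0 keeps only the tint; a higher fraction blends the original colors back in
UIImage *tinted = [original imageWithTintColor:[UIColor redColor] fraction:0.0];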
I found the problem, and it was my own bad logic. I am using two views: one to show, and one to work with (resize, move, rotate). Each time, I was adding a subview to both, while one of them only needs to hold a single subview at a time. A simple:
for (UIView *view in secondView.subviews) // secondView is the working view described above
{
    [view removeFromSuperview];
}
did the trick for me.
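If you prefer, the same cleanup can be done in one call:
// Equivalent one-liner: remove every subview of the working view
[secondView.subviews makeObjectsPerformSelector:@selector(removeFromSuperview)];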
I had been fighting with my app, which suddenly would not launch properly, for some time. It turned out that switching a number of images' Render As to Template Image in the asset catalog caused the app to bomb out completely. CG raster data was growing exponentially until the app stopped and Xcode just said:
Lost connection with iPhone... check connections etc.
It appears that on every launch the images get reprocessed for the 'Template' setting, which consumed an enormous amount of RAM and left the app unable to boot. To solve this, I lowered the resolution of the images, as simple as that.

iOS: renderInContext and Landscape orientation issue

I'm trying to save the currently shown views on my iOS device for a certain app, and this works properly. But I've got a problem as soon as I try to save a UIImageView in landscape orientation.
See the following image that describes my problem:
I'm using Auto Layout for this app, and it runs on both iPhone and iPad. It seems like the image view is always saved as if it were in portrait mode, and I'm a bit stuck right now.
This is the code I use:
CGSize frameSize = self.view.frame.size;
if (UIInterfaceOrientationIsLandscape(self.interfaceOrientation)) {
frameSize = CGSizeMake(self.view.frame.size.height, self.view.frame.size.width);
}
UIGraphicsBeginImageContextWithOptions(frameSize, NO, 0.0);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGFloat scale = CGRectGetWidth(self.view.frame) / CGRectGetWidth(self.view.bounds);
CGContextScaleCTM(ctx, scale, scale);
[self.view.layer renderInContext:ctx];
[self.delegate photoSaved:UIGraphicsGetImageFromCurrentImageContext()];
UIGraphicsEndImageContext();
Looking forward to your help!
I still have no idea what your exact issue is, but using your screenshot code produces a slightly strange image (not rotated or anything, just too small). Can you try this code instead, please?
+ (UIImage *)imageFromView:(UIView *)view {
UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, .0f);
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage * img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return img;
}
Other than that, you must understand there is a big difference between UIImage and CGImage: the UIImage includes the orientation while the CGImage does not. Image transformations usually work on the CGImage, and getting its width or height discards the orientation. That means a CGImage will have flipped dimensions when its orientation is not up (UIImageOrientationUp). But usually, when dealing with such images, you create a CGImage from the context and then use [UIImage imageWithCGImage:ref scale:1.0f orientation:originalOrientation]. Only if you wish to explicitly rotate the image so that it has no orientation (i.e. UIImageOrientationUp) do you need to rotate and translate the image and draw it onto the context.
Anyway, these orientation issues are mostly fixed by now: UIImagePNGRepresentation respects the orientation, and the UIImage constructor from a CGImage shown above is what used to be missing in the past, if I remember correctly.
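As a sketch of what the paragraph above describes (sourceImage and cropRect are placeholders), preserving the orientation when rebuilding a UIImage from a CGImage looks like this:
// Crop via Core Graphics (note: the rect is in pixels, not points),
// then rebuild the UIImage with the original scale and orientation
CGImageRef croppedRef = CGImageCreateWithImageInRect(sourceImage.CGImage, cropRect);
UIImage *cropped = [UIImage imageWithCGImage:croppedRef
                                       scale:sourceImage.scale
                                 orientation:sourceImage.imageOrientation];
CGImageRelease(croppedRef);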

Screenshot of iPad detail view orientation wrong

I'm trying to capture a screenshot of the detail view in a landscape master/detail layout on iPad.
This is the code I've tried using.
UIWindow *keyWindow = [[UIApplication sharedApplication] keyWindow];
CGRect rect = [self.view bounds];
UIGraphicsBeginImageContextWithOptions(rect.size, YES, 0.0f);
CGContextRef context = UIGraphicsGetCurrentContext();
[keyWindow.layer renderInContext:context];
UIImage *capturedScreen = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Two problems occur with this.
- The screen capture orientation is incorrect: I get an image that is on its side.
- The width (703) and height (768) are reversed by the screen capture, so I end up with some of the master view in the detail screenshot.
What am I doing wrong here? Thanks!
Try this:
- (UIImage *)captureScreenForRect:(CGRect)frame
{
    CALayer *layer = self.view.layer;
    UIGraphicsBeginImageContext(self.view.bounds.size);
    // Clip to the rect you pass in (in self.view's coordinate space)
    CGContextClipToRect(UIGraphicsGetCurrentContext(), frame);
    [layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *screenImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return screenImage;
}
Pass your detail view's frame rect to the method above. Hope this helps.
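A call site might look like this (self.detailView is a placeholder for whatever view you want clipped; the rect is expected in self.view's coordinate space):
UIImage *detailShot = [self captureScreenForRect:self.detailView.frame];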
The "official" screenshot method is here:
(https://developer.apple.com/library/ios/qa/qa1703/_index.html)
If you need this for a transition or for a graphical effect and you're using iOS 7, I suggest you don't actually create an image.
An image can be heavy to generate (for example on a 3rd-generation Retina iPad) and heavy on memory.
Starting with iOS 7, Apple gives you much quicker snapshot methods on UIView (which, by the way, are also how custom view controller transitions, blur effects, etc. are implemented) that are much faster than creating an actual image.
On a UIView you can perform:
- (UIView *)snapshotViewAfterScreenUpdates:(BOOL)afterUpdates
Or if you need the full view hierarchy for a blurred view:
- (BOOL)drawViewHierarchyInRect:(CGRect)rect afterScreenUpdates:(BOOL)afterUpdates
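For example, for a transition you would typically do something like this (containerView is a placeholder for your transition container):
// Lightweight snapshot view; much cheaper than rendering a UIImage
UIView *snapshot = [self.view snapshotViewAfterScreenUpdates:NO];
[containerView addSubview:snapshot];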

CALayer renderInContext: causing unknown crash

The entire block of code consists of the following:
CGSize layerSize = [webview sizeThatFits:CGSizeZero];
if ([UIScreen instancesRespondToSelector:@selector(scale)] && [[UIScreen mainScreen] scale] == 2.0f) {
UIGraphicsBeginImageContextWithOptions(layerSize, NO, 2.0f);
}
else {
UIGraphicsBeginImageContext(layerSize);
}
[webview.layer renderInContext:UIGraphicsGetCurrentContext()];
screenshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
but after testing, this line is the one causing the problem:
[webview.layer renderInContext:UIGraphicsGetCurrentContext()];
The app crashes with no reason listed in the console, and using @try/@catch/@finally turns up nothing. I imported QuartzCore in AppDelegate.h, if that has anything to do with it. The app works fine in the simulator, but crashes when run on a real device.
@Greg: This seems like a memory overflow issue on the device, since the device is memory-constrained while the simulator runs with a different memory configuration. I am running into the same thing; it can happen with long web pages. Any ideas how to solve it?
Does anyone know what maximum width and height [CALayer renderInContext:] can handle on an actual device (iPhone retina or non-retina) before it crashes?
Try:
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 0.0f);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
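If the crash really is memory pressure from a very tall web page, one workaround (an assumption on my part, not a confirmed fix) is to render the layer at a reduced scale so the backing bitmap stays small:
CGSize layerSize = [webview sizeThatFits:CGSizeZero];
CGFloat shrink = 0.5f; // render at half resolution; tune this for quality vs. memory
UIGraphicsBeginImageContextWithOptions(CGSizeMake(layerSize.width * shrink, layerSize.height * shrink), NO, 1.0f);
CGContextScaleCTM(UIGraphicsGetCurrentContext(), shrink, shrink);
[webview.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();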
