Efficiently take a snapshot of a large UIWebView?

I've tried the following code to take a screenshot of a UIWebView whose bounds are larger than the screen (around 1200 points tall, for example), but the screenshot always comes out as the top part of the web view, stretched vertically to cover the whole rect it's supposed to draw. Any ideas? I could do it using -renderInContext:, but that's a slower, older method. Here's my current code:
// 'view' is the UIWebView; 'scale' is the screen scale.
UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, scale);
[view drawViewHierarchyInRect:view.bounds afterScreenUpdates:YES];
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return img;
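
For reference, the -renderInContext: fallback mentioned above can capture the full content if the web view's frame is temporarily expanded to its scroll view's contentSize. A minimal sketch, assuming webView is the UIWebView in question (untested against every iOS version):

- (UIImage *)fullSnapshotOfWebView:(UIWebView *)webView {
    // Temporarily grow the web view to its full content size so that
    // offscreen content gets laid out and rendered.
    CGRect savedFrame = webView.frame;
    CGSize contentSize = webView.scrollView.contentSize;
    webView.frame = CGRectMake(0, 0, contentSize.width, contentSize.height);

    UIGraphicsBeginImageContextWithOptions(contentSize, webView.opaque, 0); // 0 = screen scale
    [webView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    webView.frame = savedFrame; // restore the original geometry
    return img;
}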

Related

How to get a screenshot of a CATiledLayer based TilingView

In my app I'm trying to take a screenshot of the current screen contents, where my TilingView (based on CATiledLayer) displays a number of large, transparent tiled images.
I also added some subviews to the TilingView, which are magically captured in the screenshot; however, the underlying contents of the TilingView are not captured!
The following code snippet takes a snapshot of the visible screen, which seems to work well for a view hierarchy NOT based on CATiledLayer, but unfortunately doesn't work for my setup. Even if I pass the topmost superview of the TilingView (the actual UIViewController.view), my snapshot shows only the status bar, navigation bar, the TilingView's subviews, and the tab bar, but NOT the TilingView's contents.
- (UIImage*)captureView:(UIView *)viewToCapture {
    // Size the context to the screen, then render the layer tree into it.
    CGRect rect = [[UIScreen mainScreen] bounds];
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [viewToCapture.layer renderInContext:context];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
Does anybody see what I'm missing here? Do I need to delve deeper into the Core Graphics display stack with some CG API calls unknown to me? Thanks in advance.
By searching some more on Stack Overflow, I found code that seems to do what I wanted. Basically, I needed to change the above method into:
- (UIImage*)captureView:(UIView*)viewToCapture {
    // drawViewHierarchyInRect:afterScreenUpdates: snapshots what is actually
    // on screen, which includes the CATiledLayer-backed contents.
    UIGraphicsBeginImageContextWithOptions(viewToCapture.bounds.size, NO, [UIScreen mainScreen].scale);
    [viewToCapture drawViewHierarchyInRect:viewToCapture.bounds afterScreenUpdates:YES];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
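For completeness, a call site would simply be (illustrative only):

// Captures the whole on-screen hierarchy, including the
// CATiledLayer-backed TilingView and its subviews.
UIImage *snapshot = [self captureView:self.view];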
Thanks go to the question/answer at: How to get a screenshot of a view containing GPUImageView?

UIView to UIImage with layer borders

I have a UIView whose layer has two sublayers, each of which has a 1.5-pixel border around the outside. I am trying to create a UIImage from this view with the following code:
UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0.0f);
[self drawViewHierarchyInRect:self.bounds afterScreenUpdates:NO];
UIImage * image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
The code does return a UIImage, but the image is clipped; that is, it doesn't include all of the borders on the sublayers. I've tried tweaking the sizes and bounds, but to no effect. Any suggestions for what else I might try?
Thanks!
What happens if you send the parent layer a drawInContext: message instead of telling the view to draw itself?
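
Another option, not suggested in the thread: if the sublayers' borders extend beyond the view's bounds, a context sized exactly to those bounds will clip them. A sketch that pads the context instead (the 2-point inset is an assumption, sized to cover the 1.5-pixel borders):

CGFloat inset = 2.0f; // assumed padding, enough to cover the borders
CGSize paddedSize = CGSizeMake(self.bounds.size.width + 2 * inset,
                               self.bounds.size.height + 2 * inset);
UIGraphicsBeginImageContextWithOptions(paddedSize, NO, 0.0f);
// Offset the drawing rect so content outside the bounds lands inside the context.
[self drawViewHierarchyInRect:CGRectMake(inset, inset, self.bounds.size.width, self.bounds.size.height)
           afterScreenUpdates:NO];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;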

iOS: renderInContext and Landscape orientation issue

I'm trying to save the currently shown views on my iOS device for a certain app, and this works properly. But I've got a problem as soon as I try to save a UIImageView in landscape orientation.
See the following image that describes my problem:
I'm using Auto Layout for this app, and it runs on both iPhone and iPad. It seems like the image view is always saved as it appears in portrait mode, and I'm a little bit stuck right now.
This is the code I use:
CGSize frameSize = self.view.frame.size;
if (UIInterfaceOrientationIsLandscape(self.interfaceOrientation)) {
    // Swap width and height when the interface is in landscape.
    frameSize = CGSizeMake(self.view.frame.size.height, self.view.frame.size.width);
}
UIGraphicsBeginImageContextWithOptions(frameSize, NO, 0.0);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGFloat scale = CGRectGetWidth(self.view.frame) / CGRectGetWidth(self.view.bounds);
CGContextScaleCTM(ctx, scale, scale);
[self.view.layer renderInContext:ctx];
[self.delegate photoSaved:UIGraphicsGetImageFromCurrentImageContext()];
UIGraphicsEndImageContext();
Looking forward to your help!
I still have no idea what your exact issue is, but using your screenshot code produces a slightly strange image (not rotated or anything, just too small). Can you try this code instead, please?
+ (UIImage *)imageFromView:(UIView *)view {
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, .0f);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage * img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
Other than that, you must understand there is a huge difference between UIImage and CGImage: the UIImage includes the orientation, while the CGImage does not. Image transformations are usually done on the CGImage, and getting its width or height discards the orientation. That means a CGImage will have flipped dimensions when its orientation is not up (UIImageOrientationUp). Usually, when dealing with such images, you create a CGImage from the context and then use [UIImage imageWithCGImage:ref scale:1.0f orientation:originalOrientation]. Only if you wish to explicitly rotate the image so it has no orientation (i.e., UIImageOrientationUp) do you need to rotate and translate the image and draw it onto the context.
Anyway, these orientation issues are mostly fixed by now: UIImagePNGRepresentation respects the orientation, and the UIImage constructor from a CGImage mentioned above is what used to be missing in the past, if I remember correctly.
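To illustrate that last point, rebuilding a UIImage from a CGImage while preserving the original orientation looks like this (sourceImage is an illustrative name):

// The CGImage carries no orientation, so pass the original image's
// scale and orientation back in explicitly.
CGImageRef cgImage = sourceImage.CGImage;
UIImage *rebuilt = [UIImage imageWithCGImage:cgImage
                                       scale:sourceImage.scale
                                 orientation:sourceImage.imageOrientation];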

iOS7's drawViewHierarchyInRect doesn't work?

From what I've read, iOS7's new drawViewHierarchyInRect is supposed to be faster than CALayer's renderInContext. And according to this and this, it should be a simple matter of calling:
[myView drawViewHierarchyInRect:myView.frame afterScreenUpdates:YES];
instead of
[myView.layer renderInContext:UIGraphicsGetCurrentContext()];
However, when I try this, I just get blank images. Here's the full code that does the capture, where "self" is a subclass of UIView:
// YES = opaque. Ignores alpha channel, so less memory is used.
// This method for some reason renders the
UIGraphicsBeginImageContextWithOptions(self.bounds.size, YES, self.window.screen.scale); // Still slow.
if ([AIMAppDelegate isOniOS7OrNewer])
    [self drawViewHierarchyInRect:self.frame afterScreenUpdates:YES]; // Doesn't work!
else
    [self.layer renderInContext:UIGraphicsGetCurrentContext()]; // Works!
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
contentImageView.image = image; // this is empty if done using iOS7's way
and contentImageView is a UIImageView that is added as a subview of self during initialization.
Additionally, the drawing that I want captured in the image is contained in other subviews that are also added to self during initialization (including contentImageView).
Any ideas why this is failing when using drawViewHierarchyInRect?
Update:
I get an image if I draw a specific sub-view, such as:
[contentImageView drawViewHierarchyInRect:contentImageView.frame afterScreenUpdates:YES];
or
[self.curvesView drawViewHierarchyInRect:self.curvesView.frame afterScreenUpdates:YES];
however I need all the visible sub-views combined into one image.
Try it with self.bounds rather than self.frame—it’s possible you’re getting an image of your view rendered outside the boundaries of the image context you’ve created.
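
Applied to the code above, the suggested fix is a one-line change (a sketch of the answer's suggestion, not the poster's confirmed solution):

// bounds has a (0,0) origin in the view's own coordinate space, so the
// hierarchy is drawn inside the image context rather than offset by the
// view's position in its superview.
UIGraphicsBeginImageContextWithOptions(self.bounds.size, YES, self.window.screen.scale);
[self drawViewHierarchyInRect:self.bounds afterScreenUpdates:YES];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();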

Merge two UIView into one UIImage

I've got two UIImageViews: the first one is lying on top of the other (e.g., as an overlay).
I now want to take a screenshot of the whole thing.
Note that before that step, I allow the user to change the overlay by panning, scaling, and rotating it, so I must keep track of his editing.
So, here's the homework:
rotating the context based on the view's transform rotation value
positioning at the origin where the user finished panning the overlay
calculating the size of the overlay view (it's always a rectangle, however!)
I'm going to merge them inside a piece of code like this:
UIGraphicsBeginImageContext...
...
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
but... what best fits in place of the "..."?
Example code is very welcome!
Thanks
// Draw both images into a context sized to the first image; the second
// (overlay) image is composited on top at the same origin.
UIGraphicsBeginImageContext(firstImage.size);
[firstImage drawAtPoint:CGPointMake(0, 0)];
[secondImage drawAtPoint:CGPointMake(0, 0)];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
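
Note that the snippet above ignores the pan/scale/rotate edits the question mentions. A sketch that also honors them, assuming the user's edits are accumulated in overlayView.transform (baseView and overlayView are illustrative names for the two views):

UIGraphicsBeginImageContextWithOptions(baseView.bounds.size, NO, 0);
CGContextRef ctx = UIGraphicsGetCurrentContext();
// Draw the base view first.
[baseView.layer renderInContext:ctx];
// renderInContext: ignores the layer's affine transform, so replay the
// user's edits on the context: move to the overlay's center, apply the
// transform, then step back by half the (untransformed) bounds.
CGContextSaveGState(ctx);
CGContextTranslateCTM(ctx, overlayView.center.x, overlayView.center.y);
CGContextConcatCTM(ctx, overlayView.transform);
CGContextTranslateCTM(ctx,
                      -overlayView.bounds.size.width / 2,
                      -overlayView.bounds.size.height / 2);
[overlayView.layer renderInContext:ctx];
CGContextRestoreGState(ctx);
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();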
