Capturing CATiledLayer content to another layer or image context - ios

I'm trying to screen-capture a view that uses CATiledLayers (for animation) but am unable to get the image that I want.
I tried it on Apple's "PhotoScroller" sample application and added this:
UIGraphicsBeginImageContext(self.view.frame.size);
CGContextRef ctx = UIGraphicsGetCurrentContext();
[self.view.layer renderInContext:ctx];
UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
However, the tiles don't render in the resulting UIImage and all I get is the tile outlines.
It seems that CATiledLayer's renderInContext: behaves differently from CALayer's.
Am I doing anything wrong in trying to capture the tiles? Is my only solution to render the tiles individually myself?

In the end, rather than trying to render the tiles into another view just for animation, I simply created a new instance of ImageScrollView, animated the original and the new one together, and then deallocated the original.
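A minimal sketch of that workaround, assuming ImageScrollView is the class from Apple's PhotoScroller sample (its displayTiledImageNamed:size: setup method) and that both views share the same superview; the variable names here are illustrative:

```objc
// Hypothetical sketch: fade in a freshly created ImageScrollView in place
// of the original, then remove the original when the animation finishes.
ImageScrollView *replacement = [[ImageScrollView alloc] initWithFrame:original.frame];
[replacement displayTiledImageNamed:imageName size:imageSize]; // same setup as the original
replacement.alpha = 0.0;
[self.view addSubview:replacement];

[UIView animateWithDuration:0.3
                 animations:^{
                     replacement.alpha = 1.0;
                     original.alpha = 0.0;
                 }
                 completion:^(BOOL finished) {
                     [original removeFromSuperview];
                 }];
```

Because the replacement is a real CATiledLayer-backed view, the tiles render themselves normally during the animation; no renderInContext: call is needed at all.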

Related

Modify original UIImage in UIGraphicsContext

I've seen a lot of examples where one gets a new UIImage by applying modifications to an input UIImage. It looks like:
- (UIImage *)imageByDrawingCircleOnImage:(UIImage *)image
{
    // begin a graphics context of sufficient size
    UIGraphicsBeginImageContext(image.size);
    // draw original image into the context
    [image drawAtPoint:CGPointZero];
    // get the context for CoreGraphics
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    // draw the circle (or anything else) on top
    CGContextSetStrokeColorWithColor(ctx, [UIColor redColor].CGColor);
    CGContextStrokeEllipseInRect(ctx, CGRectMake(0, 0, 100, 100));
    // grab the modified image and clean up
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
I have a similar problem, but I really want to modify the input image. I suppose it would work faster since I wouldn't have to draw the original image every time. But I could not find any samples of this. How can I get an image context for the original image, where it's already drawn?
UIImage is immutable for numerous reasons (most of them around performance and memory). You must make a copy if you want to mess with it.
If you want a mutable image, just draw it into a context and keep using that context. You can create your own context using CGBitmapContextCreate.
That said, don't second-guess the system too much here. UIImage and Core Graphics have a lot of optimizations in them and there's a reason you see so many examples that copy the image. Don't "suppose it would work faster." You really have to profile it in your program.
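If you do go the long-lived context route, a rough sketch looks like this (assuming 8-bit premultiplied RGBA pixels; draw into the context repeatedly and only materialize a UIImage when you actually need one):

```objc
// Create a reusable bitmap context once, sized to the source image.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(NULL,
                                         image.size.width,
                                         image.size.height,
                                         8,   // bits per component
                                         0,   // bytes per row (computed automatically)
                                         colorSpace,
                                         kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);

// Draw the original image once; later drawing accumulates on top of it.
CGRect bounds = CGRectMake(0, 0, image.size.width, image.size.height);
CGContextDrawImage(ctx, bounds, image.CGImage);

// ... draw circles, lines, etc. into ctx as often as you like ...

// Snapshot into a UIImage only when you need one.
CGImageRef cgSnapshot = CGBitmapContextCreateImage(ctx);
UIImage *snapshot = [UIImage imageWithCGImage:cgSnapshot];
CGImageRelease(cgSnapshot);
// CGContextRelease(ctx) when you're finally done with the context.
```

Whether this actually beats redrawing the original each time is exactly the kind of thing you should verify with the profiler, as noted above.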

Printing shinobi chart into PDF

I have several ShinobiCharts in my app that I want to print into a PDF file. Everything else, like normal views, labels and images, works fine; even the grid, legend and grid labels are displayed. The only thing missing is the series. So basically I get an empty chart printed into the PDF file.
I print the PDF as follows:
NSMutableData *pdfData = [NSMutableData data];
PDFPage1ViewController *pdf1 = [self.storyboard instantiateViewControllerWithIdentifier:@"PDF1"];
pdf1.array1 = array1;
pdf1.array2 = array2;
pdf1.array3 = array3;
pdf1.array4 = array4;
UIGraphicsBeginPDFContextToData(pdfData, CGRectZero, nil);
CGContextRef pdfContext = UIGraphicsGetCurrentContext();
UIGraphicsBeginPDFPage();
[pdf1.view.layer renderInContext:pdfContext];
UIGraphicsEndPDFContext();
The exact same code from PDF1PageViewController draws beautiful charts in a normal viewController, not missing the series.
The arrays contain the data which should be displayed.
[EDIT]
This code did it for me:
UIGraphicsBeginImageContextWithOptions(pdf1.view.bounds.size, NO, 0.0);
[pdf1.view drawViewHierarchyInRect:pdf1.view.bounds afterScreenUpdates:YES];
UIImage *pdf1Image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageView *pdf1ImageView = [[UIImageView alloc] initWithImage:pdf1Image];
[pdf1ImageView.layer renderInContext:pdfContext];
However, the activity wheel stops spinning after drawViewHierarchyInRect: and the label displaying the current page being rendered also stops updating. Does anyone know how to fix this?
The reason you're having this problem is that the series part of the chart is rendered in OpenGL ES, and therefore doesn't get rendered as part of renderInContext:.
You have a couple of options you can investigate. The first is the set of snapshotting methods added to UIView in iOS 7. If your app can be restricted to iOS 7 only, then snapshotViewAfterScreenUpdates: will return a UIView which is a snapshot of the content. I think the following (untested) code will work:
UIGraphicsBeginPDFPage();
UIView *pdfPage = [pdf1.view snapshotViewAfterScreenUpdates:YES];
[pdfPage.layer renderInContext:pdfContext];
UIGraphicsEndPDFContext();
There are more details on this approach on the ShinobiControls blog at http://www.shinobicontrols.com/blog/posts/2014/02/24/taking-a-chart-snapshot-in-ios7
If restricting your app to iOS7 isn't an option then you can still achieve the result you want, but it is a little more complicated. Luckily, again, there is a blog post on the ShinobiControls blog (http://www.shinobicontrols.com/blog/posts/2012/03/26/taking-a-shinobichart-screenshot-from-your-app) which describes how to create a UIImage from a chart. This could easily be adapted to render into your PDF context, rather than the image context created in the post. There is an additional code snippet to accompany the post, available on github: https://github.com/ShinobiControls/charts-snippets/tree/master/iOS/objective-c/screenshot.
Hope this helps
sam

Draw image in UIView using CoreGraphics and draw text on it outside drawRect

I have a custom UITableViewCell subclass which shows an image and text over it.
The image is downloaded while the text is readily available at the time the table view cell is displayed.
From various places, I read that it is better to have just one view and draw everything in the view's drawRect method to improve performance, as compared to having multiple subviews (in this case a UIImageView and 2 UILabel views).
I don't want to draw the image in the custom table view cell's drawRect because:
the image will probably not be available the first time it's called, and
I don't want to redraw the whole image every time someone calls drawRect.
Drawing the image into the view should only happen when someone asks for the image to be displayed (for example, when the network operation completes and the image is available to be rendered). The text, however, is drawn in the -drawRect method.
The problems:
I am not able to show the image on the screen once it is downloaded.
The code I am currently using is:
- (void)drawImageInView
{
    //.. completion block after downloading from network
    if (image) { // Image downloaded from the network
        UIGraphicsBeginImageContext(rect.size);
        CGContextRef context = UIGraphicsGetCurrentContext();
        CGContextSetStrokeColorWithColor(context, [UIColor whiteColor].CGColor);
        CGContextSetFillColorWithColor(context, [UIColor whiteColor].CGColor);
        CGContextSetLineWidth(context, 1.0);
        CGContextSetTextDrawingMode(context, kCGTextFill);
        CGPoint posOnScreen = self.center;
        CGContextDrawImage(context, CGRectMake(posOnScreen.x - image.size.width / 2,
                                               posOnScreen.y - image.size.height / 2,
                                               image.size.width,
                                               image.size.height),
                           image.CGImage);
        UIGraphicsEndImageContext();
    }
}
I have also tried:
UIGraphicsBeginImageContext(rect.size);
[image drawInRect:rect];
UIGraphicsEndImageContext();
to no avail.
How can I make sure the text is drawn on top of the image when it is rendered? Should calling [self setNeedsDisplay] after UIGraphicsEndImageContext(); be enough to ensure that the text is rendered on top of the image?
You're right that drawing text directly will make your application faster, as there's no UILabel object overhead, but UIImageView is highly optimized and you probably won't ever be able to draw images faster than this class does. Therefore I highly recommend you use UIImageViews to draw your images. Don't fall into the optimization pitfall: only optimize when you see that your application is not performing at its max.
Once the image is downloaded, just set the imageView's image property to your image and you'll be done.
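A sketch of that approach (the property and variable names here are illustrative, not from your code); the only custom drawing left in drawRect would be the text:

```objc
// In the cell subclass: a plain subview instead of custom image drawing.
self.posterView = [[UIImageView alloc] initWithFrame:imageFrame];
[self.contentView addSubview:self.posterView];

// In the download completion block (likely on a background queue):
// hop to the main thread and just assign the image -- no contexts involved.
dispatch_async(dispatch_get_main_queue(), ^{
    self.posterView.image = downloadedImage;
});
```

This also sidesteps your original bug: UIGraphicsBeginImageContext creates an off-screen context, so drawing into it from a completion block never touches the screen unless you capture the resulting image and display it somewhere.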
Notice that the stackoverflow page you linked to is almost four years old, and that question links to articles that are almost five years old. When those articles were written in 2008, the current device was an iPhone 3G, which was much slower (both CPU and GPU) and had much less RAM than the current devices in 2013. So the advice you read there isn't necessarily relevant today.
Anyway, don't worry about performance until you've measured it (presumably with the Time Profiler instrument) and found a problem. Just implement your interface in the simplest, most maintainable way you can. Then, if you find a problem, try something more complicated to fix it.
So: just use a UIImageView to display your image, and a UILabel to display your text. Then test to see if it's too slow.
If your testing shows that it's too slow, profile it. If you can't figure out how to profile it, or how to interpret the profiler output, or how to fix the problem, then come back and post a question, and include the profiler output.

Generate an Image from a Map in iOS

Here's what I want to do:
Get some destination.
Render it in the maps into a view.
Turn that into an image I can then save and use as a background.
Another option would be to put a map view down, turn off all input, and just paint on top of it. Basically, I'm asking for a way to rasterize the map view. Wondering if anyone has done this.
You can render any view into an image with something like this:
UIGraphicsBeginImageContextWithOptions(mapView.bounds.size, NO, 0.0f);
[mapView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *mapImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
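If the map is an MKMapView and you can target iOS 7 or later, MKMapSnapshotter renders a map image off-screen without needing a live view at all. A sketch, assuming you already have a mapView (or just a region) to snapshot:

```objc
MKMapSnapshotOptions *options = [[MKMapSnapshotOptions alloc] init];
options.region = mapView.region;          // or any region you build yourself
options.size = mapView.bounds.size;
options.scale = [UIScreen mainScreen].scale;

MKMapSnapshotter *snapshotter = [[MKMapSnapshotter alloc] initWithOptions:options];
[snapshotter startWithCompletionHandler:^(MKMapSnapshot *snapshot, NSError *error) {
    if (snapshot) {
        UIImage *mapImage = snapshot.image; // ready to save or use as a background
    }
}];
```

Note that the snapshot contains only the map tiles; annotations and overlays would have to be drawn on top of the image yourself.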

How to mask a UIView to highlight a selection?

The problem that I am facing is simple (and less abstract than the question itself). I am looking for a solution to highlight an area of an image (the selection) while the rest of the image is faded or grayed out. You can compare the effect with the interface you see in, for example, Photoshop when you crop an image. The image is grayed out and the area that will be cropped is clear.
My initial idea was to use masking for this (hence the question), but I am not sure if this is a viable approach and, if it is, how to proceed.
Not sure if this is the best way, but it should work.
First, you create a screenshot of the view.
UIGraphicsBeginImageContextWithOptions(captureView.bounds.size, captureView.opaque, 0.0);
[captureView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
This snippet is 'stolen' and slightly modified from here:
Low quality of capture view context on iPad
Then you could create a grayscale mask image of same dimensions as the original (screenshot).
Follow the clear & simple instructions on How to Mask an Image.
Then you create a UIImageView, set the masked image as its image, and add it on top of your original view. You also might want to set the backgroundColor of this UIImageView to your liking.
EDIT:
A simpler way would probably be using view.layer.mask, which is "an optional layer whose alpha channel is used as a mask to select between the layer's background and the result of compositing the layer's contents with its filtered background." (from CALayer class reference)
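For the Photoshop-style crop effect specifically, one way to use layer.mask (a sketch; imageView and selectionRect are assumed to come from your own code) is a semi-transparent dimming view masked by a CAShapeLayer whose path covers everything except the selection, using the even-odd fill rule:

```objc
// Dimming overlay covering the whole image.
UIView *dimView = [[UIView alloc] initWithFrame:imageView.bounds];
dimView.backgroundColor = [UIColor colorWithWhite:0.0 alpha:0.6];
[imageView addSubview:dimView];

// Path = full bounds plus the selection rect; with the even-odd rule,
// the selection area has zero alpha in the mask, so it stays clear.
UIBezierPath *path = [UIBezierPath bezierPathWithRect:dimView.bounds];
[path appendPath:[UIBezierPath bezierPathWithRect:selectionRect]];

CAShapeLayer *maskLayer = [CAShapeLayer layer];
maskLayer.frame = dimView.bounds;
maskLayer.fillRule = kCAFillRuleEvenOdd;
maskLayer.path = path.CGPath;
dimView.layer.mask = maskLayer;
```

To move the selection, you only need to rebuild the path and reassign maskLayer.path; no screenshots or image masking are involved.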
Some literature:
UIView Class Reference
CALayer Class Reference
And a simple example how mask can be made with (possibly hidden) another UIView:
Masking a UIView

Resources