Make a UIImage from an MKMapView - ios

I want to create a UIImage from a MKMapView. My map is correctly displayed in the view, however the UIImage produced is just a gray image. Here's the relevant snippet.
UIGraphicsBeginImageContext(mapView.bounds.size);
[mapView.layer renderInContext:UIGraphicsGetCurrentContext()];
mapImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Anyone know how to make a UIImage using MapKit?

I am using the same code, tested with the iOS SDK 4.1, and it works fine. When the map is already displayed to the user and the user presses the button, this action is called:
UIImage *image = [mapView renderToImage];
and here is the wrapper function, implemented as a UIView category:
- (UIImage *)renderToImage
{
    UIGraphicsBeginImageContext(self.frame.size);
    [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
So, the problem is not in that code part.

On iOS 7, MapKit has a new API for this called MKMapSnapshotter, so you don't need to create a map view, load the tiles, and capture a graphics context yourself.
Take a look into it at https://developer.apple.com/library/ios/documentation/MapKit/Reference/MKMapSnapshotter_class/Reference/Reference.html
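A minimal sketch of how it can be used (assuming an existing mapView whose visible region you want to capture):
MKMapSnapshotOptions *options = [[MKMapSnapshotOptions alloc] init];
options.region = mapView.region;              // region to capture
options.size = mapView.bounds.size;           // output size in points
options.scale = [UIScreen mainScreen].scale;  // render for retina

MKMapSnapshotter *snapshotter = [[MKMapSnapshotter alloc] initWithOptions:options];
[snapshotter startWithCompletionHandler:^(MKMapSnapshot *snapshot, NSError *error) {
    if (error) {
        NSLog(@"Snapshot failed: %@", error);
        return;
    }
    UIImage *mapImage = snapshot.image; // delivered on the main queue by default
}];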

Here is the improved function for retina display:
@implementation UIView (Ext)
- (UIImage *)renderToImage
{
    // IMPORTANT: using a weak link on UIKit
    if (UIGraphicsBeginImageContextWithOptions != NULL) {
        UIGraphicsBeginImageContextWithOptions(self.frame.size, NO, 0.0);
    } else {
        UIGraphicsBeginImageContext(self.frame.size);
    }
    [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
@end

Hey Loren. There are multiple layers in the mapView. I think the first one is the map and the second one is the Google layer. They might have changed something in MapKit after 3.1. You can try
[[[mapView.layer sublayers] objectAtIndex:1] renderInContext:UIGraphicsGetCurrentContext()];
You can also try
CGRect rect = [mapView bounds];
CGImageRef mapImage = [mapView createSnapshotWithRect:rect];
Hope this helps.

Note that the map view may not have finished loading, so the image may be gray. Since
mapViewDidFinishLoadingMap:
will not always be called, you should grab the UIImage in
mapViewDidFinishRenderingMap:fullyRendered:
instead. The code looks like this:
- (UIImage *)renderToImage:(MKMapView *)mapView
{
    UIGraphicsBeginImageContext(mapView.bounds.size);
    [mapView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
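A minimal sketch of the delegate hookup (assuming the controller is the map view's delegate; snapshotImage is a hypothetical property for holding the result):
- (void)mapViewDidFinishRenderingMap:(MKMapView *)mapView fullyRendered:(BOOL)fullyRendered
{
    if (fullyRendered) {
        // The tiles are actually on screen now, so the capture won't be gray.
        self.snapshotImage = [self renderToImage:mapView];
    }
}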

If you are calling this immediately after initializing the map (maybe in viewDidLoad?), you could get a gray image because the map has not finished drawing yet.
Try:
Calling the capture code with performSelector:withObject:afterDelay: and a short delay (even 0 seconds might work, so that it fires right after the current method finishes)
If you are adding annotations, calling the capture code in the didAddAnnotationViews delegate method (see the sketch below)
Edit:
On the simulator, using performSelector, a zero delay works. On the device, a longer delay is required (about 5 seconds).
However, if you add annotations (and you capture in the didAddAnnotationViews method), it works immediately on both the simulator and device.
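For example, a minimal sketch of the annotation-based capture (mapImage is a hypothetical property for the result, and renderToImage is the category method from above):
- (void)mapView:(MKMapView *)mapView didAddAnnotationViews:(NSArray *)views
{
    // By the time annotation views are added, the map tiles are drawn,
    // so capturing here avoids the gray image on both simulator and device.
    self.mapImage = [mapView renderToImage];
}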

Related

UIPickerView gets darker on screenshot

I had to alter the navigation under certain circumstances, and due to the complexity of the transitions I had to take a screenshot and paint it until the transition finished. In most cases that works pretty well, but there is one point that bothers me. I have a view controller with two picker views:
But the screenshot is not working well on this VC. I get this:
The code that takes the screenshot is the following in both cases:
- (UIImage *)takeScreenshot {
    CALayer *layer = [[UIApplication sharedApplication] keyWindow].layer;
    UIGraphicsBeginImageContext(layer.frame.size);
    [layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext(); // balance the begin call so the context isn't leaked
    return screenshot;
}
Does anyone know how this could happen?
You could try a different method for the screenshot. In iOS 7, Apple introduced some methods for fast view snapshotting.
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 0);
[self.view drawViewHierarchyInRect:self.view.bounds afterScreenUpdates:YES];
UIImage *im = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Here is an answer from Apple that provides more info on how the two methods work. While that user encountered some problems and was advised to use the old way of snapshotting the view, I never had any problem with it. Maybe they fixed it since then.
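Note that iOS 7 also added snapshot methods that return a UIView rather than a UIImage; these are even faster when you only need to display the snapshot on screen rather than process its pixels. A minimal sketch:
// Returns a lightweight snapshot view; much faster than rendering into a UIImage.
UIView *snapshotView = [self.view snapshotViewAfterScreenUpdates:NO];
[self.view.superview addSubview:snapshotView];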
UIGraphicsBeginImageContext(self.window.bounds.size);
[self.window.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData *data = UIImagePNGRepresentation(image);
[data writeToFile:@"image.png" atomically:YES]; // NB: on a device you need a full path, e.g. inside the Documents directory
if you have a retina display then replace the first line with the code below:
if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)])
    UIGraphicsBeginImageContextWithOptions(self.window.bounds.size, NO, [UIScreen mainScreen].scale);
else
    UIGraphicsBeginImageContext(self.window.bounds.size);

Core Graphics image blending doesn't release memory

I am blending two images with the following code:
+ (UIImage *)xxx_blendedImageWithFirstImage:(UIImage *)image
                                secondImage:(UIImage *)secondImage
                            renderedInFrame:(CGRect)frame
                                      alpha:(CGFloat)alpha {
    UIGraphicsBeginImageContextWithOptions(frame.size, NO, UIScreen.mainScreen.scale);
    [image drawInRect:frame];
    [secondImage drawInRect:frame blendMode:kCGBlendModeNormal alpha:alpha];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
This code is called in an IBAction for a UISlider, so that it is called on every slider position change. This is the memory footprint for this code:
It takes 230 MB and then fails due to memory pressure.
How can I make this code work properly?
Perform this action on a background thread if you have connected this IBAction to the Value Changed event in the xib; just make sure all UI changes happen on the main thread.
Otherwise, connect this IBAction to the Touch Up Inside event. Then the method is called once when you release the slider after a drag, rather than for every value during the drag.
Hope it will solve your problem.
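A minimal sketch of the background-thread variant (MyImageBlender, firstImage, secondImage, and imageView are hypothetical names standing in for your own; UIGraphicsBeginImageContextWithOptions has been thread-safe since iOS 4):
- (IBAction)sliderChanged:(UISlider *)slider {
    CGFloat alpha = slider.value;
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // Do the heavy Core Graphics work off the main thread.
        UIImage *blended = [MyImageBlender xxx_blendedImageWithFirstImage:self.firstImage
                                                              secondImage:self.secondImage
                                                          renderedInFrame:self.imageView.bounds
                                                                    alpha:alpha];
        dispatch_async(dispatch_get_main_queue(), ^{
            self.imageView.image = blended; // UI updates stay on the main thread
        });
    });
}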
After adding @autoreleasepool:
+ (UIImage *)xxx_blendedImageWithFirstImage:(UIImage *)image
                                secondImage:(UIImage *)secondImage
                            renderedInFrame:(CGRect)frame
                                      alpha:(CGFloat)alpha {
    UIImage *newImage = nil;
    @autoreleasepool {
        UIGraphicsBeginImageContextWithOptions(frame.size, NO, UIScreen.mainScreen.scale);
        [image drawInRect:frame];
        [secondImage drawInRect:frame blendMode:kCGBlendModeNormal alpha:alpha];
        newImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
    }
    return newImage;
}
Maybe some manual release is still needed; however, the memory footprint now looks like this:
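As an aside, on iOS 10 and later you could avoid the manual context management entirely with UIGraphicsImageRenderer, which manages its own context and memory; a minimal sketch of the same blend:
+ (UIImage *)xxx_blendedImageWithFirstImage:(UIImage *)image
                                secondImage:(UIImage *)secondImage
                            renderedInFrame:(CGRect)frame
                                      alpha:(CGFloat)alpha {
    // The renderer cleans up after itself; no begin/end pairs to balance.
    UIGraphicsImageRenderer *renderer = [[UIGraphicsImageRenderer alloc] initWithSize:frame.size];
    return [renderer imageWithActions:^(UIGraphicsImageRendererContext *ctx) {
        [image drawInRect:frame];
        [secondImage drawInRect:frame blendMode:kCGBlendModeNormal alpha:alpha];
    }];
}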

Add background to screenshot

I would like to add a background to my taken screenshot, but something went wrong. The returned image is only the background, without the screenshot. Here's my code; can you fix it, please?
- (UIImage *)screenshot
{
    UIImage *backgroundImage = [UIImage imageNamed:@"iphoneframe.png"];
    UIGraphicsBeginImageContext(backgroundImage.size);
    [backgroundImage drawInRect:CGRectMake(0, 0, backgroundImage.size.width, backgroundImage.size.height)];
    UIGraphicsBeginImageContext(self.view.frame.size);
    [self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    [screenshot drawInRect:CGRectMake(backgroundImage.size.width - screenshot.size.width, backgroundImage.size.height - screenshot.size.height, screenshot.size.width, screenshot.size.height)];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    return image;
}
Background/Frame I want to put around the screenshot: http://oi45.tinypic.com/2qvv2tv.jpg
Prototype Image, which should be returned: http://i.stack.imgur.com/hg1nW.png
Update: The problem was solved with the tips from @panayot-panayotov and @jrturton - the idea was to remove the second UIGraphicsBeginImageContext and place UIGraphicsEndImageContext(); above return image;, but now the picture looks like this: pbs.twimg.com/media/BnlkN9wIAAEG-ue.jpg:large - how can we fix this? Any ideas?
You're creating two image contexts, and drawing a different image into each one. Don't create the second context (UIGraphicsBeginImageContext...), move the end context call to after you've taken out the final image, and you should be fine.
It's also worth noting that you should use the newer 'with options' version when making an image context - this will allow you to draw properly on retina devices.
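Putting the advice together, here is a minimal sketch of a working version (the centered placement rect is an assumption; tune it to match your frame artwork):
- (UIImage *)screenshot
{
    // First render the view into its own image, with a balanced begin/end pair.
    UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 0);
    [self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Then composite it over the background frame in a second, separate context.
    UIImage *backgroundImage = [UIImage imageNamed:@"iphoneframe.png"];
    UIGraphicsBeginImageContextWithOptions(backgroundImage.size, NO, 0);
    [backgroundImage drawInRect:CGRectMake(0, 0, backgroundImage.size.width, backgroundImage.size.height)];
    CGRect target = CGRectMake((backgroundImage.size.width - screenshot.size.width) / 2.0,
                               (backgroundImage.size.height - screenshot.size.height) / 2.0,
                               screenshot.size.width,
                               screenshot.size.height);
    [screenshot drawInRect:target];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}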

How to create a UIImage from whatever is in the background

I am applying blur to various sections of my app using the UIImage+ImageEffects.h sample code that was provided in one of the apps from the WWDC in 2013. It works well and I'm able to recreate the iOS7 effects for any UIImage.
However, I would like to recreate the effect of the UINavigationBar blurred transparency but using any view that I choose. Similar to the screenshot shown below.
For instance, say that I have a UITableView that takes up half of the screen. I also have a UIImageView background as a separate view behind it that occupies the entire screen. I would only like to blur the UIImageView background for just that section of the screen that's under the tableview.
Here's my question. How do I create a UIImage by taking a "screenshot" of whatever is behind a UIView that is displayed? Is this even possible?
Here is my screen hierarchy. Nothing complex. I would like the "Blurred Image View" to contain a blurred image of the section of the "Image View" that is sitting as the main UIImageView in the hierarchy.
If you are deploying only on iOS 7, you can use the new API, which is a lot faster than -renderInContext:, and then use the ImageEffects category on the image taken from the view. Add this as a category on UIView:
@interface UIView (RenderView)
- (UIImage *)imageByRenderingView;
- (UIImage *)imageByRenderingViewOpaque:(BOOL)yesOrNO;
@end

@implementation UIView (RenderView)
- (UIImage *)imageByRenderingViewOpaque:(BOOL)yesOrNO {
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, yesOrNO, 0);
    if ([self respondsToSelector:@selector(drawViewHierarchyInRect:afterScreenUpdates:)]) {
        [self drawViewHierarchyInRect:self.bounds afterScreenUpdates:NO];
    } else {
        [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    }
    UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return resultingImage;
}

- (UIImage *)imageByRenderingView {
    return [self imageByRenderingViewOpaque:NO];
}
@end
This snippet is a UIView category that also works on systems lower than iOS 7. It takes an image of a view and its subviews.
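A minimal usage sketch for the background-blur case described in the question (backgroundImageView, tableView, and blurredImageView are hypothetical outlets; applyLightEffect comes from the WWDC UIImage+ImageEffects category the question mentions):
UIImage *snapshot = [self.backgroundImageView imageByRenderingView];
// Crop to the region under the table view, converting points to pixels.
CGFloat scale = snapshot.scale;
CGRect cropRect = CGRectMake(self.tableView.frame.origin.x * scale,
                             self.tableView.frame.origin.y * scale,
                             self.tableView.frame.size.width * scale,
                             self.tableView.frame.size.height * scale);
CGImageRef croppedRef = CGImageCreateWithImageInRect(snapshot.CGImage, cropRect);
UIImage *cropped = [UIImage imageWithCGImage:croppedRef scale:scale orientation:UIImageOrientationUp];
CGImageRelease(croppedRef);
self.blurredImageView.image = [cropped applyLightEffect];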
Use the piece of code below for iOS < 7:
#import <QuartzCore/QuartzCore.h>
+ (UIImage *)imageWithView:(UIView *)view
{
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 0.0);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}

UIGraphicsGetImageFromCurrentImageContext in specific CGRect

I want to take a screenshot of a specific location at a specific size. I found this, but it takes the whole screen. Where can I set the CGRect?
UIGraphicsBeginImageContext(self.window.bounds.size);
[self.window.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
I was stuck on this a few days ago actually... Then after a while I managed to come up with a solution! I've implemented it in a category:
#import "UIView+RenderSubframe.h"
#import <QuartzCore/QuartzCore.h>
#implementation UIView (RenderSubframe)
- (UIImage *) renderWithSubframe:(CGRect)frame {
UIGraphicsBeginImageContextWithOptions(frame.size, NO, 0.0);
CGContextRef c = UIGraphicsGetCurrentContext();
CGContextConcatCTM(c, CGAffineTransformMakeTranslation(-frame.origin.x, -frame.origin.y));
[self.layer renderInContext:c];
UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return screenshot;
}
#end
Voila!
If I'm not mistaken, this method doesn't actually render the unneeded part of the view at all, making this much more efficient than cropping the image afterwards.
In your case, you want to apply this category to a UIWindow class rather than a UIView.
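A hypothetical call site (since UIWindow is a UIView subclass, the category works on the window directly; the rect values are just examples):
CGRect area = CGRectMake(40.0, 100.0, 240.0, 160.0); // region in window coordinates
UIImage *shot = [self.view.window renderWithSubframe:area];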
