view.layer drawInContext:UIGraphicsGetCurrentContext() not drawing - ios

I am trying to draw a UIView into a UIImage. Here is the code that I'm using:
UIGraphicsBeginImageContextWithOptions(myView.bounds.size, YES, 0.f);
[myView.layer drawInContext:UIGraphicsGetCurrentContext()];
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
I've verified that myView.bounds.size is valid, and myView displays correctly on screen. However, img is completely black (I've tried both displaying it in a UIImageView and writing the JPEG representation to file). The image dimensions are correct, the color space (in the JPEG output) is RGB, the color profile is sRGB, etc., which means we're not dealing with a corrupted image (in the sense that the image/bitmap data itself is valid). I've tested on the 6.0 simulator, the 7.0 simulator, and a 7.0.6 device, all with the same result. The layer doesn't have any sublayers, and I've tried setting masksToBounds to NO, which didn't change anything.
What could be causing the view's layer not to draw?

You need to change:
[myView.layer drawInContext:UIGraphicsGetCurrentContext()];
to:
[myView.layer renderInContext:UIGraphicsGetCurrentContext()];
Note that drawInContext: does not actually do anything by default:
The default implementation of this method does not do any drawing
itself. If the layer’s delegate implements the drawLayer:inContext:
method, that method is called to do the actual drawing.
Subclasses can override this method and use it to draw the layer’s
content. When drawing, all coordinates should be specified in points
in the logical coordinate space.
A UIView's layer delegate is set to the UIView, but it does not look like the UIView necessarily draws to the provided context. More investigation is necessary on this point.
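For reference, here is the asker's snippet with just that one change applied (a quick sketch, not separately tested):
UIGraphicsBeginImageContextWithOptions(myView.bounds.size, YES, 0.f);
[myView.layer renderInContext:UIGraphicsGetCurrentContext()]; // render, not draw
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();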

Update: per Rob Jones' comment, note that these APIs return UIViews, not images.
There is a new API in iOS 7 that looks promising:
- (UIView *)snapshotViewAfterScreenUpdates:(BOOL)afterUpdates NS_AVAILABLE_IOS(7_0);
- (UIView *)resizableSnapshotViewFromRect:(CGRect)rect afterScreenUpdates:(BOOL)afterUpdates withCapInsets:(UIEdgeInsets)capInsets NS_AVAILABLE_IOS(7_0); // Resizable snapshots will default to stretching the center
- (BOOL)drawViewHierarchyInRect:(CGRect)rect afterScreenUpdates:(BOOL)afterUpdates NS_AVAILABLE_IOS(7_0);
Check the UIView (UISnapshotting) category in UIView.h
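If what you actually want from these APIs is a UIImage rather than a snapshot view, drawViewHierarchyInRect:afterScreenUpdates: can be combined with an image context; a minimal sketch (assuming iOS 7+ and a view called myView):
UIGraphicsBeginImageContextWithOptions(myView.bounds.size, NO, 0);
[myView drawViewHierarchyInRect:myView.bounds afterScreenUpdates:YES];
UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();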

Related

How to save image with transparent stroke

I followed Rob's answer and it draws as I want... but when I saved this image... the stroke is not transparent anymore.
Objective-C How to avoid drawing on same color image having stroke color of UIBezierPath
To save the image I wrote this code:
-(void)saveImage {
    UIGraphicsBeginImageContextWithOptions(self.upperImageView.bounds.size, NO, 0);
    if ([self.upperImageView respondsToSelector:@selector(drawViewHierarchyInRect:afterScreenUpdates:)])
        [self.upperImageView drawViewHierarchyInRect:self.upperImageView.bounds afterScreenUpdates:YES]; // iOS7+
    else
        [self.upperImageView.layer renderInContext:UIGraphicsGetCurrentContext()]; // pre iOS7
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndPDFContext();
    UIImageWriteToSavedPhotosAlbum(self.upperImageView.image, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
}
You are either setting the alpha of the image view with the paths to 1.0 somewhere, or you are using something that doesn't permit transparencies (e.g. UIGraphicsBeginImageContextWithOptions with YES for the opaque parameter, or staging the image in a format that doesn't preserve alpha, such as JPEG).
A few additional thoughts:
I notice that you're only drawing upperImageView. If you want the composite image, you need to draw both. Or are you only trying to save one of the image views?
(For those unfamiliar with that other question, the entire point was how to draw multiple paths over an image, and have the full set of those paths with the same reduced alpha, rather than having the intersection of paths lead to some additive behavior. This was accomplished by having two separate image views, one for the underlying image, and one for the drawn paths. The key to the answer to that question was that one should draw the paths at 100% alpha, but to add that as a layer to a view that, itself, has a reduced alpha.)
What is the alpha of the image view upon which you are drawing?
NB: In the answer to that other question, when saving a temporary copy of the combined paths, we had to temporarily set the alpha to 1.0. But when saving the final result here, we want to keep the "path" image view's alpha at its reduced value.
Unrelated, but you faithfully transcribed a typo (since fixed) in my original example where I accidentally called UIGraphicsEndPDFContext rather than UIGraphicsEndImageContext. You should use the latter.
So, considering two image views, one with the photograph and one with the drawn path, called imageImageView (with alpha of 1.0) and pathsImageView (with alpha of 0.5), respectively, I can save the snapshot like so:
- (void)saveSnapshot {
    CGRect rect = self.pathsImageView.bounds;
    UIGraphicsBeginImageContextWithOptions(rect.size, NO, 0);
    if ([self.pathsImageView respondsToSelector:@selector(drawViewHierarchyInRect:afterScreenUpdates:)]) {
        [self.imageImageView drawViewHierarchyInRect:rect afterScreenUpdates:YES]; // iOS7+
        [self.pathsImageView drawViewHierarchyInRect:rect afterScreenUpdates:YES];
    } else {
        [self.imageImageView.layer renderInContext:UIGraphicsGetCurrentContext()]; // pre iOS7
        [self.pathsImageView.layer renderInContext:UIGraphicsGetCurrentContext()];
    }
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    UIImageWriteToSavedPhotosAlbum(image, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
}
When I did that, the resulting composite image was in my photo album:

How can improve UISlider response for blending mode?

I use a UISlider to adjust the alpha with the code below. However, the slider's response is very slow. How can I improve it?
Thanks in advance.
- (IBAction)alphaSliderAction:(UISlider *)slider
{
    // blending images
    UIGraphicsBeginImageContextWithOptions(bgImage.size, YES, 0.0);

    // Use existing opacity as is
    [bgImage drawInRect:CGRectMake(0, 0, bgImage.size.width, bgImage.size.height)];
    [layerAImage drawInRect:CGRectMake(0, 0, bgImage.size.width, bgImage.size.height) blendMode:kCGBlendModeOverlay alpha:slider.value];
    finalImage = UIGraphicsGetImageFromCurrentImageContext();

    UIGraphicsEndImageContext();

    imgView.image = finalImage;
}
You could try doing the drawing using CoreImage instead of CoreGraphics. The CoreImage CIOverlayBlendMode filter does the same thing but might be faster.
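A minimal sketch of what that could look like (assuming bgImage and layerAImage are the same UIImages as above, and that self.ciContext is a CIContext property you create once, e.g. in viewDidLoad, rather than on every slider change; note this applies the overlay blend at full strength, so wiring the slider's alpha in would still need an extra step):
// Created once and reused; creating a CIContext per slider event is expensive
self.ciContext = [CIContext contextWithOptions:nil];

CIFilter *blend = [CIFilter filterWithName:@"CIOverlayBlendMode"];
[blend setValue:[CIImage imageWithCGImage:layerAImage.CGImage] forKey:kCIInputImageKey];
[blend setValue:[CIImage imageWithCGImage:bgImage.CGImage] forKey:kCIInputBackgroundImageKey];

CIImage *output = blend.outputImage;
CGImageRef cgResult = [self.ciContext createCGImage:output fromRect:output.extent];
imgView.image = [UIImage imageWithCGImage:cgResult];
CGImageRelease(cgResult);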
Don't use drawRect at all. Instead, install your image in an image view, and set the alpha property on the image view. You'll need to set the opaque flag on the image view to NO. That will do the blending using the GPU, which should be much faster.
Alternately you could install the image's CGImage in a layer's content property and add that layer to your view. Then you could set the opacity of the layer. You could use the implicit animation on CALayers this way.
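As a sketch of the image-view approach (the property names backgroundImageView and overlayImageView are made up, and note that GPU compositing gives ordinary alpha blending, not kCGBlendModeOverlay):
// One-time setup, e.g. in viewDidLoad
self.backgroundImageView.image = bgImage;
self.overlayImageView.image = layerAImage;
self.overlayImageView.opaque = NO;

// Slider action: no redrawing at all, just a GPU-composited alpha change
- (IBAction)alphaSliderAction:(UISlider *)slider
{
    self.overlayImageView.alpha = slider.value;
}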

Blur screen with iOS 7's snapshot API

I believe the NDA is down, so I can ask this question. I have a UIView subclass:
BlurView *blurredView = ((BlurView *)[self.view snapshotViewAfterScreenUpdates:NO]);
blurredView.frame = self.view.frame;
[self.view addSubview:blurredView];
It does its job so far in capturing the screen, but now I want to blur that view. How exactly do I go about this? From what I've read I need to capture the current contents of the view (context?!) and convert it to CIImage (no?) and then apply a CIGaussianBlur to it and draw it back on the view.
How exactly do I do that?
P.S. The view is not animated, so it should be OK performance wise.
EDIT: Here is what I have so far. The problem is that I can't capture the snapshot to a UIImage, I get a black screen. But if I add the view as a subview directly, I can see the snapshot is there.
// Snapshot
UIView *view = [self.view snapshotViewAfterScreenUpdates:NO];
// Convert to UIImage
UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 0.0);
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Apply the UIImage to a UIImageView
BlurView *blurredView = [[BlurView alloc] initWithFrame:CGRectMake(0, 0, 500, 500)];
[self.view addSubview:blurredView];
blurredView.imageView.image = img;
// Black screen -.-
BlurView.m:
- (id)initWithFrame:(CGRect)frame {
    self = [super initWithFrame:frame];
    if (self) {
        // Initialization code
        self.imageView = [[UIImageView alloc] init];
        self.imageView.frame = CGRectMake(20, 20, 200, 200);
        [self addSubview:self.imageView];
    }
    return self;
}
Half of this question didn't get answered, so I thought it worth adding.
The problem with UIScreen's
- (UIView *)snapshotViewAfterScreenUpdates:(BOOL)afterUpdates
and UIView's
- (UIView *)resizableSnapshotViewFromRect:(CGRect)rect
afterScreenUpdates:(BOOL)afterUpdates
withCapInsets:(UIEdgeInsets)capInsets
is that you can't derive a UIImage from them - the 'black screen' problem.
In iOS 7 Apple provides a third piece of API for extracting UIImages, a method on UIView:
- (BOOL)drawViewHierarchyInRect:(CGRect)rect
afterScreenUpdates:(BOOL)afterUpdates
It is not as fast as snapshotView, but not bad compared to renderInContext (in the example provided by Apple it is five times faster than renderInContext and three times slower than snapshotView).
Example use:
UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0);
[view drawViewHierarchyInRect:view.bounds afterScreenUpdates:YES];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Then to get a blurred version
UIImage* lightImage = [newImage applyLightEffect];
where applyLightEffect is one of those Blur category methods on Apple's UIImage+ImageEffects category mentioned in the accepted answer (the enticing link to this code sample in the accepted answer doesn't work, but this one will get you to the right page: the file you want is iOS_UIImageEffects).
The main reference is WWDC 2013 session 226, Implementing Engaging UI on iOS.
By the way, there is an intriguing note in Apple's reference docs for renderInContext that hints at the black screen problem..
Important: The OS X v10.5 implementation of this method does not support the entire Core Animation composition model. QCCompositionLayer, CAOpenGLLayer, and QTMovieLayer layers are not rendered. Additionally, layers that use 3D transforms are not rendered, nor are layers that specify backgroundFilters, filters, compositingFilter, or a mask values. Future versions of OS X may add support for rendering these layers and properties.
The note hasn't been updated since 10.5, so I guess 'future versions' may still be a while off, and we can add our new CASnapshotLayer (or whatever) to the list.
Sample code from WWDC: iOS_UIImageEffects
There is a UIImage category named UIImage+ImageEffects
Here is its API:
- (UIImage *)applyLightEffect;
- (UIImage *)applyExtraLightEffect;
- (UIImage *)applyDarkEffect;
- (UIImage *)applyTintEffectWithColor:(UIColor *)tintColor;
- (UIImage *)applyBlurWithRadius:(CGFloat)blurRadius
tintColor:(UIColor *)tintColor
saturationDeltaFactor:(CGFloat)saturationDeltaFactor
maskImage:(UIImage *)maskImage;
For legal reasons I can't show the implementation here, but there is a demo project in it. It should be pretty easy to get started with.
To summarize how to do this with foundry's sample code, use the following:
I wanted to blur the entire screen just slightly, so for my purposes I'll use the main screen bounds.
CGRect screenCaptureRect = [UIScreen mainScreen].bounds;
UIView *viewWhereYouWantToScreenCapture = [[UIApplication sharedApplication] keyWindow];
//screen capture code
UIGraphicsBeginImageContextWithOptions(screenCaptureRect.size, NO, [UIScreen mainScreen].scale);
[viewWhereYouWantToScreenCapture drawViewHierarchyInRect:screenCaptureRect afterScreenUpdates:NO];
UIImage *capturedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//blur code
UIColor *tintColor = [UIColor colorWithWhite:1.0 alpha:0];
UIImage *blurredImage = [capturedImage applyBlurWithRadius:1.5 tintColor:tintColor saturationDeltaFactor:1.2 maskImage:nil];
//or use [capturedImage applyLightEffect] but I thought that was too much for me
//use blurredImage in whatever way you so desire!
Notes on the screen capture part
UIGraphicsBeginImageContextWithOptions()'s 2nd argument is the opaque flag. It should be NO unless nothing in the view has an alpha other than 1. If you pass YES, the capture ignores transparency values, so it will go faster but will probably be wrong.
UIGraphicsBeginImageContextWithOptions()'s 3rd argument is the scale. You probably want to pass the device's scale, as I did, to make sure it differentiates between Retina and non-Retina screens. I haven't really tested this thoroughly, but I think 0.0f also works.
drawViewHierarchyInRect:afterScreenUpdates:: watch what you pass for the screen-updates BOOL. I tried to do this right before backgrounding, and if I didn't pass NO the app would glitch badly when I returned to the foreground. You might be able to get away with YES, though, if you're not leaving the app.
Notes on blurring
I have a very light blur here. Changing the blurRadius will make it blurrier, and you can change the tint color and alpha to make all sorts of other effects.
Also you need to add a category for the blur methods to work...
How to add the UIImage+ImageEffects category
You need to download the category UIImage+ImageEffects for the blur to work. Download it here after logging in: https://developer.apple.com/downloads/index.action?name=WWDC%202013
Search for "UIImageEffects" and you'll find it. Just pull out the 2 necessary files and add them to your project. UIImage+ImageEffects.h and UIImage+ImageEffects.m.
Also, I had to Enable Modules in my build settings because I had a project that wasn't created with xCode 5. To do this go to your target build settings and search for "modules" and make sure that "Enable Modules" and "Link Frameworks Automatically" are both set to yes or you'll have compiler errors with the new category.
Good luck blurring!
Check WWDC 2013 sample application "running with a snap".
The blurring is there implemented as a category.

Taking a Screenshot (UIImage) from UIView takes far too long

I have the following method to take a screenshot (UIImage) of a UIView, which is far too slow:
+ (UIImage *)imageWithView:(UIView *)view
{
    CGSize size = view.bounds.size;
    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [view.layer renderInContext:context];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
On my iPad I now have an app that needs this method to make a copy of a view that is dragged and dropped. The view has rounded corners and therefore is not opaque (which, I found out, makes no difference even if I set the opaque parameter to YES)...
Also, the view being screenshotted contains a UITableView with quite a few complex entries in it...
Do you have any suggestions on how I can improve the speed of the screenshotting? Right now, for a somewhat bigger table view (maybe 20 entries), it takes about 1 second (!!!).
And the view is already on screen, rendered correctly... so I just need the pixels to put into a UIImageView...
I need to support iOS 6+.
I use this same code to take screenshots of some really complex views. I think your bottleneck is using a big image for the drag & drop. Maybe you can resize the UIImage.
In my case the performance on an iPad 2 is about 100 ms per screenshot.
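If a smaller image is acceptable for the drag & drop ghost, one cheap way to get it is to render into a context with a reduced scale; a sketch based on the method above (the 0.5 factor is arbitrary):
+ (UIImage *)lowResImageWithView:(UIView *)view
{
    CGSize size = view.bounds.size;
    // Passing a scale below the screen scale renders far fewer pixels
    UIGraphicsBeginImageContextWithOptions(size, NO, 0.5 * [UIScreen mainScreen].scale);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}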

Use stretchable UIImage in CGContext

I'm searching for a way to draw a stretchable image as the background of my custom cell background view. I would like to use the drawRect method and draw an image stretched exactly as it would be stretched with stretchableImageWithLeftCapWidth in a UIImageView... How can I continue this code to make it happen?
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    UIImage *bgImg = [[UIImage imageNamed:@"bg_table_top"] stretchableImageWithLeftCapWidth:3 topCapHeight:0];
    // How to draw the image stretched to the self.bounds size?
    ....
}
Any reason not to let UIImageView do this? (Include one as a child of your custom cell.) It's true that reducing child views can be a performance improvement in tables, but UIImageView is also quite good at drawing images efficiently.
My guess is otherwise you're going to have to do multiple draw calls, in order to get the ends and middle drawn correctly.
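For example, a minimal sketch (the setup location and view names are assumptions) that installs the stretchable image into a UIImageView once, say in the custom background view's initializer, instead of overriding drawRect::
UIImage *bgImg = [[UIImage imageNamed:@"bg_table_top"] stretchableImageWithLeftCapWidth:3 topCapHeight:0];
UIImageView *bgImageView = [[UIImageView alloc] initWithImage:bgImg];
bgImageView.frame = self.bounds; // self is the custom cell background view
bgImageView.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight;
[self addSubview:bgImageView];
[self sendSubviewToBack:bgImageView];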
