In my app, I use drawViewHierarchyInRect:afterScreenUpdates: in order to obtain a blurred image of my view (using Apple’s UIImage category UIImageEffects).
My code looks like this:
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 0);
[self.view drawViewHierarchyInRect:self.view.bounds afterScreenUpdates:YES];
UIImage *im = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
/* Use im */
I noticed during development that many of my animations were delayed after using my app for a bit; that is, my views began their animations after a noticeable pause (less than a second or so) compared to a fresh launch of the app.
After some debugging, I noticed that the mere act of using drawViewHierarchyInRect:afterScreenUpdates: with screen updates set to YES caused this delay. If this message was never sent during a session of usage, the delay never appeared. Using NO for the screen updates parameter also made the delay disappear.
The strange thing is that this blurring code is completely unrelated (as far as I can tell) to the delayed animations. The animations in question do not use drawViewHierarchyInRect:afterScreenUpdates:, they are CAKeyframeAnimation animations. The mere act of sending this message (with screen updates set to YES) seems to have globally affected animations in my app.
What’s going on?
(I have created videos illustrating the effect: with and without an animation delay. Note the delay in the appearance of the "Check!" speech bubble in the navigation bar.)
UPDATE
I have created an example project to illustrate this potential bug. https://github.com/timarnold/AnimationBugExample
UPDATE No. 2
I received a response from Apple verifying that this is a bug. See answer below.
I used one of my Apple developer support tickets to ask Apple about my issue.
It turns out it is a confirmed bug (radar number 17851775). Their hypothesis for what is happening is below:
The method drawViewHierarchyInRect:afterScreenUpdates: performs its operations on the GPU as much as possible, and much of this work will probably happen outside of your app’s address space in another process. Passing YES as the afterScreenUpdates: parameter to drawViewHierarchyInRect:afterScreenUpdates: will cause Core Animation to flush all of its buffers in your task and in the rendering task. As you may imagine, there’s a lot of other internal stuff that goes on in these cases too. Engineering theorizes that it may very well be a bug in this machinery related to the effect you are seeing.
In comparison, the method renderInContext: performs its operations inside of your app’s address space and does not use the GPU based process for performing the work. For the most part, this is a different code path and if it is working for you, then that is a suitable workaround. This route is not as efficient as it does not use the GPU based task. Also, it is not as accurate for screen captures as it may exclude blurs and other Core Animation features that are managed by the GPU task.
And they also provided a workaround. They suggested that instead of:
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 0);
[self.view drawViewHierarchyInRect:self.view.bounds afterScreenUpdates:YES];
UIImage *im = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
/* Use im */
I should do this:
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 0);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *im = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
/* Use im */
Hopefully this is helpful for someone!
I tried all the latest snapshot methods in Swift. None of the other methods worked for me in the background, but taking a snapshot this way did.
Create an extension that takes the view's layer and bounds as parameters.
extension UIView {
    func asImage(viewLayer: CALayer, viewBounds: CGRect) -> UIImage {
        if #available(iOS 10.0, *) {
            let renderer = UIGraphicsImageRenderer(bounds: viewBounds)
            return renderer.image { rendererContext in
                viewLayer.render(in: rendererContext.cgContext)
            }
        } else {
            UIGraphicsBeginImageContext(viewBounds.size)
            viewLayer.render(in: UIGraphicsGetCurrentContext()!)
            let image = UIGraphicsGetImageFromCurrentImageContext()
            UIGraphicsEndImageContext()
            return image!
        }
    }
}
Usage
DispatchQueue.main.async {
    let layer = self.selectedView.layer
    let bounds = self.selectedView.bounds
    DispatchQueue.global(qos: .background).async {
        let image = self.selectedView.asImage(viewLayer: layer, viewBounds: bounds)
    }
}
We need to capture the layer and bounds on the main thread; the rendering itself then runs on a background queue. This gives a smooth user experience without any lag or interruption in the UI.
Why do you have this line (from your sample app):
animation.beginTime = CACurrentMediaTime();
Just remove it, and everything will be as you want it to be.
By setting the animation's begin time explicitly to CACurrentMediaTime() you ignore possible time transformations that can be present in the layer tree. Either don't set it at all (by default animations start now) or use the time conversion method:
animation.beginTime = [view.layer convertTime:CACurrentMediaTime() fromLayer:nil];
UIKit adds time transformations to the layer tree when you call drawViewHierarchyInRect: with afterScreenUpdates:YES to prevent jumps in ongoing animations that would otherwise be caused by intermediate Core Animation commits. If you want to start an animation at a specific time (not now), use the time conversion method mentioned above.
And while you're at it, strongly prefer -[UIView snapshotViewAfterScreenUpdates:] and friends over the -[UIView drawViewHierarchyInRect:] family (preferably specifying NO for the afterScreenUpdates part). In most cases you don't really need a persistent image, and a view snapshot is what you actually want; a minimal example follows the list below. Using a view snapshot instead of a rendered image has the following benefits:
2x-10x faster
Uses 2x-3x less memory
It will always use the correct colorspace and buffer format (e.g. on devices with a wide-color screen)
It will use the correct scale and orientation, so you don't need to think about how to position your image so that it looks good
It works better with accessibility features (e.g. Smart Invert Colors)
A view snapshot will also capture out-of-process and secure views correctly (while drawViewHierarchyInRect will render them black or white).
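For illustration, a minimal sketch of the snapshot approach; the container view and frame handling here are assumptions for the example, not from the original answer:
UIView *snapshot = [self.view snapshotViewAfterScreenUpdates:NO];
snapshot.frame = self.view.bounds;            // position the snapshot wherever you need it
[someContainerView addSubview:snapshot];      // someContainerView is a hypothetical destination view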
When the afterScreenUpdates parameter is set to YES, the system has to wait until all pending screen updates have happened before it can render the view.
If you're kicking off animations at the same time then perhaps the rendering and the animations are trying to happen together and this is causing a delay.
It may be worth experimenting with kicking off your animations slightly later to prevent this. Obviously not too much later, because that would defeat the purpose, but a small dispatch_after interval would be worth trying.
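A rough sketch of that idea, assuming a hypothetical -startAnimations method and a 0.1 s delay that you would need to tune:
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(0.1 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
    [self startAnimations]; // hypothetical method that kicks off the CAKeyframeAnimations
});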
Have you tried running your code on a background thread?
Here's an example using GCD:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^{
    // Background thread
    UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 0);
    [self.view drawViewHierarchyInRect:self.view.bounds afterScreenUpdates:YES];
    UIImage *im = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    dispatch_sync(dispatch_get_main_queue(), ^(void) {
        // Update UI on the main thread
    });
});
Related
I'm trying to add a blur effect to a background view.
Here's my background view.
https://github.com/martinjuhasz/MJPopupViewController/blob/master/Source/MJPopupBackgroundView.m
I believe that the idea is to take a snapshot of the parent view and then add a blur effect to that image.
I've seen various approaches, but I'm not sure which will work or which is the best approach.
I'm also not sure where I should create the snapshot.
The most common way is indeed probably just to take a snapshot and blur that.
You can take a snapshot by doing something like this:
+ (UIImage *)imageWithView:(UIView *)view
{
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 0.0f);
    [view drawViewHierarchyInRect:view.bounds afterScreenUpdates:NO];
    UIImage *snapshotImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return snapshotImage;
}
Keep in mind this is a fast iOS 7+ approach, so if you want to support lower versions of iOS you will need to use a slower method, detailed in this answer.
There are a lot of ways to achieve the blur; the two most popular are probably to use Apple's UIImageEffects WWDC sample code, which can be found here, or to use GPUImage, which can be found here.
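If you go with Apple's UIImage+ImageEffects category from the WWDC sample, the glue code looks roughly like this (applyLightEffect is the helper name from that sample as I recall it; the class and view names are placeholders):
UIImage *snapshot = [YourHelperClass imageWithView:parentView]; // imageWithView: is the snapshot helper above
UIImage *blurred = [snapshot applyLightEffect];                 // category method from UIImage+ImageEffects
self.backgroundImageView.image = blurred;                       // backgroundImageView is a hypothetical UIImageView behind your popup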
The link below shows how to present a view and apply a blur effect to the background:
https://stackoverflow.com/a/52374977/5233180
I have a custom UITableViewCell subclass which shows an image and some text over it.
The image is downloaded, while the text is readily available by the time the table view cell is displayed.
In various places I've read that it is better to have just one view and draw everything in that view's drawRect: method to improve performance, compared to having multiple subviews (in this case a UIImageView and two UILabels).
I don't want to draw the image in the custom table view cell's drawRect: because:
the image will probably not be available the first time it's called, and
I don't want to draw the whole image every time someone calls drawRect:.
The image should only be drawn when someone asks for it to be displayed (for example, when the network operation completes and the image is available to be rendered). The text, however, is drawn in the -drawRect: method.
The problems:
I am not able to show the image on the screen once it is downloaded.
The code I am currently using is:
- (void)drawImageInView
{
    //.. completion block after downloading from network
    if (image) { // Image downloaded from the network
        UIGraphicsBeginImageContext(rect.size);
        CGContextRef context = UIGraphicsGetCurrentContext();
        CGContextSetStrokeColorWithColor(context, [UIColor whiteColor].CGColor);
        CGContextSetFillColorWithColor(context, [UIColor whiteColor].CGColor);
        CGContextSetLineWidth(context, 1.0);
        CGContextSetTextDrawingMode(context, kCGTextFill);
        CGPoint posOnScreen = self.center;
        CGContextDrawImage(context, CGRectMake(posOnScreen.x - image.size.width/2,
                                               posOnScreen.y - image.size.height/2,
                                               image.size.width,
                                               image.size.height),
                           image.CGImage);
        UIGraphicsEndImageContext();
    }
}
I have also tried:
UIGraphicsBeginImageContext(rect.size);
[image drawInRect:rect];
UIGraphicsEndImageContext();
to no avail.
How can I make sure the text is drawn on top of the image when it is rendered? Should calling [self setNeedsDisplay] after UIGraphicsEndImageContext() be enough to ensure that the text is rendered on top of the image?
You're right that drawing text yourself can make your application faster, since there's no UILabel object overhead, but UIImageView is highly optimized and you probably won't ever be able to draw images faster than that class does. Therefore I highly recommend you use UIImageView to draw your images. Don't fall into the premature-optimization pitfall: only optimize when you can see that your application is not performing at its best.
Once the image is downloaded, just set the imageView's image property to your image and you'll be done.
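A minimal sketch of that idea; the cell's imageView property, the downloadedImage variable, and the completion handler are assumptions for illustration:
// In the image download completion handler (which may run off the main thread):
dispatch_async(dispatch_get_main_queue(), ^{
    cell.imageView.image = downloadedImage; // let the optimized UIImageView handle the drawing
});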
Notice that the stackoverflow page you linked to is almost four years old, and that question links to articles that are almost five years old. When those articles were written in 2008, the current device was an iPhone 3G, which was much slower (both CPU and GPU) and had much less RAM than the current devices in 2013. So the advice you read there isn't necessarily relevant today.
Anyway, don't worry about performance until you've measured it (presumably with the Time Profiler instrument) and found a problem. Just implement your interface in the simplest, most maintainable way you can. Then, if you find a problem, try something more complicated to fix it.
So: just use a UIImageView to display your image, and a UILabel to display your text. Then test to see if it's too slow.
If your testing shows that it's too slow, profile it. If you can't figure out how to profile it, or how to interpret the profiler output, or how to fix the problem, then come back and post a question, and include the profiler output.
Are there any built-in abilities to maintain the contents of a CALayer between drawLayer:inContext: calls? Right now I am copying the layer to a buffer and redrawing an image from the buffer every time I'm called back in drawLayer:inContext:, but I'm wondering if CALayer has a way to do this automatically...
I don't believe so. drawInContext: will clear the underlying buffer so that you can draw into it. However, if you forgo the drawInContext: or drawRect: methods, you can set your layer.contents to a CGImage and that will be retained.
I personally do this for almost all of my drawing routines. I override -(void)setFrame:(CGRect)frame to check whether the frame size has changed. If it has, I redraw the image using my normal drawing routines, but into an offscreen context created with UIGraphicsBeginImageContextWithOptions(size, _opaque, 0);. I then grab that image with cachedImage = UIGraphicsGetImageFromCurrentImageContext(); and set layer.contents to its CGImage. I use this to cache my drawings, especially on the new iPad, which is slow on many drawing routines that the iPad 2 doesn't even blink at.
Other advantages to this method: you can share cached images between views if you set up a separate, shared cache, which can really help your memory footprint if you manage the cache well. (Tip: I use NSStringFromCGSize as a dictionary key for shared images.) You can also spin your drawing routines off onto a different thread and set the layer contents when they're done. This prevents your drawing routines from blocking the main thread (though the current image may be stretched in this case until the new image is set).
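A minimal sketch of the caching approach described above; the drawing method name is illustrative, not from the original answer:
- (void)setFrame:(CGRect)frame
{
    BOOL sizeChanged = !CGSizeEqualToSize(frame.size, self.frame.size);
    [super setFrame:frame];
    if (sizeChanged) {
        // Redraw into an offscreen context and cache the result as the layer's contents.
        UIGraphicsBeginImageContextWithOptions(frame.size, self.opaque, 0);
        [self drawCustomContentInContext:UIGraphicsGetCurrentContext()]; // hypothetical drawing routine
        UIImage *cachedImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        self.layer.contents = (__bridge id)cachedImage.CGImage;
    }
}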
I have subclassed UILabel with the following code, which works fine, but any animations involving the subclass run a lot slower than with normal UILabels. I'm assuming Quartz is the culprit, but is there anything I could do to speed things up a bit?
- (void)drawTextInRect:(CGRect)rect
{
    CGSize shadowOffset = self.shadowOffset;
    UIColor *textColor = self.textColor;

    // Establish the Quartz 2D drawing destination:
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetLineWidth(context, 1);
    CGContextSetLineJoin(context, kCGLineJoinRound);

    // Draw the label’s outline:
    CGContextSetTextDrawingMode(context, kCGTextStroke);
    self.textColor = [UIColor whiteColor];
    [super drawTextInRect:rect];

    // Draw the label:
    CGContextSetTextDrawingMode(context, kCGTextFill);
    self.textColor = textColor;
    self.shadowOffset = CGSizeMake(0, 0);
    [super drawTextInRect:rect];
    self.shadowOffset = shadowOffset;
}
What @MobileOverlord said is certainly applicable, especially the parts about profiling.
I will note that setting shouldRasterize = YES is not a catch-all solution (why wouldn't Apple have turned it on by default if it were?). Yes, it can improve scrolling performance, but it can do so at the expense of memory use, since you can end up with a bunch of large images sitting around in cache.
It also incurs overhead at the time of creation, I believe (but would have to check to be sure) including an off-screen rendering pass to actually create the rasterized copy. Depending on how the layer is used, this could actually hurt performance.
An additional factor to consider is whether your view has any transparency. If you can guarantee to the frameworks that your view is opaque (cf. setOpaque/isOpaque), they can optimize rendering by not considering all the complexities associated with alpha channels, etc. Similar considerations apply to CALayer.
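For example, something along these lines in the view's setup, assuming a fully opaque white background (a sketch, not from the original answer):
self.opaque = YES;                           // promise the framework there is no transparency
self.backgroundColor = [UIColor whiteColor]; // an opaque background color to back that promise
self.layer.opaque = YES;                     // the backing CALayer carries the same flag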
Finally, outside the block of code you showed, did you do anything sneaky to the backing layer (e.g. set a shadow or corner radius)? That's a quick way to kill performance on animation too.
After you have finished drawing your label, you can set shouldRasterize on its layer, which should speed up your animation; a short example follows the quoted documentation below.
shouldRasterize
A Boolean that indicates whether the layer is rendered as a bitmap before compositing. Animatable.
@property BOOL shouldRasterize
Discussion
When the value of this property is YES, the layer is rendered as a bitmap in its local coordinate space and then composited to the destination with any other content. Shadow effects and any filters in the filters property are rasterized and included in the bitmap. However, the current opacity of the layer is not rasterized. If the rasterized bitmap requires scaling during compositing, the filters in the minificationFilter and magnificationFilter properties are applied as needed.
When the value of this property is NO, the layer is composited directly into the destination whenever possible. The layer may still be rasterized prior to compositing if certain features of the compositing model (such as the inclusion of filters) require it.
The default value of this property is NO.
From CALayer Class Reference
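A minimal sketch of turning it on; label here stands in for your UILabel subclass instance:
label.layer.shouldRasterize = YES;
label.layer.rasterizationScale = [UIScreen mainScreen].scale; // match the screen scale to avoid blurry output on Retina displays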
The simulator is always going to give you much better results than a device will, because it can use the full processing power and memory of your Mac; you'll usually get misleading results this way. Whenever you are doing Core Graphics drawing in conjunction with Core Animation, it is important to test the results on a real device.
For this you can run your app under the Instruments Core Animation tool to track down the culprits. Check out my tutorial on it:
Instruments – Optimizing Core Animation
It seems that CGContextDrawImage(CGContextRef, CGRect, CGImageRef) performs MUCH WORSE when drawing a CGImage that was created by CoreGraphics (i.e. with CGBitmapContextCreateImage) than it does when drawing the CGImage which backs a UIImage. See this testing method:
- (void)showStrangePerformanceOfCGContextDrawImage
{
    /// Setup: load an image and start a context:
    UIImage *theImage = [UIImage imageNamed:@"reallyBigImage.png"];
    UIGraphicsBeginImageContext(theImage.size);
    CGContextRef ctxt = UIGraphicsGetCurrentContext();
    CGRect imgRec = CGRectMake(0, 0, theImage.size.width, theImage.size.height);

    /// Why is this SO MUCH faster...
    NSDate *startingTimeForUIImageDrawing = [NSDate date];
    CGContextDrawImage(ctxt, imgRec, theImage.CGImage); // Draw existing image into context using the UIImage backing
    NSLog(@"Time was %f", [[NSDate date] timeIntervalSinceDate:startingTimeForUIImageDrawing]);

    /// Create a new image from the context to use this time in CGContextDrawImage:
    CGImageRef theImageConverted = CGBitmapContextCreateImage(ctxt);

    /// This is WAY slower, but why?? Using a pure CGImageRef (as opposed to one behind a UIImage) seems like it should be faster, but AT LEAST it should be the same speed!?
    NSDate *startingTimeForNakedGImageDrawing = [NSDate date];
    CGContextDrawImage(ctxt, imgRec, theImageConverted);
    NSLog(@"Time was %f", [[NSDate date] timeIntervalSinceDate:startingTimeForNakedGImageDrawing]);
}
So I guess the question is: #1, what may be causing this, and #2, is there a way around it, i.e. other ways to create a CGImageRef that may be faster? I realize I could convert everything to UIImages first, but that is such an ugly solution. I already have the CGContextRef sitting there.
UPDATE: This seems to not necessarily be true when drawing small images? That may be a clue - this problem is amplified when large images (i.e. full-size camera pics) are used. 640x480 seems to be pretty similar in terms of execution time with either method.
UPDATE 2: OK, so I've discovered something new... It's actually NOT the backing of the CGImage that is changing the performance. I can flip-flop the order of the two steps and make the UIImage method behave slowly, whereas the "naked" CGImage will be super fast. It seems that whichever you perform second will suffer from terrible performance. This seems to be the case UNLESS I free memory by calling CGImageRelease on the image I created with CGBitmapContextCreateImage. Then the UIImage-backed method will be fast subsequently. The inverse is not true. What gives? "Crowded" memory shouldn't affect performance like this, should it?
UPDATE 3: Spoke too soon. The previous update holds true for images at size 2048x2048, but stepping up to 1936x2592 (camera size) the naked CGImage method is still way slower, regardless of the order of operations or memory situation. Maybe there are some CG internal limits that make a 16MB image efficient whereas the 21MB image can't be handled efficiently. It's literally 20 times slower to draw the camera size than a 2048x2048. Somehow UIImage provides its CGImage data much faster than a pure CGImage object does. o.O
UPDATE 4: I thought this might have to do with some memory caching thing, but the results are the same whether the UIImage is loaded with the non-caching [UIImage imageWithContentsOfFile:] or with [UIImage imageNamed:].
UPDATE 5 (Day 2): After creating more questions than were answered yesterday, I have something solid today. What I can say for sure is the following:
The CGImages behind a UIImage don't use alpha. (kCGImageAlphaNoneSkipLast). I thought that maybe they were faster to be drawn because my context WAS using alpha. So I changed the context to use kCGImageAlphaNoneSkipLast. This makes the drawing MUCH faster, UNLESS:
Drawing into a CGContextRef with a UIImage FIRST, makes ALL subsequent image drawing slow
I proved this by 1) first creating a non-alpha context (1936x2592); 2) filling it with randomly colored 2x2 squares; 3) drawing a full-frame CGImage into that context, which was FAST (0.17 seconds); 4) repeating the experiment, but filling the context with a drawn CGImage backing a UIImage. Subsequent full-frame image drawing was 6+ seconds. SLOWWWWW.
Somehow drawing into a context with a (Large) UIImage drastically slows all subsequent drawing into that context.
Well, after a TON of experimentation I think I have found the fastest way to handle situations like this. The drawing operation above, which was taking 6+ seconds, now takes 0.1 seconds. YES. Here's what I discovered:
Homogenize your contexts & images with a single pixel format! The root of the question I asked boiled down to the fact that the CGImages inside a UIImage were using THE SAME PIXEL FORMAT as my context, and were therefore fast, while the other CGImages were a different format and therefore slow. Inspect your images with CGImageGetAlphaInfo to see which pixel format they use. I'm using kCGImageAlphaNoneSkipLast EVERYWHERE now, as I don't need to work with alpha. If you don't use the same pixel format everywhere, Quartz will be forced to perform expensive pixel conversions for EACH pixel when drawing an image into a context. = SLOW
USE CGLayers! These make offscreen drawing performance much better. How this works is basically as follows: 1) create a CGLayer from the context using CGLayerCreateWithContext; 2) do any drawing/setting of drawing properties on THIS LAYER's CONTEXT, which is obtained with CGLayerGetContext (read any pixels or information from the ORIGINAL context); 3) when done, "stamp" this CGLayer back onto the original context using CGContextDrawLayerAtPoint. (A sketch of this workflow follows after the notes below.) This is FAST as long as you keep in mind:
1) Release any CGImages created from a context (i.e. those created with CGBitmapContextCreateImage) BEFORE "stamping" your layer back into the CGContextRef using CGContextDrawLayerAtPoint. This creates a 3-4x speed increase when drawing that layer. 2) Keep your pixel format the same everywhere!! 3) Clean up CG objects AS SOON as you can. Things hanging around in memory seem to create strange situations of slowdown, probably because there are callbacks or checks associated with these strong references. Just a guess, but I can say that CLEANING UP MEMORY ASAP helps performance immensely.
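A minimal sketch of that CGLayer workflow; destinationContext, the 1024x768 size, and the red fill are placeholders for your own setup:
// 1) Create a CGLayer sized to match the destination context.
CGLayerRef layer = CGLayerCreateWithContext(destinationContext, CGSizeMake(1024.0, 768.0), NULL);
CGContextRef layerContext = CGLayerGetContext(layer);

// 2) Do the offscreen drawing into the layer's own context.
CGContextSetRGBFillColor(layerContext, 1.0, 0.0, 0.0, 1.0);
CGContextFillRect(layerContext, CGRectMake(0, 0, 1024, 768));

// 3) "Stamp" the layer back onto the original context, then release it promptly.
CGContextDrawLayerAtPoint(destinationContext, CGPointZero, layer);
CGLayerRelease(layer);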
I had a similar problem. My application has to redraw a picture almost as large as the screen size. The problem came down to drawing, as fast as possible, two images of the same resolution, neither rotated nor flipped, but scaled and positioned in different places on the screen each time. In the end, I was able to get ~15-20 FPS on iPad 1 and ~20-25 FPS on iPad 4. So... hope this helps someone:
Exactly as typewriter said, you have to use the same pixel format. Using one with AlphaNone gives a speed boost. But even more important, in my case the argb32_image call made numerous calls converting pixels from ARGB to BGRA. So the best bitmapInfo value for me was (at the time; there is a chance that Apple may change something here in the future):
const CGBitmapInfo g_bitmapInfo = kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipLast;
CGContextDrawImage may work faster if the rectangle argument is made integral (via CGRectIntegral). This seems to have more of an effect when the image is scaled by a factor close to 1. (A combined sketch of several of these tips follows at the end of this list.)
Using layers actually slowed down things for me. Probably something was changed since 2011 in some internal calls.
Setting the interpolation quality of the context lower than the default (via CGContextSetInterpolationQuality) is important. I would recommend using (IS_RETINA_DISPLAY ? kCGInterpolationNone : kCGInterpolationLow). The IS_RETINA_DISPLAY macro is taken from here.
Make sure you get your CGColorSpaceRef from CGColorSpaceCreateDeviceRGB() or the like when creating the context. Some performance issues have been reported when using a fixed color space instead of requesting the device's.
Inheriting the view class from UIImageView and simply setting self.image to the image created from the context proved useful to me. However, read up on subclassing UIImageView first if you want to do this, because it requires some changes in code logic (drawRect: isn't called anymore).
And if you can avoid scaling your image at the time of actual drawing, try to do so. Drawing a non-scaled image is significantly faster; unfortunately, for me that was not an option.
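A sketch combining a few of the tips above; the 1024x768 dimensions, the drawing rect, and someImage are placeholders, and kCGInterpolationLow stands in for the retina check:
// Create a context with device RGB and the same pixel format used everywhere else.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(NULL, 1024, 768, 8, 0, colorSpace,
                                         kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipLast);
CGColorSpaceRelease(colorSpace);

// Lower the interpolation quality (use kCGInterpolationNone on retina, per the tip above).
CGContextSetInterpolationQuality(ctx, kCGInterpolationLow);

// Snap the destination rect to integral coordinates before drawing.
CGRect drawRect = CGRectIntegral(CGRectMake(10.3, 20.7, 640, 480));
CGContextDrawImage(ctx, drawRect, someImage); // someImage is a CGImageRef you already have

CGContextRelease(ctx);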