I have written a simple 2D animation application on the iPad using Sprite Kit. The only difficult part of this application is clipping an image (.png) with a calculated polygon (a shadow). Upon investigation, it appeared to me that the only way to accomplish this was to use UIImage and a UIBezierPath. (Note: my path does not require the full capabilities of a Bezier path; a simple CGPath would work in my case.) I wrote the following method to accomplish this clipping:
- (UIImage *)maskImage:(UIImage *)originalImage toPath:(UIBezierPath *)path
{
    // Render into an offscreen context at the image's size and the screen's scale.
    UIGraphicsBeginImageContextWithOptions(originalImage.size, NO, 0.0f);
    // Clip all subsequent drawing to the polygon, then draw the image through the clip.
    [path addClip];
    [originalImage drawAtPoint:CGPointZero];
    UIImage *maskedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return maskedImage;
}
The results of this operation are eventually passed to:
SKSpriteNode *node = [SKSpriteNode spriteNodeWithTexture: [SKTexture textureWithImage: [self buildShadowMask]]];
When this code is executed in the iOS Simulator, it runs extremely quickly, in fact, in real time. However, when I run it on an actual iPad (iPad Air or iPad 4G), it runs extremely slowly. So, my questions are: Why is this method of clipping an image so slow on an actual iPad? Is there any way to clip an image (with a simple path) that would be faster? Is this an issue with UIGraphicsContext, Sprite Kit, or both? I would greatly appreciate any advice or guidance that anyone can give me. Thanks very much!
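One alternative I am considering is keeping the clipping on the GPU with SpriteKit's SKCropNode and an SKShapeNode mask, sketched below (shadowPath and shadow.png are placeholder names for my calculated CGPath and texture), though I don't know whether it would actually be faster:

SKShapeNode *mask = [SKShapeNode node];
mask.path = shadowPath;                    // the calculated polygon as a CGPath
mask.fillColor = [SKColor whiteColor];     // opaque fill defines the visible region
mask.lineWidth = 0.0;

SKCropNode *cropNode = [SKCropNode node];
cropNode.maskNode = mask;
[cropNode addChild:[SKSpriteNode spriteNodeWithImageNamed:@"shadow.png"]];
[self addChild:cropNode];                  // e.g. inside the SKScene subclass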
Related
In my app, I use drawViewHierarchyInRect:afterScreenUpdates: in order to obtain a blurred image of my view (using Apple’s UIImage category UIImageEffects).
My code looks like this:
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 0);
[self.view drawViewHierarchyInRect:self.view.bounds afterScreenUpdates:YES];
UIImage *im = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
/* Use im */
I noticed during development that many of my animations were delayed after using my app for a bit, i.e., my views began their animations after a noticeable (but less than a second) pause compared to a fresh launch of the app.
After some debugging, I noticed that the mere act of using drawViewHierarchyInRect:afterScreenUpdates: with screen updates set to YES caused this delay. If this message was never sent during a session of usage, the delay never appeared. Using NO for the screen updates parameter also made the delay disappear.
The strange thing is that this blurring code is completely unrelated (as far as I can tell) to the delayed animations. The animations in question do not use drawViewHierarchyInRect:afterScreenUpdates:, they are CAKeyframeAnimation animations. The mere act of sending this message (with screen updates set to YES) seems to have globally affected animations in my app.
What’s going on?
(I have created videos illustrating the effect: with and without an animation delay. Note the delay in the appearance of the "Check!" speech bubble in the navigation bar.)
UPDATE
I have created an example project to illustrate this potential bug. https://github.com/timarnold/AnimationBugExample
UPDATE No. 2
I received a response from Apple verifying that this is a bug. See answer below.
I used one of my Apple developer support tickets to ask Apple about my issue.
It turns out it is a confirmed bug (radar number 17851775). Their hypothesis for what is happening is below:
The method drawViewHierarchyInRect:afterScreenUpdates: performs its operations on the GPU as much as possible, and much of this work will probably happen outside of your app’s address space in another process. Passing YES as the afterScreenUpdates: parameter to drawViewHierarchyInRect:afterScreenUpdates: will cause Core Animation to flush all of its buffers in your task and in the rendering task. As you may imagine, there’s a lot of other internal stuff that goes on in these cases too. Engineering theorizes that it may very well be a bug in this machinery related to the effect you are seeing.
In comparison, the method renderInContext: performs its operations inside of your app’s address space and does not use the GPU based process for performing the work. For the most part, this is a different code path and if it is working for you, then that is a suitable workaround. This route is not as efficient as it does not use the GPU based task. Also, it is not as accurate for screen captures as it may exclude blurs and other Core Animation features that are managed by the GPU task.
And they also provided a workaround. They suggested that instead of:
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 0);
[self.view drawViewHierarchyInRect:self.view.bounds afterScreenUpdates:YES];
UIImage *im = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
/* Use im */
I should do this:
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 0);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *im = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
/* Use im */
Hopefully this is helpful for someone!
I tried all the latest snapshot methods using Swift. The other methods didn't work for me in the background, but taking a snapshot this way did.
Create a UIView extension whose parameters are the view's layer and bounds.
extension UIView {
    // Renders the given layer into an image. The layer and bounds are passed in
    // so they can be captured on the main thread before this is called elsewhere.
    func asImage(viewLayer: CALayer, viewBounds: CGRect) -> UIImage {
        if #available(iOS 10.0, *) {
            let renderer = UIGraphicsImageRenderer(bounds: viewBounds)
            return renderer.image { rendererContext in
                viewLayer.render(in: rendererContext.cgContext)
            }
        } else {
            UIGraphicsBeginImageContext(viewBounds.size)
            viewLayer.render(in: UIGraphicsGetCurrentContext()!)
            let image = UIGraphicsGetImageFromCurrentImageContext()
            UIGraphicsEndImageContext()
            return image ?? UIImage()
        }
    }
}
Usage
DispatchQueue.main.async {
    // Capture the layer and bounds on the main thread...
    let layer = self.selectedView.layer
    let bounds = self.selectedView.bounds
    DispatchQueue.global(qos: .background).async {
        // ...then do the rendering work on a background queue.
        let image = self.selectedView.asImage(viewLayer: layer, viewBounds: bounds)
        // Use `image` here (dispatch back to the main queue if it touches the UI).
    }
}
We need to capture the layer and bounds on the main thread; the remaining work can then run on a background thread. This gives a smooth user experience without any lag or stutter in the UI.
Why do you have this line (from your sample app):
animation.beginTime = CACurrentMediaTime();
Just remove it, and everything will be as you want it to be.
By setting the begin time explicitly to CACurrentMediaTime() you ignore possible time transformations that may be present in the layer tree. Either don't set it at all (by default animations start now) or use the time-conversion method:
animation.beginTime = [view.layer convertTime:CACurrentMediaTime() fromLayer:nil];
UIKit adds time transformations to the layer tree when you call drawViewHierarchyInRect: with afterScreenUpdates:YES to prevent jumps in ongoing animations that would otherwise be caused by intermediate Core Animation commits. If you want to start an animation at a specific time (not now), use the time-conversion method mentioned above.
And while we're at it, strongly prefer -[UIView snapshotViewAfterScreenUpdates:] and friends over the -[UIView drawViewHierarchyInRect:] family (preferably passing NO for the afterScreenUpdates part). In most cases you don't really need a persistent image, and a view snapshot is what you actually want. Using a view snapshot instead of a rendered image has the following benefits (see the short sketch after this list):
2x-10x faster
Uses 2x-3x less memory
It will always use the correct color space and buffer format (e.g. on devices with wide-color screens)
It will use the correct scale and orientation, so you don't need to think about how to position your image so it looks right.
It works with accessibility features better (e.g. with Smart Invert colors)
View snapshot will also capture out-of-process and secure views correctly (while drawViewHierarchyInRect will render them black or white).
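For example, a minimal sketch of the snapshot route (sourceView and containerView are hypothetical references):

UIView *snapshot = [sourceView snapshotViewAfterScreenUpdates:NO];
snapshot.frame = containerView.bounds;
[containerView addSubview:snapshot];
// ... animate, cover, or blur over the snapshot, then remove it when done:
// [snapshot removeFromSuperview];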
When the afterScreenUpdates parameter is set to YES, the system has to wait until all pending screen updates have happened before it can render the view.
If you're kicking off animations at the same time then perhaps the rendering and the animations are trying to happen together and this is causing a delay.
It may be worth experimenting with kicking off your animations slightly later to prevent this. Obviously not too much later, because that would defeat the purpose, but a small dispatch_after interval would be worth trying.
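A minimal sketch of that idea (the 0.05-second delay and the -startAnimations method are hypothetical placeholders):

dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(0.05 * NSEC_PER_SEC)),
               dispatch_get_main_queue(), ^{
    [self startAnimations]; // kick off the CAKeyframeAnimations a moment later
});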
Have you tried running your code on a background thread?
Here's an example using GCD:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^{
    // background thread
    UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 0);
    [self.view drawViewHierarchyInRect:self.view.bounds afterScreenUpdates:YES];
    UIImage *im = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    dispatch_sync(dispatch_get_main_queue(), ^(void) {
        // update UI on main thread
    });
});
I'm trying to add a blur effect to a background view.
Here's my background view.
https://github.com/martinjuhasz/MJPopupViewController/blob/master/Source/MJPopupBackgroundView.m
I believe that the idea is to take a snapshot of the parent view and then add a blur effect to that image.
I've seen various approaches and am not sure which will work or which is the best approach.
Also I'm not sure where I'd create the snapshot.
The most common way is indeed probably just to take a snapshot and blur that.
You can take a snapshot by doing something like this:
+ (UIImage *)imageWithView:(UIView *)view
{
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 0.0f);
    [view drawViewHierarchyInRect:view.bounds afterScreenUpdates:NO];
    UIImage *snapshotImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return snapshotImage;
}
Keep in mind this is a fast iOS 7+ snapshotting method, so if you want to support lower versions of iOS you will need to use a slower method, detailed in this answer.
There are a lot of ways to achieve the blur; the two most popular are probably to use Apple's UIImageEffects WWDC sample code, which can be found here, or to use GPUImage, which can be found here.
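For example, a minimal sketch of the UIImageEffects route, assuming you have added Apple's UIImage+ImageEffects category from the WWDC sample code to your project (the parentView and backgroundImageView names are hypothetical):

#import "UIImage+ImageEffects.h"

UIImage *snapshot = [[self class] imageWithView:self.parentView]; // helper from above
UIImage *blurred  = [snapshot applyLightEffect];                  // or applyBlurWithRadius:tintColor:saturationDeltaFactor:maskImage: for full control
self.backgroundImageView.image = blurred;                         // shown behind the popup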
The link below shows how to present a view and blur the background behind it:
https://stackoverflow.com/a/52374977/5233180
I am implementing the following drawRect: method, but it uses more than 50% of the CPU. Any idea how I could solve that? I just draw a few lines, but I want them all to have different widths.
- (void)drawRect:(CGRect)rect
{
    [super drawRect:rect];
    @autoreleasepool {
        CGContextRef context = UIGraphicsGetCurrentContext();
        CGMutablePathRef path = CGPathCreateMutable();
        float width = rect.size.width;
        int nbLine = 10; // I want to draw 10 paths
        for (int iLine = 1; iLine < nbLine; iLine++) {
            float Pathwidth = 0.8 * (nbLine - (float)iLine) / nbLine;
            CGContextBeginPath(context);
            CGContextSetLineWidth(context, Pathwidth); // each path should have its own width
            CGPathMoveToPoint(path, NULL, 0, 0);
            for (int i = 0; i < 10; i++) {
                float x = width / (i + 1);
                float y = 1; // for this example, I just put a fixed number here - it's normally an external variable
                CGPathAddQuadCurveToPoint(path, NULL, x + width / 10, y, x, 0);
            }
            CGContextAddPath(context, path);
            CGContextStrokePath(context);
        }
        CGPathRelease(path);
    }
}
Thank you!
There are a few things you can try.
Use Instruments to find out exactly which line(s) are using the CPU.
Build the paths in UIBezierPath objects once, and then just stroke them each time in drawRect: (see the sketch after these suggestions).
Look at where setNeedsDisplay is being called from. Most likely each individual draw isn't using too much CPU; it is very possible that the problem is that it is rapidly drawing over and over.
If you are pressed for performance you can use a GLKView and draw with OpenGL directly. Core Graphics drawing trades speed for graphical clarity and quality, and if that is slowing you down to the point of non-usability then OpenGL may be your best bet.
My second suggestion would be not to redraw so often. You said drawRect: gets called every 4 ms, which is 250 times per second. The user can't perceive that level of detail, so that is extravagant.
My third suggestion is to use a UIView, draw once, then modify its transform based on your y variable. It appears as though you could do a simple y scaling to achieve what you are trying to do (no x changes, no line-width changes after drawing once). I could be oversimplifying based on your code, but it would be a good thing to try. You could also mix this suggestion with your code and redraw only if the y scale transform becomes too large.
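A minimal sketch of building the paths once in UIBezierPath objects and stroking them in drawRect: (cachedPaths and buildPaths are hypothetical names):

- (void)buildPaths
{
    NSMutableArray *paths = [NSMutableArray array];
    int nbLine = 10;
    for (int iLine = 1; iLine < nbLine; iLine++) {
        UIBezierPath *path = [UIBezierPath bezierPath];
        path.lineWidth = 0.8 * (nbLine - (float)iLine) / nbLine; // each path keeps its own width
        [path moveToPoint:CGPointZero];
        // ... add the quad-curve segments here, as in the original loop ...
        [paths addObject:path];
    }
    self.cachedPaths = paths;
}

- (void)drawRect:(CGRect)rect
{
    [super drawRect:rect];
    for (UIBezierPath *path in self.cachedPaths) {
        [path stroke]; // strokes with each path's own lineWidth
    }
}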
I have a custom UITableViewCell subclass which shows an image with text over it.
The image is downloaded, while the text is readily available at the time the table view cell is displayed.
From various places, I read that it is better to have just one view and draw everything in that view's drawRect: method to improve performance, compared to having multiple subviews (in this case a UIImageView and two UILabels).
I don't want to draw the image in the custom table view cell's drawRect: because
the image will probably not be available the first time it's called,
I don't want to redraw the whole image every time someone calls drawRect:.
The image drawing should only happen when someone asks for the image to be displayed (for example, when the network operation completes and the image is available to be rendered). The text, however, is drawn in the -drawRect: method.
The problems:
I am not able to show the image on the screen once it is downloaded.
The code I am currently using is:
- (void)drawImageInView
{
    // ... completion block after downloading from the network
    if (image) { // image downloaded from the network
        UIGraphicsBeginImageContext(rect.size);
        CGContextRef context = UIGraphicsGetCurrentContext();
        CGContextSetStrokeColorWithColor(context, [UIColor whiteColor].CGColor);
        CGContextSetFillColorWithColor(context, [UIColor whiteColor].CGColor);
        CGContextSetLineWidth(context, 1.0);
        CGContextSetTextDrawingMode(context, kCGTextFill);
        CGPoint posOnScreen = self.center;
        CGContextDrawImage(context, CGRectMake(posOnScreen.x - image.size.width / 2,
                                               posOnScreen.y - image.size.height / 2,
                                               image.size.width,
                                               image.size.height),
                           image.CGImage);
        UIGraphicsEndImageContext();
    }
}
I have also tried:
UIGraphicsBeginImageContext(rect.size);
[image drawInRect:rect];
UIGraphicsEndImageContext();
to no avail.
How can I make sure the text is drawn on top of the image when it is rendered? Should calling [self setNeedsDisplay] after UIGraphicsEndImageContext() be enough to ensure that the text is rendered on top of the image?
You're right that drawing the text yourself will make your application faster, since there's no UILabel object overhead, but UIImageView is highly optimized and you will probably never be able to draw images faster than this class does. Therefore I highly recommend you use UIImageView to draw your images. Don't fall into the optimization pitfall: only optimize when you see that your application is not performing at its best.
Once the image is downloaded, just set the imageView's image property to your image and you'll be done.
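A minimal sketch of that, assuming the cell exposes a cellImageView property and a hypothetical download method with a completion block:

[self downloadImageWithCompletion:^(UIImage *image) {
    dispatch_async(dispatch_get_main_queue(), ^{
        self.cellImageView.image = image; // UIImageView takes care of redrawing
    });
}];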
Notice that the stackoverflow page you linked to is almost four years old, and that question links to articles that are almost five years old. When those articles were written in 2008, the current device was an iPhone 3G, which was much slower (both CPU and GPU) and had much less RAM than the current devices in 2013. So the advice you read there isn't necessarily relevant today.
Anyway, don't worry about performance until you've measured it (presumably with the Time Profiler instrument) and found a problem. Just implement your interface in the simplest, most maintainable way you can. Then, if you find a problem, try something more complicated to fix it.
So: just use a UIImageView to display your image, and a UILabel to display your text. Then test to see if it's too slow.
If your testing shows that it's too slow, profile it. If you can't figure out how to profile it, or how to interpret the profiler output, or how to fix the problem, then come back and post a question, and include the profiler output.
I'm trying to screen capture a view that uses CATiledLayers (for animation) but unable to get the image that I want.
I tried it on Apple's "PhotoScroller" sample application and added this:
UIGraphicsBeginImageContext(self.view.frame.size);
CGContextRef ctx = UIGraphicsGetCurrentContext();
[self.view.layer renderInContext:ctx];
UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
However, the tiles don't render in the resulting UIImage and all I get is the tile outlines.
It seems that CATiledLayer's renderInContext: behaves differently from CALayer's.
Am I doing anything wrong in trying to capture the tiles? Is my only solution to render the tiles individually myself?
In the end, rather than trying to render the tiles into another view just for animation, I just created a new instance of ImageScrollView, and animated the original one and the new one together before deallocating the original one.
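A rough sketch of that approach; ImageScrollView is the class from Apple's PhotoScroller sample, and the originalScrollView property plus the configuration step are hypothetical placeholders:

ImageScrollView *newScrollView = [[ImageScrollView alloc] initWithFrame:self.originalScrollView.frame];
// ... configure newScrollView with the same image/page as the original ...
newScrollView.alpha = 0.0;
[self.view addSubview:newScrollView];

[UIView animateWithDuration:0.3 animations:^{
    // Cross-fade: animate both views together.
    self.originalScrollView.alpha = 0.0;
    newScrollView.alpha = 1.0;
} completion:^(BOOL finished) {
    [self.originalScrollView removeFromSuperview]; // let the original deallocate
    self.originalScrollView = newScrollView;
}];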