Performance Issues When Using Many CALayer Masks

I am trying to use CAShapeLayer to mask a CALayer in my iOS app, since it takes a fraction of the CPU time to mask an image that way versus masking it manually in a bitmap context.
However, when I have several dozen or more images layered over each other, the CAShapeLayer-masked UIImageView is slow to follow my touch.
Here is some example code:
UIImage *image = [UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"SomeImage.jpg" ofType:nil]];
CGMutablePathRef path = CGPathCreateMutable();
CGPathAddEllipseInRect(path, NULL, CGRectMake(0.f, 0.f, image.size.width * .25, image.size.height * .25));

for (int i = 0; i < 200; i++) {
    SLTUIImageView *imageView = [[SLTUIImageView alloc] initWithImage:image];
    imageView.frame = CGRectMake(arc4random_uniform(CGRectGetWidth(self.view.bounds)), arc4random_uniform(CGRectGetHeight(self.view.bounds)), image.size.width * .25, image.size.height * .25);

    CAShapeLayer *shape = [CAShapeLayer layer];
    shape.path = path;
    imageView.layer.mask = shape;

    [self.view addSubview:imageView];
    [imageView release];
}
CGPathRelease(path);
With the above code, imageView is very laggy. However, it reacts instantly if I mask it manually in a bitmap context:
UIImage *image = [UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"3.0-Pad-Classic0.jpg" ofType:nil]];
CGMutablePathRef path = CGPathCreateMutable();
CGPathAddEllipseInRect(path, NULL, CGRectMake(0.f, 0.f, image.size.width * .25, image.size.height * .25));

for (int i = 0; i < 200; i++) {
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(image.size.width * .25, image.size.height * .25), NO, [[UIScreen mainScreen] scale]);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextAddPath(ctx, path);
    CGContextClip(ctx);

    [image drawInRect:CGRectMake(-(image.size.width * .25), -(image.size.height * .25), image.size.width, image.size.height)];

    UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    SLTUIImageView *imageView = [[SLTUIImageView alloc] initWithImage:finalImage];
    imageView.frame = CGRectMake(arc4random_uniform(CGRectGetWidth(self.view.bounds)), arc4random_uniform(CGRectGetHeight(self.view.bounds)), finalImage.size.width, finalImage.size.height);

    [self.view addSubview:imageView];
    [imageView release];
}
CGPathRelease(path);
By the way, here is the code for SLTUIImageView. It's just a simple subclass of UIImageView that responds to touches (for anyone who was wondering):
- (id)initWithImage:(UIImage *)image {
    self = [super initWithImage:image];
    if (self) {
        self.userInteractionEnabled = YES;
    }
    return self;
}

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    [self.superview bringSubviewToFront:self];
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    self.center = [touch locationInView:self.superview];
}
Is it possible to optimize how the CAShapeLayer masks the UIImageView so that performance improves? I have tried to find the bottleneck with the Time Profiler in Instruments, but I can't tell exactly what is causing it.
I have tried setting shouldRasterize to YES on both layer and layer.mask, but neither seems to have any effect. I'm not sure what to do.
Edit:
I have done more testing and find that if I use just a regular CALayer to mask another CALayer (layer.mask = someOtherLayer) I have the same performance issues. It seems that the problem isn't specific to CAShapeLayer—rather it is specific to the mask property of CALayer.
Edit 2:
So after learning more about using the Core Animation tool in Instruments, I learned that the view is being rendered offscreen each time it moves. Setting shouldRaster to YES when the touch begins and turning it off when the touch ends makes the view stay green (thus keeping the cache) in instruments, but performance is still terrible. I believe this is because even though the view is being cached, if it isn't opaque, than it still has to be re-rendered with each frame.
One thing to emphasize is that if there are only a few views being masked (say even around ten) the performance is pretty good. However, when you increase that to 100 or more, the performance lags. I imagine this is because when one moves over the others, they all have to be re-rendered.
My conclusion is this: I have one of two options.
First, there must be some way to permanently mask a view (render it once and be done with it). I know this can be done via the bitmap-context route, as I show in my example code, but when a layer masks its view it happens instantly, whereas doing it in a bitmap context as shown is dramatically slower.
Second, there must be some faster way to do it via the bitmap-context route. If there is an expert in masking images or views out there, their help would be very much appreciated.

You've gotten pretty far along, and I believe you are almost at a solution. What I would do is simply an extension of what you've already tried: since you say many of these layers end up in final positions that remain constant relative to the other layers and to the mask, simply render all those "finished" layers into a single bitmap context. That way, every time you write a layer out to that single context, you have one less live layer slowing down the animation/rendering process.
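As a rough illustration, here is a minimal sketch of that flattening step. It assumes the views that have settled are collected in an array called settledViews (a made-up name), that self.view is their superview, and that each one shows its image clipped to an ellipse as in the question; renderInContext: would drop the layer masks, so the clip is applied directly in the bitmap context, as in the question's second snippet. Memory management follows the question's MRC style.
// Flatten the settled, masked image views into one bitmap-backed view so the
// compositor only has to move the few views still being dragged.
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, [[UIScreen mainScreen] scale]);
CGContextRef ctx = UIGraphicsGetCurrentContext();
for (UIImageView *settled in settledViews) {
    CGContextSaveGState(ctx);
    CGContextTranslateCTM(ctx, settled.frame.origin.x, settled.frame.origin.y);
    // Clip to the ellipse and draw the image scaled into the view's bounds,
    // matching UIImageView's default scale-to-fill appearance.
    CGContextAddEllipseInRect(ctx, settled.bounds);
    CGContextClip(ctx);
    [settled.image drawInRect:settled.bounds];
    CGContextRestoreGState(ctx);
    [settled removeFromSuperview];
}
UIImage *flattened = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

UIImageView *flattenedView = [[UIImageView alloc] initWithImage:flattened];
flattenedView.frame = self.view.bounds;
[self.view insertSubview:flattenedView atIndex:0];
[flattenedView release];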

Quartz (drawRect:) is slower than Core Animation for many reasons (see the CALayer vs CGContext and drawRect vs CALayer discussions), but it has to be used correctly.
The documentation offers some advice; see Improving Core Animation Performance.
If you want high performance, you could also try AsyncDisplayKit, a framework for building smooth, responsive apps.

Related

Drawing a self-erasing path with CGContextRef

I would like to draw a "disappearing stroke" on a UIImageView, which follows a touch event and self-erases after a fixed time delay. Here's what I have in my ViewController.
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint currentPoint = [touch locationInView:self.view];
    CGPoint lp = lastPoint;
    UIColor *color = [UIColor blackColor];
    [self drawLine:5 from:lastPoint to:currentPoint color:color blend:kCGBlendModeNormal];

    double delayInSeconds = 1.0;
    dispatch_time_t popTime = dispatch_time(DISPATCH_TIME_NOW, (int64_t)(delayInSeconds * NSEC_PER_SEC));
    dispatch_after(popTime, dispatch_get_main_queue(), ^(void){
        [self drawLine:brush from:lp to:currentPoint color:[UIColor clearColor] blend:kCGBlendModeClear];
    });

    lastPoint = currentPoint;
}
- (void)drawLine:(CGFloat)width from:(CGPoint)from to:(CGPoint)to color:(UIColor *)color blend:(CGBlendMode)mode {
    UIGraphicsBeginImageContext(self.view.frame.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [self.tempDrawImage.image drawInRect:CGRectMake(0, 0, self.view.frame.size.width, self.view.frame.size.height)];

    CGContextMoveToPoint(context, from.x, from.y);
    CGContextAddLineToPoint(context, to.x, to.y);
    CGContextSetLineCap(context, kCGLineCapRound);
    CGContextSetLineWidth(context, width);
    CGContextSetStrokeColorWithColor(context, [color CGColor]);
    CGContextSetBlendMode(context, mode);
    CGContextStrokePath(context);

    self.tempDrawImage.image = UIGraphicsGetImageFromCurrentImageContext();
    [self.tempDrawImage setAlpha:1];
    UIGraphicsEndImageContext();
}
The draw phase works nicely, but there are a couple of problems with the subsequent erase phase.
While the line "fill" is correctly cleared, a thin stroke around the path remains.
The erase phase is choppy, nowhere near as smooth as the draw phase. My best guess is that this is due to the cost of running UIGraphicsBeginImageContext inside dispatch_after.
Is there a better approach to drawing a self-erasing line?
BONUS: What I'd really like is for the path to "shrink and vanish." In other words, after the delay, rather than just clearing the stroked path, I'd like to have it shrink from 5pt to 0pt while simultaneously fading out the opacity.
I would just let the view draw continuously at 60 Hz, and each time draw the entire line using points stored in an array. That way, if you remove the oldest points from the array, they simply stop being drawn.
To hook your view up to the display refresh rate (60 Hz), try this:
displayLink = [CADisplayLink displayLinkWithTarget:self selector:@selector(update)];
[displayLink addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSDefaultRunLoopMode];
Store an age property along with each point, then just loop over the array and remove points which are older than your threshold.
e.g.
@interface AgingPoint : NSObject
@property CGPoint point;
@property NSTimeInterval birthdate;
@end

// ..... later, in the draw call
NSTimeInterval now = CACurrentMediaTime();
AgingPoint *p = [AgingPoint new];
p.point = touchlocation; // get yr touch
p.birthdate = now;

// remove old points
while (myPoints.count && now - [myPoints[0] birthdate] > 1)
{
    [myPoints removeObjectAtIndex:0];
}
[myPoints addObject:p];

if (myPoints.count < 2)
    return;

UIBezierPath *path = [UIBezierPath bezierPath];
[path moveToPoint:[myPoints[0] point]];
for (int i = 1; i < myPoints.count; i++)
{
    [path addLineToPoint:[myPoints[i] point]];
}
[path stroke];
So on each draw call, make a new bezier path, move to the first point, then add lines to all the other points. Finally, stroke the path.
To implement the "shrinking" line, you could draw just short lines between consecutive pairs of points in your array, and use the age property to calculate stroke width. This is not perfect, as the individual segments will have the same width at start and end point, but it's a start.
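A minimal sketch of that per-segment idea, assuming the same AgingPoint array and a one-second lifetime (maxAge is a made-up name):
// Draw each consecutive pair of points as its own short path, with the stroke
// width shrinking from 5pt to 0pt as the older endpoint approaches maxAge.
NSTimeInterval maxAge = 1.0;
[[UIColor blackColor] setStroke];
for (int i = 1; i < myPoints.count; i++)
{
    AgingPoint *older = myPoints[i - 1];
    AgingPoint *newer = myPoints[i];
    CGFloat age = now - older.birthdate;
    CGFloat width = 5.0 * MAX(0.0, 1.0 - age / maxAge);

    UIBezierPath *segment = [UIBezierPath bezierPath];
    segment.lineWidth = width;
    segment.lineCapStyle = kCGLineCapRound;
    [segment moveToPoint:older.point];
    [segment addLineToPoint:newer.point];
    [segment stroke];
    // The fade-out could be done the same way, deriving the stroke color's
    // alpha from age / maxAge before stroking.
}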
Important: if you are going to draw a lot of points, performance will become an issue. This kind of path rendering with Quartz is not exactly tuned to be fast; in fact, it is very, very slow. Cocoa arrays and objects are not particularly fast either.
If you run into performance issues and you want to continue this project, look into OpenGL rendering. You will be able to make this run a lot faster with plain C structs pushed to your GPU.
There were a lot of great answers here. I think the ideal solution is to use OpenGL, as it'll inevitably be the most performant and provide the most flexibility in terms of sprites, trails, and other interesting visual effects.
My application is a remote controller of sorts, designed to simply provide a small visual aid to track motion, rather than leave persistent or high fidelity strokes. As such, I ended up creating a simple subclass of UIView which uses CoreGraphics to draw a UIBezierPath. I'll eventually replace this quick-fix solution with an OpenGL solution.
The implementation I used is far from perfect, as it leaves behind white paths which interfere with future strokes, until the user lifts their touch, which resets the canvas. I've posted the solution I used here, in case anyone might find it helpful.
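Roughly, such a view can be as small as the following sketch; points is assumed to be an NSMutableArray ivar of AgingPoint objects maintained from the touch handlers as in the answer above, and this is not the posted solution itself:
// Inside a bare-bones UIView subclass that redraws the current stroke each
// frame (driven by the CADisplayLink shown earlier calling setNeedsDisplay).
- (void)drawRect:(CGRect)rect {
    if (points.count < 2)
        return;

    UIBezierPath *path = [UIBezierPath bezierPath];
    path.lineWidth = 5.0;
    path.lineCapStyle = kCGLineCapRound;
    [path moveToPoint:[points[0] point]];
    for (int i = 1; i < points.count; i++) {
        [path addLineToPoint:[points[i] point]];
    }
    [[UIColor blackColor] setStroke];
    [path stroke];
}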

Masking two images

I want to do selective masking between two images in iOS, similar to the mask function in Blender. There are two images, 1 and 2 (resized to the same dimensions). Initially only image 1 is visible, but wherever the user touches image 1, it becomes transparent and image 2 becomes visible in those regions.
I created a mask-like image using Core Graphics on touch move. It is basically a full black image with white portions wherever I touched, with alpha set to 1.0 throughout. I could use this image as a mask and do the rest with my own image-processing methods, iterating over each pixel, checking it, and setting values accordingly, but since this would be called inside touch move it might slow the whole process down (especially for 8 MP camera images).
I want to know how this can be achieved with Quartz Core or Core Graphics efficiently enough to work on big images.
The code I have so far:
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    mouseSwiped = YES;
    UITouch *touch = [touches anyObject];
    CGPoint currentPoint = [touch locationInView:staticBG];

    UIGraphicsBeginImageContext(staticBG.frame.size);
    [maskView.image drawInRect:CGRectMake(0, 0, maskView.frame.size.width, maskView.frame.size.height)];
    CGContextSetLineCap(UIGraphicsGetCurrentContext(), kCGLineCapRound);
    CGContextSetLineWidth(UIGraphicsGetCurrentContext(), 20.0);
    CGContextSetRGBStrokeColor(UIGraphicsGetCurrentContext(), 1.0, 1.0, 1.0, 0.0);
    CGContextBeginPath(UIGraphicsGetCurrentContext());
    CGContextMoveToPoint(UIGraphicsGetCurrentContext(), lastPoint.x, lastPoint.y);
    CGContextAddLineToPoint(UIGraphicsGetCurrentContext(), currentPoint.x, currentPoint.y);
    CGContextStrokePath(UIGraphicsGetCurrentContext());
    maskView.image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    lastPoint = currentPoint;
    mouseMoved++;
    if (mouseMoved == 10)
        mouseMoved = 0;

    staticBG.image = [self maskImage:staticBG.image withMask:maskView.image];
    //maskView.hidden = NO;
}

- (UIImage *)maskImage:(UIImage *)baseImage withMask:(UIImage *)maskImage
{
    CGImageRef imgRef = [baseImage CGImage];
    CGImageRef maskRef = [maskImage CGImage];
    CGImageRef actualMask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                              CGImageGetHeight(maskRef),
                                              CGImageGetBitsPerComponent(maskRef),
                                              CGImageGetBitsPerPixel(maskRef),
                                              CGImageGetBytesPerRow(maskRef),
                                              CGImageGetDataProvider(maskRef), NULL, false);
    CGImageRef masked = CGImageCreateWithMask(imgRef, actualMask);
    return [UIImage imageWithCGImage:masked];
}
The maskImage method is not working; it creates a mask image depending on alpha values.
I went through this link: Creating Mask from Path, but I cannot understand the answer.
First of all, I will mention a few things that I hope you know already:
Masking works by taking the alpha values only.
Creating an image with Core Graphics on every touch move is a pretty huge overhead, and you should try to avoid it or find some other way of doing things.
Try to use a static mask image.
I would like to propose that you look at this from an inverted point of view: instead of trying to make a hole in the top image to see the bottom one, why not place the bottom image on top and mask the bottom image so that it shows through at the user's touch points, covering up the top view in those specific parts? (See the sketch at the end of this answer.)
I've made an example so you can get the idea, over here: http://goo.gl/Zlu31T
Good luck, and do post back if anything is not clear. I do believe there are much better and more optimised ways of doing this, though.
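As a rough illustration of that inverted setup, here is a minimal sketch. It assumes topImageView shows image 1, bottomImageView shows image 2 and sits above it, revealPath is a UIBezierPath ivar, and the touch points are in bottomImageView's coordinate space (all of these names are assumptions, not the linked example's code):
// One-time setup: image 2 sits above image 1, masked by a shape layer, so
// only the stroked path of image 2 shows through.
CAShapeLayer *revealMask = [CAShapeLayer layer];
revealMask.strokeColor = [UIColor whiteColor].CGColor; // any opaque color works in a mask
revealMask.fillColor = [UIColor clearColor].CGColor;
revealMask.lineWidth = 20.0;
revealMask.lineCap = kCALineCapRound;
bottomImageView.layer.mask = revealMask;

// In touchesMoved: extend the path and hand a fresh copy to the mask;
// no bitmap is redrawn, so large images stay cheap to update.
[revealPath moveToPoint:lastPoint];
[revealPath addLineToPoint:currentPoint];
revealMask.path = revealPath.CGPath;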
Since you're doing this in real time, I suggest you fake it while editing; if you need to output the image later on, you can mask it for real, since that might take some time (not much, just not fast enough to do in real time). By faking I mean putting image 1 as the background and, on top of it, the hidden image 2. Once the user touches a point, set the bounds of the image 2 UIImageView to
CGRect rect = CGRectMake(touch.x - desiredRect.size.width/2,
                         touch.y - desiredRect.size.height/2,
                         desiredRect.size.width,
                         desiredRect.size.height);
and make it visible.
desiredRect would be the portion of image 2 that you want to show. Upon lifting the finger, you can just hide the image 2 UIImageView so that image 1 is fully visible again. It is the fastest way I can think of right now, if your goal isn't to output the image at that very moment.
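A minimal sketch of that show/hide flow, assuming image2View is the hidden UIImageView sitting on top and desiredSize is the size of the window to reveal (both names are made up):
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint touchPoint = [[touches anyObject] locationInView:self.view];
    // Center the revealed window of image 2 on the touch and show it.
    image2View.frame = CGRectMake(touchPoint.x - desiredSize.width / 2,
                                  touchPoint.y - desiredSize.height / 2,
                                  desiredSize.width,
                                  desiredSize.height);
    image2View.hidden = NO;
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    // Hide image 2 again so image 1 is fully visible.
    image2View.hidden = YES;
}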
Use this code; it will help with masking the two UIImages:
CGSize newSize = CGSizeMake(320, 377);
UIGraphicsBeginImageContext( newSize );
// Use existing opacity as is
[ backGroundImageView.image drawInRect:CGRectMake(0,0,newSize.width,newSize.height)];
// Apply supplied opacity
[self.drawImage.image drawInRect:CGRectMake(0,0,newSize.width,newSize.height) blendMode:kCGBlendModeNormal alpha:0.8];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
imageData = UIImagePNGRepresentation(newImage);

Scaling an image is slow on an iPad 4th gen, are there faster alternatives?

I'm trying to zoom and translate an image on the screen.
Here's my drawRect:
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSaveGState(context);
    CGContextSetShouldAntialias(context, NO);
    CGContextScaleCTM(context, senderScale, senderScale);
    [self.image drawAtPoint:CGPointMake(imgposx, imgposy)];
    CGContextRestoreGState(context);
}
When senderScale is 1.0, moving the image (imgposx/imgposy) is very smooth. But if senderScale has any other value, performance takes a big hit and the image stutters when I move it.
The image I am drawing is a UIImage object. I create it with
UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0.0);
and draw a simple UIBezierPath (stroked):
self.image = UIGraphicsGetImageFromCurrentImageContext();
Am I doing something wrong? Turning off the anti-aliasing did not improve things much.
Edit:
I tried this:
rectImage = CGRectMake(0, 0, self.frame.size.width * senderScale, self.frame.size.height * senderScale);
[image drawInRect:rectImage];
but it was just as slow as the other method.
If you want this to perform well, you should let the GPU do the heavy lifting by using Core Animation instead of drawing the image in your -drawRect: method. Try creating a view and doing:
myView.layer.contents = self.image.CGImage;
Then zoom and translate it by manipulating the UIView relative to its superview. If you draw the image in -drawRect:, you're forcing the image to be blitted for every frame. Doing it via Core Animation blits only once and then lets the GPU zoom and translate the layer.
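A minimal sketch of that setup, written MRC-style to match the earlier code on this page; myView is assumed to be an ivar, senderScale, imgposx, and imgposy are meant to carry the same values as in the question, and the scale/translate order may need tweaking to match the original drawing:
// One-time setup: hand the image to Core Animation (use __bridge id under ARC).
myView = [[UIView alloc] initWithFrame:self.bounds];
myView.layer.contents = (id)self.image.CGImage;
[self addSubview:myView];

// On each update: zoom and translate on the GPU instead of redrawing.
myView.transform = CGAffineTransformConcat(
    CGAffineTransformMakeScale(senderScale, senderScale),
    CGAffineTransformMakeTranslation(imgposx, imgposy));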

Cropping an image without using an image mask?

I'm trying to add a feature to my iPhone app that allows users to record the screen. To do this, I used an open-source class called ScreenCaptureView that compiles a series of screenshots of the main view through the renderInContext: method. However, that method ignores CALayer masks, which are key to my app.
EDIT:
I need a way to record the screen so that the masks are included. Although this question specifically asks for a way to create the illusion of an image mask, I'm open to any other modifications I can make to record the screen successfully.
APP DESCRIPTION
In the app, I take a picture and create the effect of a moving mouth by animating the position of the jaw region, as shown below.
Currently, I have the entire face as one CALayer, and the chin region as a separate CALayer. To make the chin layer, I mask the chin region from the complete face using a CGPath. (This path is an irregular shape and must be dynamic).
- (CALayer *)getChinLayerFromPic:(UIImage *)pic frame:(CGRect)frame {
    CGMutablePathRef mPath = CGPathCreateMutable();
    CGPathMoveToPoint(mPath, NULL, p0.x, p0.y);
    CGPoint midpt = CGPointMake((p2.x + p0.x)/2, (p2.y + p0.y)/2);
    CGPoint c1 = CGPointMake(2*v1.x - midpt.x, 2*v1.y - midpt.y); // control points
    CGPoint c2 = CGPointMake(2*v2.x - midpt.x, 2*v2.y - midpt.y);
    CGPathAddQuadCurveToPoint(mPath, NULL, c1.x, c1.y, p2.x, p2.y);
    CGPathAddQuadCurveToPoint(mPath, NULL, c2.x, c2.y, p0.x, p0.y);

    CALayer *chin = [CALayer layer];
    CAShapeLayer *chinMask = [CAShapeLayer layer];
    chin.frame = frame;
    chin.contents = (id)[pic CGImageWithProperOrientation];
    chinMask.path = mPath;
    chin.mask = chinMask;
    CGPathRelease(mPath);
    return chin;
}
I then animate the chin layer with a path animation.
As mentioned before, the renderInContext: method ignores the mask, and returns an image of the entire face instead of just the chin. Is there any way I can create an illusion of masking the chin? I would like to use CALayers if possible, since it would be most convenient for animations. However, I'm open to any ideas, including other ways to capture the video. Thanks.
EDIT:
I'm turning the cropped chin into a UIImage and then setting that new image as the layer's contents, instead of directly masking the layer. However, the cropped region is the reverse of the specified path.
CALayer *chin = [CALayer layer];
chin.frame = frame;
CGImageRef imageRef = [pic CGImage];
CGColorSpaceRef colorSpaceInfo = CGImageGetColorSpace(imageRef);
int targetWidth = frame.size.width;
int targetHeight = frame.size.height;
CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
CGContextRef bitmap = CGBitmapContextCreate(NULL, targetWidth, targetHeight, CGImageGetBitsPerComponent(imageRef), CGImageGetBytesPerRow(imageRef), colorSpaceInfo, bitmapInfo);
CGContextAddPath(bitmap, mPath);
CGContextClip(bitmap);
CGContextDrawImage(bitmap, CGRectMake(0, 0, targetWidth, targetHeight), imageRef);
CGImageRef ref = CGBitmapContextCreateImage(bitmap);
UIImage *chinPic = [UIImage imageWithCGImage:ref];
chin.contents = (id)[chinPic CGImageWithProperOrientation];
Why don't you draw the chin CALayer into a separate CGImage and make a new UIImage from that?
Then you can assign that image to a separate UIImageView, which you can move around with a UIPanGestureRecognizer, for example.
I suggest that you draw each part on its own (you don't need masks): the face without the chin, and the chin, both with transparent pixels around them. Then just draw the chin on top and move it along your path.
Please let me know if this wasn't helpful to you. Regards.
If you just need this in order to capture the screen, it is much easier to use a programme such as
http://www.airsquirrels.com/reflector/
that connects your device to the computer via AirPlay, and then record the stream on the computer. This particular programme has screen recording built in, extremely convenient. I think there is a trial version you can use for recordings of up to 10 minutes.
Have you tried taking a look at the question "layer.renderInContext doesn't take layer.mask into account?"?
In particular (notice the coordinate flip):
//Make the drawing right with coordinate switch
CGContextTranslateCTM(context, 0, cHeight);
CGContextScaleCTM(context, 1.0, -1.0);
CGContextClipToMask(context, CGRectMake(maskLayer.frame.origin.x * mod, maskLayer.frame.origin.y * modTwo, maskLayer.frame.size.width * mod,maskLayer.frame.size.height * modTwo), maskLayer.image.CGImage);
//Reverse the coordinate switch
CGAffineTransform ctm = CGContextGetCTM(context);
ctm = CGAffineTransformInvert(ctm);
CGContextConcatCTM(context, ctm);
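Applied to the bitmap-context crop in the question's edit, the same flip could look like this minimal sketch, replacing the clip-and-draw lines there; targetWidth, targetHeight, mPath, bitmap, and imageRef are the variables from that snippet, and this assumes the path was built in UIKit's top-left coordinate space:
// Flip the chin path from UIKit's top-left space into the bitmap context's
// bottom-left space, so the clip selects the intended region rather than its
// vertical mirror image.
CGAffineTransform flip = CGAffineTransformMake(1, 0, 0, -1, 0, targetHeight);
CGPathRef flippedPath = CGPathCreateCopyByTransformingPath(mPath, &flip);

CGContextAddPath(bitmap, flippedPath);
CGContextClip(bitmap);
CGContextDrawImage(bitmap, CGRectMake(0, 0, targetWidth, targetHeight), imageRef);
CGPathRelease(flippedPath);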

drawInRect method is too slow

Right now I have the following:
- (void)drawRect:(CGRect)rect
{
    // some drawing
    [bgImage drawInRect:self.bounds];
    // some drawing
}
I have more than 40 views with text and some marks inside. I need to repaint all of these views when the user taps, and it should be really fast!
I instrumented my code and saw that 75% of all execution time is [MyView drawRect:], and 95% of my drawRect: time is the [bgImage drawInRect:self.bounds] call. I need the background to be drawn by the GPU, not the CPU. How is that possible?
What I have tried:
Using subviews instead of drawRect:. In my case it is very slow because of color blending that I can't get rid of.
Adding a UIImageView as a background doesn't help; we can't draw on subviews ...
Adding an image layer to the CALayer? Is that possible?
UPDATE:
Now I am trying to use a CALayer instead of drawInRect.
- (void)drawRect:(CGRect)rect
{
    // some drawing
    CALayer *layer = [CALayer layer];
    layer.frame = self.bounds;
    layer.contents = (id)image.CGImage;
    layer.contentsScale = [UIScreen mainScreen].scale;
    layer.contentsCenter = CGRectMake(capInsects.left / image.size.width,
                                      capInsects.top / image.size.height,
                                      (image.size.width - capInsects.left - capInsects.right) / image.size.width,
                                      (image.size.height - capInsects.top - capInsects.bottom) / image.size.height);
    // some drawing
}
Nothing from this new layer is visible right now. All my painting is under this layer, I think...
I would use UILabels; you can reduce the cost of blending by giving the labels a background color or setting shouldRasterize = YES on the layer of the view holding those labels. (You'll probably also want to set rasterizationScale to be the same as [[UIScreen mainScreen] scale]).
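A minimal sketch of those two settings, assuming containerView holds the labels and label is one of them (both names are made up):
// Cache the composited labels as a bitmap so blending is paid once,
// not on every repaint.
containerView.layer.shouldRasterize = YES;
containerView.layer.rasterizationScale = [[UIScreen mainScreen] scale];

// Alternatively, give each label an opaque background so no blending is needed.
label.backgroundColor = [UIColor whiteColor];
label.opaque = YES;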
Another suggestion:
There are lots of open-source calendar components that have probably had to try and solve the same issues. Perhaps you could look and see how they solved them.
