Drawing a self-erasing path with CGContextRef - ios

I would like to draw a "disappearing stroke" on a UIImageView, which follows a touch event and self-erases after a fixed time delay. Here's what I have in my ViewController.
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint currentPoint = [touch locationInView:self.view];
    CGPoint lp = lastPoint; // capture for the erase block below
    UIColor *color = [UIColor blackColor];
    // brush is a CGFloat ivar holding the stroke width (5.0 here)
    [self drawLine:brush from:lastPoint to:currentPoint color:color blend:kCGBlendModeNormal];
    double delayInSeconds = 1.0;
    dispatch_time_t popTime = dispatch_time(DISPATCH_TIME_NOW, (int64_t)(delayInSeconds * NSEC_PER_SEC));
    dispatch_after(popTime, dispatch_get_main_queue(), ^(void){
        [self drawLine:brush from:lp to:currentPoint color:[UIColor clearColor] blend:kCGBlendModeClear];
    });
    lastPoint = currentPoint;
}
- (void)drawLine:(CGFloat)width from:(CGPoint)from to:(CGPoint)to color:(UIColor *)color blend:(CGBlendMode)mode {
    UIGraphicsBeginImageContext(self.view.frame.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [self.tempDrawImage.image drawInRect:CGRectMake(0, 0, self.view.frame.size.width, self.view.frame.size.height)];
    CGContextMoveToPoint(context, from.x, from.y);
    CGContextAddLineToPoint(context, to.x, to.y);
    CGContextSetLineCap(context, kCGLineCapRound);
    CGContextSetLineWidth(context, width);
    CGContextSetStrokeColorWithColor(context, [color CGColor]);
    CGContextSetBlendMode(context, mode);
    CGContextStrokePath(context);
    self.tempDrawImage.image = UIGraphicsGetImageFromCurrentImageContext();
    [self.tempDrawImage setAlpha:1];
    UIGraphicsEndImageContext();
}
The draw phase works nicely, but there are a couple of problems with the subsequent erase phase.
While the line "fill" is correctly cleared, a thin stroke around the path remains.
The erase phase is choppy, nowhere near as smooth as the draw phase. My best guess is that this is due to the cost of running UIGraphicsBeginImageContext inside each dispatch_after block.
Is there a better approach to drawing a self-erasing line?
BONUS: What I'd really like is for the path to "shrink and vanish." In other words, after the delay, rather than just clearing the stroked path, I'd like to have it shrink from 5pt to 0pt while simultaneously fading out the opacity.

I would just let the view redraw continuously at 60 Hz, each time drawing the entire line from points stored in an array. That way, once you remove the oldest points from the array, they are no longer drawn.
To hook your view up to the display refresh rate (60 Hz), try this:
displayLink = [CADisplayLink displayLinkWithTarget:self selector:@selector(update)];
[displayLink addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSDefaultRunLoopMode];
Store an age property along with each point, then just loop over the array and remove points which are older than your threshold.
e.g.
@interface AgingPoint : NSObject
@property (nonatomic) CGPoint point;
@property (nonatomic) NSTimeInterval birthdate;
@end
// ..... later, in the draw call
NSTimeInterval now = CACurrentMediaTime();
AgingPoint *p = [AgingPoint new];
p.point = touchlocation; // get yr touch
p.birthdate = now;
// remove points older than 1 second
while (myPoints.count && now - [myPoints[0] birthdate] > 1)
{
    [myPoints removeObjectAtIndex:0];
}
[myPoints addObject:p];
if (myPoints.count < 2)
    return;
UIBezierPath *path = [UIBezierPath bezierPath];
[path moveToPoint:[myPoints[0] point]];
for (int i = 1; i < myPoints.count; i++)
{
    [path addLineToPoint:[myPoints[i] point]];
}
[path stroke]; // uses the current stroke color; set one first with -setStroke
So on each draw call, make a new bezier path, move to the first point, then add lines to all the other points. Finally, stroke the path.
To implement the "shrinking" line, you could draw just short lines between consecutive pairs of points in your array and use the age property to calculate the stroke width. This is not perfect, as each individual segment will have the same width at its start and end points, but it's a start.
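For illustration, here is a minimal sketch of that per-segment approach, assuming the AgingPoint array above, a 1-second lifetime, and a 5pt brush; the alpha fade is an extra touch aimed at the bonus question:
// In the display-link callback, after pruning old points as above.
NSTimeInterval now = CACurrentMediaTime();
for (NSUInteger i = 1; i < myPoints.count; i++)
{
    AgingPoint *a = myPoints[i - 1];
    AgingPoint *b = myPoints[i];
    // Remaining life of the newer point: 1 = fresh, 0 = about to expire.
    CGFloat life = (CGFloat)MAX(0.0, 1.0 - (now - b.birthdate));
    UIBezierPath *segment = [UIBezierPath bezierPath];
    [segment moveToPoint:a.point];
    [segment addLineToPoint:b.point];
    segment.lineCapStyle = kCGLineCapRound;
    segment.lineWidth = 5.0 * life; // shrinks from 5pt toward 0pt
    [[[UIColor blackColor] colorWithAlphaComponent:life] setStroke]; // fades out
    [segment stroke];
}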
Important: If you are going to draw a lot of points, performance will become an issue. This kind of path rendering with Quartz is not tuned for speed; in fact, it is very slow.
Cocoa arrays and objects also add overhead of their own.
If you run into performance issues and you want to continue this project, look into OpenGL rendering. You will be able to have this run a lot faster with plain C structs pushed into your GPU.

There were a lot of great answers here. I think the ideal solution is to use OpenGL, as it'll inevitably be the most performant and provide the most flexibility in terms of sprites, trails, and other interesting visual effects.
My application is a remote controller of sorts, designed to simply provide a small visual aid to track motion, rather than leave persistent or high fidelity strokes. As such, I ended up creating a simple subclass of UIView which uses CoreGraphics to draw a UIBezierPath. I'll eventually replace this quick-fix solution with an OpenGL solution.
The implementation I used is far from perfect: it leaves behind white paths that interfere with future strokes until the user lifts their touch, which resets the canvas. I've posted the solution I used here, in case anyone finds it helpful.

Related

Re-draw image using touch method

I use the code below to mask an image, removing a part of it where the user touches. The erasing works properly using CGContextSetBlendMode with kCGBlendModeClear.
Now I want to redraw the erased part of the image using the same touch events. Can you help me redraw the erased part of the image?
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    currentTouch = [touch locationInView:Second_IMG];
    CGFloat brushSize = 35;
    CGColorRef strokeColor = [UIColor whiteColor].CGColor;
    UIGraphicsBeginImageContext(Second_IMG.frame.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [Second_IMG.image drawInRect:CGRectMake(0, 0, Second_IMG.frame.size.width, Second_IMG.frame.size.height)];
    CGContextSetLineCap(context, kCGLineCapRound);
    CGContextSetLineWidth(context, brushSize);
    CGContextSetStrokeColorWithColor(context, strokeColor);
    CGContextSetBlendMode(context, kCGBlendModeClear);
    CGContextBeginPath(context);
    CGContextMoveToPoint(context, lastTouch.x, lastTouch.y);
    CGContextAddLineToPoint(context, currentTouch.x, currentTouch.y);
    CGContextStrokePath(context);
    Second_IMG.image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    lastTouch = [touch locationInView:Second_IMG];
}
I use the code below for masking image
No, you're not masking at all. You are drawing a new image that lacks the touched area (because you "erased" that part of the image using kCGBlendModeClear). And you are then replacing the image of Second_IMG with this new partially erased image. So there is no "erased part" to "redraw" - you have thrown the information away.
Thus, to do what you are asking to do, you will need first to have access to a copy of the original Second_IMG.image. If you then replace the partially erased Second_IMG.image with the original, all the erased material will magically reappear - because you will return to the image with nothing erased.
But let's say that that isn't your goal: you don't want to bring back all the erased material, but only the material most recently erased. Then you would need to save off each generated intermediate image. You don't want to do this on every touchesMoved:, obviously, because you would end up with thousands of intermediate images. But if you save off the current state of the image on touchesEnded:, for example, then if you want to go back to before the most recent touchesMoved: sequence, you'll be able to, because you saved it on the previous occasion.
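A minimal sketch of that snapshot idea; taking the snapshot at the start of each touch sequence is equivalent to taking it at the end of the previous one, and the undoImage property is a name invented here for illustration:
@property (nonatomic, strong) UIImage *undoImage; // hypothetical

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    // Snapshot the image before this stroke erases anything.
    self.undoImage = Second_IMG.image;
}

- (void)undoLastStroke
{
    // Restoring the snapshot brings back the most recently erased material.
    if (self.undoImage)
        Second_IMG.image = self.undoImage;
}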
It would be simpler, however, if you really were using masking! In other words, if each stroke were expressed as an actual mask layer sitting on top of the image view, then the stroke could be removed simply by removing that mask layer. You would thus be working with mask layers expressing the strokes, rather than replacing the images as you are doing now. You will find, in any case, that replacing the image repeatedly does not hold up well on the device - it's a very inefficient way to proceed.
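A sketch of that stroke-as-layer idea, approximating the "mask" with one overlay CAShapeLayer per stroke drawn in the background color (white here); the names are illustrative:
CAShapeLayer *strokeLayer = [CAShapeLayer layer];
strokeLayer.frame = Second_IMG.bounds;
strokeLayer.lineWidth = 35;
strokeLayer.lineCap = kCALineCapRound;
strokeLayer.strokeColor = [UIColor whiteColor].CGColor; // reads as "erased"
strokeLayer.fillColor = nil;

UIBezierPath *strokePath = [UIBezierPath bezierPath];
[strokePath moveToPoint:lastTouch];
[strokePath addLineToPoint:currentTouch];
strokeLayer.path = strokePath.CGPath;

[Second_IMG.layer addSublayer:strokeLayer];
// Later: [strokeLayer removeFromSuperlayer]; // the hidden image content reappears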

UIBezierPath drawing takes up 100% of CPU

I have a UIBezierPath that is drawn on the screen every time the user's touch moves. This works fine in the simulator on a Mac Pro, but as soon as I moved it to a physical device, the drawing began to lag a lot. Using Instruments, I checked the CPU usage, and it hits 100% while the user is drawing.
This becomes quite the problem because I have an NSTimer that is supposed to fire at 30fps, but when the CPU is overloaded by the drawing, the timer fires at only around 10fps.
How can I optimize the drawing so that it doesn't take 100% of the CPU?
UITouch *lastTouch;

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    if (lastTouch != nil)
        return;
    lastTouch = [touches anyObject];
    [path moveToPoint:[lastTouch locationInView:self]];
    [self drawDot];
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    if ([touches anyObject] == lastTouch)
        [self drawDot];
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    if ([touches anyObject] == lastTouch)
        lastTouch = nil;
}

- (void)drawDot
{
    if (!self.canDraw)
        return;
    [path addLineToPoint:[lastTouch locationInView:self]];
    [self setNeedsDisplayInRect:CGRectMake([lastTouch locationInView:self].x - 30, [lastTouch locationInView:self].y - 30, 60, 60)];
}
- (void)drawRect:(CGRect)rect
{
    CGColorRef green = [UIColor greenColor].CGColor;
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGColorRef gray = [UIColor grayColor].CGColor;
    CGContextSetStrokeColor(context, CGColorGetComponents(gray));
    [shape stroke];
    CGContextSetStrokeColor(context, CGColorGetComponents(green));
    [path stroke];
}
You shouldn't really be using -drawRect in modern code - it's 30 years old, designed for very old hardware (a 25 MHz CPU with 16 MB of RAM and no GPU), and it has performance bottlenecks on modern hardware.
Instead you should be using Core Animation or OpenGL for all drawing. Core Animation can be very similar to drawRect.
Add this to your view's init method:
self.layer.contentsScale = [UIScreen mainScreen].scale;
And implement:
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx
{
    // basically the same code as you've got in drawRect:, although I recommend
    // trying Core Graphics (CGContextMoveToPoint(), CGContextAddLineToPoint(),
    // CGContextStrokePath(), etc)
}
And to make it re-draw (inside touchesMoved/etc):
[self.layer setNeedsDisplay];
I would also update your touch event code to just append to an NSMutableArray of points (perhaps encoded in an [NSValue valueWithCGPoint:location]). That way you aren't building a graphics path while responding to touch events.
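Putting those pieces together, a minimal sketch under those assumptions (a points NSMutableArray property plus the contentsScale setup above) might look like this:
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    // Record the point only; no path building here.
    CGPoint location = [[touches anyObject] locationInView:self];
    [self.points addObject:[NSValue valueWithCGPoint:location]];
    [self.layer setNeedsDisplay];
}

- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx
{
    if (self.points.count < 2)
        return;
    CGContextSetLineWidth(ctx, 5);
    CGContextSetLineCap(ctx, kCGLineCapRound);
    CGContextSetStrokeColorWithColor(ctx, [UIColor greenColor].CGColor);
    CGPoint first = [self.points[0] CGPointValue];
    CGContextMoveToPoint(ctx, first.x, first.y);
    for (NSUInteger i = 1; i < self.points.count; i++)
    {
        CGPoint p = [self.points[i] CGPointValue];
        CGContextAddLineToPoint(ctx, p.x, p.y);
    }
    CGContextStrokePath(ctx);
}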
A good place to look for efficient drawRect: techniques is this WWDC video from a few years ago: https://developer.apple.com/videos/wwdc/2012/?id=238 . About 26 minutes in, he starts debugging a very simple painting app which is very similar to what you've described. At minute 30 he starts optimizing beyond the first optimization of setNeedsDisplayInRect: (which you're already doing).
Short story: he uses an image as a backing store for what's already drawn, so that on each drawRect: call he only draws one image plus very short additional line segments, instead of drawing extremely long paths every time.
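A minimal sketch of that backing-store idea; the backingImage and currentStroke properties are names assumed here, not from the question:
- (void)drawRect:(CGRect)rect
{
    [self.backingImage drawInRect:self.bounds]; // everything drawn so far
    [[UIColor greenColor] setStroke];
    [self.currentStroke stroke]; // only the in-progress stroke
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    // Flatten the finished stroke into the backing image, so no path
    // ever accumulates more than one stroke's worth of segments.
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0);
    [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    self.backingImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    [self.currentStroke removeAllPoints];
}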

Performance Issues When Using Many CALayer Masks

I am trying to use CAShapeLayer to mask a CALayer in my iOS app, as it takes a fraction of the CPU time to mask an image that way versus manually masking one in a bitmap context.
However, when I have several dozen or more images layered over each other, the CAShapeLayer-masked UIImageView is slow to move to my touch.
Here is some example code:
UIImage *image = [UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"SomeImage.jpg" ofType:nil]];
CGMutablePathRef path = CGPathCreateMutable();
CGPathAddEllipseInRect(path, NULL, CGRectMake(0.f, 0.f, image.size.width * .25, image.size.height * .25));
for (int i = 0; i < 200; i++) {
    SLTUIImageView *imageView = [[SLTUIImageView alloc] initWithImage:image];
    imageView.frame = CGRectMake(arc4random_uniform(CGRectGetWidth(self.view.bounds)), arc4random_uniform(CGRectGetHeight(self.view.bounds)), image.size.width * .25, image.size.height * .25);
    CAShapeLayer *shape = [CAShapeLayer layer];
    shape.path = path;
    imageView.layer.mask = shape;
    [self.view addSubview:imageView];
    [imageView release];
}
CGPathRelease(path);
With the above code, imageView is very laggy. However, it reacts instantly if I mask it manually in a bitmap context:
UIImage *image = [UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"3.0-Pad-Classic0.jpg" ofType:nil]];
CGMutablePathRef path = CGPathCreateMutable();
CGPathAddEllipseInRect(path, NULL, CGRectMake(0.f, 0.f, image.size.width * .25, image.size.height * .25));
for (int i = 0; i < 200; i++) {
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(image.size.width * .25, image.size.height * .25), NO, [[UIScreen mainScreen] scale]);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextAddPath(ctx, path);
    CGContextClip(ctx);
    [image drawInRect:CGRectMake(-(image.size.width * .25), -(image.size.height * .25), image.size.width, image.size.height)];
    UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    SLTUIImageView *imageView = [[SLTUIImageView alloc] initWithImage:finalImage];
    imageView.frame = CGRectMake(arc4random_uniform(CGRectGetWidth(self.view.bounds)), arc4random_uniform(CGRectGetHeight(self.view.bounds)), finalImage.size.width, finalImage.size.height);
    [self.view addSubview:imageView];
    [imageView release];
}
CGPathRelease(path);
By the way, here is the code to SLTUIImageView, it's just a simple subclass of UIImageView that responds to touches (for anyone who was wondering):
- (id)initWithImage:(UIImage *)image
{
    self = [super initWithImage:image];
    if (self) {
        self.userInteractionEnabled = YES;
    }
    return self;
}

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    [self.superview bringSubviewToFront:self];
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    self.center = [touch locationInView:self.superview];
}
Is it possible to somehow optimize how the CAShapeLayer masks the UIImageView so that performance improves? I have tried to find the bottleneck using the Time Profiler in Instruments, but I can't tell exactly what is causing it.
I have tried setting shouldRasterize to YES both on layer and on layer.mask, but neither seems to have any effect. I'm not sure what to do.
Edit:
I have done more testing and find that if I use just a regular CALayer to mask another CALayer (layer.mask = someOtherLayer) I have the same performance issues. It seems that the problem isn't specific to CAShapeLayer—rather it is specific to the mask property of CALayer.
Edit 2:
So after learning more about using the Core Animation tool in Instruments, I learned that the view is being rendered offscreen each time it moves. Setting shouldRasterize to YES when the touch begins and turning it off when the touch ends makes the view stay green in Instruments (thus keeping the cache), but performance is still terrible. I believe this is because even though the view is being cached, if it isn't opaque, then it still has to be re-rendered with each frame.
One thing to emphasize is that if there are only a few views being masked (say even around ten) the performance is pretty good. However, when you increase that to 100 or more, the performance lags. I imagine this is because when one moves over the others, they all have to be re-rendered.
My conclusion is this: I have one of two options.
First, there must be some way to permanently mask a view (render it once and call it good). I know this can be done via the graphics or bitmap context route, as I show in my example code, but when a layer masks its view, it happens instantly, whereas doing it in a bitmap context as shown is so much slower that the two can hardly be compared.
Second, there must be some faster way to do it via the bitmap context route. If there is an expert in masking images or views, their help would be very much appreciated.
You've gotten pretty far along and I believe you are almost to a solution. What I would do is simply an extension of what you've already tried. Since you say many of these layers end up in final positions that remain constant relative to the other layers and the mask, simply render all those "finished" layers into a single bitmap context. That way, every time you write a layer out to that single context, you'll have one less layer slowing down the animation/rendering process.
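A sketch of that flattening step, assuming a settledViews array for the image views that have stopped moving and a single backgroundImageView to receive the flattened bitmap (both names are invented here):
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, [UIScreen mainScreen].scale);
CGContextRef ctx = UIGraphicsGetCurrentContext();
[self.backgroundImageView.layer renderInContext:ctx]; // previously flattened content
for (UIView *settled in self.settledViews) {
    CGContextSaveGState(ctx);
    CGContextTranslateCTM(ctx, settled.frame.origin.x, settled.frame.origin.y);
    [settled.layer renderInContext:ctx]; // renders the masked content once
    CGContextRestoreGState(ctx);
    [settled removeFromSuperview]; // one less live masked layer
}
self.backgroundImageView.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();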
Quartz (drawRect:) is slower than Core Animation for many reasons (see CALayer vs CGContext and drawRect vs CALayer), but Core Animation needs to be used correctly.
In the documentation you can find some advice: Improving Core Animation Performance.
If you want high performance, you could also try AsyncDisplayKit. This framework lets you create smooth and responsive apps.

How to use CgLayer for optimal drawing

I have created a simple drawing project. The code works fine, but I want to cache the drawing in a CGLayer, because I read that it is a more efficient way to draw. I have read through the documents but am not able to understand them properly, so I request your help.
Below is my code; I want to know how to use a CGLayer in it.
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    if (myLayerRef == nil)
    {
        myLayerRef = CGLayerCreateWithContext(context, self.bounds.size, NULL);
    }
    CGContextRef layerContext = CGLayerGetContext(myLayerRef);
    CGContextDrawLayerAtPoint(context, CGPointZero, myLayerRef);
}
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *mytouch = [[touches allObjects] objectAtIndex:0];
    m_previousPoint2 = m_previousPoint1;
    m_previousPoint1 = [mytouch previousLocationInView:self];
    m_currentPoint = [mytouch locationInView:self];
    CGPoint mid1 = midPoint(m_previousPoint1, m_previousPoint2);
    CGPoint mid2 = midPoint(m_currentPoint, m_previousPoint1);
    testpath = CGPathCreateMutable();
    CGPathMoveToPoint(testpath, NULL, mid1.x, mid1.y);
    CGPathAddQuadCurveToPoint(testpath, NULL, m_previousPoint1.x, m_previousPoint1.y, mid2.x, mid2.y);
    CGContextRef context = UIGraphicsGetCurrentContext();
    context = CGLayerGetContext(myLayerRef);
    CGRect bounds = CGPathGetBoundingBox(testpath);
    CGPathRelease(testpath);
    CGRect drawBox = bounds;
    // Pad our values so the bounding box respects our line width
    drawBox.origin.x -= self.lineWidth * 2;
    drawBox.origin.y -= self.lineWidth * 2;
    drawBox.size.width += self.lineWidth * 4;
    drawBox.size.height += self.lineWidth * 4;
    [self setNeedsDisplayInRect:drawBox];
}
- (void)drawingOperations
{
    CGContextRef context1 = CGLayerGetContext(myLayerRef);
    CGPoint mid1 = midPoint(m_previousPoint1, m_previousPoint2);
    CGPoint mid2 = midPoint(m_currentPoint, m_previousPoint1);
    CGContextMoveToPoint(context1, mid1.x, mid1.y);
    CGContextAddQuadCurveToPoint(context1, m_previousPoint1.x, m_previousPoint1.y, mid2.x, mid2.y);
    CGContextSetLineCap(context1, kCGLineCapRound);
    CGContextSetLineWidth(context1, self.lineWidth);
    CGContextSetStrokeColorWithColor(context1, self.lineColor.CGColor);
    CGContextSetFlatness(context1, 2.0);
    CGContextSetAllowsAntialiasing(context1, true);
    CGContextStrokePath(context1);
}
Regards
Ranjit
The link posted by @hfossli is now dead, but here is the archived content:
CGLayer no longer recommended
Posted by robnapier on Jul 13, 2012 in Book Updates
I spend a lot of time in the labs at WWDC asking questions and talking with the developers. I sat down with the Core Graphics engineers this time and asked them about one of my favorite underused tools: CGLayer, which I discuss at the end of Chapter 6. CGLayer sounds like a great idea: a drawing context optimized specifically for drawing on the screen, with hardware optimization. What could go wrong?
I started to have doubts, though, that CGLayer was always a great win. What if your layers were too large to store in GPU textures? CGLayer is advertised for use as a “stamp” that you repeatedly draw. Moving data to and from the GPU is expensive. Maybe CGLayer doesn’t make sense unless you draw it a certain number of times. The docs give no guidance on this.
So I asked the Core Graphics team “When should I be using CGLayer?”
“Never.”
… ??? Never? But for stamping right?
Never.
So we talked some more. It appears that CGLayer was one of those things that sounded great on paper, but just doesn't always work in practice. Sometimes it's faster. Sometimes it's slower. There's no easy rule for when it's going to be faster. Over time it seems they've quietly abandoned it without actually deprecating it. I've asked that the docs be updated to match Apple's current recommendation. The CGLayer Reference hasn't been updated since 2006.
The recommendation I received was to use CGBitmapContext or CALayer for stamping. For the specific example given on pages 131-132, CATextLayer would probably be the best tool. Remember that you can easily clone a CALayer using initWithLayer:. (John Mueller points out below that this isn’t actually supported.)
Not optimal to use CGLayer anymore
http://iosptl.com/posts/cglayer-no-longer-recommended/
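For reference, the CALayer "stamping" approach recommended above can look roughly like this (a minimal sketch; the image name and the positions array are illustrative):
// Render or load the stamp artwork once, then share its bitmap across
// many cheap CALayers instead of redrawing it into a CGLayer.
UIImage *stampImage = [UIImage imageNamed:@"stamp"]; // illustrative
for (NSValue *value in positions) {
    CALayer *stamp = [CALayer layer];
    stamp.contents = (id)stampImage.CGImage; // shared bitmap, no per-stamp drawing
    stamp.bounds = CGRectMake(0, 0, stampImage.size.width, stampImage.size.height);
    stamp.position = [value CGPointValue];
    [self.view.layer addSublayer:stamp];
}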

Smoother freehand drawing experience (iOS)

I am making a math-related activity in which the user can draw with their fingers for scratch work as they try to solve the math question. However, I notice that when I move my finger quickly, the line lags noticeably behind my finger. I was wondering whether there is some area I have overlooked for performance, or whether touchesMoved simply doesn't fire often enough (the drawing is perfectly smooth and wonderful if you don't move fast). I am using UIBezierPath. First I create it in my init method like this:
myPath = [[UIBezierPath alloc] init];
myPath.lineCapStyle = kCGLineCapSquare;
myPath.lineJoinStyle = kCGLineJoinBevel;
myPath.lineWidth = 5;
myPath.flatness = 0.4;
Then in drawRect:
- (void)drawRect:(CGRect)rect
{
    [brushPattern setStroke];
    if (baseImageView.image)
    {
        CGContextRef c = UIGraphicsGetCurrentContext();
        [baseImageView.layer renderInContext:c];
    }
    CGBlendMode blendMode = self.erase ? kCGBlendModeClear : kCGBlendModeNormal;
    [myPath strokeWithBlendMode:blendMode alpha:1.0];
}
baseImageView is what I use to save the result so that I don't have to draw many paths (drawing many paths gets really slow after a while). Here is my touch logic:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *mytouch = [[touches allObjects] objectAtIndex:0];
    [myPath moveToPoint:[mytouch locationInView:self]];
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *mytouch = [[touches allObjects] objectAtIndex:0];
    [myPath addLineToPoint:[mytouch locationInView:self]];
    [self setNeedsDisplay];
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0.0f);
    CGContextRef c = UIGraphicsGetCurrentContext();
    [self.layer renderInContext:c];
    baseImageView.image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    [myPath removeAllPoints];
    [self setNeedsDisplay];
}
This project is going to be released as an enterprise app, so it will only be installed on iPad 2. Target iOS is 5.0. Any suggestions about how I can squeeze a little more speed out of this would be appreciated.
Of course you should start by running it under Instruments and looking for your hotspots. Then you need to make changes and re-evaluate to see their impact. Otherwise you're just guessing. That said, some notes from experience:
Adding lots of elements to a path can get very expensive. I would not be surprised if your addLineToPoint: turns out to be a hotspot. It has been for me.
Rather than backing your system with a UIImageView, I would probably render into a CGLayer. CGLayers are optimized for rendering into a specific context.
Why accumulate the path at all rather than just rendering it into the layer at each step? That way your path would never be more than two elements (move, addLine). Typically the two-stage approach is used so you can handle undo or the like. (See the sketch after these notes.)
Make sure that you're turning off any UIBezierPath features you don't want. In particular, look at the section "Accessing Draw Properties" in the docs. You may consider switching to CGMutablePath rather than UIBezierPath. It's not actually faster when configured the same, but its default settings turn more things off, so by default it's faster. (You're already setting most of these; you'll want to experiment a little in Instruments to see what impact they make.)
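For the third note, a minimal sketch of rendering each segment into a CGLayer as it arrives, so the path never holds more than a move and a line; backingLayer and lastPoint are assumed ivars, and lastPoint is set in touchesBegan:
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    if (backingLayer == NULL)
        return; // created lazily in drawRect: below
    CGPoint p = [[touches anyObject] locationInView:self];
    CGContextRef layerCtx = CGLayerGetContext(backingLayer);
    CGContextSetLineWidth(layerCtx, 5);
    CGContextSetLineCap(layerCtx, kCGLineCapRound);
    CGContextSetStrokeColorWithColor(layerCtx, [UIColor blackColor].CGColor);
    CGContextMoveToPoint(layerCtx, lastPoint.x, lastPoint.y); // element 1: move
    CGContextAddLineToPoint(layerCtx, p.x, p.y);              // element 2: line
    CGContextStrokePath(layerCtx);
    lastPoint = p;
    [self setNeedsDisplay];
}

- (void)drawRect:(CGRect)rect
{
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    if (backingLayer == NULL)
        backingLayer = CGLayerCreateWithContext(ctx, self.bounds.size, NULL);
    CGContextDrawLayerAtPoint(ctx, CGPointZero, backingLayer);
}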
http://mobile.tutsplus.com/tutorials/iphone/ios-sdk_freehand-drawing/
This tutorial shows, step by step, exactly how to make a curve smoother: it simply adds intermediate points (in the touchesMoved method) to the curves to smooth them out.
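The core of that technique is small enough to sketch here: instead of straight segments between raw touch points, curve to each midpoint and use the raw point as the control point (myPath is the UIBezierPath being accumulated, as in the question):
static CGPoint midpoint(CGPoint a, CGPoint b)
{
    return CGPointMake((a.x + b.x) / 2, (a.y + b.y) / 2);
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    CGPoint previous = [touch previousLocationInView:self];
    CGPoint current = [touch locationInView:self];
    // Curve to the midpoint; the raw touch becomes the control point,
    // which rounds off the corners between successive segments.
    [myPath addQuadCurveToPoint:midpoint(previous, current) controlPoint:previous];
    [self setNeedsDisplay];
}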
