CGContextStrokePath() causing "dots" in realtime drawing (finger painting) app - iOS

I'm trying to write a finger painting type app. I am starting a path in touchesBegan and adding to that path in touchesMoved. In touchesMoved, I use the following code:
CGContextMoveToPoint(context, lastPoint.x, lastPoint.y);
CGContextAddLineToPoint(context, currentPoint.x, currentPoint.y);
CGContextStrokePath(context);
I call CGContextStrokePath so that the path shows up in real time as the user draws. The problem is that when using low alpha values, I get dots between successive path segments where the end cap is essentially drawn twice: once for the previous segment, and once for the current segment.
I've tried using different line caps but the result isn't very pretty. I've also tried using the CGContextDrawPath function with all the various constants and I get the same result.
You can see the results here: http://www.idea-asylum.com/pathwithdots/index.html - It shows a line with alpha = 1.0 and one with alpha = 0.2.
Any ideas? Thanks in advance!

First, I hope you're drawing each shape into a separate layer (and I don't mean CALayer, I mean an internal construct unique to your app). This not only simplifies this task, it makes undo more or less painless (just move the last/topmost layer into a different array and hide it, and empty that array when the user draws a new layer).
Second, during the construction of the shape, don't only remember the last point. Create a CGMutablePath when the user begins the shape and add each subsequent point as another lineto. This also lets you keep the path around in that layer, which means you can throw the rendered image away if a low-memory warning arrives and re-create it the next time you need it.
Third, each time you update the shape during its creation, get its area so far, invalidate that section, and redraw all the layers under it as well as the shape being drawn (as it exists so far). That is, redraw the background, clobbering the new shape, and then draw the up-to-date version of the new shape on top.
Once you are constructing the shape as a single path, and stroking that single path in each draw cycle, the intersections between segments will disappear.

Related

Is it possible to get UIBezierPath current control points after path transformation?

I am working on creating a way for a user to move polygons on screen and change their shape by dragging their corner vertices. I then need to be able to re-draw those modified polygons on other devices - so I need to be able to get final positions of all vertices for all polygons on screen.
This is what I am currently doing:
I draw polygons using UIBezierPath for each shape in the drawRect override. I then allow the user to drag them around the screen by applying CGAffineTransformMakeTranslation with the x and y coordinate deltas in the touchesMoved override. I have the initial control points from which the polygons are drawn (as described here). But once a path instance is moved on screen, those values don't change, so I am only able to get the initial values.
Is there something built in to the Core Graphics framework that will allow me to grab the set of current control points of a UIBezierPath instance? I am trying to avoid keeping track of those points manually. I will consider other ways to draw if there is:
a built-in way to detect whether a point lies within a polygon (such as UIBezierPath's containsPoint: method)
a way to easily introduce constraints so the user can't move a polygon out of bounds of the superview (I need the whole polygon to be visible)
a way to grab all points easily when the user is done
Everything should run at 60fps on an iPhone 5.
Thanks for your time!
As you're only applying the transform to the view/layer, to get the transformed control points from the path, simply apply that same transform to the path, and then fetch the control points. Something like:
UIBezierPath* copiedPath = [myPath copy];
[copiedPath applyTransform:[view transform]];
[self yourExistingMethodToFetchPointsFromPath:copiedPath];
The way that you're currently pulling out points from a path is unfortunately the only API available for re-fetching points from an input UIBezierPath. However, you might be interested in a library I wrote to make working with Bezier paths much simpler: PerformanceBezier. This library makes it significantly easier to get the points from a path:
for (NSInteger i = 0; i < [path elementCount]; i++) {
    CGPathElement element = [path elementAtIndex:i];
    // now you know the element.type and the element.points
}
In addition to adding functionality to make paths easier to work with, it also adds a caching layer on top of the existing API to make the performance hit of working with paths much much smaller. Depending on how much CPU time you're spending on UIBezierPath methods, this library will make a significant improvement. I saw between 2x and 10x improvement, depending on the operations I was using.

UICollisionBehavior treats open path as closed?

If I define an open UIBezierPath and set it as a collision boundary:
_containerPath = [UIBezierPath bezierPathWithArcCenter:center
                                                radius:radius
                                            startAngle:M_PI
                                              endAngle:0
                                             clockwise:NO];
[_collisionBehavior addBoundaryWithIdentifier:@"containerBoundary" forPath:_containerPath];
and then turn gravity on, objects that are released inside the "bowl" respect the lower boundary, but objects released from above the bowl come to rest on the supposedly non-existent side. Is this expected behavior?
In the picture, the red rectangle was dropped from above; the reference view for the dynamic animator is the light gray rect. It fell from above and stopped at the invisible line.
I've confirmed that if you flip the bezier path over, the red rect does in fact respect the curved boundary; I've also tried this using an open (two-sided) triangle instead of curved path - same result.
The behavior you're seeing seems to be the same as what you see when filling a bezier path. If you draw a "V" and fill it, it behaves as if it were a closed path. With collision boundaries, you can make an open "V" by adding two lines with addBoundaryWithIdentifier:fromPoint:toPoint:. I don't know if there's any other way around the problem. For your half circle, I presume you could approximate it with a series of straight lines added with that method. I've approximated circles before using 50 to 100 lines that look very close to what you get with bezierPathWithOvalInRect:. I don't know if this creates a serious burden on the system when used as a collision boundary.

Large UIBezierPath, slow rendering

For an app I am creating, I am using a UIBezierPath to store a path. The problem is that this path continually increases in length, and at some point the app lags and becomes jerky. Because the path is continually growing, I am redrawing the view 100 times a second (if there is a better way to do this, please let me know). At a certain point the app becomes extremely jerky; I think it is because it takes too long to stroke the path.
You can see in my drawRect method that the graphics are translated. Very little of the path is on the screen, so is there a way to only stroke the part of the path that is visible? That is what I thought I was doing with the CGContextClip method, but it had no noticeable improvement.
- (void)drawRect:(CGRect)rect {
    CGContextRef myContext = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(myContext, 0, yTranslation);
    [[UIColor whiteColor] setStroke];
    [bPath addLineToPoint:currentPoint];
    [bPath setLineWidth:lineWidth];
    CGContextClip(myContext);
    [bPath stroke];
}
Thank you.
A couple of thoughts:
IMHO, you shouldn't be adding data points at a regular interval at all. You should be adding data points if and only if there are new data points to be added (e.g. touchesMoved, or a gesture recognizer called with UIGestureRecognizerStateChanged). This reduces the number of points in the bezier and defers the point at which performance problems impose themselves on you. The process should be driven by the touch events, not a timer.
If some of your drawing is off screen, you can probably speed it up by checking whether either endpoint falls within the visible portion of the view (e.g. CGRectContainsPoint). You should also check for intersections of line segments with the CGRect, as it's theoretically possible for neither the start nor the end to be inside the visible rectangle, but for the line segment between them to intersect it.
You can also design this process so that it determines which segments are visible only when the view port moves. This can save you from needing to constantly iterate through a very large array.
At some point, there are diminishing returns of drawing individual paths versus a bitmap image, even just for the visible portion. It is sometimes useful to render old paths into an image, and then draw new paths over that snapshot rather than redrawing the whole path every time. For example, I've used an approach where, when starting a new gesture, I snapshot the old image and only draw the new gesture on top of it.
Caching is a possible solution: draw the curve once into an in-memory image with a transparent background, update this image only when the curve changes, and overlay the cached image on whatever you are drawing. It should be cheaper in processing power.
The other possibility is to remove the unneeded points from the bezier curve after determining which ones will affect the current view, then render the resulting bezier curve.

Drawing: the intersection point clears previously drawn pixels

We are developing a drawing app. When we draw crossed lines, the intersection point clears the previously drawn pixels where the two lines intersect each other. We are using setNeedsDisplayInRect: to refresh the drawing data.
How to over come this issue?
tl;dr: You need to store the previous line segments and redraw them when you draw in the same rectangle again.
We are using setNeedsDisplayInRect: to refresh the drawing data
That is a good thing. Can you see any side effects of doing that? If not, try passing the entire rectangle and see what happens. You will see that only the very last segment is drawn.
Now you know that you need to store and possibly redraw previous line segments (or just their image).
Naive approach
The first and simplest solution would be to store all the lines in an array and redraw them. You would notice that this slows down your app a lot, especially after having drawn for a while. It also doesn't go very well with only drawing what you need.
Only drawing lines inside the refreshed rectangle
You could speed up the above implementation by filtering all the lines in the array to only redraw those that intersect the refreshed rect. This could for example be done by getting the bounding box for the line segment (using CGPathGetBoundingBox(path)) and checking if it intersects the refreshed rectangle (using CGRectIntersectsRect(refreshRect, boundingBox)).
That would reduce some of the drawing but you would still end up with a very long array of lines and see performance problems after a while.
Rasterize old line segments
One good way of not having to store all previous lines is to draw them into a bitmap (a separate image context; see UIGraphicsBeginImageContextWithOptions(...)) and draw that image before drawing the new line segment. However, that would almost double the drawing you would have to do, so it should not be done for every frame.
One thing you could do is to store the last 100 line segments (or maybe the last 1000, or whatever your own performance investigation shows; you should investigate these things yourself) and draw the rest into an image. Every time you have 100 new lines, add them to the image context - by first drawing the cached image and then drawing the 100 new lines - and save the result as the new image.
One other thing you could do is to take all the new lines and draw them to the image every time the user lifts their finger. This can work well in combination with the above suggestion.
Depending on the complexity of your app you may need one or more of these suggestions to keep your app responsive (which is very important for a drawing app) but investigate first and don't make your solution overly complex if you don't have to.

Free hand painting and erasing using UIBezierPath and CoreGraphics

I have been trying for a long time but have not found a solution yet. I have to implement painting and erasing on iOS, so I successfully implemented the painting logic using UIBezierPath. The problem is that for erasing, I implemented the same logic as for painting by using kCGBlendModeClear, but I can't redraw on the erased area, because on each pass in drawRect I have to stroke both the painting and the erasing paths. So is there any way to subtract the erasing path from the drawing path to get the resultant path and then stroke it? Or any other logic to implement the same? I am very new to Core Graphics and am looking forward to your replies and comments. I can't use the background color as an eraser because my background is textured.
You don't need to stroke the path every time, in fact doing so is a huge performance hit. I guarantee if you try it on an iPad 3 you will be met with a nearly unresponsive screen after a few strokes. You only need to add and stroke the path once. After that, it will be stored as pixel data. So don't keep track of your strokes, just add them, stroke them, and get rid of them. Also look into using a CGLayer (you can draw to that outside the main loop, and only render it to your rect in the main loop so it saves lots of time).
These are the steps that I use, and I am doing the exact same thing (I use a CGPath instead of UIBezierPath, but the idea is the same):
1) In touches began, store the touch point and set the context to either erase or draw, depending on what the user has selected.
2) In touches moved, if the point is a certain arbitrary distance away from the last point, then move to the last point (CGContextMoveToPoint) and draw a line to the new point (CGContextAddLineToPoint) in my CGLayer. Calculate the rectangle that was changed (i.e. contains the two points) and call setNeedsDisplayInRect: with that rectangle.
3) In drawRect render the CGLayer into the current window context ( UIGraphicsGetCurrentContext() ).
On an iPad 3 (the one that everyone has the most trouble with due to its enormous pixel count) this process takes between 0.05 ms and 0.15 ms per render (depending on how fast you swipe). There is one caveat though: if you don't take the proper precautions, the entire frame rectangle will be redrawn even if you only use setNeedsDisplayInRect:. My hacky way to combat this (thanks to the dev forums) is described in my self-answered question here. Otherwise, if your view takes a long time to draw the entire frame (mine took an unacceptable 150 ms), you will get a short stutter under certain conditions while the view buffer gets recreated.
EDIT With the new info from your comments, it seems that the answer to this question will benefit you -> Use a CoreGraphic Stroke as Alpha Mask in iPhone App
Hi, here is the code for painting, erasing, undo, redo, and saving as a picture. You can check the sample code and implement it in your project.
Here
