How to get a buffer zone around a UIBezierPath - iOS

I have some UIBezierPaths. As paths, they don't really have thickness.
But I am hoping to find a way to define an area around a path, like the grayish areas around the lines in this picture.
Basically, I want to test whether drawn lines fall within the buffer zone around the lines.
I thought this would be simple, but it's turning out to be much more complex than I expected. I can use the CGPathApply function to examine the points along my path and then test a range plus or minus each point, but it gets more complicated than that with angles and curves. Any ideas?

Expanding the width of a path is actually quite difficult. However, you could just stroke it with a thicker width and get pretty much the same effect. Something like...
CGContextSetRGBStrokeColor(context, 0.4, 0.4, 0.4, 1.0);
[path setLineWidth:15];
[path stroke];
CGContextSetRGBStrokeColor(context, 0.0, 0.0, 0.0, 1.0);
[path setLineWidth:3];
[path stroke];
...would produce a picture like the one in your question. But I doubt that's news to you.
The real trick is the test of "whether drawn lines fall within the buffer zone." That problem is very similar to one which I just answered for myself in another question. Take a look at the LineSample.zip code I shared there. This implements a bitmap/bitwise data comparison to detect hits on lines much like you need. You could just draw the thicker "buffer" paths into the bitmap for testing and show the thinner lines in your view.
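Another option, separate from the bitmap approach, is to let Core Graphics compute the buffer zone itself: CGPathCreateCopyByStrokingPath returns a closed path outlining the area a stroke of a given width would cover, and that outline can be hit-tested directly. A minimal sketch, assuming originalPath is one of your UIBezierPaths and testPoint is a point sampled from the drawn line:
// Build a closed "buffer" path covering a 15-point-wide stroke of the original path.
CGPathRef bufferPath = CGPathCreateCopyByStrokingPath(originalPath.CGPath,
                                                      NULL,
                                                      15.0,            // buffer width in points
                                                      kCGLineCapRound,
                                                      kCGLineJoinRound,
                                                      10.0);           // miter limit
// The drawn point is inside the buffer zone if the buffer path contains it.
BOOL insideBuffer = CGPathContainsPoint(bufferPath, NULL, testPoint, NO);
CGPathRelease(bufferPath);
You would repeat the containment test for each point (or a sampled subset) of the drawn line.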

Basically, you want to check if any point falls inside a region of specified size around your path.
It is actually very simple to do. First, you need a value that defines the amount of space around the path you want to test. Let's say 20 points. Then start a for loop from -20 to 20 and, at each iteration, create a copy of your path, translate its x and y coordinates, and check each copy.
All of this is clearer in this code sample.
CGPoint touchPoint = /*get the point*/;
NSInteger space = 20;
for (NSInteger i = -space; i < space; i++) {
    UIBezierPath *pathX = [UIBezierPath bezierPathWithCGPath:originalPath.CGPath];
    [pathX applyTransform:CGAffineTransformMakeTranslation(i, 0)];
    if ([pathX containsPoint:touchPoint]) {
        /*YEAH!*/
    }
    else {
        UIBezierPath *pathY = [UIBezierPath bezierPathWithCGPath:originalPath.CGPath];
        [pathY applyTransform:CGAffineTransformMakeTranslation(0, i)];
        if ([pathY containsPoint:touchPoint]) {
            /*YEAH!*/
        }
    }
}

Related

Error attempting to create polygon with CGPathAddEllipseInRect

I am trying to create an ellipse. I used bodyWithEdgeLoopFromPath and it worked, but there seems to be something wrong with it because sometimes other objects get caught in the middle of it.
But I want the ellipse to be solid, so I tried bodyWithPolygonFromPath (I want it static):
horizontalOval = [[SKShapeNode alloc] init];
theRect = CGRectMake(0, 0, self.frame.size.width/6 , 15);
CGMutablePathRef ovalPath = CGPathCreateMutable();
CGPathAddEllipseInRect(ovalPath, NULL, theRect);
horizontalOval.path = ovalPath;
horizontalOval.fillColor = [UIColor blueColor];
horizontalOval.physicsBody.dynamic = NO;
horizontalOval.physicsBody = [SKPhysicsBody bodyWithPolygonFromPath:ovalPath];
But I got the error
SKPhysicsBody: Error attempting to create polygon with 17 vertices, maximum is 12
How do I create complex paths and make them solid?
Also, when I put it in position self.frame.size.width/2 and self.frame.size.height/2, it doesn't stay centered; it goes a little to the right.
I had to use theRect = CGRectMake(-40, 0........) to make it centered, but why is that?
UIBezierPath* ovalPath = [UIBezierPath bezierPathWithOvalInRect: _paddleRect];
But that has 13 vertices. I'm trying to use PaintCode.
You can think of an edge body as one that has no volume, just the edges, a body with 'negative space' - that's why your objects got caught in the middle of it. As the Sprite Kit Programming Guide says:
The main difference between an edge and a volume is that an edge permits movement inside its own boundaries, while a volume is considered a solid object.
Since you want your oval to be a solid object, you do need a volume-based body. For these bodies, you have three options to create their shape: a circle (bodyWithCircleOfRadius:), a rectangle (bodyWithRectangleOfSize:), or a polygon (bodyWithPolygonFromPath:).
For an oval shape, you probably have to draw a polygon - however, the Sprite Kit physics engine will only accept those with a maximum of 12 vertices (that's why you were getting an error when drawing an actual ellipse). Your best bet is drawing a polygon using a helper tool, such as this one: http://dazchong.com/spritekit/ - just drag and drop your sprite and draw the path. Remember that the polygon must be convex (no angles over 180 degrees inside it) and that it can have a maximum of 12 vertices.
Also check out this answer for a similar issue: Ellipse SKPhysicsBody
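If you would rather build the polygon in code than with a tool, here is a minimal sketch (my own approximation, not from the linked answer): sample the ellipse at 12 evenly spaced angles, which keeps the polygon convex and within Sprite Kit's 12-vertex limit. It assumes theRect is the oval's bounding rect from the question. Note that bodyWithPolygonFromPath: expects the path's coordinates to be relative to the node's origin, which is also why an ellipse path whose rect starts at (0, 0) looks shifted off-center unless you offset it.
NSInteger vertexCount = 12; // Sprite Kit's maximum
CGFloat a = CGRectGetWidth(theRect) / 2.0;   // semi-axis along x
CGFloat b = CGRectGetHeight(theRect) / 2.0;  // semi-axis along y
CGMutablePathRef polygonPath = CGPathCreateMutable();
for (NSInteger v = 0; v < vertexCount; v++) {
    // counterclockwise winding, centered on the node's origin
    CGFloat angle = (2.0 * M_PI * v) / vertexCount;
    CGFloat x = a * cos(angle);
    CGFloat y = b * sin(angle);
    if (v == 0) {
        CGPathMoveToPoint(polygonPath, NULL, x, y);
    } else {
        CGPathAddLineToPoint(polygonPath, NULL, x, y);
    }
}
CGPathCloseSubpath(polygonPath);
horizontalOval.physicsBody = [SKPhysicsBody bodyWithPolygonFromPath:polygonPath];
horizontalOval.physicsBody.dynamic = NO; // configure the body after assigning it
CGPathRelease(polygonPath);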

Does MKOverlayPathView need drawMapRect?

I'm seeing some inconsistencies modifying the Breadcrumb example to have CrumbPathView subclass MKOverlayPathView (as it's supposed to) rather than MKOverlayView.
The trouble is, the docs say little about how the implementations of these two should differ. For a subclass of MKOverlayPathView it's advised to use:
- createPath
- applyStrokePropertiesToContext:atZoomScale:
- strokePath:inContext:
But is this in place of drawMapRect:, or in addition to it? There doesn't seem to be much point if it's in addition, because both would be used for similar implementations. But using it instead of drawMapRect: leaves the line choppy and broken.
I'm also struggling to find any real-world examples of subclassing MKOverlayPathView...is there any point?
UPDATE - modified code from drawMapRect: to what should work:
- (void)createPath
{
    CrumbPath *crumbs = (CrumbPath *)(self.overlay);
    CGMutablePathRef newPath = [self createPathForPoints:crumbs.points
                                               pointCount:crumbs.pointCount];
    if (newPath != nil) {
        CGPathAddPath(newPath, NULL, self.path);
        [self setPath:newPath];
    }
    CGPathRelease(newPath);
}

- (void)applyStrokePropertiesToContext:(CGContextRef)context atZoomScale:(MKZoomScale)zoomScale
{
    CGContextSetStrokeColorWithColor(context, [[UIColor greenColor] CGColor]);
    CGFloat lineWidth = MKRoadWidthAtZoomScale(zoomScale);
    CGContextSetLineWidth(context, lineWidth);
    CGContextSetLineJoin(context, kCGLineJoinRound);
    CGContextSetLineCap(context, kCGLineCapRound);
}

- (void)strokePath:(CGPathRef)path inContext:(CGContextRef)context
{
    CGContextAddPath(context, path);
    CGContextStrokePath(context);
    [self setPath:path];
}
This draws an initial line, but fails to continue the line...it doesn't add the path. I've confirmed that applyStrokePropertiesToContext: and strokePath: are getting called upon every new location.
Here's a screenshot of the broken line that results (it draws for createPath, but not after that):
Here's a screenshot of the "choppy" path that happens when drawMapRect is included with createPath:
Without having seen more of your code I'm guessing, but here goes.
I suspect the path is being broken into segments, A->B, C->D, E->F, rather than a single path with points A, B, C, D, E and F. To be sure of that we'd need to see what is happening to self.overlay and whether it is being reset at any point.
In strokePath: you set self.path to be the one that is being stroked. I doubt that is a good idea, since the stroking could happen at any time, just like viewForAnnotations.
As for the choppiness, it may be a side effect of a poor bounds calculation on Apple's part. If your line ends near the boundary of one of the tiles Apple uses to cover the map, it would probably only prompt the map to redraw the tile the line is within, but your stroke width extends into a neighbouring tile that hasn't been drawn. I'm guessing again, but you could test this out by moving the point that is just north of the W in "Queen St W" a fraction south, or by increasing the stroke width, and see if the cut-off line stays in the same place geographically.
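If the segment hypothesis above is right, one thing to try is rebuilding the whole path from the crumb points each time createPath runs, instead of appending a partial path to self.path. A rough sketch, assuming the Breadcrumb sample's CrumbPath still exposes points (an MKMapPoint array) and pointCount (locking around the point access is omitted here):
- (void)createPath
{
    CrumbPath *crumbs = (CrumbPath *)self.overlay;
    CGMutablePathRef newPath = CGPathCreateMutable();
    for (NSUInteger i = 0; i < crumbs.pointCount; i++) {
        // convert each map point into this overlay view's drawing coordinates
        CGPoint p = [self pointForMapPoint:crumbs.points[i]];
        if (i == 0) {
            CGPathMoveToPoint(newPath, NULL, p.x, p.y);
        } else {
            CGPathAddLineToPoint(newPath, NULL, p.x, p.y);
        }
    }
    [self setPath:newPath]; // one continuous path: A -> B -> C -> ...
    CGPathRelease(newPath);
}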

How to draw the vertices on a CGContext CGMutablePathRef?

I am drawing a set of connected lines as the user clicks to specify vertices. I am building a CGMutablePathRef:
- (void)addPointToCGMPR:(CGPoint)p
         forNewPolygon:(BOOL)newPoly
{
    if (newPoly)
    {
        CGPathMoveToPoint(cgmpr, NULL, p.x, p.y);
    }
    else
    {
        CGPathAddLineToPoint(cgmpr, NULL, p.x, p.y);
    }
    [self setNeedsDisplay];
}
drawRect: is then called after each point is entered, and the CGMutablePathRef is added to a CGContext for display:
- (void)drawRect:(CGRect)rect
{
    // Set up a context to display
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextAddPath(context, cgmpr);
    CGContextSetLineWidth(context, 1);
    CGContextStrokePath(context);

    // If NOT empty then close the context's path so that box comparison can occur later
    if (!CGContextIsPathEmpty(context))
    {
        CGContextClosePath(context);
    }
}
I am getting lines on the screen as one would expect. My trouble is that after the user picks the first point, nothing is rendered on the screen, and the user is left not knowing whether the system registered that first pick until the second point is entered. I would like the system to render the first point even before the next point is entered. I would also like the system to render each of the picked vertices in a visually distinct way - right now I am not getting vertices rendered, only lines. Is there a way to ask the CGContext to render points? Is there a way to specify the style in which these points are rendered? Thanks.
You're free to draw whatever you want to represent the points: a small square, a small circle, an image of a pushpin, etc. You already know how to draw into a context; just loop over your points and draw them however you want them to appear. I personally tend to use CGContextFillEllipseInRect for things like this. For each point, make a rect that surrounds the point and draw it:
CGFloat pointSize = 4;
CGRect pointRect = CGRectMake(point.x - pointSize / 2, point.y - pointSize / 2, pointSize, pointSize);
CGContextFillEllipseInRect(context, pointRect);
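For example, if the tapped vertices were also kept in an array (say an NSArray of NSValue-wrapped CGPoints named vertexPoints, which the original code does not have), drawRect: could mark each one like this:
CGContextSetFillColorWithColor(context, [[UIColor redColor] CGColor]);
CGFloat pointSize = 4;
for (NSValue *value in vertexPoints) {
    CGPoint point = [value CGPointValue];
    CGRect pointRect = CGRectMake(point.x - pointSize / 2,
                                  point.y - pointSize / 2,
                                  pointSize, pointSize);
    CGContextFillEllipseInRect(context, pointRect);
}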
No. You'll need to render them yourself by adding another CGPath to describe whatever you want to represent the vertex. Since you'll probably draw this often, you may want to render it to a CGLayer so you can easily copy it where you need it. You could also draw your vertices onto CALayers so you can easily move them around. But it's up to you to design and manage this part of the drawing.

Simple 2D "collision detection" iOS

I'm writing an application that will calculate a CGPoint and show a mark in an envelope (a diagram, if you like). My envelope is just part of the background image in a UIImageView. What I want to do is construct a sort of "line" corresponding to the envelope's limits (they're not straight lines, but curves), so that if the calculated CGPoint is to the left of this line, or to the right of another line, the calculated point is not approved. Were it to be in the middle of these two, it's approved.
I was first thinking of drawing lines using Core Graphics, but I'm not sure if one could check whether the calculated CGPoint is to the right or left of those lines.
The envelope is only 149px high, so I was also thinking of putting together a dictionary, where the keys were the y positions and the values were the x positions of the pixels that represent the defining boundary line.
The application is rather simple and is not animating anything. Does anybody have an idea of how to best come up with a solution for this sort of behavior?
You can do this by creating a CGPath that represents your boundary lines (the outline of your envelope) and testing that a point is contained in it with CGPathContainsPoint.
You'll have to do some trial and error to construct a CGPath that matches your envelope shape, try filling it in the drawRect method to see what your path actually is.
Here's an example with a circle path:
CGPoint viewCenter = CGPointMake(100,100);
CGPoint checkPoint = CGPointMake(110,110);
UIBezierPath *bpath = [UIBezierPath bezierPathWithArcCenter:viewCenter radius:50 startAngle:0 endAngle:DEGREES_TO_RADIANS(360) clockwise:YES];
CGPathRef path = [bpath CGPath];
BOOL inPath = CGPathContainsPoint(path, NULL, checkPoint, NO);
Here I have DEGREES_TO_RADIANS defined like this:
#define DEGREES_TO_RADIANS(angle) ((angle) / 180.0 * M_PI)
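As for the trial-and-error step above, a quick way to see the path you have built is to fill it in drawRect:. A minimal sketch, assuming the boundary path is kept in a property named envelopePath (a name made up for this example):
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextAddPath(context, self.envelopePath.CGPath);
    CGContextSetFillColorWithColor(context, [[UIColor lightGrayColor] CGColor]);
    CGContextFillPath(context); // the filled area is exactly what CGPathContainsPoint will approve
}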

Hit detection when drawing lines in iOS

I would like to allow the user to draw curves in such a way that no line can cross another line or even itself. Drawing the curves is no problem, and I even found that I can create a path that is closed and still pretty line-like by tracing the nodes of the line forwards and back and then closing the path.
Unfortunately, iOS only provides a test for whether a point is contained in a closed path (containsPoint: and CGPathContainsPoint). However, a user can pretty easily move their finger fast enough that the touch points land on both sides of an existing path without actually being contained by that path, so testing the touch points is pretty pointless.
I can't find any "intersection" of paths method.
Any other thoughts on how to accomplish this task?
Well, I did come up with a way to do this. It is imperfect, but I thought others might want to see the technique since this question was upvoted a few times. The technique I used draws all the items to be tested against into a bitmap context and then draws the new segment of the progressing line into another bitmap context. The data in those contexts is compared using bitwise operators and if any overlap is found, a hit is declared.
The idea behind this technique is to test each segment of a newly drawn line against all the previously drawn lines and even against earlier pieces of the same line. In other words, this technique will detect when a line crosses another line and also when it crosses over itself.
A sample app demonstrating the technique is available: LineSample.zip.
The core of hit testing is done in my LineView object. Here are two key methods:
- (CGContextRef)newBitmapContext {
    // creating b&w bitmaps to do hit testing
    // based on: http://robnapier.net/blog/clipping-cgrect-cgpath-531
    // see "Supported Pixel Formats" in Quartz 2D Programming Guide
    CGContextRef bitmapContext =
        CGBitmapContextCreate(NULL,                    // data automatically allocated
                              self.bounds.size.width,
                              self.bounds.size.height,
                              8,
                              self.bounds.size.width,
                              NULL,
                              kCGImageAlphaOnly);
    CGContextSetShouldAntialias(bitmapContext, NO);
    // use CGBitmapContextGetData to get at this data
    return bitmapContext;
}
- (BOOL)line:(Line *)line canExtendToPoint:(CGPoint)newPoint {
    // Lines are made up of segments that go from node to node. If we want to test for
    // self-crossing, then we can't just test the whole in-progress line against the
    // completed line; we actually have to test each segment, since one segment of the
    // in-progress line may cross another segment of the same line (think of a loop in
    // the line). We also have to avoid checking the first point of the new segment
    // against the last point of the previous segment (which is the same point).
    // Luckily, a line cannot curve back on itself in just one segment (think about it,
    // it takes at least two segments to reach yourself again). This means that we can
    // both test progressive segments and avoid false hits by NOT drawing the last
    // segment of the line into the test! So we will put everything up to the last
    // segment into the hitProgressLayer, we will put the new segment into the
    // segmentLayer, and then we will test for overlap among those two and the
    // hitTestLayer. Any point that is in all three layers will indicate a hit,
    // otherwise we are OK.
    if (line.failed) {
        // shortcut in case a failed line is retested
        return NO;
    }
    BOOL ok = YES; // thinking positively

    // set up a context to hold the new segment and stroke it in
    CGContextRef segmentContext = [self newBitmapContext];
    CGContextSetLineWidth(segmentContext, 2); // bit thicker to facilitate hits
    CGPoint lastPoint = [[[line nodes] lastObject] point];
    CGContextMoveToPoint(segmentContext, lastPoint.x, lastPoint.y);
    CGContextAddLineToPoint(segmentContext, newPoint.x, newPoint.y);
    CGContextStrokePath(segmentContext);

    // now we actually test
    // based on code from benzado: http://stackoverflow.com/questions/6515885/how-to-do-comparisons-of-bitmaps-in-ios/6515999#6515999
    unsigned char *completedData = CGBitmapContextGetData(hitCompletedContext);
    unsigned char *progressData = CGBitmapContextGetData(hitProgressContext);
    unsigned char *segmentData = CGBitmapContextGetData(segmentContext);
    size_t bytesPerRow = CGBitmapContextGetBytesPerRow(segmentContext);
    size_t height = CGBitmapContextGetHeight(segmentContext);
    size_t len = bytesPerRow * height;
    for (int i = 0; i < len; i++) {
        if ((completedData[i] | progressData[i]) & segmentData[i]) {
            ok = NO;
            break;
        }
    }
    CGContextRelease(segmentContext);

    if (ok) {
        // now that we know we are good to go,
        // we will add the last segment onto the hitProgressLayer
        int numberOfSegments = [[line nodes] count] - 1;
        if (numberOfSegments > 0) {
            // but only if there is a segment there!
            CGPoint secondToLastPoint = [[[line nodes] objectAtIndex:numberOfSegments-1] point];
            CGContextSetLineWidth(hitProgressContext, 1); // but thinner
            CGContextMoveToPoint(hitProgressContext, secondToLastPoint.x, secondToLastPoint.y);
            CGContextAddLineToPoint(hitProgressContext, lastPoint.x, lastPoint.y);
            CGContextStrokePath(hitProgressContext);
        }
    } else {
        line.failed = YES;
        [linesFailed addObject:line];
    }
    return ok;
}
I'd love to hear suggestions or see improvements. For one thing, it would be a lot faster to only check the bounding rect of the new segment instead of the whole view.
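As a rough sketch of that bounding-rect idea (my own variation, not part of LineSample.zip), the byte-comparison loop in line:canExtendToPoint: could be restricted to the rows and columns covered by the new segment, padded a little for the stroke width:
// Drop-in replacement for the full-bitmap loop; the variables come from the method above.
size_t width = CGBitmapContextGetWidth(segmentContext);
size_t minRow = (size_t)MAX(floor(MIN(lastPoint.y, newPoint.y)) - 2, 0);
size_t maxRow = (size_t)MIN(ceil(MAX(lastPoint.y, newPoint.y)) + 2, (CGFloat)height);
size_t minCol = (size_t)MAX(floor(MIN(lastPoint.x, newPoint.x)) - 2, 0);
size_t maxCol = (size_t)MIN(ceil(MAX(lastPoint.x, newPoint.x)) + 2, (CGFloat)width);
for (size_t row = minRow; row < maxRow && ok; row++) {
    for (size_t col = minCol; col < maxCol; col++) {
        size_t i = row * bytesPerRow + col;
        if ((completedData[i] | progressData[i]) & segmentData[i]) {
            ok = NO;
            break;
        }
    }
}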
Swift 4 answer, based on CGPath Hit Testing - Ole Begemann (2012)
From Ole Begemann blog:
contains(point: CGPoint)
This function is helpful if you want to hit test on the entire region the path covers. As such, contains(point: CGPoint) doesn’t work with unclosed paths because those don’t have an interior that would be filled.
copy(strokingWithWidth lineWidth: CGFloat, lineCap: CGLineCap, lineJoin: CGLineJoin, miterLimit: CGFloat, transform: CGAffineTransform = default) -> CGPath
This function creates a mirroring tap target object that only covers the stroked area of the path. When the user taps on the screen, we iterate over the tap targets rather than the actual shapes.
My solution in code
I use a UITapGestureRecognizer linked to the function tap():
var bezierPaths = [UIBezierPath]() // containing all lines already drawn
var tappedPaths = [UIBezierPath]()

@IBAction func tap(_ sender: UITapGestureRecognizer) {
    let point = sender.location(in: imageView)
    for path in bezierPaths {
        // create a tap target for the path and test the tapped point against it
        let target = tapTarget(for: path)
        if target.contains(point) {
            tappedPaths.append(path)
        }
    }
}

fileprivate func tapTarget(for path: UIBezierPath) -> UIBezierPath {
    // copy(strokingWithWidth:...) works on CGPath, so convert and wrap the result back
    let targetPath = path.cgPath.copy(strokingWithWidth: path.lineWidth,
                                      lineCap: path.lineCapStyle,
                                      lineJoin: path.lineJoinStyle,
                                      miterLimit: path.miterLimit)
    return UIBezierPath(cgPath: targetPath)
}
