I would like to allow the user to draw curves in such a way that no line can cross another line or even itself. Drawing the curves is no problem, and I even found that I can create a path that is closed and still pretty line-like by tracing the nodes of the line forwards and back and then closing the path.
Unfortunately, iOS only provides a test for whether a point is contained in a closed path (containsPoint: and CGPathContainsPoint). The problem is that a user can easily move their finger fast enough that consecutive touch points land on opposite sides of an existing path without any of them being contained by that path, so testing the touch points themselves is pretty pointless.
I can't find any "intersection" of paths method.
Any other thoughts on how to accomplish this task?
Well, I did come up with a way to do this. It is imperfect, but I thought others might want to see the technique since this question was upvoted a few times. The technique I used draws all the items to be tested against into a bitmap context and then draws the new segment of the progressing line into another bitmap context. The data in those contexts is compared using bitwise operators and if any overlap is found, a hit is declared.
The idea behind this technique is to test each segment of a newly drawn line against all the previously drawn lines and even against earlier pieces of the same line. In other words, this technique will detect when a line crosses another line and also when it crosses over itself.
A sample app demonstrating the technique is available: LineSample.zip.
The core of hit testing is done in my LineView object. Here are two key methods:
- (CGContextRef)newBitmapContext {
// creating b&w bitmaps to do hit testing
// based on: http://robnapier.net/blog/clipping-cgrect-cgpath-531
// see "Supported Pixel Formats" in Quartz 2D Programming Guide
CGContextRef bitmapContext =
CGBitmapContextCreate(NULL, // data automatically allocated
self.bounds.size.width,
self.bounds.size.height,
8,
self.bounds.size.width,
NULL,
kCGImageAlphaOnly);
CGContextSetShouldAntialias(bitmapContext, NO);
// use CGBitmapContextGetData to get at this data
return bitmapContext;
}
- (BOOL)line:(Line *)line canExtendToPoint:(CGPoint) newPoint {
// Lines are made up of segments that go from node to node. If we want to test for
// self-crossing, we can't just test the whole in-progress line against the completed
// lines; we have to test each segment, since one segment of the in-progress line may
// cross another segment of the same line (think of a loop in the line). We also have
// to avoid checking the first point of the new segment against the last point of the
// previous segment (which is the same point). Luckily, a line cannot curve back on
// itself in just one segment (it takes at least two segments to reach yourself again).
// This means we can both test progressive segments and avoid false hits by NOT drawing
// the last segment of the line into the test. So we put everything up to the last
// segment into hitProgressContext, we put the new segment into segmentContext, and then
// we test for overlap against hitProgressContext and hitCompletedContext. Any point
// that is in the new segment and in either of the other two indicates a hit;
// otherwise we are OK.
if (line.failed) {
// shortcut in case a failed line is retested
return NO;
}
BOOL ok = YES; // thinking positively
// set up a context to hold the new segment and stroke it in
CGContextRef segmentContext = [self newBitmapContext];
CGContextSetLineWidth(segmentContext, 2); // bit thicker to facilitate hits
CGPoint lastPoint = [[[line nodes] lastObject] point];
CGContextMoveToPoint(segmentContext, lastPoint.x, lastPoint.y);
CGContextAddLineToPoint(segmentContext, newPoint.x, newPoint.y);
CGContextStrokePath(segmentContext);
// now we actually test
// based on code from benzado: http://stackoverflow.com/questions/6515885/how-to-do-comparisons-of-bitmaps-in-ios/6515999#6515999
unsigned char *completedData = CGBitmapContextGetData(hitCompletedContext);
unsigned char *progressData = CGBitmapContextGetData(hitProgressContext);
unsigned char *segmentData = CGBitmapContextGetData(segmentContext);
size_t bytesPerRow = CGBitmapContextGetBytesPerRow(segmentContext);
size_t height = CGBitmapContextGetHeight(segmentContext);
size_t len = bytesPerRow * height;
for (int i = 0; i < len; i++) {
if ((completedData[i] | progressData[i]) & segmentData[i]) {
ok = NO;
break;
}
}
CGContextRelease(segmentContext);
if (ok) {
// now that we know we are good to go,
// we will add the last segment onto the hitProgressLayer
int numberOfSegments = [[line nodes] count] - 1;
if (numberOfSegments > 0) {
// but only if there is a segment there!
CGPoint secondToLastPoint = [[[line nodes] objectAtIndex:numberOfSegments-1] point];
CGContextSetLineWidth(hitProgressContext, 1); // but thinner
CGContextMoveToPoint(hitProgressContext, secondToLastPoint.x, secondToLastPoint.y);
CGContextAddLineToPoint(hitProgressContext, lastPoint.x, lastPoint.y);
CGContextStrokePath(hitProgressContext);
}
} else {
line.failed = YES;
[linesFailed addObject:line];
}
return ok;
}
I'd love to hear suggestions or see improvements. For one thing, it would be a lot faster to only check the bounding rect of the new segment instead of the whole view.
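Roughly, the byte comparison loop could be limited to the bounding box of the new segment, something like this (an untested sketch against the method above; the 3-point pad is an arbitrary allowance for the stroke width, and the row flip accounts for the bitmap's bottom-left drawing origin):
CGFloat pad = 3; // allowance for the 2 px stroke width and rounding
CGFloat minX = MAX(0, MIN(lastPoint.x, newPoint.x) - pad);
CGFloat maxX = MIN((CGFloat)bytesPerRow, MAX(lastPoint.x, newPoint.x) + pad);
CGFloat minY = MAX(0, MIN(lastPoint.y, newPoint.y) - pad);
CGFloat maxY = MIN((CGFloat)height, MAX(lastPoint.y, newPoint.y) + pad);
// The buffer's first row is the top scanline, while Quartz draws with a
// bottom-left origin, so the y range is flipped when converting to rows.
NSInteger firstRow = MAX(0, (NSInteger)(height - maxY));
NSInteger lastRow = MIN((NSInteger)height, (NSInteger)(height - minY) + 1);
for (NSInteger row = firstRow; row < lastRow && ok; row++) {
    for (NSInteger col = (NSInteger)minX; col < (NSInteger)maxX; col++) {
        NSInteger i = row * (NSInteger)bytesPerRow + col;
        if ((completedData[i] | progressData[i]) & segmentData[i]) {
            ok = NO;
            break;
        }
    }
}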
Swift 4 answer, based on CGPath Hit Testing - Ole Begemann (2012).
From Ole Begemann's blog:
contains(point: CGPoint)
This function is helpful if you want to hit test on the entire region the path covers. As such, contains(point: CGPoint) doesn't work with unclosed paths because those don't have an interior that would be filled.
copy(strokingWithWidth lineWidth: CGFloat, lineCap: CGLineCap, lineJoin: CGLineJoin, miterLimit: CGFloat, transform: CGAffineTransform = default) -> CGPath
This function creates a mirroring tap target object that only covers the stroked area of the path. When the user taps on the screen, we iterate over the tap targets rather than the actual shapes.
My solution in code
I use a UITapGestureRecognizer linked to the function tap():
var bezierPaths = [UIBezierPath]() // containing all lines already drawn
var tappedPaths = [UIBezierPath]() // the paths hit by the last tap
@IBAction func tap(_ sender: UITapGestureRecognizer) {
    let point = sender.location(in: imageView)
    for path in bezierPaths {
        // create a tap target for the path and test it
        let target = tapTarget(for: path)
        if target.contains(point) {
            tappedPaths.append(path)
        }
    }
}
fileprivate func tapTarget(for path: UIBezierPath) -> UIBezierPath {
    let targetPath = path.cgPath.copy(strokingWithWidth: path.lineWidth,
                                      lineCap: path.lineCapStyle,
                                      lineJoin: path.lineJoinStyle,
                                      miterLimit: path.miterLimit)
    return UIBezierPath(cgPath: targetPath)
}
Related
I am making an iOS drawing app and would like to add functionality where erasing a UIBezierPath will erase the whole path and not just overlay the segment with a white color.
I have an array of UIBezierPaths stored, but I am not sure how efficient it is to loop through the entire array for each touch point, detect if it intersects with the current touch point, and remove it from the array.
Any suggestions?
If I understand correctly, you need to loop through the paths at least; they are independent of each other, after all.
A UIBezierPath has a contains method which tells you whether a point is inside the path. If that is usable, then all you need to do is:
paths = paths.filter { !$0.contains(touchPoint) }
But since this is a drawing app, it is safe to assume you are stroking rather than filling, and contains will most likely not work the way you want with a stroked path.
You could do the intersection manually, and it should perform reasonably well until you have too many points; but by the time you have that many points, I would wager that drawing performance will be more of a concern. Still, doing the intersection yourself makes it awkward to account for the stroke line width; without it, the user would have to cross the exact center line of your stroke to register a hit.
There is a way, though, to convert a stroked path to a filled path, which then lets you use the contains method. The method is called replacePathWithStrokedPath, but it only exists on CGContext (at least I have never managed to find an equivalent on UIBezierPath), so the procedure involves creating a context. For instance, something like this will work:
static func convertStrokePathToFillPath(_ path: UIBezierPath) throws -> UIBezierPath {
    UIGraphicsBeginImageContextWithOptions(CGSize(width: 1.0, height: 1.0), true, 1.0)
    defer { UIGraphicsEndImageContext() } // tear the context down even if we throw
    guard let context = UIGraphicsGetCurrentContext() else {
        throw NSError(domain: "convertStrokePathToFillPath", code: 500, userInfo: ["dev_message": "Could not generate image context"])
    }
    context.addPath(path.cgPath)
    context.setLineWidth(path.lineWidth)
    // TODO: apply all possible settings from path to context (line cap, join, miter limit, dash pattern)
    context.replacePathWithStrokedPath()
    guard let returnedPath = context.path else {
        throw NSError(domain: "convertStrokePathToFillPath", code: 500, userInfo: ["dev_message": "Could not get path from context"])
    }
    return UIBezierPath(cgPath: returnedPath)
}
So the result should look something like:
static func filterPaths(_ paths: [UIBezierPath], containingPoint point: CGPoint) -> [UIBezierPath] {
return paths.filter { !(try! convertStrokePathToFillPath($0).contains(point)) }
}
I have a function intended to create a trapezoidal polygon CCSprite. 99% of the time, the function works fine. The other 1%, it creates a completely blank sprite (though, of the correct dimensions). Here's my function:
- (CCRenderTexture*)createPolygon {
CGFloat bottom = [self bottomCrop];
CGFloat top = self.contentSize.height - [self topCrop];
CGFloat constrict = PERSPECTIVE_CONSTRICT_PX_FOR_HEIGHT(top);
CCRenderTexture * rt = [CCRenderTexture renderTextureWithWidth:self.contentSize.width
height:self.contentSize.height];
[rt begin];
CGPoint pts[] = { ccp(0,bottom),
ccp(constrict,top),
ccp(self.contentSize.width-constrict,top),
ccp(self.contentSize.width,bottom) };
ccDrawSolidPoly(pts, 4, ccc4FFromccc3B(self.backgroundColor));
[rt end];
#if TARGET_IPHONE_SIMULATOR
[rt saveToFile:@"terrain.jpg"];
#endif
return rt;
}
As you can see, I'm saving the RenderTexture to a file so that I can compare good vs. bad output. As I said above, the dimensions of the files are the same, but in the edge case the polygon is simply missing from the output.
I'm guessing I have some sort of race condition with another function (perhaps ccDrawSolidPoly is drawing into the wrong context...?), but I have no idea where it might be. Is there any way I can protect against this?
edit FWIW, I'm positive that self.backgroundColor is not white/transparent, so that should not be the problem. Similarly, I'm sure that bottom and top are non-zero. These values all come from switch statements based on some configuration.
I'm using an MKOverlayView for drawing a path onto Apple's maps. I'd like to draw many short paths onto it, because I need to colorize the track depending on some other values. But I'm getting some fancy effects doing it that way... also, my start and end points are connected, and I don't know why. After zooming in/out the fancy-effect pattern changes and gets bigger/smaller. It seems you can see the Apple map tiles on my path...
This is my code; it's called inside the drawMapRect: method of my overlay view.
for(int i = 0; i < tdpoints.pointCount-1; i++ ){
CGPoint firstCGPoint = [self pointForMapPoint:tdpoints.points[i]];
CGPoint secCGPoint = [self pointForMapPoint:tdpoints.points[i+1]];
if (lineIntersectsRect(tdpoints.points[i], tdpoints.points[i+1], clipRect)){
double val1 = (arc4random() % 10) / 10.0f;
double val2 = (arc4random() % 10) / 10.0f;
double val3 = (arc4random() % 10) / 10.0f;
CGContextSetRGBStrokeColor(context, val1 ,val2, val3, 1.0f);
CGContextSetLineWidth(context, lineWidth);
CGContextBeginPath(context);
CGContextMoveToPoint(context,firstCGPoint.x,firstCGPoint.y);
CGContextAddLineToPoint(context, secCGPoint.x, secCGPoint.y);
CGContextStrokePath(context);
CGContextClosePath(context);
}
}
http://imageshack.us/photo/my-images/560/iossimulatorbildschirmf.jpg/
http://imageshack.us/photo/my-images/819/iossimulatorbildschirmf.jpg/
I'm adding my GPS points like this (from Apple's Breadcrumb sample code):
CLLocationCoordinate2D coord = {.latitude = 49.1,.longitude =12.1f};
[self drawPathWithLocations:coord];
CLLocationCoordinate2D coord1 = {.latitude = 49.2,.longitude =12.2f};
[self drawPathWithLocations:coord1];
CLLocationCoordinate2D coord2 = {.latitude = 50.1,.longitude =12.9f};
[self drawPathWithLocations:coord2];
This is the adding Method:
-(void) drawPathWithLocations:(CLLocationCoordinate2D)coord{
if (!self.crumbs)
{
// This is the first time we're getting a location update, so create
// the CrumbPath and add it to the map.
//
_crumbs = [[CrumbPath alloc] initWithCenterCoordinate:coord];
[self.trackDriveMapView addOverlay:self.crumbs];
// On the first location update only, zoom map to user location
[_trackDriveMapView setCenterCoordinate:coord zoomLevel:_zoomLevel animated:NO];
} else
{
// This is a subsequent location update.
// If the crumbs MKOverlay model object determines that the current location has moved
// far enough from the previous location, use the returned updateRect to redraw just
// the changed area.
//
// note: iPhone 3G will locate you using the triangulation of the cell towers.
// so you may experience spikes in location data (in small time intervals)
// due to 3G tower triangulation.
//
MKMapRect updateRect = [self.crumbs addCoordinate:coord];
if (!MKMapRectIsNull(updateRect))
{
// There is a non null update rect.
// Compute the currently visible map zoom scale
MKZoomScale currentZoomScale = (CGFloat)(self.trackDriveMapView.bounds.size.width / self.trackDriveMapView.visibleMapRect.size.width);
// Find out the line width at this zoom scale and outset the updateRect by that amount
CGFloat lineWidth = MKRoadWidthAtZoomScale(currentZoomScale);
updateRect = MKMapRectInset(updateRect, -lineWidth, -lineWidth);
// Ask the overlay view to update just the changed area.
[self.crumbView setNeedsDisplayInMapRect:updateRect];
}
    }
}
This is the addCoordinate method:
- (MKMapRect)addCoordinate:(CLLocationCoordinate2D)coord
{
pthread_rwlock_wrlock(&rwLock);
// Convert a CLLocationCoordinate2D to an MKMapPoint
MKMapPoint newPoint = MKMapPointForCoordinate(coord);
MKMapPoint prevPoint = points[pointCount - 1];
// Get the distance between this new point and the previous point.
CLLocationDistance metersApart = MKMetersBetweenMapPoints(newPoint, prevPoint);
NSLog(@"Points are %f meters apart ...", metersApart);
MKMapRect updateRect = MKMapRectNull;
if (metersApart > MINIMUM_DELTA_METERS)
{
// Grow the points array if necessary
if (pointSpace == pointCount)
{
pointSpace *= 2;
points = realloc(points, sizeof(MKMapPoint) * pointSpace);
}
// Add the new point to the points array
points[pointCount] = newPoint;
pointCount++;
// Compute MKMapRect bounding prevPoint and newPoint
double minX = MIN(newPoint.x, prevPoint.x);
double minY = MIN(newPoint.y, prevPoint.y);
double maxX = MAX(newPoint.x, prevPoint.x);
double maxY = MAX(newPoint.y, prevPoint.y);
updateRect = MKMapRectMake(minX, minY, maxX - minX, maxY - minY);
}
pthread_rwlock_unlock(&rwLock);
return updateRect;
}
Hint
I think my refresh algorithm only refreshes one tile of the whole map on the screen, and every time drawMapRect: is called for that specific area a new random color is generated (the rest of the path is clipped and the older color remains...).
The "fancy effects" you see are a combination of the way MKMapView calls drawMapRect and your decision to use random colours every time it is draw. To speed up display when the user pans the map around MKMapView caches tiles from your overlay. If one tile goes off screen it can be thrown away or stored in a different cache or something, but the ones still on screen are just moved about and don't need to be redrawn which is good because drawing might mean a trip to your data supply or some other long calculation. That's why you call setNeedsDisplayInMapRect, it only needs to fetch those tiles and not redraw everything.
This works in all the apps I've seen and is a good system on the whole, except when you draw something that isn't going to be the same each time, like your random colours. If you really want to colour the path like that, then you should use a hash or something that seems random but is actually based on something repeatable: maybe the index of the point, multiplied by the point's coordinate, MD5ed, take the 5th character, etc. Whatever it is, it must generate the same colour for the same line no matter how many times it is called. Personally I'd rather the line was one colour, maybe dashed, but that's between you and your users.
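For example, a stable colour could be derived from the segment index with an ordinary integer hash. This is only a sketch (the helper name and the hash constants are made up, not from the original answer):
// Derive a repeatable colour from the segment index so that every call to
// drawMapRect: produces the same colour for the same segment.
// Any stable hash works; this one is just an integer scramble.
static void SetStableSegmentColor(CGContextRef context, NSUInteger segmentIndex)
{
    uint32_t h = (uint32_t)segmentIndex;
    h ^= h >> 16;
    h *= 0x85ebca6bU;
    h ^= h >> 13;
    h *= 0xc2b2ae35U;
    h ^= h >> 16;
    CGFloat r = ((h >> 0)  & 0xFF) / 255.0;
    CGFloat g = ((h >> 8)  & 0xFF) / 255.0;
    CGFloat b = ((h >> 16) & 0xFF) / 255.0;
    CGContextSetRGBStrokeColor(context, r, g, b, 1.0);
}
Inside the drawing loop you would then call SetStableSegmentColor(context, i) in place of the three arc4random() lines.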
Whenever you close a path, a line is automatically drawn between the last point and the first point.
Just remove the last line in your path drawing:
CGContextClosePath(context);
The purpose of CGContextClosePath is literally to close the path, i.e. to connect the start and end points. You don't need that; CGContextStrokePath has already drawn the path. Remove that line. Also consider moving CGContextStrokePath outside your loop: the approach is move/add line/move/add line... and then stroke (you can change colors as you go, which you are doing).
For the "fancy effects" (the tilted line joins), investigate the effects of the CGContextSetLineJoin and CGContextSetLineCap parameters.
I am drawing a set of connected lines as the user clicks to specify vertices. I am building a CGMutablePathRef:
- (void)addPointToCGMPR: (CGPoint)p
forNewPolygon: (BOOL)newPoly
{
if (newPoly)
{
CGPathMoveToPoint(cgmpr, NULL, p.x, p.y);
}
else
{
CGPathAddLineToPoint(cgmpr, NULL, p.x, p.y);
}
[self setNeedsDisplay];
}
drawRect: is then called after each point is entered, and the CGMutablePathRef is added to a CGContext for display:
- (void)drawRect:(CGRect)rect
{
// Set up a context to display
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextAddPath(context, cgmpr);
CGContextSetLineWidth(context, 1);
CGContextStrokePath(context);
// If NOT empty then Close the context so that box comparison can occur later
if (!CGContextIsPathEmpty(context))
{
CGContextClosePath(context);
}
}
I am getting lines on the screen as one would expect. My trouble is that after the user picks the first point, nothing is rendered on screen, so the user doesn't know whether the system registered that first pick until the second point is entered. I would like the system to render the first point even before the next point is entered. I would also like it to render each of the picked vertices in a visually distinct way; right now I am not getting vertices rendered, only lines. Is there a way to ask the CGContext to render points? Is there a way to specify the style in which these points are rendered? Thanks.
You're free to draw whatever you want to represent the points: a small square, a small circle, an image of a pushpin, etc. You already know how to draw into a context; just loop over your points and draw them however you want them to appear. I personally tend to use CGContextFillEllipseInRect for things like this. For each point, make a rect that surrounds the point and draw it:
CGFloat pointSize = 4;
CGRect pointRect = CGRectMake(point.x - pointSize / 2, point.y - pointSize / 2, pointSize, pointSize);
CGContextFillEllipseInRect(context, pointRect);
No. You'll need to render them yourself by adding another CGPath to describe whatever you want to represent the vertex. Since you'll probably draw this often, you may want to render it to a CGLayer so you can easily copy it where you need it. You could also draw your vertices onto CALayers so you can easily move them around. But it's up to you to design and manage this part of the drawing.
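A rough sketch of the CGLayer idea (the vertexPoints array and the marker size are made up for illustration): draw the marker once into a layer, then stamp the layer at each stored vertex.
// Build a small circular vertex marker in a CGLayer once, then stamp it at
// every stored vertex. `vertexPoints` is a hypothetical NSArray of
// NSValue-wrapped CGPoints kept alongside the CGMutablePathRef.
CGFloat markerSize = 6;
CGLayerRef marker = CGLayerCreateWithContext(context, CGSizeMake(markerSize, markerSize), NULL);
CGContextRef markerContext = CGLayerGetContext(marker);
CGContextSetFillColorWithColor(markerContext, [UIColor redColor].CGColor);
CGContextFillEllipseInRect(markerContext, CGRectMake(0, 0, markerSize, markerSize));
for (NSValue *value in vertexPoints) {
    CGPoint p = [value CGPointValue];
    CGContextDrawLayerAtPoint(context,
                              CGPointMake(p.x - markerSize / 2, p.y - markerSize / 2),
                              marker);
}
CGLayerRelease(marker);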
I have some UIBezierPaths. As paths, they don't really have thickness.
But I am hoping to find a way to define an area around a path, like the grayish areas around the lines in this picture.
Basically, I want to test whether drawn lines fall within the buffer zone around the lines.
I thought this would be simple, but it's turning out to be much more complex than I expected. I can use the CGPathApply function to examine the points along my path and then take a range plus or minus each point, but angles and curves make it more complicated than that. Any ideas?
Expanding the width of a path is actually quite difficult. However, you could just stroke it with a thicker width and get pretty much the same effect. Something like...
CGContextSetRGBStrokeColor(context, 0.4, 0.4, 0.4, 1.0);
[path setLineWidth:15];
[path stroke];
CGContextSetRGBStrokeColor(context, 0.0, 0.0, 0.0, 1.0);
[path setLineWidth:3];
[path stroke];
...would produce a picture like the one in your question. But I doubt that's news to you.
The real trick is the test of "whether drawn lines fall within the buffer zone." That problem is very similar to one which I just answered for myself in another question. Take a look at the LineSample.zip code I shared there. This implements a bitmap/bitwise data comparison to detect hits on lines much like you need. You could just draw the thicker "buffer" paths into the bitmap for testing and show the thinner lines in your view.
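For example, something like this (a rough sketch only; it reuses the newBitmapContext helper from the LineSample code, and the 15-point buffer width is just an example) would stroke the buffer version of a path into the hit-test bitmap while the view keeps drawing the thin line:
// Stroke the *buffer* version of the path into an alpha-only bitmap used
// purely for hit testing; the visible view still draws the thin line.
CGContextRef hitContext = [self newBitmapContext];
CGContextSetLineWidth(hitContext, 15); // the buffer zone width
CGContextAddPath(hitContext, path.CGPath);
CGContextStrokePath(hitContext);
// Later, inspect CGBitmapContextGetData(hitContext) to see whether a drawn
// point falls inside the buffer zone (remembering the bitmap's bottom-left
// drawing origin when converting view coordinates to buffer offsets).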
Basically, you want to check whether any point falls inside a region of a specified size around your path.
It is actually very simple to do. First, you need a value that defines the amount of space around the path you want to test; let's say 20 points. Then you start a for loop from -20 to 20 and, at each iteration, create a copy of your path, translate its x and y coordinates, and test each copy.
All of this is clearer in this code sample:
CGPoint touchPoint = /*get the point*/;
NSInteger space = 20;
for (NSInteger i = -space; i < space; i++) {
UIBezierPath *pathX = [UIBezierPath bezierPathWithCGPath:originalPath.CGPath];
[pathX applyTransform:CGAffineTransformMakeTranslation(i, 0)];
if ([pathX containsPoint:touchPoint]) {
/*YEAH!*/
}
else {
UIBezierPath *pathY = [UIBezierPath bezierPathWithCGPath:originalPath.CGPath];
[pathY applyTransform:CGAffineTransformMakeTranslation(0, i)];
if ([pathY containsPoint:touchPoint]) {
/*YEAH!*/
}
}
}