Objective-C check if subviews of rotated UIViews intersect? - ios

I don't know where to start with this one. Obviously CGRectIntersectsRect will not work in this case, and you'll see why.
I have a subclass of UIView that has a UIImageView inside it that is placed in the exact center of the UIView:
I then rotate the custom UIView to maintain the frame of the inner UIImageView while still being able to perform a CGAffineRotation. The resulting frame looks something like this:
I need to prevent users from making these UIImageViews intersect, but I have no idea how to check intersection between the two UIImageViews, since not only do their frames not apply to the parent UIView, but also, they are rotated without it affecting their frames.
All of my attempts so far have been unsuccessful.
Any ideas?

The following algorithm can be used to check if two (rotated or otherwise transformed) views overlap:
Use [view convertPoint:point toView:nil] to convert the 4 boundary points of both views
to a common coordinate system (the window coordinates).
The converted points form two convex quadrilaterals.
Use the SAT (Separating Axis Theorem) to check if the quadrilaterals intersect.
This paper: http://www.geometrictools.com/Documentation/MethodOfSeparatingAxes.pdf contains another description of the algorithm with pseudo-code; more can be found by searching for "Separating Axis Theorem".
Update: I have tried to write an Objective-C method for the "Separating Axis Theorem", and this is what I got. So far I have done only a few tests, so I hope that there are not too many errors.
- (BOOL)convexPolygon:(CGPoint *)poly1 count:(int)count1 intersectsWith:(CGPoint *)poly2 count:(int)count2;
tests if 2 convex polygons intersect. Both polygons are given as a CGPoint array of the vertices.
- (BOOL)view:(UIView *)view1 intersectsWith:(UIView *)view2
tests (as described above) if two arbitrary views intersect.
Implementation:
- (void)projectionOfPolygon:(CGPoint *)poly count:(int)count onto:(CGPoint)perp min:(CGFloat *)minp max:(CGFloat *)maxp
{
CGFloat minproj = MAXFLOAT;
CGFloat maxproj = -MAXFLOAT;
for (int j = 0; j < count; j++) {
CGFloat proj = poly[j].x * perp.x + poly[j].y * perp.y;
if (proj > maxproj)
maxproj = proj;
if (proj < minproj)
minproj = proj;
}
*minp = minproj;
*maxp = maxproj;
}
-(BOOL)convexPolygon:(CGPoint *)poly1 count:(int)count1 intersectsWith:(CGPoint *)poly2 count:(int)count2
{
for (int i = 0; i < count1; i++) {
// Perpendicular vector for one edge of poly1:
CGPoint p1 = poly1[i];
CGPoint p2 = poly1[(i+1) % count1];
CGPoint perp = CGPointMake(- (p2.y - p1.y), p2.x - p1.x);
// Projection intervals of poly1, poly2 onto perpendicular vector:
CGFloat minp1, maxp1, minp2, maxp2;
[self projectionOfPolygon:poly1 count:count1 onto:perp min:&minp1 max:&maxp1];
[self projectionOfPolygon:poly2 count:count2 onto:perp min:&minp2 max:&maxp2];
// If projections do not overlap then we have a "separating axis"
// which means that the polygons do not intersect:
if (maxp1 < minp2 || maxp2 < minp1)
return NO;
}
// And now the other way around with edges from poly2:
for (int i = 0; i < count2; i++) {
CGPoint p1 = poly2[i];
CGPoint p2 = poly2[(i+1) % count2];
CGPoint perp = CGPointMake(- (p2.y - p1.y), p2.x - p1.x);
CGFloat minp1, maxp1, minp2, maxp2;
[self projectionOfPolygon:poly1 count:count1 onto:perp min:&minp1 max:&maxp1];
[self projectionOfPolygon:poly2 count:count2 onto:perp min:&minp2 max:&maxp2];
if (maxp1 < minp2 || maxp2 < minp1)
return NO;
}
// No separating axis found, then the polygons must intersect:
return YES;
}
- (BOOL)view:(UIView *)view1 intersectsWith:(UIView *)view2
{
CGPoint poly1[4];
CGRect bounds1 = view1.bounds;
poly1[0] = [view1 convertPoint:bounds1.origin toView:nil];
poly1[1] = [view1 convertPoint:CGPointMake(bounds1.origin.x + bounds1.size.width, bounds1.origin.y) toView:nil];
poly1[2] = [view1 convertPoint:CGPointMake(bounds1.origin.x + bounds1.size.width, bounds1.origin.y + bounds1.size.height) toView:nil];
poly1[3] = [view1 convertPoint:CGPointMake(bounds1.origin.x, bounds1.origin.y + bounds1.size.height) toView:nil];
CGPoint poly2[4];
CGRect bounds2 = view2.bounds;
poly2[0] = [view2 convertPoint:bounds2.origin toView:nil];
poly2[1] = [view2 convertPoint:CGPointMake(bounds2.origin.x + bounds2.size.width, bounds2.origin.y) toView:nil];
poly2[2] = [view2 convertPoint:CGPointMake(bounds2.origin.x + bounds2.size.width, bounds2.origin.y + bounds2.size.height) toView:nil];
poly2[3] = [view2 convertPoint:CGPointMake(bounds2.origin.x, bounds2.origin.y + bounds2.size.height) toView:nil];
return [self convexPolygon:poly1 count:4 intersectsWith:poly2 count:4];
}
Swift version. (Added this behaviour to UIView via an extension)
extension UIView {
func projection(of polygon: [CGPoint], perpendicularVector: CGPoint) -> (CGFloat, CGFloat) {
var minproj = CGFloat.greatestFiniteMagnitude
var maxproj = -CGFloat.greatestFiniteMagnitude
for j in 0..<polygon.count {
let proj = polygon[j].x * perpendicularVector.x + polygon[j].y * perpendicularVector.y
if proj > maxproj {
maxproj = proj
}
if proj < minproj {
minproj = proj
}
}
return (minproj, maxproj)
}
func convex(polygon: [CGPoint], intersectsWith polygon2: [CGPoint]) -> Bool {
//
let count1 = polygon.count
for i in 0..<count1 {
let p1 = polygon[i]
let p2 = polygon[(i+1) % count1]
let perpendicularVector = CGPoint(x: -(p2.y - p1.y), y: p2.x - p1.x)
let m1 = projection(of: polygon, perpendicularVector: perpendicularVector)
let minp1 = m1.0
let maxp1 = m1.1
let m2 = projection(of: polygon2, perpendicularVector: perpendicularVector)
let minp2 = m2.0
let maxp2 = m2.1
if maxp1 < minp2 || maxp2 < minp1 {
return false
}
}
//
let count2 = polygon2.count
for i in 0..<count2 {
let p1 = polygon2[i]
let p2 = polygon2[(i+1) % count2]
let perpendicularVector = CGPoint(x: -(p2.y - p1.y), y: p2.x - p1.x)
let m1 = projection(of: polygon, perpendicularVector: perpendicularVector)
let minp1 = m1.0
let maxp1 = m1.1
let m2 = projection(of: polygon2, perpendicularVector: perpendicularVector)
let minp2 = m2.0
let maxp2 = m2.1
if maxp1 < minp2 || maxp2 < minp1 {
return false
}
}
//
return true
}
func intersects(with someView: UIView) -> Bool {
//
var points1 = [CGPoint]()
let bounds1 = bounds
let p11 = convert(bounds1.origin, to: nil)
let p21 = convert(CGPoint(x: bounds1.origin.x + bounds1.size.width, y: bounds1.origin.y), to: nil)
let p31 = convert(CGPoint(x: bounds1.origin.x + bounds1.size.width, y: bounds1.origin.y + bounds1.size.height) , to: nil)
let p41 = convert(CGPoint(x: bounds1.origin.x, y: bounds1.origin.y + bounds1.size.height), to: nil)
points1.append(p11)
points1.append(p21)
points1.append(p31)
points1.append(p41)
//
var points2 = [CGPoint]()
let bounds2 = someView.bounds
let p12 = someView.convert(bounds2.origin, to: nil)
let p22 = someView.convert(CGPoint(x: bounds2.origin.x + bounds2.size.width, y: bounds2.origin.y), to: nil)
let p32 = someView.convert(CGPoint(x: bounds2.origin.x + bounds2.size.width, y: bounds2.origin.y + bounds2.size.height) , to: nil)
let p42 = someView.convert(CGPoint(x: bounds2.origin.x, y: bounds2.origin.y + bounds2.size.height), to: nil)
points2.append(p12)
points2.append(p22)
points2.append(p32)
points2.append(p42)
//
return convex(polygon: points1, intersectsWith: points2)
}
}
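With the extension in place, the check from the original question becomes a one-liner. A minimal usage sketch (imageView1 and imageView2 stand in for your own views):
if imageView1.intersects(with: imageView2) {
    // The two rotated views overlap: reject the move,
    // e.g. restore the previously saved position.
} else {
    // No overlap: commit the new position.
}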

Related

How to draw line chart with smooth curve in iOS?

I have some points like
(x1,y1), (x2,y2), (x3,y3)...
Now I want to draw a chart with a smooth curve.
I'm trying to draw it as below:
-(void)drawPrices
{
NSInteger count = self.prices.count;
UIBezierPath *path = [UIBezierPath bezierPath];
path.lineCapStyle = kCGLineCapRound;
for(int i=0; i<count-1; i++)
{
CGPoint controlPoint[2];
CGPoint p = [self pointWithIndex:i inData:self.prices];
if(i==0)
{
[path moveToPoint:p];
}
CGPoint nextPoint, previousPoint, m;
nextPoint = [self pointWithIndex:i+1 inData:self.prices];
previousPoint = [self pointWithIndex:i-1 inData:self.prices];
if(i > 0) {
m.x = (nextPoint.x - previousPoint.x) / 2;
m.y = (nextPoint.y - previousPoint.y) / 2;
} else {
m.x = (nextPoint.x - p.x) / 2;
m.y = (nextPoint.y - p.y) / 2;
}
controlPoint[0].x = p.x + m.x * 0.2;
controlPoint[0].y = p.y + m.y * 0.2;
// Second control point
nextPoint = [self pointWithIndex:i+2 inData:self.prices];
previousPoint = [self pointWithIndex:i inData:self.prices];
p = [self pointWithIndex:i + 1 inData:self.prices];
m = CGPointZero;
if(i < self.prices.count - 2) {
m.x = (nextPoint.x - previousPoint.x) / 2;
m.y = (nextPoint.y - previousPoint.y) / 2;
} else {
m.x = (p.x - previousPoint.x) / 2;
m.y = (p.y - previousPoint.y) / 2;
}
controlPoint[1].x = p.x - m.x * 0.2;
controlPoint[1].y = p.y - m.y * 0.2;
[path addCurveToPoint:p controlPoint1:controlPoint[0] controlPoint2:controlPoint[1]];
}
CAShapeLayer *lineLayer = [CAShapeLayer layer];
lineLayer.path = path.CGPath;
lineLayer.lineWidth = LINE_WIDTH;
lineLayer.strokeColor = _priceColor.CGColor;
lineLayer.fillColor = [UIColor clearColor].CGColor;
[self.layer addSublayer:lineLayer];
}
but in some situations the line will "go back", like this:
Is there any better way to do that?
I've got another solution for you, which I have used successfully.
It requires SpriteKit, which has a fantastic tool called SKKeyframeSequence that is capable of spline interpolation, as shown in this tutorial provided by Apple.
So the idea is that you create the SKKeyframeSequence object like this (assuming your data is in an array of (CGFloat, CGFloat) (x, y) tuples):
let xValues = data.map { $0.0 }
let yValues = data.map { $0.1 }
let sequence = SKKeyframeSequence(keyframeValues: yValues,
times: xValues.map { NSNumber(value: Double($0)) })
sequence.interpolationMode = .spline
let xMin = xValues.min()!
let xMax = xValues.max()!
Then, if you divide the interpolated spline into 200 pieces (change this value if you want; for me this resulted in a curve that looks smooth to the human eye), you can draw a path consisting of small line segments.
var splinedValues = [(CGFloat, CGFloat)]()
stride(from: xMin, to: xMax, by: (xMax - xMin) / 200).forEach {
splinedValues.append((CGFloat($0),
sequence.sample(atTime: CGFloat($0)) as! CGFloat))
}
Then finally the path (I will use SwiftUI, but you can use UIKit the same way too.):
Path { path in
path.move(to: CGPoint(x: splinedValues[0].0, y: splinedValues[0].1))
for splineValue in splinedValues.dropFirst() {
path.addLine(to: CGPoint(x: splineValue.0, y: splineValue.1))
}
}
For the values
[(1, 4), (2, 6), (3, 7), (4, 5), (5, 3), (6, -1), (7, -2), (8, -2.5), (9, -2), (10, 0), (11, 4)]
I've gotten a graph like this with the method described above: (I added the points too to evaluate the result better)
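As mentioned above, the same final drawing step can be done with UIKit instead of SwiftUI. A minimal sketch using the splinedValues array from the previous snippet (chartView is a placeholder for whatever view hosts the chart):
let path = UIBezierPath()
path.move(to: CGPoint(x: splinedValues[0].0, y: splinedValues[0].1))
for value in splinedValues.dropFirst() {
    path.addLine(to: CGPoint(x: value.0, y: value.1))
}

let lineLayer = CAShapeLayer()
lineLayer.path = path.cgPath
lineLayer.lineWidth = 2
lineLayer.strokeColor = UIColor.systemBlue.cgColor
lineLayer.fillColor = UIColor.clear.cgColor
chartView.layer.addSublayer(lineLayer)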
I found the answer at Draw Graph curves with UIBezierPath
and tried to implement it with this code:
+ (UIBezierPath *)quadCurvedPathWithPoints:(NSArray *)points
{
UIBezierPath *path = [UIBezierPath bezierPath];
NSValue *value = points[0];
CGPoint p1 = [value CGPointValue];
[path moveToPoint:p1];
if (points.count == 2) {
value = points[1];
CGPoint p2 = [value CGPointValue];
[path addLineToPoint:p2];
return path;
}
for (NSUInteger i = 1; i < points.count; i++) {
value = points[i];
CGPoint p2 = [value CGPointValue];
CGPoint midPoint = midPointForPoints(p1, p2);
[path addQuadCurveToPoint:midPoint controlPoint:controlPointForPoints(midPoint, p1)];
[path addQuadCurveToPoint:p2 controlPoint:controlPointForPoints(midPoint, p2)];
p1 = p2;
}
return path;
}
static CGPoint midPointForPoints(CGPoint p1, CGPoint p2) {
return CGPointMake((p1.x + p2.x) / 2, (p1.y + p2.y) / 2);
}
static CGPoint controlPointForPoints(CGPoint p1, CGPoint p2) {
CGPoint controlPoint = midPointForPoints(p1, p2);
CGFloat diffY = fabs(p2.y - controlPoint.y);
if (p1.y < p2.y)
controlPoint.y += diffY;
else if (p1.y > p2.y)
controlPoint.y -= diffY;
return controlPoint;
}
Thanks user1244109 ^_^.

How can I draw two lines with some angle between them, and later change the angle by dragging either line

I want to make an Ubersense-like app (http://blog.ubersense.com/2013/01/03/how-to-use-the-drawing-tools-in-ubersense/), where I need to draw two lines with some angle between them, and then adjust the angle by dragging either line or the intersection point.
Can you please give me some ideas or a code snippet?
screenshots url:
https://picasaweb.google.com/yunusm7/AppScreenshots#slideshow/5952787957718627714
Thanks in advance.
You have a construction with three points: one point is the angle's vertex, and the two others are just the endpoints of the lines. First of all you should create a new class like this:
@interface MyAngle : NSObject {
}
@property (nonatomic) CGPoint p1;
@property (nonatomic) CGPoint p2;
@property (nonatomic) CGPoint v; // this is the angle's vertex point
@end
You can use the default implementation of this without any tricks, with a simple init like this:
- (id)init {
if (self = [super init]) {
_p1 = CGPointMake(1,0);
_p2 = CGPointMake(0,1);
_v = CGPointZero;
}
return self;
}
Also, as I understood, you need to know the value of the angle. You can do this the following way:
- (CGFloat)valueOfAngle {
CGPoint v1 = CGPointMake(_p1.x-_v.x, _p1.y-_v.y);
CGPoint v2 = CGPointMake(_p2.x-_v.x, _p2.y-_v.y);
CGFloat scalarProduct = v1.x*v2.x + v1.y*v2.y;
CGFloat lengthProduct = sqrt(v1.x*v1.x + v1.y*v1.y)*sqrt(v2.x*v2.x + v2.y*v2.y);
CGFloat fraction = scalarProduct / lengthProduct;
if (fraction < -1) fraction = -1;
if (fraction > 1) fraction = 1;
return acos(fraction);
}
If you want to obtain angles of more than 180 degrees you will have to change the code above a little; a common way is to use atan2 of the cross and dot products of the two vectors, as sketched below.
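A minimal sketch of that approach, in Swift for brevity (assuming the same three points p1, p2 and the vertex v; the result is in the range 0 ..< 2π):
import UIKit

// Full angle at vertex v between the rays v->p1 and v->p2.
// atan2 of the 2D cross product and the dot product gives a signed angle.
func fullAngle(p1: CGPoint, p2: CGPoint, vertex v: CGPoint) -> CGFloat {
    let v1 = CGPoint(x: p1.x - v.x, y: p1.y - v.y)
    let v2 = CGPoint(x: p2.x - v.x, y: p2.y - v.y)
    let dot = v1.x * v2.x + v1.y * v2.y
    let cross = v1.x * v2.y - v1.y * v2.x
    var angle = atan2(cross, dot)      // signed angle in -π ... π
    if angle < 0 { angle += 2 * .pi }  // map to 0 ..< 2π
    return angle
}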
Then you need to create an instance of MyAngle in your view controller; let it be called "angle". Knowing the coordinates of all three points is enough to draw it (!!!). Implement the drawRect: method in the view that will contain the MyAngle instance (I strongly recommend doing this in your own subclass of UIView):
- (void)drawRect:(CGRect)rect {
[super drawRect:rect];
// set options of drawing
CGContextRef c = UIGraphicsGetCurrentContext();
CGContextSetLineWidth(c, 3.0);
CGContextSetRGBStrokeColor(c, 1.0f, 0.0f, 0.0f, 1.0f);
// draw an angle directly
CGContextBeginPath(c);
CGContextMoveToPoint(c, angle.p1.x, angle.p1.y);
CGContextAddLineToPoint(c, angle.v.x, angle.v.y);
CGContextAddLineToPoint(c, angle.p2.x, angle.p2.y);
CGContextStrokePath(c);
// draw circles around vertices (like on the screenshot you provided)
CGFloat R = 7.0f;
CGContextBeginPath(c);
CGContextAddEllipseInRect(c, CGRectMake(angle.p1.x - R, angle.p1.y - R, 2*R, 2*R));
CGContextStrokePath(c);
CGContextBeginPath(c);
CGContextAddEllipseInRect(c, CGRectMake(angle.p2.x - R, angle.p2.y - R, 2*R, 2*R));
CGContextStrokePath(c);
CGContextBeginPath(c);
CGContextAddEllipseInRect(c, CGRectMake(angle.v.x - R, angle.v.y - R, 2*R, 2*R));
CGContextStrokePath(c);
}
And that's all you need to know to draw what you want! You can change the stroke color or the radius of the three circles if you like.
Then you need a way to change the locations of your angle's points. For this you can add a UIPanGestureRecognizer in your view controller's viewDidLoad method like this:
UIPanGestureRecognizer *pan = [[[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(moveAngle:)] autorelease];
pan.delegate = self;
[self.view addGestureRecognizer:pan];
Implement UIGestureRecognizerDelegate method:
- (BOOL)gestureRecognizerShouldBegin:(UIGestureRecognizer *)gestureRecognizer {
if ([gestureRecognizer isKindOfClass:[UIPanGestureRecognizer class]]) {
CGPoint p = [gestureRecognizer locationInView:self.view];
CGFloat d1 = sqrt((p.x-angle.p1.x)*(p.x-angle.p1.x) + (p.y-angle.p1.y)*(p.y-angle.p1.y));
CGFloat d2 = sqrt((p.x-angle.p2.x)*(p.x-angle.p2.x) + (p.y-angle.p2.y)*(p.y-angle.p2.y));
CGFloat d3 = sqrt((p.x-angle.v.x)*(p.x-angle.v.x) + (p.y-angle.v.y)*(p.y-angle.v.y));
// just check if we touched the screen near some of angle's points
CGFloat tolerance = 15.0f;
return (d1 < tolerance) || (d2 < tolerance) || (d3 < tolerance);
}
return YES;
}
and the target's selector (also in your view controller):
- (void)moveAngle:(UIPanGestureRecognizer *)gr {
CGPoint p = [gr locationInView:self.view];
if (gr.state == UIGestureRecognizerStateBegan) {
CGFloat d1 = sqrt((p.x-angle.p1.x)*(p.x-angle.p1.x) + (p.y-angle.p1.y)*(p.y-angle.p1.y));
CGFloat d2 = sqrt((p.x-angle.p2.x)*(p.x-angle.p2.x) + (p.y-angle.p2.y)*(p.y-angle.p2.y));
CGFloat d3 = sqrt((p.x-angle.v.x)*(p.x-angle.v.x) + (p.y-angle.v.y)*(p.y-angle.v.y));
// pointToMove is your int variable
CGFloat tolerance = 15.0f;
if (d1 < tolerance) {
pointToMove = 1;
}
else if (d2 < tolerance) {
pointToMove = 2;
}
else {
pointToMove = 3;
}
}
else {
if (pointToMove == 1) {
angle.p1 = p;
}
else if (pointToMove == 2) {
angle.p2 = p;
}
else {
angle.v = p;
}
[yourCustomView setNeedsDisplay];
[yourLabel setText:[NSString stringWithFormat:@"%.3f", [angle valueOfAngle]*180/M_PI]];
}
}
Maybe I skipped some obvious things, but I think this should be enough for you to start writing some code.

How does CALayer convert point from and to its sublayers?

Let's assume we have a 2D space (to simplify the situation), and a layer S and a layer C, where C is a sublayer of S.
The conversion process must take into account the bounds and position of C, the transform of C, the sublayerTransform of S, and the anchorPoint of C. My guess was the following:
CGAffineTransform transformToChild(CALayer *S, CALayer *C) {
CGFloat txa = - C.bounds.origin.x - C.bounds.size.width * C.anchorPoint.x;
CGFloat tya = - C.bounds.origin.y - C.bounds.size.height * C.anchorPoint.y;
CGFloat txb = C.position.x;
CGFloat tyb = C.position.y;
CGAffineTransform sublayerTransform = CATransform3DGetAffineTransform(S.sublayerTransform);
CGAffineTransform fromS = CGAffineTransformTranslate(sublayerTransform, txb, tyb);
fromS = CGAffineTransformConcat(fromS, C.affineTransform);
fromS = CGAffineTransformTranslate(fromS, txa, tya);
return fromS;
}
But this does not work when the transform of the child layer is not the identity (e.g. in the case of a rotation by M_PI_2).
The whole code with the layers:
CALayer *l1 = [CALayer new];
l1.frame = CGRectMake(-40, -40, 80, 80);
l1.bounds = CGRectMake(40, 40, 80, 80);
CALayer *l2 = [CALayer new];
l2.frame = CGRectMake(50, 40, 20, 20);
l2.bounds = CGRectMake(40, 40, 20, 20);
CGAffineTransform t2 = CGAffineTransformMakeRotation(M_PI / 2);
l2.affineTransform = t2;
[l1 addSublayer:l2];
CGAffineTransform toL2 = transformToChild(l1, l2);
CGPoint p = CGPointApplyAffineTransform(CGPointMake(70, 50), toL2);
NSLog(#"Custom Point %#", [NSValue valueWithCGPoint:p]);
p = [l1 convertPoint:CGPointMake(70, 50) toLayer:l2];
NSLog(#"CoreAnimation Point %#", [NSValue valueWithCGPoint:p]);
Comparison to system results:
Custom Point NSPoint: {-50, 80}
CoreAnimation Point NSPoint: {50, 40}
There's an old mailing list thread with some details about this here:
http://lists.apple.com/archives/quartz-dev/2008/Mar/msg00086.html
http://lists.apple.com/archives/quartz-dev/2008/Mar/msg00087.html
Those messages are quite old, so e.g. they don't include the effects of the geometryFlipped property, which was added more recently, but that would just add another term to the merged matrix.
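Putting the pieces from those threads together, a corrected single-matrix version of the question's transformToChild could look like the Swift sketch below. It ignores sublayerTransform and geometryFlipped, but it reproduces the CoreAnimation result for the example above:
import UIKit

// Builds the affine transform that maps points from a superlayer's
// coordinate space into the coordinate space of `child`.
// Sketch only: sublayerTransform and geometryFlipped are not handled.
func transformToChild(_ child: CALayer) -> CGAffineTransform {
    // The anchor point expressed in the child's own (bounds) coordinates.
    let anchor = CGPoint(
        x: child.bounds.origin.x + child.bounds.width * child.anchorPoint.x,
        y: child.bounds.origin.y + child.bounds.height * child.anchorPoint.y)
    // 1. Move the child's position to the origin,
    // 2. undo the child's own transform,
    // 3. move the origin to the child's anchor point in bounds coordinates.
    var t = CGAffineTransform(translationX: -child.position.x, y: -child.position.y)
    t = t.concatenating(child.affineTransform().inverted())
    t = t.concatenating(CGAffineTransform(translationX: anchor.x, y: anchor.y))
    return t
}

// With the layers from the question, CGPoint(x: 70, y: 50).applying(transformToChild(l2))
// yields (50, 40), matching [l1 convertPoint:CGPointMake(70, 50) toLayer:l2].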
So, I found the following way of converting points to and from a sublayer's coordinate space, which works correctly with a sublayer's non-identity transform. The sublayerTransform of the superlayer is not covered here, but I think it would not be hard to extend these functions to support it.
CGPoint pointToChild(CALayer *C, CGPoint p) {
CGFloat txa = - C.bounds.origin.x - C.bounds.size.width * C.anchorPoint.x;
CGFloat tya = - C.bounds.origin.y - C.bounds.size.height * C.anchorPoint.y;
CGFloat txb = C.position.x;
CGFloat tyb = C.position.y;
p.x -= txb;
p.y -= tyb;
p = CGPointApplyAffineTransform(p, CGAffineTransformInvert(C.affineTransform));
if (C.isGeometryFlipped) {
CGAffineTransform flip = CGAffineTransformMakeScale(1.0f, -1.0f);
flip = CGAffineTransformTranslate(flip, 0, C.bounds.size.height * (2.0f * C.anchorPoint.y - 1.0f));
p = CGPointApplyAffineTransform(p, CGAffineTransformInvert(flip));
}
p.x -= txa;
p.y -= tya;
return p;
}
CGPoint pointFromChild(CALayer *C, CGPoint p) {
CGFloat txb = - C.bounds.origin.x - C.bounds.size.width * C.anchorPoint.x;
CGFloat tyb = - C.bounds.origin.y - C.bounds.size.height * C.anchorPoint.y;
CGFloat txa = C.position.x;
CGFloat tya = C.position.y;
p.x += txb;
p.y += tyb;
if (C.isGeometryFlipped) {
CGAffineTransform flip = CGAffineTransformMakeScale(1.0f, -1.0f);
flip = CGAffineTransformTranslate(flip, 0, C.bounds.size.height * (2.0f * C.anchorPoint.y - 1.0f));
p = CGPointApplyAffineTransform(p, flip);
}
p = CGPointApplyAffineTransform(p, C.affineTransform);
p.x += txa;
p.y += tya;
return p;
}

How to discover if 2 rotated CGPath rects intersect?

I am building 2 rects using CGPath when the view controller is loaded. The rects can be moved with a UIPanGestureRecognizer. The question is: how can I know when the 2 rects have met, in order to not let them intersect?
MainViewController.m
- (void)viewDidLoad
{
[super viewDidLoad];
for (int i=0; i< 2; i++) {
//create rects of CGPath
CGRectCustomView* randomColorRect =
[[CGRectCustomView alloc]initWithFrame:
CGRectMake(<random place on screen>)];
//random angle
randomColorRect.transform =
CGAffineTransformMakeRotation
(DegreesToRadians([Shared randomIntBetween:0 and:360]));
[self.view addSubview:randomColorRect];
}
}
- (BOOL)areRectsCollide {
???How to find this???
}
CGRectCustomView.m:
- (void)drawRect:(CGRect)rect
{
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetLineWidth(context, 8.0);
CGContextStrokePath(context); // do actual stroking
CGContextSetRGBFillColor(context, <green color>, 1);
CGContextFillRect(context, CGRectMake(self.frame.origin.x, self.frame.origin.y, self.frame.size.width, self.frame.size.height));
path = CGContextCopyPath(context);
}
In the Apple guide here, there is a function that determines whether a path contains a point,
- (BOOL)containsPoint:(CGPoint)point onPath:(UIBezierPath *)path inFillArea:(BOOL)inFil,
but I have a rectangle, which is an endless number of points. So what should I do? I'm breaking my head over this...
Found it!
I used the algorithm described here.
Just one thing: I move the rects around on the screen, so in order for them not to get stuck after the first collision, I save the last non-colliding point, and if a collision occurs, I restore the last location.
- (void)handlePan:(UIPanGestureRecognizer *)recognizer {
BOOL rectsColide = NO;
for (RandomColorRect* buttonRectInStoredRects in arreyOfPaths) {
if (buttonRectInStoredRects.tag != recognizer.view.tag) {
if ([self view:buttonRectInStoredRects intersectsWith:recognizer.view]) {
rectsColide = YES;
}
}
}
CGPoint translation = [recognizer translationInView:self.view];
if (!rectsColide) {
lastPoint = recognizer.view.center;
recognizer.view.center = CGPointMake(recognizer.view.center.x + translation.x,
recognizer.view.center.y + translation.y);
}else{
recognizer.view.center = CGPointMake(lastPoint.x ,lastPoint.y);
}
[recognizer setTranslation:CGPointMake(0, 0) inView:self.view];
}
- (void)projectionOfPolygon:(CGPoint *)poly count:(int)count onto:(CGPoint)perp min:(CGFloat *)minp max:(CGFloat *)maxp
{
CGFloat minproj = MAXFLOAT;
CGFloat maxproj = -MAXFLOAT;
for (int j = 0; j < count; j++) {
CGFloat proj = poly[j].x * perp.x + poly[j].y * perp.y;
if (proj > maxproj)
maxproj = proj;
if (proj < minproj)
minproj = proj;
}
*minp = minproj;
*maxp = maxproj;
}
-(BOOL)convexPolygon:(CGPoint *)poly1 count:(int)count1 intersectsWith:(CGPoint *)poly2 count:(int)count2
{
for (int i = 0; i < count1; i++) {
// Perpendicular vector for one edge of poly1:
CGPoint p1 = poly1[i];
CGPoint p2 = poly1[(i+1) % count1];
CGPoint perp = CGPointMake(- (p2.y - p1.y), p2.x - p1.x);
// Projection intervals of poly1, poly2 onto perpendicular vector:
CGFloat minp1, maxp1, minp2, maxp2;
[self projectionOfPolygon:poly1 count:count1 onto:perp min:&minp1 max:&maxp1];
[self projectionOfPolygon:poly2 count:count2 onto:perp min:&minp2 max:&maxp2];
// If projections do not overlap then we have a "separating axis"
// which means that the polygons do not intersect:
if (maxp1 < minp2 || maxp2 < minp1)
return NO;
}
// And now the other way around with edges from poly2:
for (int i = 0; i < count2; i++) {
CGPoint p1 = poly2[i];
CGPoint p2 = poly2[(i+1) % count2];
CGPoint perp = CGPointMake(- (p2.y - p1.y), p2.x - p1.x);
CGFloat minp1, maxp1, minp2, maxp2;
[self projectionOfPolygon:poly1 count:count1 onto:perp min:&minp1 max:&maxp1];
[self projectionOfPolygon:poly2 count:count2 onto:perp min:&minp2 max:&maxp2];
if (maxp1 < minp2 || maxp2 < minp1)
return NO;
}
// No separating axis found, then the polygons must intersect:
return YES;
}
- (BOOL)view:(UIView *)view1 intersectsWith:(UIView *)view2
{
CGPoint poly1[4];
CGRect bounds1 = view1.bounds;
poly1[0] = [view1 convertPoint:bounds1.origin toView:nil];
poly1[1] = [view1 convertPoint:CGPointMake(bounds1.origin.x + bounds1.size.width, bounds1.origin.y) toView:nil];
poly1[2] = [view1 convertPoint:CGPointMake(bounds1.origin.x + bounds1.size.width, bounds1.origin.y + bounds1.size.height) toView:nil];
poly1[3] = [view1 convertPoint:CGPointMake(bounds1.origin.x, bounds1.origin.y + bounds1.size.height) toView:nil];
CGPoint poly2[4];
CGRect bounds2 = view2.bounds;
poly2[0] = [view2 convertPoint:bounds2.origin toView:nil];
poly2[1] = [view2 convertPoint:CGPointMake(bounds2.origin.x + bounds2.size.width, bounds2.origin.y) toView:nil];
poly2[2] = [view2 convertPoint:CGPointMake(bounds2.origin.x + bounds2.size.width, bounds2.origin.y + bounds2.size.height) toView:nil];
poly2[3] = [view2 convertPoint:CGPointMake(bounds2.origin.x, bounds2.origin.y + bounds2.size.height) toView:nil];
return [self convexPolygon:poly1 count:4 intersectsWith:poly2 count:4];
}
If I understand your question, then in order to check whether 2 CGRects meet you had better use:
/* Return the intersection of `r1' and `r2'. This may return a null rect. */
CG_EXTERN CGRect CGRectIntersection(CGRect r1, CGRect r2)
For example:
CGRect rect1 = CGRectMake(0, 0, 10, 10);
CGRect rect2 = CGRectMake(5, 5, 10, 10);
CGRect rect3 = CGRectMake(20, 20, 10, 10);
CGRect r1 = CGRectIntersection(rect1, rect2); // Returns a CGRect with (0,0,5,5)
CGRect r2 = CGRectIntersection(rect1, rect3); // Returns a CGRect with (Inf,Inf,0,0)
if (CGSizeEqualToSize(r2.size,CGSizeZero)) {
// rect1 rect3 Intersects at CGSizeZero == Do not Intersect
}
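In Swift, the equivalent check uses the CGRect instance methods. Keep in mind that, like CGRectIntersection, this only works for axis-aligned rects and ignores any rotation applied via a transform:
let rect1 = CGRect(x: 0, y: 0, width: 10, height: 10)
let rect3 = CGRect(x: 20, y: 20, width: 10, height: 10)

if rect1.intersection(rect3).isNull {
    // rect1 and rect3 do not intersect
}
// Or, when you only need a yes/no answer:
if !rect1.intersects(rect3) {
    // rect1 and rect3 do not intersect
}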

Turn two CGPoints into a CGRect

How can I, given two different CGPoints, turn them into a CGRect?
Example:
CGPoint p1 = CGPointMake(0,10);
CGPoint p2 = CGPointMake(10,0);
How can I turn this into a CGRect?
This will take two arbitrary points and give you the CGRect that has them as opposite corners.
CGRect r = CGRectMake(MIN(p1.x, p2.x),
MIN(p1.y, p2.y),
fabs(p1.x - p2.x),
fabs(p1.y - p2.y));
The smaller x value paired with the smaller y value will always be the origin of the rect (first two arguments). The absolute value of the difference between x values will be the width, and between y values the height.
A slight modification of Ken's answer. Let CGGeometry "standardize" the rect for you.
CGRect rect = CGRectStandardize(CGRectMake(p1.x, p1.y, p2.x - p1.x, p2.y - p1.y));
Swift extension:
extension CGRect {
init(p1: CGPoint, p2: CGPoint) {
self.init(x: min(p1.x, p2.x),
y: min(p1.y, p2.y),
width: abs(p1.x - p2.x),
height: abs(p1.y - p2.y))
}
}
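For example, with the two points from the question:
let rect = CGRect(p1: CGPoint(x: 0, y: 10), p2: CGPoint(x: 10, y: 0))
// rect is (x: 0, y: 0, width: 10, height: 10)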
Assuming p1 is the origin and the other point is the opposite corner of a rectangle, you could do this:
CGRect rect = CGRectMake(p1.x, p1.y, fabs(p2.x-p1.x), fabs(p2.y-p1.y));
This function takes any number of CGPoints and gives you the smallest CGRect back.
CGRect CGRectSmallestWithCGPoints(CGPoint pointsArray[], int numberOfPoints)
{
CGFloat greatestXValue = pointsArray[0].x;
CGFloat greatestYValue = pointsArray[0].y;
CGFloat smallestXValue = pointsArray[0].x;
CGFloat smallestYValue = pointsArray[0].y;
for(int i = 1; i < numberOfPoints; i++)
{
CGPoint point = pointsArray[i];
greatestXValue = MAX(greatestXValue, point.x);
greatestYValue = MAX(greatestYValue, point.y);
smallestXValue = MIN(smallestXValue, point.x);
smallestYValue = MIN(smallestYValue, point.y);
}
CGRect rect;
rect.origin = CGPointMake(smallestXValue, smallestYValue);
rect.size.width = greatestXValue - smallestXValue;
rect.size.height = greatestYValue - smallestYValue;
return rect;
}
This will return a rect of width or height 0 if the two points lie on the same vertical or horizontal line:
float x,y,h,w;
if (p1.x > p2.x) {
x = p2.x;
w = p1.x-p2.x;
} else {
x = p1.x;
w = p2.x-p1.x;
}
if (p1.y > p2.y) {
y = p2.y;
h = p1.y-p2.y;
} else {
y = p1.y;
h = p2.y-p1.y;
}
CGRect newRect = CGRectMake(x,y,w,h);
let r0 = CGRect(origin: p0, size: .zero)
let r1 = CGRect(origin: p1, size: .zero)
let rect = r0.union(r1).standardized
