Detect if UIButtons overlap - iOS

I have two UIButtons on my view, and I am trying to detect whether the two overlap, in order to do:
if (overlap)
move the second button
I have tried this:
if (BluetoothDeviceButton.X1 < btn.X2 && BluetoothDeviceButton.X2 > btn.X1 &&
    BluetoothDeviceButton.Y1 < btn.Y2 && BluetoothDeviceButton.Y2 > btn.Y1) {
}
I can't really work out what I should put in place of X1, X2, etc., and I don't know whether this approach will work at all.

CGRectIntersectsRect(CGRect rect1, CGRect rect2) will tell you if their frames overlap.
if (CGRectIntersectsRect(btn.frame, BluetoothDeviceButton.frame)) {
    ...
}
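A minimal sketch of the same check in modern Swift, where it is available as CGRect's intersects method (the button names are hypothetical; this assumes both buttons share the same superview, and the 8-point offset is an arbitrary choice):
if bluetoothDeviceButton.frame.intersects(btn.frame) {
    // Overlap detected: shift the second button just below the first
    btn.frame.origin.y = bluetoothDeviceButton.frame.maxY + 8
}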

You need to use BluetoothDeviceButton.frame.origin.x or BluetoothDeviceButton.center.x.

First, make sure they are in the same view. If not, convert their frames using convertRect:toView: or convertRect:fromView:. Then use the CGRectContainsPoint function from CGGeometry and check whether any corner of one button lies inside the frame of the other button.
PS. Your edges will be at:
CGFloat x1 = button1.frame.origin.x;
CGFloat y1 = button1.frame.origin.y;
CGFloat x2 = button1.frame.origin.x + button1.frame.size.width;
CGFloat y2 = button1.frame.origin.y + button1.frame.size.height;
Your corners will be:
CGPoint topLeft = CGPointMake(x1,y1);
CGPoint topRight = CGPointMake(x2,y1);
CGPoint bottomLeft = CGPointMake(x1,y2);
CGPoint bottomRight = CGPointMake(x2,y2);
This is just one possible solution, meant to help with understanding the geometry.
But the simplest solution would be CGRectIntersectsRect(button1.frame, button2.frame);
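If the two buttons do not share a superview, their frames must first be brought into one coordinate space. A hedged sketch of that conversion in modern Swift (view names hypothetical):
// Convert btn's frame into the coordinate space of the other button's superview
if let container = bluetoothDeviceButton.superview, let other = btn.superview {
    let converted = other.convert(btn.frame, to: container)
    if bluetoothDeviceButton.frame.intersects(converted) {
        // the buttons overlap
    }
}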

Use BluetoothDeviceButton.frame.origin.x; the frame property also contains the size.

Related

Determine if image being dragged is inside a specific area

I have a game where I'm moving square blocks on a top layer over circles underneath, which are non-movable. When the dragging of a block ends, I want to run a check, or an if statement, to see whether the block I'm moving (myBlocks[objectDragging]) is within x pixels of the center of my circle (myCircles[objectDragging]). objectDragging is just the tag of the image clicked; the matchable circle has the same tag. Everything is working fine, I just cannot figure out how to check whether the center point of the block I'm dropping is within so many pixels of the circle's center point.
Some of what I'm working with:
var myBlocks = [UIImageView]()
var myCircles = [UIImageView]()
let objectDragging = recognizer.view?.tag
if myBlocks[objectDragging!].center.x == myCircles[objectDragging!].center.x {
    ...
}
// This checks for an exact match of center.x, whereas I want to check
// whether the center.x of myBlocks[objectDragging!] is within, say,
// 25 pixels of myCircles[objectDragging!].center.x
There's a discussion here on finding the distance between two CGPoints:
How to find the distance between two CG points?
per Lucius (answer 2)
You can use the hypot() or hypotf() function to calculate the
hypotenuse. Given two points p1 and p2:
CGFloat distance = hypotf(p1.x - p2.x, p1.y - p2.y);
Sub in your myBlocks.center and myCircles.center for p1 and p2, and then:
if distance < 25 {
...
}
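Putting that together for this question, a minimal sketch in modern Swift; the 25-pixel threshold is the value from the question, and the array and recognizer names are taken from the code above:
if let tag = recognizer.view?.tag {
    let p1 = myBlocks[tag].center
    let p2 = myCircles[tag].center
    // hypot returns the straight-line distance between the two centers
    let distance = hypot(p1.x - p2.x, p1.y - p2.y)
    if distance < 25 {
        // close enough: treat the block and circle as matched
    }
}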

Scaling a UIView along one direction with UIPinchGestureRecognizer

In my app I have a draggable UIView to which I have added a UIPinchGestureRecognizer. Following this tutorial, by default the view is scaled along both the x and y directions. Is it possible to scale along only one direction using one finger? For example, I tap the UIView with both thumb and index finger as usual, but while holding the thumb still I move only the index finger in one direction, and the UIView should then scale along that dimension, depending on the direction in which the index finger moves.
Actually, I have achieved this by adding pins to my UIView.
I was just thinking my app might be easier to use with UIPinchGestureRecognizer.
I hope I have explained myself; if you have hints, tutorials, or documentation to link me to, I would be very grateful.
In CGAffineTransformScale(view.transform, recognizer.scale, recognizer.scale), update only one axis (x or y) with recognizer.scale and keep the other at 1.0f:
CGAffineTransformMakeScale(CGFloat sx, CGFloat sy)
To know which axis to scale, you need a little math: take the angle between the two fingers of the gesture and decide from that angle which axis to scale.
Maybe something like:
enum Axis {
    case X
    case Y
}

func axisFromPoints(p1: CGPoint, _ p2: CGPoint) -> Axis {
    let absolutePoint = CGPointMake(p2.x - p1.x, p2.y - p1.y)
    let radians = atan2(Double(absolutePoint.x), Double(absolutePoint.y))
    let absRad = fabs(radians)
    if absRad > M_PI_4 && absRad < 3*M_PI_4 {
        return .X
    } else {
        return .Y
    }
}
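A hedged sketch of using that decision inside the pinch handler, in modern Swift (the handler name and setup are hypothetical). It inlines the same angle test rather than calling the Swift 2 helper above:
@objc func handlePinch(_ recognizer: UIPinchGestureRecognizer) {
    guard let view = recognizer.view, recognizer.numberOfTouches >= 2 else { return }
    let p1 = recognizer.location(ofTouch: 0, in: view)
    let p2 = recognizer.location(ofTouch: 1, in: view)
    // Angle measured from the vertical axis, as in axisFromPoints above
    let absRad = abs(atan2(Double(p2.x - p1.x), Double(p2.y - p1.y)))
    let horizontal = absRad > .pi / 4 && absRad < 3 * .pi / 4
    // Scale only the detected axis and keep the other one at 1.0
    view.transform = horizontal
        ? view.transform.scaledBy(x: recognizer.scale, y: 1.0)
        : view.transform.scaledBy(x: 1.0, y: recognizer.scale)
    recognizer.scale = 1.0
}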

How to attach sprites that collide?

I essentially want the "sprites" to stick together when they collide. However, I don't want the "joint" to be rigid; the sprites should be able to move around as long as they are in contact with each other. Imagine two circles connected, where you can move one circle around the other as long as it remains in contact.
I found this question: How to make one body stick to another moving object in SpriteKit, and a lot of other resources that explain how to make sprites stick upon collision, but they all use SKJoints, which are rigid and not really flexible.
I guess another way to phrase it would be to say that I want the sprites to stick, but I want them to be able to "slide" on each other.
Well, I can think of one workaround, but it wouldn't work with irregular polygons.
Sticking (pun unintended) with your circles example, what if you lock the distance of the movable circle from the center one?
let circle1 = center circle
let circle2 = movable circle
Knowing the width of both circles, you can enforce in the update function that the distance between their positions is exactly:
((circle1.frame.width / 2) + (circle2.frame.width / 2))
If you're up to it, here's some code to help you on your way.
override func update(currentTime: CFTimeInterval) {
    // Straight-line distance between the two circle centers
    let distance = CGFloat(hypotf(Float(circle1.position.x - circle2.position.x),
                                  Float(circle1.position.y - circle2.position.y)))
    // Sum of the two radii: the distance the centers should keep
    let radius = ((circle1.frame.width / 2) + (circle2.frame.width / 2))
    if distance != radius {
        // The gap has drifted, so re-seat the movable circle.
        // Angle from the center circle to the movable circle:
        let angle = atan2(circle2.position.y - circle1.position.y,
                          circle2.position.x - circle1.position.x)
        // Convert the angle into a unit vector
        let vectorx = cos(angle)
        let vectory = sin(angle)
        // Project the movable circle back onto the contact ring
        let x = circle1.position.x + radius * vectorx
        let y = circle1.position.y + radius * vectory
        circle2.position = CGPointMake(x, y)
    }
}
You still need to write the code that makes the movable circle, well, movable.
But this should work.
I haven't tested it yet, though, and I haven't even learned geometry, let alone trig, in school yet.
If I'm reading your question as you intended it, you can still use joints: just create actions with inverse kinematics constraints that allow rotation and translation around the contacting circles' joint.
https://developer.apple.com/library/prerelease/ios/documentation/SpriteKit/Reference/SKAction_Ref/index.html#//apple_ref/doc/uid/TP40013017-CH1-SW72
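Another joint-based option worth noting, plainly a different technique from the IK-constraint idea above: SKPhysicsJointPin lets the attached bodies rotate freely around the anchor, which sounds close to the "slide while staying in contact" behavior. A minimal sketch in modern Swift (node names hypothetical; assumes both circles are direct children of the scene and already have physics bodies):
// Pin the movable circle to the fixed circle's center; it can then
// swing around the anchor while keeping a constant distance.
let pin = SKPhysicsJointPin.joint(withBodyA: circle1.physicsBody!,
                                  bodyB: circle2.physicsBody!,
                                  anchor: circle1.position)
scene.physicsWorld.add(pin)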

Ensure arbitrarily rotated CGRect fills another when rotation occurs

Update: partially working implementation below.
I've asked a couple questions on this previously here and here.
The first works great to determine if the "image" rect is sufficiently contained inside the "crop" rect.
The second works a little bit, but something's off in my implementation, so it doesn't really work.
I'm now looking at the problem a little differently, and would like to change the behavior:
When the user begins to rotate the image, I'll run the check method (below) to determine if it needs fixing or not.
If it does need fixing, rather than waiting until the user has finished rotating it, I'd like to resize the image simultaneously to fit the bounds. Is there a simpler (or more reliable) way to implement this behavior?
I'm going to block rotation greater than 35° in either direction so that we don't have to worry about severe enlargements.
Assumptions/Constraints:
I'm using AutoLayout
Point of rotation will be the center of the crop rect, which may or may not be the center of the image rect.
This demonstrates it working with a square crop, but the user can resize it to whatever, so I imagine it's going to bite me in the ass even more so when it's not square.
Code:
- (BOOL)rotatedView:(UIView*)rotatedView containsViewCompletely:(UIView*)cropView {
    // If this method returns YES, good! If NO, bad!
    CGPoint cropRotated[4];
    CGRect rotatedBounds = rotatedView.bounds;
    CGRect cropBounds = cropView.bounds;
    // Convert corner points of cropView to the coordinate system of rotatedView:
    cropRotated[0] = [cropView convertPoint:cropBounds.origin toView:rotatedView];
    cropRotated[1] = [cropView convertPoint:CGPointMake(cropBounds.origin.x + cropBounds.size.width, cropBounds.origin.y) toView:rotatedView];
    cropRotated[2] = [cropView convertPoint:CGPointMake(cropBounds.origin.x + cropBounds.size.width, cropBounds.origin.y + cropBounds.size.height) toView:rotatedView];
    cropRotated[3] = [cropView convertPoint:CGPointMake(cropBounds.origin.x, cropBounds.origin.y + cropBounds.size.height) toView:rotatedView];
    // Check that all converted points are within the bounds of rotatedView:
    return (CGRectContainsPoint(rotatedBounds, cropRotated[0]) &&
            CGRectContainsPoint(rotatedBounds, cropRotated[1]) &&
            CGRectContainsPoint(rotatedBounds, cropRotated[2]) &&
            CGRectContainsPoint(rotatedBounds, cropRotated[3]));
}
Taking yet another spin on this, I'm getting there. But as you can see in the gif, eventually the calculations get out of whack, because the rotation delta really isn't what I should be using to calculate the new size. How can I implement this with the correct geometry to ensure the image always resizes correctly? To save time, I put this into an Xcode project so you don't have to fiddle around building your own: https://github.com/r3mus/RotationCGRectFix.git
- (IBAction)gestureRecognized:(UIRotationGestureRecognizer *)gesture {
    CGFloat maxRotation = 40;
    CGFloat rotation = gesture.rotation;
    CGFloat currentRotation = atan2f(_imageView.transform.b, _imageView.transform.a);
    NSLog(@"%0.4f", RADIANS_TO_DEGREES(rotation));
    if ((currentRotation > DEGREES_TO_RADIANS(maxRotation) && rotation > 0) ||
        (currentRotation < DEGREES_TO_RADIANS(-maxRotation) && rotation < 0)) {
        return;
    }
    CGAffineTransform rotationTransform = CGAffineTransformRotate(self.imageView.transform, rotation);
    gesture.rotation = 0.0f;
    if (gesture.state == UIGestureRecognizerStateChanged) {
        CGFloat scale = sqrt(_imageView.transform.a * _imageView.transform.a + _imageView.transform.c * _imageView.transform.c);
        if ((currentRotation > 0 && rotation > 0) || (currentRotation < 0 && rotation < 0))
            scale = 1 + fabs(rotation);
        else if (currentRotation == 0)
            scale = 1;
        else
            scale = 1 - fabs(rotation);
        CGAffineTransform sizeTransform = CGAffineTransformMakeScale(scale, scale);
        CGPoint center = _imageView.center;
        _imageView.transform = CGAffineTransformConcat(rotationTransform, sizeTransform);
        _imageView.center = center;
    }
}
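As an aside, a geometry-based alternative for the scale factor: instead of accumulating it from the gesture's rotation delta, compute the minimum scale at which the rotated image still covers the crop rect. A hedged sketch in Swift (assumes the rotation point is the crop rect's center and the image is centered on it; the function and parameter names are hypothetical):
// Minimum scale so an image of size imageSize, rotated by angle (radians),
// still covers a crop rect of size cropSize centered at the rotation point.
func coverScale(imageSize: CGSize, cropSize: CGSize, angle: CGFloat) -> CGFloat {
    let c = abs(cos(angle))
    let s = abs(sin(angle))
    // Axis-aligned bounding box of the crop rect in the image's rotated frame
    let neededWidth = cropSize.width * c + cropSize.height * s
    let neededHeight = cropSize.width * s + cropSize.height * c
    return max(neededWidth / imageSize.width, neededHeight / imageSize.height)
}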

Starling iOS touch THEN drag centres object to "finger touch"

I've added a Quad to the stage, which I have made draggable.
However, when I touch the draggable object and start to drag it, the object gets "centered" on my finger (the x and y coordinates of the touch), as if the center of the Quad snaps to my touch point. So touching a corner of the Quad makes it snap to the touch point, shifting slightly.
So what I really want to know is: is it possible to grab and drag an object without having it re-center on the actual touch point, i.e. so that I can drag the object while touching one of the corners of the square Quad?
My code can be seen below:
public function Game() {
    addEventListener(Event.ADDED_TO_STAGE, onAdded);
}

private function onAdded(e:Event):void {
    var q:Quad = new Quad(200, 200);
    q.x = 100;
    q.y = 100;
    q.addEventListener(TouchEvent.TOUCH, touchHandler);
    addChild(q);
}

private function touchHandler(e:TouchEvent):void {
    var touch:Touch = e.getTouch(stage);
    var position:Point = touch.getLocation(stage);
    var target:Quad = e.target as Quad;
    if (touch.phase == TouchPhase.MOVED) {
        target.x = position.x - target.width/2;
        target.y = position.y - target.height/2;
        trace("x:" + target.x + " y:" + target.y);
    }
}
Cheers
Of course you can. It's normal for it to snap when you do this:
target.x = position.x - target.width/2;
target.y = position.y - target.height/2;
The thing to do here is to save the location at which the object was touched. That way you can calculate the difference between its origin point and the actual point where you touched it. Let's say you have a square of 100x100, its origin point is at 0x0, and you touch it at 50x50.
If you move your finger exactly one pixel (just for the sake of explanation), the object should also move exactly one pixel. The logic is this:
save touch point only once (50x50)
every time there is a movement, check location
make calculation: current location (51x51) - initial point (50x50) = difference (1x1)
move the object with the difference: 0x0 + 1x1 = 1x1, which means the object will move one pixel
Right. Using:
touch = event.getTouch(stage);
touchX = touch.globalX;
touchY = touch.globalY;
And calculating the offset (x and y) did the trick.
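For comparison, the same offset technique in UIKit terms; a minimal sketch in Swift using UIPanGestureRecognizer (names hypothetical):
var grabOffset = CGPoint.zero  // where inside the view the drag started

@objc func handlePan(_ recognizer: UIPanGestureRecognizer) {
    guard let view = recognizer.view, let superview = view.superview else { return }
    let location = recognizer.location(in: superview)
    if recognizer.state == .began {
        // Remember the offset between the touch point and the view's origin
        grabOffset = CGPoint(x: location.x - view.frame.origin.x,
                             y: location.y - view.frame.origin.y)
    } else if recognizer.state == .changed {
        // Keep that offset constant so the view never snaps to the finger
        view.frame.origin = CGPoint(x: location.x - grabOffset.x,
                                    y: location.y - grabOffset.y)
    }
}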
