I'm creating a game in SpriteKit and am using a pan gesture together with a pinch gesture to view a map. Currently I limit the pinch gesture's zoom to a scale of 2.0. I'm trying to limit the pan gesture to the bounds of the screen even when zoomed. The map is larger than the screen, and I only want the user to be able to pan around until the outer edge of the map hits the outer edge of the screen, even when zoomed. Here is my amateur way of trying to handle the situation:
- (void)handlePanFrom:(UIPanGestureRecognizer *)recognizer {
    if (recognizer.state == UIGestureRecognizerStateBegan) {
        [recognizer setTranslation:CGPointZero inView:recognizer.view];
    } else if (recognizer.state == UIGestureRecognizerStateChanged) {
        CGPoint translation = [recognizer translationInView:recognizer.view];
        translation = CGPointMake(-translation.x, translation.y);
        CGPoint desiredPos = CGPointSubtract(_mapNode.position, translation); // CGPointSubtract is a custom helper
        NSLog(@"Moving map to x: %f y: %f", desiredPos.x, desiredPos.y);
        NSLog(@"Map node position x: %f y: %f", _mapNode.position.x, _mapNode.position.y);
        NSLog(@"Map scale is %f", mapScale);
        NSLog(@"Map size is x: %f y: %f", _mapNode.map.frame.size.width, _mapNode.map.frame.size.height);
        if (desiredPos.y <= (mapScale * 300) && desiredPos.y >= ((1 / mapScale) * 200)) {
            _mapNode.position = CGPointMake(_mapNode.position.x, desiredPos.y);
            [recognizer setTranslation:CGPointZero inView:recognizer.view];
        }
        if (desiredPos.x <= (mapScale * 250) && desiredPos.x >= ((1 / mapScale) * 77)) {
            _mapNode.position = CGPointMake(desiredPos.x, _mapNode.position.y);
            [recognizer setTranslation:CGPointZero inView:recognizer.view];
        }
    } else if (recognizer.state == UIGestureRecognizerStateEnded) {
    }
}
I set a scale iVar when zooming to get the scale of the node. I should be able to use the sizes of the nodes (_mapNode.map is an SKSpriteNode) and the screen size to get the panning I want.
I thought I could do scale*xMax and (1/scale)*xMin (and similarly for y), but that doesn't seem to work. I would love not to have hard-coded numbers in there (300, 200, etc.) and instead use the sizes of the nodes/screen to limit the panning. Any help is appreciated! Thanks!
Okay, for future reference, this is what I did.
I first created two iVars: one for the initial x position of the node and one for the initial y position. I then calculated a 'movable distance' as sizeOfNode * 0.5 * scale minus screenSize * 0.5. As long as desiredLocation <= (initialPos + movableDistance) and desiredLocation >= (initialPos - movableDistance), you can change the position of the node. You do this for both x and y.
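As a rough sketch of that clamping inside the pan handler (illustrative names, not the original code: _initialMapPos is the assumed iVar for the map node's starting position, and the scene size is assumed to match the screen):
    CGSize mapSize = _mapNode.map.size;   // unscaled size of the map sprite
    CGSize screenSize = self.size;        // scene size, assumed to match the screen here
    CGFloat movableX = (mapSize.width * 0.5 * mapScale) - (screenSize.width * 0.5);
    CGFloat movableY = (mapSize.height * 0.5 * mapScale) - (screenSize.height * 0.5);
    // Clamp the desired position to initialPos +/- movableDistance on each axis.
    CGPoint clamped = desiredPos;
    clamped.x = MAX(_initialMapPos.x - movableX, MIN(_initialMapPos.x + movableX, clamped.x));
    clamped.y = MAX(_initialMapPos.y - movableY, MIN(_initialMapPos.y + movableY, clamped.y));
    _mapNode.position = clamped;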
Related
I've got a UIImageView with a pan gesture recognizer, which I move and rotate based on the user's actions.
When user lifts the finger, I want it to be animated back to its original position, here's my code:
- (IBAction)handlePan:(UIPanGestureRecognizer *)recognizer {
    if (recognizer.state == UIGestureRecognizerStateBegan) {
        startX = recognizer.view.center.x;
        startY = recognizer.view.center.y;
        startRotation = atan2(recognizer.view.transform.b, recognizer.view.transform.a);
        NSLog(@"Start Position %f %f %f", startX, startY, startRotation);
    } else if (recognizer.state == UIGestureRecognizerStateEnded) {
        float distance = (startX - recognizer.view.center.x) / DISTANCE_TO_ACCEPT;
        CGPoint center = recognizer.view.center;
        // animate back
        [UIView animateWithDuration:0.5 animations:^{
            NSLog(@"Position %f %f %f", recognizer.view.center.x, recognizer.view.center.y, atan2(recognizer.view.transform.b, recognizer.view.transform.a));
            NSLog(@"Destination %f %f %f", startX, startY, startRotation);
            NSLog(@"Translation %f %f %f", startX - center.x, startY - center.y, (startRotation - atan2(recognizer.view.transform.b, recognizer.view.transform.a)));
            recognizer.view.transform = CGAffineTransformRotate(CGAffineTransformTranslate(recognizer.view.transform, startX - center.x, startY - center.y), (CGFloat)(startRotation - atan2(recognizer.view.transform.b, recognizer.view.transform.a)));
        }];
    } else if (recognizer.state == UIGestureRecognizerStateChanged) {
        // move image
        CGPoint translation = [recognizer translationInView:self.view];
        recognizer.view.center = CGPointMake(recognizer.view.center.x + translation.x, recognizer.view.center.y + translation.y);
        [recognizer setTranslation:CGPointMake(0, 0) inView:self.view];
        // rotate image
        float distance = (startX - recognizer.view.center.x) / DISTANCE_TO_ACCEPT;
        // cap distance
        if (distance > 1) {
            distance = 1;
        } else if (distance < -1) {
            distance = -1;
        }
        double rotation = 15 - startRotation;
        rotation = rotation * distance;
        recognizer.view.transform = CGAffineTransformRotate(recognizer.view.transform, (CGFloat)((rotation * M_PI / 180) - atan2(recognizer.view.transform.b, recognizer.view.transform.a)));
    }
}
But it never goes to the original position. It always has some kind of offset. Further movement of the image view only makes it worse.
It looks like the position is not modified during the back animation, because when I pan the image again, the start position and rotation are equal to the position and rotation it had at the end of the previous movement [even though on screen the image's position and rotation have changed].
What am I doing wrong here?
Thanks
Try something like this in UIGestureRecognizerStateEnded state:
recognizer.view.transform = CGAffineTransformIdentity;
recognizer.view.center = CGPointMake(startX,startY);
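If the snap-back should still be animated, the same reset can go inside the existing animation block; a minimal sketch of that idea:
    [UIView animateWithDuration:0.5 animations:^{
        recognizer.view.transform = CGAffineTransformIdentity;       // undo any rotation/scale
        recognizer.view.center = CGPointMake(startX, startY);        // back to the recorded start
    }];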
I currently have 2 circles: one big circle and one little circle. The little circle has a pan gesture recognizer that allows it to be dragged by the user. I would like the little circle's center to go no further than the big circle's radius. I have 4 Auto Layout constraints on the inner circle: 1 for fixed width, 1 for fixed height, 1 for distance from center for x, and 1 for distance from center for y. Here is how I am going about this:
- (IBAction)handlePan:(UIPanGestureRecognizer *)recognizer {
    if (recognizer.state == UIGestureRecognizerStateChanged) {
        CGPoint translation = [recognizer translationInView:self.view];
        CGFloat x = recognizer.view.center.x + translation.x;
        CGFloat y = recognizer.view.center.y + translation.y;
        CGPoint desiredPoint = CGPointMake(x, y);
        // check if the point the user is trying to get to is outside the radius of the outer circle
        // if it is, set the center of the inner circle to the point at the radius distance, at the same angle
        if ([self distanceBetweenStartPoint:self.outerCircleView.center endPoint:desiredPoint] > self.outerCircleRadius) {
            CGFloat angle = [self angleBetweenStartPoint:self.outerCircleView.center endPoint:desiredPoint];
            desiredPoint = [self findPointFromRadius:self.outerCircleRadius startPoint:self.outerCircleView.center angle:angle];
        }
        // adjust the constraints to move the inner circle
        self.innerCircleCenterXConstraint.constant += desiredPoint.x - recognizer.view.center.x;
        self.innerCircleCenterYConstraint.constant += desiredPoint.y - recognizer.view.center.y;
        [recognizer setTranslation:CGPointMake(0.0, 0.0) inView:self.view];
    }
}

- (CGFloat)distanceBetweenStartPoint:(CGPoint)startPoint endPoint:(CGPoint)endPoint {
    CGFloat xDif = endPoint.x - startPoint.x;
    CGFloat yDif = endPoint.y - startPoint.y;
    // Pythagorean theorem
    return sqrt((xDif * xDif) + (yDif * yDif));
}

- (CGPoint)findPointFromRadius:(CGFloat)radius startPoint:(CGPoint)startPoint angle:(CGFloat)angle {
    CGFloat x = radius * cos(angle) + startPoint.x;
    CGFloat y = radius * sin(angle) + startPoint.y;
    return CGPointMake(x, y);
}

- (CGFloat)angleBetweenStartPoint:(CGPoint)startPoint endPoint:(CGPoint)endPoint {
    CGPoint originPoint = CGPointMake(endPoint.x - startPoint.x, endPoint.y - startPoint.y);
    return atan2f(originPoint.y, originPoint.x);
}
This works almost perfectly. The problem comes when I try to find the percentage that the user has moved towards the outside of the circle. I use the distanceBetweenStartPoint (center of outer circle) endPoint (center of inner circle) method and divide that by the radius of the outer circle. This should give a value of 1 when the circle has been dragged as far to one side as it can go. Unfortunately I am getting values like 0.9994324 or 1.000923. What could be causing this? Thanks for any insight!
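For reference, the percentage calculation described above looks roughly like this (innerCircleView is an assumed property name; the code above only has recognizer.view). The stray 0.999…/1.000… values are most likely ordinary floating-point rounding from the atan2/cos/sin round trip, so clamping is a simple way to get a clean 0–1 value:
    CGFloat distance = [self distanceBetweenStartPoint:self.outerCircleView.center
                                              endPoint:self.innerCircleView.center];
    CGFloat percentage = distance / self.outerCircleRadius;
    percentage = MIN(MAX(percentage, 0.0), 1.0); // clamp away rounding noise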
I have the following code and it works just fine, but I can't really explain it to myself. My goal was to drag my UIView from the bottom and have it stop moving when the bottom of the UIView reaches the center of the screen. The code does just that, but I'm still confused by how the limit I set (newCenter.y >= 0 && newCenter.y <= 284) works.
My understanding of what's happening is that when I begin dragging the UIView from the very bottom towards the top of the screen, my newCenter.y is constantly decreasing as I drag upwards. But what makes it stop when the bottom of the view being dragged reaches the center of the screen? The view stops dragging well before I even get close to 0.
- (IBAction)handlePan:(UIPanGestureRecognizer *)recognizer {
    CGPoint translation = [recognizer translationInView:self.view];
    CGPoint newCenter = CGPointMake(self.view.bounds.size.width / 2,
                                    recognizer.view.center.y + translation.y);
    if (newCenter.y >= 0 && newCenter.y <= 284) {
        recognizer.view.center = newCenter;
        [recognizer setTranslation:CGPointZero inView:self.view];
    }
}
I currently have a map (tilemap) within a layer that I would like to pan/zoom using the following code:
- (void)pinchGestureUpdated:(UIPinchGestureRecognizer *)recognizer {
    if ([recognizer state] == UIGestureRecognizerStateBegan) {
        _lastScale = [recognizer scale];
        CGPoint touchLocation1 = [recognizer locationOfTouch:0 inView:recognizer.view];
        CGPoint touchLocation2 = [recognizer locationOfTouch:1 inView:recognizer.view];
        CGPoint centerGL = [[CCDirector sharedDirector] convertToGL:ccpMidpoint(touchLocation1, touchLocation2)];
        _pinchCenter = [self convertToNodeSpace:centerGL];
    } else if ([recognizer state] == UIGestureRecognizerStateChanged) {
        // NSLog(@"%f", recognizer.scale);
        CGFloat newDeltaScale = 1 - (_lastScale - [recognizer scale]);
        newDeltaScale = MIN(newDeltaScale, kMaxScale / self.scale);
        newDeltaScale = MAX(newDeltaScale, kMinScale / self.scale);
        CGFloat newScale = self.scale * newDeltaScale;
        //self.scale = newScale;
        [self scale:newScale atCenter:_pinchCenter];
        _lastScale = [recognizer scale];
    }
}
- (void)scale:(CGFloat)newScale atCenter:(CGPoint)center {
    CGPoint oldCenterPoint = ccp(center.x * self.scale, center.y * self.scale);
    // Set the scale.
    self.scale = newScale;
    // Get the new center point.
    CGPoint newCenterPoint = ccp(center.x * self.scale, center.y * self.scale);
    // Then calculate the delta.
    CGPoint centerPointDelta = ccpSub(oldCenterPoint, newCenterPoint);
    // Now adjust your layer by the delta.
    self.position = ccpAdd(self.position, centerPointDelta);
}
My issue is that the zoom is not taking effect at the center of the pinch, even though I am trying to adjust the position at the same time as I am zooming via the scale:atCenter: method. Is there any reason this might not be happening properly? Also, how do I convert the center location of the pinch into the coordinate system of my scene/layer?
Everything was actually fine in my approach. The problem I was having, though, was that the layer's anchor point was different from the one I defined for the map, which introduced an offset during scaling. I had to set both anchor points to ccp(0,0).
The conversion from the screen coordinates of the pinch gesture's center to the layer is correct and can be achieved with the following instructions when using UIKit gesture recognizers:
CGPoint centerGL = [[CCDirector sharedDirector] convertToGL: ccpMidpoint(touchLocation1, touchLocation2)];
_pinchCenter = [self convertToNodeSpace:centerGL];
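In code, the anchor-point fix is roughly the following (a sketch; _tileMap is an assumed ivar name for the tile map child):
    // Give the layer and the tile map the same anchor point so scaling
    // does not introduce an offset between them.
    self.anchorPoint = ccp(0, 0);
    _tileMap.anchorPoint = ccp(0, 0);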
First of all, a note on ([recognizer state] == UIGestureRecognizerStateBegan): this comparison is fine, because UIGestureRecognizerState is a plain enum rather than a bitmask, so testing the state with == is the correct approach.
The center location of your pinch is going to be in coordinates on the screen basically. As far as how you convert that into your own coordinate system, you need to figure out what the bounds on the device's screen are of the part of your scene/layer that is shown at the time the gesture starts. That's going to be coordinates like 10,10 x 200,200 or something, representing the pixel grid of the screen. Then you will have to figure out, in the coordinate system of your own app's scene/layer, what 10,10 maps to, and what 200,200 maps to. From there, you can derive a factor to apply to the screen coordinates of the pinch gesture's center, that would translate the pinch gesture's screen coordinates into the scene/layer coordinates.
What you're trying to do is tricky because as you scale the scene/layer, you're centering that scaling around a point that's not in the center of the view. I'm sure if you look through some of Apple's sample code in one of the map-related apps, you can probably find some examples of methods that handle this kind of pinch zooming.
I hope this helps.
I am currently designing a little game as a learning project. Basically, an image is rotated and scaled in viewDidLoad, and another image is a direct copy of the original image.
So there is an image that is a little bit different from the other; the objective is to scale it back down, rotate it, and move it on top of the other image to within 5 pixels, 5 degrees of rotation, and 5 percent of scale.
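A rough sketch of that "close enough" check (an illustrative helper, not the actual didWin logic used here):
    // Compare position, rotation and scale of the draggable image against the
    // target image within the stated tolerances: 5 points, 5 degrees, 5 percent.
    - (BOOL)isCloseEnough:(UIImageView *)image to:(UIImageView *)target {
        CGFloat dx = image.center.x - target.center.x;
        CGFloat dy = image.center.y - target.center.y;
        BOOL positionOK = hypot(dx, dy) <= 5.0;

        CGFloat imageAngle = atan2(image.transform.b, image.transform.a);
        CGFloat targetAngle = atan2(target.transform.b, target.transform.a);
        BOOL rotationOK = fabs(imageAngle - targetAngle) <= (5.0 * M_PI / 180.0);

        CGFloat imageScale = hypot(image.transform.a, image.transform.b);   // x-scale from the transform
        CGFloat targetScale = hypot(target.transform.a, target.transform.b);
        BOOL scaleOK = fabs(imageScale - targetScale) <= 0.05 * targetScale;

        return positionOK && rotationOK && scaleOK;
    }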
I have run into an issue. I "skew" the image using the following code...
CGAffineTransform transform = CGAffineTransformMakeRotation(M_PI/2.5);
image.transform = CGAffineTransformScale(transform, 1.25, 1.25);
My pan gesture does not perform correctly after rotating the image and then scaling it 125%.
Does anyone know what could be going on here? By "incorrectly" I mean that it doesn't move around with my finger; it seems to glide or go in the opposite direction.
My pan gesture method is below.
if (gesture.state == UIGestureRecognizerStateBegan || gesture.state == UIGestureRecognizerStateChanged) {
    CGPoint translation = [gesture translationInView:image];
    // if within game field
    if ((image.center.x + translation.x) > 50.0 && (image.center.x + translation.x) < 255.0 && (image.center.y + translation.y) > 50.0 && (image.center.y + translation.y) < 302) {
        [image setCenter:CGPointMake([image center].x + translation.x, [image center].y + translation.y)]; // move it
    }
}
[gesture setTranslation:CGPointZero inView:[image superview]];
if (gesture.state == UIGestureRecognizerStateEnded) [self didWin]; // not relevant to question
Does anyone know why the pan performs incorrectly after I rotate and scale my image? When I comment out those first two lines of code, the pan performs correctly and moves around with the user's finger.
Thanks in advance for any suggestions or help!
Rotation might have changed the frame. I used the cosf and sinf functions to deal with it.
handlePan and handleRotate are the callback functions, in which I control a subview of self.view. Here, you should replace image with your own view.
static CGFloat _rotation = 0;

- (void)handlePan:(UIPanGestureRecognizer *)recognizer
{
    UIImageView *image = nil;
    for (UIImageView *tmp in recognizer.view.subviews) { // pick the subview
        if (tmp.tag == AXIS_TAG) {
            image = tmp;
        }
    }
    CGPoint translation = [recognizer translationInView:image];
    // I haven't found related documentation for this, but these equations do work:
    // they rotate the translation vector back by the current rotation (a standard
    // 2D rotation) so the drag follows the finger even when the view is rotated.
    CGFloat dx = translation.x * cosf(_rotation) - translation.y * sinf(_rotation);
    CGFloat dy = translation.x * sinf(_rotation) + translation.y * cosf(_rotation);
    image.center = CGPointMake(image.center.x + dx, image.center.y + dy);
    [recognizer setTranslation:CGPointMake(0, 0) inView:image];
}

- (void)handleRotate:(UIRotationGestureRecognizer *)recognizer
{
    UIImageView *image = nil;
    for (UIImageView *tmp in recognizer.view.subviews) { // pick the subview
        if (tmp.tag == AXIS_TAG) {
            image = tmp;
        }
    }
    CGFloat r = [recognizer rotation];
    if ((recognizer.state == UIGestureRecognizerStateBegan || recognizer.state == UIGestureRecognizerStateChanged)
        && recognizer.numberOfTouches == 2) {
        image.transform = CGAffineTransformMakeRotation(_rotation + r);
    } else if (recognizer.state == UIGestureRecognizerStateEnded) {
        _rotation += r; // Record the final rotation
    }
}
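For completeness, a sketch of how these callbacks might be wired up (the recognizers themselves are not shown in the answer above; setting up the AXIS_TAG subview is also omitted):
    UIPanGestureRecognizer *pan =
        [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(handlePan:)];
    UIRotationGestureRecognizer *rotate =
        [[UIRotationGestureRecognizer alloc] initWithTarget:self action:@selector(handleRotate:)];
    [self.view addGestureRecognizer:pan];
    [self.view addGestureRecognizer:rotate];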
The solution was to change the pan code just a tiny bit:
if (gesture.state == UIGestureRecognizerStateBegan || gesture.state == UIGestureRecognizerStateChanged) {
    CGPoint translation = [gesture translationInView:self.view]; // CHANGED
    // if within game field
    if ((image.center.x + translation.x) > 50.0 && (image.center.x + translation.x) < 255.0 && (image.center.y + translation.y) > 50.0 && (image.center.y + translation.y) < 302) {
        [image setCenter:CGPointMake([image center].x + translation.x, [image center].y + translation.y)]; // move it
    }
}
[gesture setTranslation:CGPointZero inView:self.view];
I changed translationInView: and setTranslation:inView: to use self.view instead of the image. Because the image has a rotation and scale transform applied, a translation measured in its own (transformed) coordinate space no longer matches the finger's movement on screen, whereas the untransformed self.view does.