UIPanGestureRecognizer stops calling the selector - iOS

I'm having trouble working with UIPanGestureRecognizer: it only calls the selector while my finger is moving, but I want it to keep calling the selector even when my finger is resting in the same place.
There are four objects on the screen: one at the top, one on the right side, one on the left side and one at the bottom. I have an object at the center of the screen (this is the one I'm moving with the pan gesture). When this object touches the others I want it to give me a log. That works while it's touching, but if I keep my finger in the same place it stops giving me logs; if I move a little it starts giving me logs again.
Is there any way I can keep the selector being called even with my finger staying in the same place?
Here is a code example:
- (void)moveObject:(UIPanGestureRecognizer *)sender
{
CGPoint translation = [sender translationInView:self.limiteDirecional];
[sender setTranslation:CGPointMake(0, 0) inView:self.limiteDirecional];
CGPoint center = sender.view.center;
center.y += translation.y;
int yMin = 0;
int yMax = self.limiteDirecional.frame.size.height;
if (center.y < yMin || center.y > yMax )
return;
sender.view.center = center;
center.x += translation.x;
int xMin = self.limiteDirecional.frame.size.width;
int xMax = 0;
if (center.x > xMin || center.x < xMax)
return;
sender.view.center = center;
if (CGRectIntersectsRect(sender.view.frame,self.Top.frame)) {
NSLog(@"TOP");
}
if (CGRectIntersectsRect(sender.view.frame,self.Botton.frame)) {
NSLog(@"BOTTON");
}
if (CGRectIntersectsRect(sender.view.frame,self.Right.frame)) {
NSLog(@"RIGHT");
}
if (CGRectIntersectsRect(sender.view.frame,self.Left.frame)) {
NSLog(@"LEFT");
}
if (sender.state == UIGestureRecognizerStateEnded) {
sender.view.center = CGPointMake(self.view.frame.size.width / 2, self.view.frame.size.height / 2);
}
}

I'm not entirely following the logic of your routine, so I'll provide a generic template of what a solution might look like when you want continuous events in the middle of a gesture, whether the user is moving their finger or not. Hopefully you can adapt this technique for your own purposes.
This uses CADisplayLink, which is considered a better technique for animation than the older approach of using an NSTimer. To use CADisplayLink, though, you need to add the required framework, QuartzCore.framework, to your project, if you haven't already. Also note that in my gesture recognizer handler I'm checking the state of the gesture, to know whether we're starting a gesture, in the middle of one, or ending one:
#import "ViewController.h"
#import <QuartzCore/QuartzCore.h>
@interface ViewController ()
@property (nonatomic, strong) CADisplayLink *displayLink;
@property (nonatomic) CGPoint translationInView;
@end
@implementation ViewController
- (void)viewDidLoad
{
[super viewDidLoad];
UIPanGestureRecognizer *gesture = [[UIPanGestureRecognizer alloc] initWithTarget:self
action:@selector(handleGesture:)];
// I'm adding to the main view, but add it to whatever you want
[self.view addGestureRecognizer:gesture];
}
- (void)startDisplayLink
{
self.displayLink = [CADisplayLink displayLinkWithTarget:self selector:@selector(handleDisplayLink:)];
[self.displayLink addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
}
- (void)stopDisplayLink
{
[self.displayLink invalidate];
self.displayLink = nil;
}
- (void)handleDisplayLink:(CADisplayLink *)displayLink
{
NSLog(#"%s translationInView = %#", __FUNCTION__, NSStringFromCGPoint(self.translationInView));
// Do here whatever you need to happen continuously while the user is in the
// middle of the gesture, whether their finger is moving or not.
}
- (void)handleGesture:(UIPanGestureRecognizer *)gesture
{
self.translationInView = [gesture translationInView:gesture.view];
if (gesture.state == UIGestureRecognizerStateBegan)
{
[self startDisplayLink];
// Do whatever other initialization stuff as the user starts the gesture
// (e.g. you might alter the appearance of the joystick to provide some
// visual feedback that they're controlling the joystick).
}
else if (gesture.state == UIGestureRecognizerStateChanged)
{
// Do here only that stuff that actually changes as the user is moving their
// finger in the middle of the gesture, but which you don't need to have
// repeatedly done while the user's finger is not moving (e.g. maybe the
// visual movement of the "joystick" control on the screen).
}
else if (gesture.state == UIGestureRecognizerStateEnded ||
gesture.state == UIGestureRecognizerStateCancelled ||
gesture.state == UIGestureRecognizerStateFailed)
{
[self stopDisplayLink];
// Do whatever other cleanup you want to do when the user stops the gesture
// (e.g. maybe animating the moving of the joystick back to the center).
}
}
@end
You can achieve a similar effect if you use NSTimer, too. Whatever works better for you.
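For reference, a minimal sketch of what the NSTimer variant might look like (the timer property, the 1/60-second interval, and the handleTimer: method name are illustrative assumptions, not part of the original code above):

@property (nonatomic, strong) NSTimer *timer; // hypothetical property for this sketch

- (void)startTimer
{
    // fire roughly once per frame; unlike CADisplayLink this is not
    // synchronized with screen refreshes
    self.timer = [NSTimer scheduledTimerWithTimeInterval:1.0 / 60.0
                                                  target:self
                                                selector:@selector(handleTimer:)
                                                userInfo:nil
                                                 repeats:YES];
}

- (void)stopTimer
{
    [self.timer invalidate];
    self.timer = nil;
}

- (void)handleTimer:(NSTimer *)timer
{
    // same continuous work as handleDisplayLink: above,
    // e.g. acting on self.translationInView on every tick
}

You would then call startTimer and stopTimer from the same places in handleGesture: where startDisplayLink and stopDisplayLink are called.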

Related

How to continue to drawRect: when finger on screen

I have the current code:
- (void) touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
self.fingerPoint = [[touches anyObject] locationInView:self];
float x, y;
if (self.fingerPoint.x > self.objectPoint.x) {
x = self.objectPoint.x + 1;
}
else x = self.objectPoint.x - 1;
if (self.fingerPoint.y > self.objectPoint.y) {
y = self.objectPoint.y + 1;
}
else y = self.objectPoint.y - 1;
self.objectPoint = CGPointMake(x, y);
[self setNeedsDisplay];
}
My problem is that I want the object to keep following your finger until you take your finger off the screen. It only follows while my finger is moving. touchesEnded only fires when I take my finger off the screen, so that's not what I want either. How can I accomplish this?
If you want to touch a part of the screen and you want to move the drawn object in that direction as long as you're holding your finger down, there are a couple of approaches.
One approach is to use some form of timer, something that will repeatedly call a method while the user is holding their finger down on the screen (because, as you noted, you only get updates to touchesMoved when you move). While NSTimer is the most common timer you'd encounter, in this case you'd want to use a specialized timer called a display link, a CADisplayLink, which fires when screen updates can be performed. So, you would:
In touchesBegan, capture where the user touched on the screen and start the CADisplayLink;
In touchesMoved, you'd update the user's touch location (but only called if they moved their finger);
In touchesEnded, you'd presumably stop the display link; and
In your CADisplayLink handler, you'd update the location (and you'd need to know the speed with which you want it to move).
So, that would look like:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
self.velocity = 100.0; // 100 points per second
self.touchLocation = [[touches anyObject] locationInView:self];
[self startDisplayLink];
}
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
self.touchLocation = [[touches anyObject] locationInView:self];
}
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
[self stopDisplayLink];
}
- (void)startDisplayLink
{
self.displayLink = [CADisplayLink displayLinkWithTarget:self selector:@selector(handleDisplayLink:)];
self.lastTimestamp = CACurrentMediaTime(); // initialize the `lastTimestamp`
[self.displayLink addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
}
- (void)stopDisplayLink
{
[self.displayLink invalidate];
self.displayLink = nil;
}
- (void)handleDisplayLink:(CADisplayLink *)displayLink
{
// figure out the time elapsed, and reset the `lastTimestamp`
CFTimeInterval currentTimestamp = CACurrentMediaTime();
CFTimeInterval elapsed = currentTimestamp - self.lastTimestamp;
self.lastTimestamp = currentTimestamp;
// figure out distance to touch and distance we'd move on basis of velocity and elapsed time
CGFloat distanceToTouch = hypotf(self.touchLocation.y - self.objectPoint.y, self.touchLocation.x - self.objectPoint.x);
CGFloat distanceWillMove = self.velocity * elapsed;
// this does the calculation of the angle between the touch location and
// the current `self.objectPoint`, and then updates `self.objectPoint` on
// the basis of (a) the angle; and (b) the desired velocity.
if (distanceToTouch == 0.0) // if we're already at touchLocation, then just quit
return;
if (distanceToTouch < distanceWillMove) { // if the distance to move is less than the target, just move to touchLocation
self.objectPoint = self.touchLocation;
} else { // otherwise, calculate where we're going to move to
CGFloat angle = atan2f(self.touchLocation.y - self.objectPoint.y, self.touchLocation.x - self.objectPoint.x);
self.objectPoint = CGPointMake(self.objectPoint.x + cosf(angle) * distanceWillMove,
self.objectPoint.y + sinf(angle) * distanceWillMove);
}
[self setNeedsDisplay];
}
and to use that, you'd need a few properties defined:
@property (nonatomic) CGFloat velocity;
@property (nonatomic) CGPoint touchLocation;
@property (nonatomic, strong) CADisplayLink *displayLink;
@property (nonatomic) CFTimeInterval lastTimestamp;
If you want to drag it with your finger, you want to:
In touchesBegan, save the starting locationInView as well as the "original location" of the object being dragged;
In touchesMoved, get the new locationInView, calculate the delta (the "translation") between that and the original locationInView, add that to the saved "original location" of the view, and use that to update the view.
That way, the object will track 100% with your finger as you're dragging it across the screen.
For example, you might:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
self.touchBeganLocation = [[touches anyObject] locationInView:self];
self.originalObjectPoint = self.objectPoint;
}
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
CGPoint location = [[touches anyObject] locationInView:self];
CGPoint translation = CGPointMake(location.x - self.touchBeganLocation.x, location.y - self.touchBeganLocation.y);
self.objectPoint = CGPointMake(self.originalObjectPoint.x + translation.x, self.originalObjectPoint.y + translation.y);
[self setNeedsDisplay];
}
Probably needless to say, you need properties to keep track of these two new CGPoint values:
@property (nonatomic) CGPoint originalObjectPoint;
@property (nonatomic) CGPoint touchBeganLocation;
Frankly, I might use a gesture recognizer, but the above is an example of dragging with touchesBegan and touchesMoved.
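If you went the gesture recognizer route, a minimal sketch might look like the following (this assumes the pan recognizer is added to this same custom view and that self.objectPoint is the point your drawRect: draws, as in the code above):

- (void)handlePan:(UIPanGestureRecognizer *)gesture
{
    // move the drawn point by the translation accumulated since the last
    // callback, then reset the translation so the next callback is relative
    CGPoint translation = [gesture translationInView:self];
    self.objectPoint = CGPointMake(self.objectPoint.x + translation.x,
                                   self.objectPoint.y + translation.y);
    [gesture setTranslation:CGPointZero inView:self];
    [self setNeedsDisplay];
}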

Throwing ball in SpriteKit

Over the last few days I experimented with SpriteKit and (amongst other things) tried to solve the problem of "throwing" a sprite by touching and dragging it.
The same question is on Stack Exchange, but they told me to first remove the bug and then let the code be reviewed.
I have tackled the major hurdles, and the code is working fine, but one little problem remains.
(Additionally, I'd be interested if somebody has a more polished or better working solution for this. I'd also love to hear suggestions about how to perfect the feeling of realism in this interaction.)
Sometimes, the ball just gets stuck.
If you want to reproduce that, just swipe the ball really fast and short. I suspect the gesture recognizer makes the "touchesMoved" and "touchesEnded" callbacks asynchronous, and that this causes some impossible state in the physics simulation.
Can anybody provide a more reliable way to reproduce the issue, and suggest what the solution could be?
The project is called ballThrow and BT is the class prefix.
#import "BTMyScene.h"
#import "BTBall.h"
@interface BTMyScene()
@property (strong, nonatomic) NSMutableArray *balls;
@property (nonatomic) CGFloat yPosition;
@property (nonatomic) CGFloat xCenter;
@property (nonatomic) BOOL updated;
@end
@implementation BTMyScene
const CGFloat BALLDISTANCE = 80;
-(id)initWithSize:(CGSize)size {
if (self = [super initWithSize:size]) {
_balls = [NSMutableArray arrayWithCapacity:5];
//define the region where the balls will spawn
_yPosition = size.height/2.0;
_xCenter = size.width/2.0;
/* Setup your scene here */
self.backgroundColor = [SKColor colorWithRed:0.15 green:0.15 blue:0.3 alpha:1.0];
}
return self;
}
-(void)didMoveToView:(SKView *)view {
//Make an invisible border
//this seems to be offset... Why the heck is this?
self.physicsBody = [SKPhysicsBody bodyWithEdgeLoopFromRect:view.frame];
[self createBalls:2];
//move balls with pan gesture
//could be improved by combining with touchesBegan for first locating the touch
[self.view addGestureRecognizer:[[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(moveBall:)]];
}
-(void)moveBall:(UIPanGestureRecognizer *)pgr {
//depending on the touch phase do different things to the ball
if (pgr.state == UIGestureRecognizerStateBegan) {
[self attachBallToTouch:pgr];
}
else if (pgr.state == UIGestureRecognizerStateChanged) {
[self moveBallToTouch:pgr];
}
else if (pgr.state == UIGestureRecognizerStateEnded) {
[self stopMovingTouch:pgr];
}
else if (pgr.state == UIGestureRecognizerStateCancelled) {
[self stopMovingTouch:pgr];
}
}
-(void)attachBallToTouch:(UIPanGestureRecognizer *)touch {
//determine the ball to move
for (BTBall *ball in self.balls) {
if ([self isMovingBall:ball forGestureRecognizer:touch])
{
//stop ball movement
[ball.physicsBody setAffectedByGravity:NO];
[ball.physicsBody setVelocity:CGVectorMake(0, 0)];
//the ball might not be touched right in its center, so save the relative location
ball.touchLocation = [self convertPoint:[self convertPointFromView:[touch locationInView:self.view]] toNode:ball];
//update location once, just in case...
[self setBallPosition:ball toTouch:touch];
if (_updated) {
_updated = NO;
[touch setTranslation:CGPointZero inView:self.view];
}
}
}
}
-(void)moveBallToTouch:(UIPanGestureRecognizer *)touch {
for (BTBall *ball in self.balls) {
if ([self isMovingBall:ball forGestureRecognizer:touch])
{
//update the position of the ball and reset translation
[self setBallPosition:ball toTouch:touch];
if (_updated) {
_updated = NO;
[touch setTranslation:CGPointZero inView:self.view];
}
break;
}
}
}
-(void)setBallPosition:(BTBall *)ball toTouch:(UIPanGestureRecognizer *)touch {
//gesture recognizers only deliver locations in views, thus convert to node
CGPoint touchPosition = [self convertPointFromView:[touch locationInView:self.view]];
//update the location to the touch's location, offset by touch position in ball
[ball setNewPosition:CGPointApplyAffineTransform(touchPosition,
CGAffineTransformMakeTranslation(-ball.touchLocation.x,
-ball.touchLocation.y))];
//save the velocity between the last two touch records for later release
CGPoint velocity = [touch velocityInView:self.view];
//why the hell is the y coordinate inverted??
[ball setLastVelocity:CGVectorMake(velocity.x, -velocity.y)];
}
-(void)stopMovingTouch:(UIPanGestureRecognizer *)touch {
for (BTBall *ball in self.balls) {
if ([self isMovingBall:ball forGestureRecognizer:touch]) {
//release the ball: enable gravity impact and make it move
[ball.physicsBody setAffectedByGravity:YES];
[ball.physicsBody setVelocity:CGVectorMake(ball.lastVelocity.dx, ball.lastVelocity.dy)];
break;
}
}
}
-(BOOL)isMovingBall:(BTBall *)ball forGestureRecognizer:(UIPanGestureRecognizer *)touch {
//latest location of touch
CGPoint touchPosition = [touch locationInView:self.view];
//distance covered since the last call
CGPoint touchTranslation = [touch translationInView:self.view];
//position, where the ball must be, if it is the one
CGPoint translatedPosition = CGPointApplyAffineTransform(touchPosition,
CGAffineTransformMakeTranslation(-touchTranslation.x,
-touchTranslation.y));
CGPoint inScene = [self convertPointFromView:translatedPosition];
//determine whether the last touch location was on the ball
//if last touch location was on the ball, return true
return [[self nodesAtPoint:inScene] containsObject:ball];
}
-(void)update:(CFTimeInterval)currentTime {
//updating the ball position here improved performance dramatically
for (BTBall *ball in self.balls) {
//balls that move are not gravity affected
//easiest way to determine movement
if ([ball.physicsBody affectedByGravity] == NO) {
[ball setPosition:ball.newPosition];
}
}
//ball positions are refreshed
_updated = YES;
}
-(void)createBalls:(int)numberOfBalls {
for (int i = 0; i<numberOfBalls; i++) {
BTBall *ball;
//reuse balls (not necessary yet, but imagine balls spawning)
if(i<[self.balls count]) {
ball = self.balls[i];
}
else {
ball = [BTBall newBall];
}
[ball.physicsBody setAffectedByGravity:NO];
//calculate ballposition
CGPoint ballPosition = CGPointMake(self.xCenter-BALLSIZE/2+(i-(numberOfBalls-1)/2.0)*BALLDISTANCE, self.yPosition);
[ball setNewPosition:ballPosition];
[self.balls addObject:ball];
[self addChild:ball];
}
}
@end
The BTBall (subclass of SKShapeNode, because of the custom properties needed)
#import <SpriteKit/SpriteKit.h>
@interface BTBall : SKShapeNode
const extern CGFloat BALLSIZE;
//some properties for the throw animation
@property (nonatomic) CGPoint touchLocation;
@property (nonatomic) CGPoint newPosition;
@property (nonatomic) CGVector lastVelocity;
//create a standard ball
+(BTBall *)newBall;
@end
The BTBall.m with a class method to create new balls
#import "BTBall.h"
@implementation BTBall
const CGFloat BALLSIZE = 80;
+(BTBall *)newBall {
BTBall *ball = [BTBall node];
//look
[ball setPath:CGPathCreateWithEllipseInRect(CGRectMake(-BALLSIZE/2,-BALLSIZE/2,BALLSIZE,BALLSIZE), nil)];
[ball setFillColor:[UIColor redColor]];
[ball setStrokeColor:[UIColor clearColor]];
//physics
SKPhysicsBody *ballBody = [SKPhysicsBody bodyWithCircleOfRadius:BALLSIZE/2.0];
[ball setPhysicsBody:ballBody];
[ball.physicsBody setAllowsRotation:NO];
//ball is not moving at the beginning
ball.lastVelocity = CGVectorMake(0, 0);
return ball;
}
@end
1. A couple of problems (see comments in the code) are related to the SpriteKit coordinate system. I just cannot get the border of the scene to align with its actual frame, even though I create it with the exact same code that Apple gives us in the programming guide. I have moved it from initWithSize: to didMoveToView: due to a suggestion here on Stack Overflow, but that did not help. It is possible to manually offset the border with hardcoded values, but that does not satisfy me.
2. Does anybody know a debugging tool that colors the physics body of a sprite, so you can see its size and whether it is at the same position as the sprite?
Update: The problems above were solved by using the YMC Physics Debugger.
These lines of code are correct:
[ball setPath:CGPathCreateWithEllipseInRect(CGRectMake(-BALLSIZE/2,-BALLSIZE/2,BALLSIZE,BALLSIZE), nil)];
SKPhysicsBody *ballBody = [SKPhysicsBody bodyWithCircleOfRadius:BALLSIZE/2.0];
Because 0,0 is the center of the physics body, the origin of the path must be translated.
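Regarding the offset border from didMoveToView:, one common cause (a guess, since the rest of the project isn't shown) is building the edge loop from the view's frame rather than the scene's own frame; the two only coincide when the scene size matches the view size and the scale mode doesn't rescale anything. A minimal sketch of the usual pattern:

-(void)didMoveToView:(SKView *)view {
    // build the edge loop in scene coordinates, not view coordinates
    self.physicsBody = [SKPhysicsBody bodyWithEdgeLoopFromRect:self.frame];
}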

Sprite-Kit Pinch to Zoom Problems UIPinchGestureRecognizer

I've been working on this code for quite a while now but it just feels like one step forward and two steps back. I'm hoping someone can help me.
I'm working with Sprite Kit, so I have a Scene file that manages the rendering, UI and touch controls. I have an SKNode that's functioning as the camera, like so:
_world = [[SKNode alloc] init];
[_world setName:#"world"];
[self addChild:_world];
I am using UIGestureRecognizer, so I add the ones I need like so:
_panRecognizer = [[UIPanGestureRecognizer alloc]initWithTarget:self action:@selector(handlePanFrom:)];
[[self view] addGestureRecognizer:_panRecognizer];
_pinchRecognizer = [[UIPinchGestureRecognizer alloc]initWithTarget:self action:@selector(handlePinch:)];
[[self view] addGestureRecognizer:_pinchRecognizer];
The panning is working okay, but not great. The pinching is the real problem. The idea for the pinching is to grab a point at the center of the screen, convert that point to the world node, and then move to it while zooming in. Here is the method for pinching:
-(void) handlePinch:(UIPinchGestureRecognizer *)sender {
if (sender.state == UIGestureRecognizerStateBegan) {
_tempScale = [sender scale];
}
if (sender.state == UIGestureRecognizerStateChanged) {
if([sender scale] > _tempScale) {
if (_world.xScale < 6) {
//_world.xScale += 0.05;
//_world.yScale += 0.05;
//[_world setScale:[sender scale]];
[_world setScale:_world.xScale += 0.05];
CGPoint screenCenter = CGPointMake(_initialScreenSize.width/2, _initialScreenSize.height/2);
CGPoint newWorldPoint = [self convertTouchPointToWorld:screenCenter];
//crazy method why does this work
CGPoint alteredWorldCenter = CGPointMake(((newWorldPoint.x*_world.xScale)*-1), (newWorldPoint.y*_world.yScale)*-1);
//why does the duration have to be exactly 0.3 to work
SKAction *moveToCenter = [SKAction moveTo:alteredWorldCenter duration:0.3];
[_world runAction:moveToCenter];
}
} else if ([sender scale] < _tempScale) {
if (_world.xScale > 0.5 && _world.xScale > 0.3){
//_world.xScale -= 0.05;
//_world.yScale -= 0.05;
//[_world setScale:[sender scale]];
[_world setScale:_world.xScale -= 0.05];
CGPoint screenCenter = CGPointMake(_initialScreenSize.width/2, _initialScreenSize.height/2);
CGPoint newWorldPoint = [self convertTouchPointToWorld:screenCenter];
//crazy method why does this work
CGPoint alteredWorldCenter = CGPointMake(((newWorldPoint.x*_world.xScale - _initialScreenSize.width)*-1), (newWorldPoint.y*_world.yScale - _initialScreenSize.height)*-1);
SKAction *moveToCenter = [SKAction moveTo:alteredWorldCenter duration:0.3];
[_world runAction:moveToCenter];
}
}
}
if (sender.state == UIGestureRecognizerStateEnded) {
[_world removeAllActions];
}
}
I've tried many iterations of this, but this exact code is what is getting me the closest to pinching on a point in the world. There are some problems though.
As you get further out from the center, it doesn't work as well, as it pretty much still tries to zoom in on the very center of the world. After converting the center point to the world node, I still need to manipulate it again to get it centered properly (the formula I describe as crazy), and it has to be different for zooming in and zooming out to work.
The duration of the move action has to be set to 0.3 or it pretty much won't work at all; higher or lower and it doesn't zoom in on the center point. If I try to increment the zoom by more than a small amount, it moves crazy fast. If I don't end the actions when the pinch ends, the screen jerks.
I don't understand why this works at all (it smoothly zooms in to the center point before the delay ends and the screen jerks) and I'm not sure what I'm doing wrong. Any help is much appreciated!
Take a look at my answer to a very similar question.
https://stackoverflow.com/a/21947549/3148272
The code I posted "anchors" the zoom at the location of the pinch gesture instead of the center of the screen, but that is easy to change as I tried it both ways.
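In case the link goes stale, here is a minimal sketch of the anchoring idea (this is not the linked code; _world, handlePinch:, and the 0.5 to 6 scale limits are taken from the question above):

- (void)handlePinch:(UIPinchGestureRecognizer *)sender
{
    if (sender.state == UIGestureRecognizerStateChanged) {
        // point under the fingers, in scene coordinates and in world coordinates
        CGPoint anchorInScene = [self convertPointFromView:[sender locationInView:sender.view]];
        CGPoint anchorInWorld = [_world convertPoint:anchorInScene fromNode:self];

        // apply the incremental pinch scale, clamped to a sensible range
        CGFloat newScale = MAX(0.5, MIN(6.0, _world.xScale * sender.scale));
        [_world setScale:newScale];
        sender.scale = 1.0;

        // reposition the world so the anchored point stays under the fingers
        CGPoint anchorAfterScale = [self convertPoint:anchorInWorld fromNode:_world];
        _world.position = CGPointMake(_world.position.x + (anchorInScene.x - anchorAfterScale.x),
                                      _world.position.y + (anchorInScene.y - anchorAfterScale.y));
    }
}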
As requested in the comments below, I am also adding my panning code to this answer.
Panning Code...
// instance variables of MyScene.
SKNode *_mySkNode;
UIPanGestureRecognizer *_panGestureRecognizer;
- (void)didMoveToView:(SKView *)view
{
_panGestureRecognizer = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(handlePanFrom:)];
[[self view] addGestureRecognizer:_panGestureRecognizer];
}
- (void)handlePanFrom:(UIPanGestureRecognizer *)recognizer
{
if (recognizer.state == UIGestureRecognizerStateBegan) {
[recognizer setTranslation:CGPointZero inView:recognizer.view];
} else if (recognizer.state == UIGestureRecognizerStateChanged) {
CGPoint translation = [recognizer translationInView:recognizer.view];
translation = CGPointMake(-translation.x, translation.y);
_mySkNode.position = CGPointSubtract(_mySkNode.position, translation);
[recognizer setTranslation:CGPointZero inView:recognizer.view];
} else if (recognizer.state == UIGestureRecognizerStateEnded) {
// No code needed for panning.
}
}
The following are the two helper functions that were used above. They are from the Ray Wenderlich book on Sprite Kit.
SKT_INLINE CGPoint CGPointAdd(CGPoint point1, CGPoint point2) {
return CGPointMake(point1.x + point2.x, point1.y + point2.y);
}
SKT_INLINE CGPoint CGPointSubtract(CGPoint point1, CGPoint point2) {
return CGPointMake(point1.x - point2.x, point1.y - point2.y);
}

Zoom and Scroll SKNode in SpriteKit

I am working on a game like Scrabble in SpriteKit and have been stuck on zooming and scrolling the Scrabble board.
First let me explain how the game works:
On my GameScene I have:
An SKNode subclass called GameBoardLayer (named NAME_GAME_BOARD_LAYER) containing the following children:
An SKNode subclass for the Scrabble board, named NAME_BOARD.
An SKNode subclass for the letter tile rack, named NAME_RACK.
The letter tiles are picked from the tile rack and dropped onto the Scrabble board.
The problem here is that I need to mimic the zooming and scrolling that UIScrollView provides, which I don't think can be added to an SKNode. The features I need to mimic are:
Zoom at the precise location where the user has double-tapped
Scroll around (I tried pan gestures, but somehow that interferes with dragging and dropping the tiles)
Keep the zoomed SKNode within the particular area (like UIScrollView keeps the zoomed content within the scroll view's bounds)
Here is the code I have used for zooming, using UITapGestureRecognizer:
In my GameScene.m
- (void)didMoveToView:(SKView *)view {
UITapGestureRecognizer *tapGesture = [[UITapGestureRecognizer alloc] initWithTarget:self
action:@selector(handleTapGesture:)];
tapGesture.numberOfTapsRequired = 2;
tapGesture.numberOfTouchesRequired = 1;
[self.scene.view addGestureRecognizer:tapGesture];
}
- (void)handleTapGesture:(UITapGestureRecognizer*)recognizer {
if ([self childNodeWithName:NAME_GAME_BOARD_LAYER]) {
GameBoardLayer *gameBoardLayer = (GameBoardLayer*)[self childNodeWithName:NAME_GAME_BOARD_LAYER];
SKNode *node = [Utils nodeAt:[recognizer locationInView:self.view]
withName:NAME_BOARD
inCurrentNode:gameBoardLayer];
if ([node.name isEqualToString:NAME_BOARD]) {
[gameBoardLayer handleDoubleTap:recognizer];
}
}
}
In my GameBoardLayer Node:
- (void)handleDoubleTap:(UITapGestureRecognizer*)recognizer {
Board *board = (Board*)[self childNodeWithName:NAME_BOARD];
if (isBoardZoomed)
{
[board runAction:[SKAction scaleTo:1.0f duration:0.25f]];
isBoardZoomed = NO;
}
else
{
isBoardZoomed = YES;
[board runAction:[SKAction scaleTo:1.5f duration:0.25f]];
}
}
Would someone kindly guide me on how I can achieve this functionality?
Thanks, everyone.
This is how I would do this:
Setup:
Create a GameScene as the root node of your game (a subclass of SKScene).
Add a BoardNode as a child of the scene (a subclass of SKNode).
Add a CameraNode as a child of the board (a subclass of SKNode).
Add LetterNodes as children of the board.
Keep Camera node centered:
// GameScene.m
- (void) didSimulatePhysics
{
[super didSimulatePhysics];
[self centerOnNode:self.Board.Camera];
}
- (void) centerOnNode:(SKNode*)node
{
CGPoint posInScene = [node.scene convertPoint:node.position fromNode:node.parent];
node.parent.position = CGPointMake(node.parent.position.x - posInScene.x, node.parent.position.y - posInScene.y);
}
Pan view by moving BoardNode around (Remember to prevent panning out of bounds)
// GameScene.m
- (void) handlePan:(UIPanGestureRecognizer *)pan
{
if (pan.state == UIGestureRecognizerStateChanged)
{
[self.Board.Camera moveCamera:CGVectorMake([pan translationInView:pan.view].x, [pan translationInView:pan.view].y)];
}
}
// CameraNode.m
- (void) moveCamera:(CGVector)direction
{
self.direction = direction;
}
- (void) update:(CFTimeInterval)dt
{
if (ABS(self.direction.dx) > 0 || ABS(self.direction.dy) > 0)
{
float dx = self.direction.dx - self.direction.dx/20;
float dy = self.direction.dy - self.direction.dy/20;
if (ABS(dx) < 1.0f && ABS(dy) < 1.0f)
{
dx = 0.0;
dy = 0.0;
}
self.direction = CGVectorMake(dx, dy);
self.Board.position = CGPointMake(self.position.x - self.direction.dx, self.position.y + self.direction.dy);
}
}
// BoardNode.m
- (void) setPosition:(CGPoint)position
{
CGRect bounds = CGRectMake(-boardSize.width/2, -boardSize.height/2, boardSize.width, boardSize.height);
// use super's setter here to avoid infinite recursion in this overridden setter
[super setPosition:CGPointMake(
MAX(bounds.origin.x, MIN(bounds.origin.x + bounds.size.width, position.x)),
MAX(bounds.origin.y, MIN(bounds.origin.y + bounds.size.height, position.y)))];
}
Pinch Zoom by setting the size of your GameScene:
// GameScene.m
- (void) didMoveToView:(SKView*)view
{
self.scaleMode = SKSceneScaleModeAspectFill;
}
- (void) handlePinch:(UIPinchGestureRecognizer *)pinch
{
switch (pinch.state)
{
case UIGestureRecognizerStateBegan:
{
self.origPoint = [self GetGesture:pinch LocationInNode:self.Board];
self.lastScale = pinch.scale;
} break;
case UIGestureRecognizerStateChanged:
{
CGPoint pinchPoint = [self GetGesture:pinch LocationInNode:self.Board];
float scale = 1 - (self.lastScale - pinch.scale);
float newWidth = MAX(kMinSceneWidth, MIN(kMaxSceneWidth, self.size.width / scale));
float newHeight = MAX(kMinSceneHeight, MIN(kMaxSceneHeight, self.size.height / scale));
[self.gameScene setSize:CGSizeMake(newWidth, newHeight)];
self.lastScale = pinch.scale;
} break;
default: break;
}
}
As for the problem of panning accidentally dragging your LetterNodes: I usually implement a single TouchDispatcher (usually in the GameScene class) that registers all the touches. The TouchDispatcher then decides which node(s) should respond to the touch (and in which order).
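A rough sketch of that dispatcher idea (the protocol and class names below are illustrative assumptions, not code from an existing project):

#import <UIKit/UIKit.h>

@protocol TouchHandler <NSObject>
- (BOOL)canHandleTouchAtPoint:(CGPoint)scenePoint;
- (void)handleTouchAtPoint:(CGPoint)scenePoint;
@end

@interface TouchDispatcher : NSObject
// handlers in priority order, e.g. letter tiles first, then the board
@property (nonatomic, strong) NSMutableArray *handlers;
- (void)dispatchTouchAtPoint:(CGPoint)scenePoint;
@end

@implementation TouchDispatcher
- (void)dispatchTouchAtPoint:(CGPoint)scenePoint
{
    // the first handler that claims the point consumes the touch, so a tile
    // sitting on top of the board wins over the board's pan handling
    for (id<TouchHandler> handler in self.handlers) {
        if ([handler canHandleTouchAtPoint:scenePoint]) {
            [handler handleTouchAtPoint:scenePoint];
            return;
        }
    }
}
@end

The scene would register its tiles and the board as handlers and forward the scene-space location of each touch (or gesture) to dispatchTouchAtPoint:.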

Slide button back and forth between two points

I'm trying to use UIPanGestureRecognizer to slide a button between two points (like a volume slider). The following code allows me to slide the button back and forth, but the button doesn't stop at the same point every time (i.e., when I slide it to the right, sometimes it stops where it's supposed to, sometimes it stops +/- 10 pixels from where it's supposed to). What am I doing wrong?
- (void)handlePan:(UIPanGestureRecognizer *)recognizer {
CGPoint location = [recognizer locationInView:self];
// 1
if (recognizer.state == UIGestureRecognizerStateBegan) {
// if the gesture just started, record current center location
_originalCenter = self.sliderButton.center;
}
// 2
if (recognizer.state == UIGestureRecognizerStateChanged) {
// move the checkmarks and main label based on touch
//CGPoint translation = [recognizer translationInView:self];
// move slider button
if (location.x < 70 + 178 && location.x > 70) {
self.sliderButton.center = CGPointMake(location.x, _originalCenter.y);
}
// determine whether the item has been dragged far enough to initiate a removal
if (location.x > 170 + 70) {
_draggedToEnd = YES;
} else {
_draggedToEnd = NO;
}
}
// 3
if (recognizer.state == UIGestureRecognizerStateEnded) {
if (_draggedToEnd) {
// notify the delegate that this item should be deleted
[self buttonDraggedToEnd];
}
}
}
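One thing worth noting: the button is only repositioned while location.x stays inside the 70 to 248 band, so the last update of a fast pan can leave the button a few points short of the end. A rough sketch of one way to make the resting position deterministic, snapping to the nearer endpoint when the gesture ends (the constants and property names are reused from the code above, and the snapping behavior itself is an assumption about the desired UX):

// 3
if (recognizer.state == UIGestureRecognizerStateEnded) {
    CGFloat minX = 70.0;
    CGFloat maxX = 70.0 + 178.0;
    // snap to whichever endpoint is closer so the button always comes to rest at the same place
    CGFloat targetX = (self.sliderButton.center.x > (minX + maxX) / 2.0) ? maxX : minX;
    [UIView animateWithDuration:0.2 animations:^{
        self.sliderButton.center = CGPointMake(targetX, _originalCenter.y);
    }];
    if (_draggedToEnd) {
        // notify the delegate that this item should be deleted
        [self buttonDraggedToEnd];
    }
}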
