So I'm trying to learn SpriteKit while building what I think is a simple puzzle game. I have a 5x5 grid of SKSpriteNodes of different colors. What I want is to be able to touch one, move my finger horizontally or vertically, and detect all the nodes that my finger is touching, as if I were "selecting" them.
I tried to do something like this, but it crashes the app:
-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint location = [touch locationInNode:self];
    SKSpriteNode *node = (SKSpriteNode *)[self nodeAtPoint:location];
    NSLog(@"Dragged over: %@", node);
}
Is there something like a "touchEnter" / "touchLeave" kinda event that I'm missing? Sorry, I don't even know what I don't know.
UIPanGestureRecognizer is your friend:
-(void)didMoveToView:(SKView *)view {
    UIPanGestureRecognizer *recognizer = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(handlePanGesture:)];
    recognizer.delegate = self;
    [self.view addGestureRecognizer:recognizer];
}
-(void)handlePanGesture:(UIPanGestureRecognizer *)recognizer {
    CGPoint location = [recognizer locationInView:self.view];
    SKNode *node = [self nodeAtPoint:[self convertPointFromView:location]];
    if (node) {
        NSLog(@"Dragged over: %@", node);
    }
}
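There's no built-in "touchEnter"/"touchLeave" event, but you can synthesize one by remembering the last node the pan passed over and reacting when it changes. A minimal sketch, assuming an SKNode *_lastNode instance variable on the scene (the name is illustrative):

// Assumes an ivar in the scene: SKNode *_lastNode;
-(void)handlePanGesture:(UIPanGestureRecognizer *)recognizer {
    CGPoint location = [recognizer locationInView:self.view];
    SKNode *node = [self nodeAtPoint:[self convertPointFromView:location]];
    if (node != _lastNode) {
        NSLog(@"Left: %@", _lastNode);   // your "touchLeave"
        NSLog(@"Entered: %@", node);     // your "touchEnter": add it to the selection here
        _lastNode = node;
    }
    if (recognizer.state == UIGestureRecognizerStateEnded ||
        recognizer.state == UIGestureRecognizerStateCancelled) {
        _lastNode = nil;                 // reset when the drag finishes
    }
}

Note that nodeAtPoint: returns the scene itself when no sprite is under the finger, so you may want to check node.name or its class before treating it as a grid tile.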
I am trying to allow the user to drag a label around the screen, but in the simulator it only moves a little each time I touch somewhere on the screen. It jumps to the touch location and drags slightly, but then it stops dragging and I have to touch a different location to get it to move again. Here is my code in my .m file:
-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *drag = [[event allTouches] anyObject];
    firstInitial.center = [drag locationInView:self.view];
}
My ultimate goal is to be able to drag three different labels on the screen, but I'm just trying to tackle this problem first. I would greatly appreciate any help!
Thanks.
Try using a UIGestureRecognizer instead of -touchesMoved:withEvent:, implementing something similar to the following code.
//Inside viewDidLoad
UIPanGestureRecognizer *panGesture = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(dragonMoved:)];
panGesture.minimumNumberOfTouches = 1;
[self.view addGestureRecognizer:panGesture];
//**********
- (void)dragonMoved:(UIPanGestureRecognizer *)gesture {
    CGPoint touchLocation = [gesture locationInView:self.view];
    static UIView *currentDragObject;
    if (UIGestureRecognizerStateBegan == gesture.state) {
        // Find the drag object the gesture started on.
        for (DragObject *dragView in self.dragObjects) {
            if (CGRectContainsPoint(dragView.frame, touchLocation)) {
                currentDragObject = dragView;
                break;
            }
        }
    } else if (UIGestureRecognizerStateChanged == gesture.state) {
        currentDragObject.center = touchLocation;
    } else if (UIGestureRecognizerStateEnded == gesture.state) {
        currentDragObject = nil;
    }
}
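Note that currentDragObject is declared static inside the method so that it persists between gesture callbacks; an instance variable would serve the same purpose and is usually easier to follow.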
I have created a node called "testNode2" and I have managed to make it drag across the screen when the node is touched and dragged. For some reason, though, the method below lets me drag any node on the screen. How can I make it affect only "testNode2"?
Also, I would like the node to follow the movement of the finger even when the screen is touched anywhere, not just on the node itself, and to move relative to the finger's movement rather than jumping to its position. For example, if the screen is pressed anywhere and then dragged 100 pixels left, the node should move 100 pixels left.
My code is below:
-(void)colourSprite2:(CGSize)size {
    self.testNode2 = [SKSpriteNode spriteNodeWithColor:[SKColor greenColor] size:CGSizeMake(30, 30)];
    self.testNode2.position = CGPointMake(self.size.width/2, self.size.height/1.1);
    self.testNode2.physicsBody = [SKPhysicsBody bodyWithRectangleOfSize:self.testNode2.frame.size];
    [self addChild:self.testNode2];
}

-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    self.testNode2 = (SKSpriteNode *)[self nodeAtPoint:[[touches anyObject] locationInNode:self]];
}

-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    self.testNode2.position = [[touches anyObject] locationInNode:self];
}

-(void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    self.testNode2 = nil;
}
The touches methods receive an NSSet holding multiple UITouch objects, each of which corresponds to a touch relevant to the object the methods are implemented in.
In your case, the node moves itself to the position of any touch the methods encounter, including other touches on the screen.
You should read about the UITouch and UIResponder classes.
The solution to your problem is to keep track of the specific touch that is being used to move the node.
Maintain a UITouch object as an instance variable:
@implementation MyScene
{
    UITouch *currentTouch;
}
Then keep track of the specific touch as follows:
-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    SKNode *node = [self nodeAtPoint:[touch locationInNode:self]];
    if (currentTouch == nil && [node isEqual:self.testNode2])
    {
        currentTouch = touch; // start tracking this touch only
    }
}

-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    if ([touch isEqual:currentTouch])
    {
        self.testNode2.position = [touch locationInNode:self];
    }
}

-(void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    if ([touch isEqual:currentTouch])
    {
        // Stop tracking; keep the testNode2 reference so it can be dragged again.
        currentTouch = nil;
    }
}
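For the second part of your question (moving the node relative to the finger instead of jumping to it, no matter where the touch starts), you can apply the finger's movement delta each time it moves. A rough sketch under the same setup; with this approach the nodeAtPoint: check in touchesBegan is no longer needed:

-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    // Move by the finger's delta rather than snapping to its position,
    // so a drag that starts anywhere on the screen still works.
    CGPoint current = [touch locationInNode:self];
    CGPoint previous = [touch previousLocationInNode:self];
    CGPoint p = self.testNode2.position;
    self.testNode2.position = CGPointMake(p.x + (current.x - previous.x),
                                          p.y + (current.y - previous.y));
}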
I'm wondering if there's an easy way to take an SKNode and increase the region in which it registers a press.
For example, I am currently checking if a node is clicked like so:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    CGPoint positionInScene = [touch locationInNode:self];
    SKNode *node = [self nodeAtPoint:positionInScene];
    if ([node.name isEqualToString:TARGET_NAME]) {
        // do whatever
    }
}
If the node drawn on the screen is something like 40 pixels by 40 pixels, is there a way that a click within 10 pixels of the node would still register as a click on it?
Thanks!
You could add an invisible sprite node as a child to the visible node. Have the child node's size be larger than the visible node's.
For example, on OS X this would work in a scene:
-(id)initWithSize:(CGSize)size {
    if (self = [super initWithSize:size]) {
        SKSpriteNode *visibleNode = [[SKSpriteNode alloc] initWithColor:[NSColor yellowColor] size:CGSizeMake(100, 100)];
        visibleNode.name = @"visible node";
        visibleNode.position = CGPointMake(320, 240);
        SKSpriteNode *clickableNode = [[SKSpriteNode alloc] init];
        clickableNode.size = CGSizeMake(200, 200);
        clickableNode.name = @"clickable node";
        [visibleNode addChild:clickableNode];
        [self addChild:visibleNode];
    }
    return self;
}
-(void)mouseDown:(NSEvent *)theEvent
{
    CGPoint positionInScene = [theEvent locationInNode:self];
    SKNode *node = [self nodeAtPoint:positionInScene];
    NSLog(@"Clicked node: %@", node.name);
}
The clickable node extends 50px outwards from the edges of the visible node. Clicking within this area will output "Clicked node: clickable node".
The node named "visible node" will never be returned by the call to [self nodeAtPoint:positionInScene], because the clickable node overlays it.
The same principle applies on iOS.
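For reference, the equivalent hit test on iOS would look something like this (a sketch, assuming the same node setup inside an SKScene):

-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    CGPoint positionInScene = [touch locationInNode:self];
    SKNode *node = [self nodeAtPoint:positionInScene];
    NSLog(@"Touched node: %@", node.name);
}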
Is it possible to get the x and y coordinates of a touch? If so, could someone please provide a very simple example where the coordinates are just logged to the console?
Using touchesBegan Event
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [[event allTouches] anyObject];
    CGPoint touchPoint = [touch locationInView:self.view];
    NSLog(@"Touch x : %f y : %f", touchPoint.x, touchPoint.y);
}
This event is triggered when touch starts.
Using Gesture
Register your UITapGestureRecognizer in the viewDidLoad method:
- (void)viewDidLoad {
    [super viewDidLoad];
    UITapGestureRecognizer *tapGesture = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(tapGestureRecognizer:)];
    [self.view setUserInteractionEnabled:YES];
    [self.view addGestureRecognizer:tapGesture];
}
Then set up the tapGestureRecognizer: method:
// Tap gesture recognizer callback
- (void)tapGestureRecognizer:(UIGestureRecognizer *)recognizer {
    CGPoint tappedPoint = [recognizer locationInView:self.view];
    CGFloat xCoordinate = tappedPoint.x;
    CGFloat yCoordinate = tappedPoint.y;
    NSLog(@"Touch Using UITapGestureRecognizer x : %f y : %f", xCoordinate, yCoordinate);
}
First you need to add a gesture recognizer to the view you want.
UITapGestureRecognizer *myTap = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(myTapRecognizer:)];
[self.myView setUserInteractionEnabled:YES];
[self.myView addGestureRecognizer:myTap];
Then in the gesture recognizer method you make a call to locationInView:
- (void)myTapRecognizer:(UIGestureRecognizer *)recognizer
{
    CGPoint tappedPoint = [recognizer locationInView:self.myView];
    CGFloat xCoordinate = tappedPoint.x;
    CGFloat yCoordinate = tappedPoint.y;
}
You may want to take a look at Apple's UIGestureRecognizer class reference.
Here's a very basic example (place it inside your view controller):
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint currentPoint = [touch locationInView:self.view];
    NSLog(@"%@", NSStringFromCGPoint(currentPoint));
}
This triggers every time the touch moves. You can also use touchesBegan:withEvent: which triggers when a touch starts, and touchesEnded:withEvent: which triggers when a touch ends (i.e. a finger is lifted).
You can also do this using a UIGestureRecognizer, which in many cases is more practical.
I am trying to develop an analysing app that determines if you are "clever".
What this involves is taking a picture of yourself and dragging points onto your face where the nose, mouth and eyes are. However, the code I have tried does not work:
-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [[event allTouches] anyObject];
    CGPoint location = [touch locationInView:self.view];
    if ([touch view] == eye1)
    {
        eye1.center = location;
    }
    else if ([touch view] == eye2)
    {
        eye2.center = location;
    }
    else if ([touch view] == nose)
    {
        nose.center = location;
    }
    else if ([touch view] == chin)
    {
        chin.center = location;
    }
    else if ([touch view] == lip1)
    {
        lip1.center = location;
    }
    else if ([touch view] == lip2)
    {
        lip2.center = location;
    }
}

-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    [self touchesBegan:touches withEvent:event];
}
What is happening? When I have just a single image it works, but that is not helpful for me. What can I do to make it work? The spots start at the bottom of the screen in a toolbar and the user then drags them onto the face. I kind of want the finished result to look like this: (image omitted)
There are two basic approaches:
You can use the various touches methods (e.g. touchesBegan:withEvent:, touchesMoved:withEvent:, etc.) in your controller or main view, or you can use a single gesture recognizer on the main view. In touchesBegan (or, with a gesture recognizer, when the state is UIGestureRecognizerStateBegan), determine the locationInView: of the superview, then test whether the touch is over one of your views with CGRectContainsPoint, passing each view's frame as the first parameter and the location as the second.
Having identified the view in which the gesture began, in touchesMoved (or a state of UIGestureRecognizerStateChanged) move the view based upon the translationInView:; a sketch of this approach follows.
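A rough sketch of this first approach, using a single pan recognizer attached to the main view (the dragViews property and _activeView ivar are illustrative names, not from the question):

- (void)handlePan:(UIPanGestureRecognizer *)gesture
{
    if (gesture.state == UIGestureRecognizerStateBegan) {
        // Find which draggable view, if any, the gesture started on.
        CGPoint location = [gesture locationInView:self.view];
        for (UIView *view in self.dragViews) {
            if (CGRectContainsPoint(view.frame, location)) {
                _activeView = view;
                break;
            }
        }
    } else if (gesture.state == UIGestureRecognizerStateChanged) {
        CGPoint translation = [gesture translationInView:self.view];
        _activeView.center = CGPointMake(_activeView.center.x + translation.x,
                                         _activeView.center.y + translation.y);
        // Reset so the next callback reports an incremental delta.
        [gesture setTranslation:CGPointZero inView:self.view];
    } else if (gesture.state == UIGestureRecognizerStateEnded ||
               gesture.state == UIGestureRecognizerStateCancelled) {
        _activeView = nil;
    }
}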
Alternatively (and easier, IMHO), you can attach individual gesture recognizers to each of the subviews. This latter approach might look like the following. First, add your gesture recognizers:
NSArray *views = @[eye1, eye2, lip1, lip2, chin, nose];
for (UIView *view in views)
{
    view.userInteractionEnabled = YES;
    UIPanGestureRecognizer *pan = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(handlePanGesture:)];
    [view addGestureRecognizer:pan];
}
Then you implement a handlePanGesture method:
- (void)handlePanGesture:(UIPanGestureRecognizer *)gesture
{
    CGPoint translation = [gesture translationInView:gesture.view];
    if (gesture.state == UIGestureRecognizerStateChanged)
    {
        gesture.view.transform = CGAffineTransformMakeTranslation(translation.x, translation.y);
        [gesture.view.superview bringSubviewToFront:gesture.view];
    }
    else if (gesture.state == UIGestureRecognizerStateEnded)
    {
        gesture.view.transform = CGAffineTransformIdentity;
        gesture.view.center = CGPointMake(gesture.view.center.x + translation.x, gesture.view.center.y + translation.y);
    }
}
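The transform-then-center pattern here is deliberate: translationInView: is measured from where the gesture began, so applying it as a fresh transform on each callback avoids accumulating the offset twice, and committing the movement to center (after clearing the transform) when the gesture ends leaves the view's frame in a clean state for the next drag.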