Zoom and Scroll SKNode in SpriteKit - ios

I am working on a Scrabble-like game in SpriteKit and have been stuck on zooming and scrolling the Scrabble board.
First, let me explain how the game is put together:
On my GameScene I have:
An SKNode subclass called GameBoardLayer (named NAME_GAME_BOARD_LAYER) containing the following children:
An SKNode subclass for the Scrabble board, named NAME_BOARD.
An SKNode subclass for the letter tile rack, named NAME_RACK.
Letter tiles are picked from the tile rack and dropped onto the Scrabble board.
The problem is that I need to mimic the zooming and scrolling behaviour of UIScrollView, which as far as I know cannot be attached to an SKNode. The features I need to mimic are:
Zoom at the precise location where the user double-tapped
Scroll around (I tried pan gestures, but they interfere with dragging and dropping the tiles)
Keep the zoomed SKNode within a particular area (the way UIScrollView keeps zoomed content within the scroll view's bounds)
Here is the code I have used for zooming, with a UITapGestureRecognizer:
In my GameScene.m
- (void)didMoveToView:(SKView *)view {
UITapGestureRecognizer *tapGesture = [[UITapGestureRecognizer alloc] initWithTarget:self
action:@selector(handleTapGesture:)];
tapGesture.numberOfTapsRequired = 2;
tapGesture.numberOfTouchesRequired = 1;
[self.scene.view addGestureRecognizer:tapGesture];
}
- (void)handleTapGesture:(UITapGestureRecognizer*)recognizer {
if ([self childNodeWithName:NAME_GAME_BOARD_LAYER]) {
GameBoardLayer *gameBoardLayer = (GameBoardLayer*)[self childNodeWithName:NAME_GAME_BOARD_LAYER];
SKNode *node = [Utils nodeAt:[recognizer locationInView:self.view]
withName:NAME_BOARD
inCurrentNode:gameBoardLayer];
if ([node.name isEqualToString:NAME_BOARD]) {
[gameBoardLayer handleDoubleTap:recognizer];
}
}
}
In my GameBoardLayer Node:
- (void)handleDoubleTap:(UITapGestureRecognizer*)recognizer {
Board *board = (Board*)[self childNodeWithName:NAME_BOARD];
if (isBoardZoomed)
{
[board runAction:[SKAction scaleTo:1.0f duration:0.25f]];
isBoardZoomed = NO;
}
else
{
isBoardZoomed = YES;
[board runAction:[SKAction scaleTo:1.5f duration:0.25f]];
}
}
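Conceptually, what I think feature 1 needs is to scale the board and shift it at the same time, so that the tapped point stays fixed under the finger. Something like the following sketch (not working code from my project, just the idea; it assumes the tap has already been converted into scene coordinates with convertPointFromView: and that the board is not rotated):
// GameBoardLayer.m - sketch only: zoom the board toward a given scene point
- (void)zoomBoard:(Board *)board toScale:(CGFloat)newScale aboutScenePoint:(CGPoint)scenePoint {
    // the tapped point expressed in this layer's coordinates and in the board's own coordinates
    CGPoint pointInLayer = [self convertPoint:scenePoint fromNode:self.scene];
    CGPoint pointInBoard = [board convertPoint:scenePoint fromNode:self.scene];
    // reposition the board so that, at the new scale, the tapped point does not move on screen
    CGPoint newPosition = CGPointMake(pointInLayer.x - pointInBoard.x * newScale,
                                      pointInLayer.y - pointInBoard.y * newScale);
    [board runAction:[SKAction group:@[[SKAction scaleTo:newScale duration:0.25],
                                       [SKAction moveTo:newPosition duration:0.25]]]];
}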
Would someone kindly guide me on how I can achieve this functionality?
Thanks, everyone.

This is how I would do this:
Setup:
Create a GameScene as the rootNode of your game. (child of SKScene)
Add BoardNode as child to the scene (child of SKNode)
Add CameraNode as child to the Board (child of SKNode)
Add LetterNodes as children of the Board
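A minimal sketch of that setup (BoardNode and CameraNode are assumed SKNode subclasses here, and the property and image names are only illustrative):
// GameScene.m - sketch of the hierarchy described above
- (void)didMoveToView:(SKView *)view {
    self.Board = [BoardNode node];
    [self addChild:self.Board];

    self.Board.Camera = [CameraNode node];
    [self.Board addChild:self.Board.Camera];

    // letter tiles are children of the board, so they pan and zoom together with it
    [self.Board addChild:[SKSpriteNode spriteNodeWithImageNamed:@"letter_A"]];
}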
Keep Camera node centered:
// GameScene.m
- (void) didSimulatePhysics
{
[super didSimulatePhysics];
[self centerOnNode:self.Board.Camera];
}
- (void) centerOnNode:(SKNode*)node
{
CGPoint posInScene = [node.scene convertPoint:node.position fromNode:node.parent];
node.parent.position = CGPointMake(node.parent.position.x - posInScene.x, node.parent.position.y - posInScene.y);
}
Pan view by moving BoardNode around (Remember to prevent panning out of bounds)
// GameScene.m
- (void) handlePan:(UIPanGestureRecognizer *)pan
{
if (pan.state == UIGestureRecognizerStateChanged)
{
[self.Board.Camera moveCamera:CGVectorMake([pan translationInView:pan.view].x, [pan translationInView:pan.view].y)];
}
}
// CameraNode.m
- (void) moveCamera:(CGVector)direction
{
self.direction = direction;
}
- (void) update:(CFTimeInterval)dt
{
if (ABS(self.direction.dx) > 0 || ABS(self.direction.dy) > 0)
{
float dx = self.direction.dx - self.direction.dx/20;
float dy = self.direction.dy - self.direction.dy/20;
if (ABS(dx) < 1.0f && ABS(dy) < 1.0f)
{
dx = 0.0;
dy = 0.0;
}
self.direction = CGVectorMake(dx, dy);
self.Board.position = CGPointMake(self.position.x - self.direction.dx, self.position.y + self.direction.dy);
}
}
// BoardNode.m
- (void) setPosition:(CGPoint)position
{
CGRect bounds = CGRectMake(-boardSize.width/2, -boardSize.height/2, boardSize.width, boardSize.height);
// clamp to the board's bounds; call super so this override does not recurse into itself
[super setPosition:CGPointMake(
MAX(bounds.origin.x, MIN(bounds.origin.x + bounds.size.width, position.x)),
MAX(bounds.origin.y, MIN(bounds.origin.y + bounds.size.height, position.y)))];
}
Pinch Zoom by setting the size of your GameScene:
// GameScene.m
- (void) didMoveToView:(SKView*)view
{
self.scaleMode = SKSceneScaleModeAspectFill;
}
- (void) handlePinch:(UIPinchGestureRecognizer *)pinch
{
switch (pinch.state)
{
case UIGestureRecognizerStateBegan:
{
self.origPoint = [self GetGesture:pinch LocationInNode:self.Board];
self.lastScale = pinch.scale;
} break;
case UIGestureRecognizerStateChanged:
{
CGPoint pinchPoint = [self GetGesture:pinch LocationInNode:self.Board];
float scale = 1 - (self.lastScale - pinch.scale);
float newWidth = MAX(kMinSceneWidth, MIN(kMaxSceneWidth, self.size.width / scale));
float newHeight = MAX(kMinSceneHeight, MIN(kMaxSceneHeight, self.size.height / scale));
[self setSize:CGSizeMake(newWidth, newHeight)];
self.lastScale = pinch.scale;
} break;
default: break;
}
}
As for the problem of panning accidentally dragging your LetterNodes, I usually implement a single TouchDispatcher (usually in the GameScene class) that registers all the touches. The TouchDispatcher then decides which node(s) should respond to the touch (and in which order); a simpler variant of the same idea is sketched below.
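A lighter-weight sketch of that idea, using a gesture-recognizer delegate that refuses the pan when the touch begins on a letter tile (it assumes the scene adopts UIGestureRecognizerDelegate and that the tiles share an assumed NAME_LETTER node name):
// GameScene.m - sketch: keep the pan gesture away from touches that begin on a letter tile
- (void)didMoveToView:(SKView *)view {
    UIPanGestureRecognizer *pan = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(handlePan:)];
    pan.delegate = self; // the scene declares <UIGestureRecognizerDelegate>
    [view addGestureRecognizer:pan];
    [view addGestureRecognizer:[[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(handlePinch:)]];
}

- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldReceiveTouch:(UITouch *)touch {
    // only let the pan begin when the touch does not land on a tile, so tile drag-and-drop keeps working
    CGPoint scenePoint = [self convertPointFromView:[touch locationInView:self.view]];
    return ![[self nodeAtPoint:scenePoint].name isEqualToString:NAME_LETTER];
}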

Related

iOS: Resize and Rotate UIView Concurrently

Using a UIPanGestureRecognizer in my view controller, I'm trying to draw a view (ArrowView) at an angle based upon the touch location. I'm trying to use CGAffineTransformRotate to rotate the view based on the angle between the first touch and the current touch, but this isn't working unless the view has already been drawn out by at least 20 or more pixels. Also, when drawing, the view doesn't always line up under my finger. Is this the correct approach for this situation? If not, does anyone recommend a better way of accomplishing this? If so, what am I doing wrong?
ViewController.m
@implementation ViewController {
ArrowView *_selectedArrowView;
UIColor *_selectedColor;
CGFloat _selectedWeight;
CGPoint _startPoint;
}
- (void)viewDidLoad {
[super viewDidLoad];
_selectedColor = [UIColor yellowColor];
_selectedWeight = 3;
UIPanGestureRecognizer *pan = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(panHandler:)];
[self.view addGestureRecognizer:pan];
}
- (void) panHandler: (UIPanGestureRecognizer *) sender {
if (sender.state == UIGestureRecognizerStateBegan) {
//Instantiate the arrow
CGPoint touchPoint = [sender locationInView:sender.view];
_startPoint = touchPoint;
_selectedArrowView = [[ArrowView alloc] initWithFrame:CGRectMake(touchPoint.x, touchPoint.y, 0, 25) withColor:_selectedColor withWeight:_selectedWeight];
_selectedArrowView.delegate = self;
[self.view addSubview:_selectedArrowView];
[self.view bringSubviewToFront:_selectedArrowView];
} else if (sender.state == UIGestureRecognizerStateChanged) {
//"Draw" the arrow based upon finger postion
CGPoint touchPoint = [sender locationInView:sender.view];
[_selectedArrowView drawArrow:_startPoint to:touchPoint];
}
}
@end
ArrowView.m
- (void) drawArrow: (CGPoint) startPoint to: (CGPoint) endPoint {
startPoint = [self convertPoint:startPoint fromView:self.superview];
endPoint = [self convertPoint:endPoint fromView:self.superview];
if (_initialAngle == -1000 /*Initially set to an arbitrary value so I know when the draw began*/) {
_initialAngle = atan2(startPoint.y - endPoint.y, startPoint.x - endPoint.x);
[self setPosition:0];
} else {
CGFloat ang = atan2(startPoint.y - endPoint.y, startPoint.x - endPoint.x);
ang -= _initialAngle;
self.transform = CGAffineTransformRotate(self.transform, ang);
CGFloat diff = (endPoint.x - self.bounds.size.width);
NSLog(@"\n\n diff: %f \n\n", diff);
self.bounds = CGRectMake(0, 0, self.bounds.size.width + diff, self.bounds.size.height);
_endPoint = CGPointMake(self.bounds.size.width, self.bounds.size.height);
[self setNeedsDisplay];
}
}
- (void) setPosition: (CGFloat) anchorPointX {
CGPoint layerLoc;
if (anchorPointX == 0) {
layerLoc = CGPointMake(self.layer.bounds.origin.x, self.layer.bounds.origin.y + (self.layer.bounds.size.height / 2));
} else {
layerLoc = CGPointMake(self.layer.bounds.origin.x + self.layer.bounds.size.width, self.layer.bounds.origin.y + (self.layer.bounds.size.height / 2));
}
CGPoint superLoc = [self convertPoint:layerLoc toView:self.superview];
self.layer.anchorPoint = CGPointMake(anchorPointX, 0.5);
self.layer.position = superLoc;
}
- (CGFloat) pToA: (CGPoint) touchPoint {
CGPoint start;
if (_dotButtonIndex == kDotButtonFirst) {
start = CGPointMake(CGRectGetMaxX(self.bounds), CGRectGetMaxY(self.bounds));
} else {
start = CGPointMake(CGRectGetMinX(self.bounds), CGRectGetMinY(self.bounds));
}
return atan2(start.y - touchPoint.y, start.x - touchPoint.x);
}
Link to project on GitHub: Project Link
Figured it out.
I had to make it have an initial width so the angle would work.
_initialAngle = atan2(startPoint.y - endPoint.y, startPoint.x - (endPoint.x + self.frame.size.width));
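In context, the fix goes where _initialAngle is first computed inside drawArrow:to: (a sketch; the rest of the method stays as shown above):
// ArrowView.m - only the first-touch branch changes
if (_initialAngle == -1000) {
    // account for the view's initial width so the very first angle is measured correctly
    _initialAngle = atan2(startPoint.y - endPoint.y,
                          startPoint.x - (endPoint.x + self.frame.size.width));
    [self setPosition:0];
}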

Pinching/Panning a CCNode in Cocos2d 3.0

I want to zoom a CCNode in and out by pinching and panning the screen. The node has a background which is very large, but only a portion of it is shown on the screen. That node also contains other sprites.
What I have done so far is register a UIPinchGestureRecognizer:
UIPinchGestureRecognizer * pinchRecognizer = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(handlePinchFrom:)];
[[[CCDirector sharedDirector] view] addGestureRecognizer: pinchRecognizer];
-(void)handlePinchFrom:(UIPinchGestureRecognizer *) pinch
{
if(pinch.state == UIGestureRecognizerStateEnded) {
prevScale = 1;
}
else {
CGFloat dscale = [self scale] - prevScale + pinch.scale;
if(dscale > 0)
{
deltaScale = dscale;
}
CGAffineTransform transform = CGAffineTransformScale(pinch.view.transform, deltaScale, deltaScale);
[pinch.view setTransform: transform];
// [_contentNode setScale:deltaScale];
prevScale = pinch.scale;
}
}
The problem is that it scales the whole UIView, not the CCNode. I have also tried setting the scale of my _contentNode instead.
EDIT
I have also tried this:
- (void)handlePinchGesture:(UIPinchGestureRecognizer *)pinch
{
if (pinch.state == UIGestureRecognizerStateBegan || pinch.state == UIGestureRecognizerStateChanged) {
CGPoint midpoint = [pinch locationInView:[CCDirector sharedDirector].view];
CGSize winSize = [CCDirector sharedDirector].viewSize;
float x = midpoint.x/winSize.width;
float y = midpoint.y/winSize.height;
_contentNode.anchorPoint = CGPointMake(x, y);
float scale = [pinch scale];
_contentNode.scale *= scale;
pinch.scale = 1;
}
}
But it zooms from the bottom left of the screen.
I had the same problem. I use a CCScrollView that contains a CCNode larger than the device screen. I want to scroll and zoom it, but the node shouldn't scroll out of the screen or scale smaller than the screen. So I created my own subclass of CCScrollView, where I handle the pinch. It has some strange glitches, but on the whole it works fine.
When the pinch begins, I set the anchor point of my node to the pinch center in node space. Then I need to change the position of my node proportionally to the shift of the anchor point, so that moving the anchor point doesn't change the node's location in the view:
- (void)handlePinch:(UIPinchGestureRecognizer*)recognizer
{
if (recognizer.state == UIGestureRecognizerStateEnded) {
_previousScale = self.contentNode.scale;
}
else if (recognizer.state == UIGestureRecognizerStateBegan) {
float X = [recognizer locationInNode:self.contentNode].x / self.contentNode.contentSize.width;
float Y = [recognizer locationInNode:self.contentNode].y / self.contentNode.contentSize.height;
float positionX = self.contentNode.position.x + self.contentNode.boundingBox.size.width * (X - self.contentNode.anchorPoint.x);
float positionY = self.contentNode.position.y + self.contentNode.boundingBox.size.height * (Y - self.contentNode.anchorPoint.y);
self.contentNode.anchorPoint = ccp(X, Y);
self.contentNode.position = ccp(positionX, positionY);
}
else {
CGFloat scale = _previousScale * recognizer.scale;
if (scale >= maxScale) {
self.contentNode.scale = maxScale;
}
else if (scale <= [self minScale]) {
self.contentNode.scale = [self minScale];
}
else {
self.contentNode.scale = scale;
}
}
}
I also need to change the CCScrollView min and max scroll, so my node never scrolls out of view. The default anchor point is (0,1), so I need to shift the min and max scroll proportionally to the new anchor point.
- (float) maxScrollX
{
if (!self.contentNode) return 0;
float maxScroll = self.contentNode.boundingBox.size.width - self.contentSizeInPoints.width;
if (maxScroll < 0) maxScroll = 0;
return maxScroll - self.contentNode.boundingBox.size.width * self.contentNode.anchorPoint.x;
}
- (float) maxScrollY
{
if (!self.contentNode) return 0;
float maxScroll = self.contentNode.boundingBox.size.height - self.contentSizeInPoints.height;
if (maxScroll < 0) maxScroll = 0;
return maxScroll - self.contentNode.boundingBox.size.height * (1 - self.contentNode.anchorPoint.y);
}
- (float) minScrollX
{
float minScroll = [super minScrollX];
return minScroll - self.contentNode.boundingBox.size.width * self.contentNode.anchorPoint.x;
}
- (float) minScrollY
{
float minScroll = [super minScrollY];
return minScroll - self.contentNode.boundingBox.size.height * (1 - self.contentNode.anchorPoint.y);
}
UIGestureRecognizer doesn't have a locationInNode: method, so I added one via a category. It just returns the touch location in node space:
#import "UIGestureRecognizer+locationInNode.h"
@implementation UIGestureRecognizer (locationInNode)
- (CGPoint) locationInNode:(CCNode*) node
{
CCDirector* dir = [CCDirector sharedDirector];
CGPoint touchLocation = [self locationInView: [self view]];
touchLocation = [dir convertToGL: touchLocation];
return [node convertToNodeSpace:touchLocation];
}
- (CGPoint) locationInWorld
{
CCDirector* dir = [CCDirector sharedDirector];
CGPoint touchLocation = [self locationInView: [self view]];
return [dir convertToGL: touchLocation];
}
@end

Change image when touched

I need an image to change itself when it's touched.
At the moment, the image that changes is the next image that spawns, not the one that was touched.
@implementation MyScene2
{
Marsugo *marsugo;
SKAction *actionMoveDown;
SKAction *actionMoveEnded;
SKTexture *rescued;
}
-(id)initWithSize:(CGSize)size
{
if (self = [super initWithSize:size]) {
// Initializes Background
self.currentBackground = [Background generateNewBackground];
[self addChild:self.currentBackground];
}
return self;
}
-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
for (UITouch *touch in touches) {
NSArray *nodes = [self nodesAtPoint:[touch locationInNode:self]];
for (SKNode *node in nodes) {
if ([node.name isEqualToString:playerObject]) {
rescued = [SKTexture textureWithImageNamed:@"rescued"];
marsugo.texture = rescued;
// this is changing the image of the next marsugo that spawns instead of self.
}
}
}
}
-(void)addMarsugos
{
// the marsugo is being initialized inside this method, that might be the issue i believe
// Create sprite
marsugo = [[Marsugo alloc]init];
marsugo.xScale = 0.3;
marsugo.yScale = 0.3;
marsugo.zPosition = 75;
// Bounds + Spawn Positions
int minX = marsugo.size.width;
int maxX = self.frame.size.width - marsugo.size.width;
int rangeX = maxX - minX;
int actualX = (arc4random() % rangeX) + minX;
marsugo.position = CGPointMake(actualX, self.frame.size.height + 50);
[self addChild:marsugo];
// Spawn Timer
int minDuration = 1;
int maxDuration = 10;
int rangeDuration = maxDuration - minDuration;
int actualDuration = (arc4random() % rangeDuration) + minDuration;
// Movement Actions
actionMoveDown = [SKAction moveTo:CGPointMake(actualX, CGRectGetMinY(self.frame)) duration:actualDuration];
actionMoveEnded = [SKAction removeFromParent];
[marsugo runAction:[SKAction sequence:@[actionMoveDown, actionMoveEnded]]];
NSLog(@"Marsugo X: %f - Speed: %i", marsugo.position.x, actualDuration);
}
@end
Like I said previously, I need the touched sprite itself to change texture, not the next spawning sprite.
Any help fixing this would be appreciated, thank you.
You don't want to use touchesBegan - you're better off using a tap gesture recognizer. Below, I have _testView, an instance variable I create and add to the view in viewDidLoad. I then create a tap gesture recognizer that calls a method when the view is tapped, and that method changes the color of the view - in your case, you can call your method that changes the image:
- (void)viewDidLoad {
[super viewDidLoad];
// create the test view and add it as a subview
_testView = [[UIView alloc] initWithFrame:CGRectMake(20, 100, 100, 100)];
_testView.backgroundColor = [UIColor redColor];
[self.view addSubview:_testView];
// create the tap gesture recognizer and add it to the test view
UITapGestureRecognizer *tapGestureRecognizer = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(changeColor)];
[_testView addGestureRecognizer:tapGestureRecognizer];
}
- (void)changeColor {
// here I'm changing the color, but you can do whatever you need once the tap is recognized
_testView.backgroundColor = [UIColor blueColor];
}
At first the view is red; when I tap it, it turns blue.
You can override -(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event in your Marsugo class and change the texture there. That way, you won't have to check if the touch is inside your node because that method won't be called if it's not.
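A sketch of that approach (it assumes Marsugo is an SKSpriteNode subclass and reuses the "rescued" texture from the question; the initial image name is an assumption):
// Marsugo.m - handle the touch on the sprite itself
- (instancetype)init {
    if (self = [super initWithImageNamed:@"marsugo"]) {
        self.userInteractionEnabled = YES; // without this, touchesBegan: is never delivered to the node
    }
    return self;
}

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    // self is the sprite that was actually touched, so only this one changes its texture
    self.texture = [SKTexture textureWithImageNamed:@"rescued"];
}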

Throwing ball in SpriteKit

Over the last few days, I experimented with SpriteKit and (amongst other things) tried to solve the problem of "throwing" a sprite by touching and dragging it.
The same question is on Stack Exchange, but they told me to first remove the bug and then let the code be reviewed.
I have tackled the major hurdles, and the code is working fine, but one little problem remains.
(Additionally, I'd be interested if somebody has a more polished or better working solution for this. I'd also love to hear suggestions about how to perfect the feeling of realism in this interaction.)
Sometimes, the ball just gets stuck.
If you want to reproduce it, just swipe the ball really fast and short. I suspect the gesture recognizer delivers the "touchesMoved" and "touchesEnded" callbacks asynchronously, and that this puts the physics simulation into an impossible state.
Can anybody provide a more reliable way to reproduce the issue, and what could be the solution?
The project is called ballThrow and BT is the class prefix.
#import "BTMyScene.h"
#import "BTBall.h"
@interface BTMyScene ()
@property (strong, nonatomic) NSMutableArray *balls;
@property (nonatomic) CGFloat yPosition;
@property (nonatomic) CGFloat xCenter;
@property (nonatomic) BOOL updated;
@end
@implementation BTMyScene
const CGFloat BALLDISTANCE = 80;
-(id)initWithSize:(CGSize)size {
if (self = [super initWithSize:size]) {
_balls = [NSMutableArray arrayWithCapacity:5];
//define the region where the balls will spawn
_yPosition = size.height/2.0;
_xCenter = size.width/2.0;
/* Setup your scene here */
self.backgroundColor = [SKColor colorWithRed:0.15 green:0.15 blue:0.3 alpha:1.0];
}
return self;
}
-(void)didMoveToView:(SKView *)view {
//Make an invisible border
//this seems to be offset... Why the heck is this?
self.physicsBody = [SKPhysicsBody bodyWithEdgeLoopFromRect:view.frame];
[self createBalls:2];
//move balls with pan gesture
//could be improved by combining with touchesBegan for first locating the touch
[self.view addGestureRecognizer:[[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(moveBall:)]];
}
-(void)moveBall:(UIPanGestureRecognizer *)pgr {
//depending on the touch phase do different things to the ball
if (pgr.state == UIGestureRecognizerStateBegan) {
[self attachBallToTouch:pgr];
}
else if (pgr.state == UIGestureRecognizerStateChanged) {
[self moveBallToTouch:pgr];
}
else if (pgr.state == UIGestureRecognizerStateEnded) {
[self stopMovingTouch:pgr];
}
else if (pgr.state == UIGestureRecognizerStateCancelled) {
[self stopMovingTouch:pgr];
}
}
-(void)attachBallToTouch:(UIPanGestureRecognizer *)touch {
//determine the ball to move
for (BTBall *ball in self.balls) {
if ([self isMovingBall:ball forGestureRecognizer:touch])
{
//stop ball movement
[ball.physicsBody setAffectedByGravity:NO];
[ball.physicsBody setVelocity:CGVectorMake(0, 0)];
//the ball might not be touched right in its center, so save the relative location
ball.touchLocation = [self convertPoint:[self convertPointFromView:[touch locationInView:self.view]] toNode:ball];
//update location once, just in case...
[self setBallPosition:ball toTouch:touch];
if (_updated) {
_updated = NO;
[touch setTranslation:CGPointZero inView:self.view];
}
}
}
}
-(void)moveBallToTouch:(UIPanGestureRecognizer *)touch {
for (BTBall *ball in self.balls) {
if ([self isMovingBall:ball forGestureRecognizer:touch])
{
//update the position of the ball and reset translation
[self setBallPosition:ball toTouch:touch];
if (_updated) {
_updated = NO;
[touch setTranslation:CGPointZero inView:self.view];
}
break;
}
}
}
-(void)setBallPosition:(BTBall *)ball toTouch:(UIPanGestureRecognizer *)touch {
//gesture recognizers only deliver locations in views, thus convert to node
CGPoint touchPosition = [self convertPointFromView:[touch locationInView:self.view]];
//update the location to the touch's location, offset by touch position in ball
[ball setNewPosition:CGPointApplyAffineTransform(touchPosition,
CGAffineTransformMakeTranslation(-ball.touchLocation.x,
-ball.touchLocation.y))];
//save the velocity between the last two touch records for later release
CGPoint velocity = [touch velocityInView:self.view];
//why the hell is the y coordinate inverted??
[ball setLastVelocity:CGVectorMake(velocity.x, -velocity.y)];
}
-(void)stopMovingTouch:(UIPanGestureRecognizer *)touch {
for (BTBall *ball in self.balls) {
if ([self isMovingBall:ball forGestureRecognizer:touch]) {
//release the ball: enable gravity impact and make it move
[ball.physicsBody setAffectedByGravity:YES];
[ball.physicsBody setVelocity:CGVectorMake(ball.lastVelocity.dx, ball.lastVelocity.dy)];
break;
}
}
}
-(BOOL)isMovingBall:(BTBall *)ball forGestureRecognizer:(UIPanGestureRecognizer *)touch {
//latest location of touch
CGPoint touchPosition = [touch locationInView:self.view];
//distance covered since the last call
CGPoint touchTranslation = [touch translationInView:self.view];
//position, where the ball must be, if it is the one
CGPoint translatedPosition = CGPointApplyAffineTransform(touchPosition,
CGAffineTransformMakeTranslation(-touchTranslation.x,
-touchTranslation.y));
CGPoint inScene = [self convertPointFromView:translatedPosition];
//determine whether the last touch location was on the ball
//if last touch location was on the ball, return true
return [[self nodesAtPoint:inScene] containsObject:ball];
}
-(void)update:(CFTimeInterval)currentTime {
//updating the ball position here improved performance dramatically
for (BTBall *ball in self.balls) {
//balls that move are not gravity affected
//easiest way to determine movement
if ([ball.physicsBody affectedByGravity] == NO) {
[ball setPosition:ball.newPosition];
}
}
//ball positions are refreshed
_updated = YES;
}
-(void)createBalls:(int)numberOfBalls {
for (int i = 0; i<numberOfBalls; i++) {
BTBall *ball;
//reuse balls (not necessary yet, but imagine balls spawning)
if(i<[self.balls count]) {
ball = self.balls[i];
}
else {
ball = [BTBall newBall];
}
[ball.physicsBody setAffectedByGravity:NO];
//calculate ballposition
CGPoint ballPosition = CGPointMake(self.xCenter-BALLSIZE/2+(i-(numberOfBalls-1)/2.0)*BALLDISTANCE, self.yPosition);
[ball setNewPosition:ballPosition];
[self.balls addObject:ball];
[self addChild:ball];
}
}
@end
The BTBall (subclass of SKShapeNode, because of the custom properties needed)
#import <SpriteKit/SpriteKit.h>
@interface BTBall : SKShapeNode
const extern CGFloat BALLSIZE;
//some properties for the throw animation
@property (nonatomic) CGPoint touchLocation;
@property (nonatomic) CGPoint newPosition;
@property (nonatomic) CGVector lastVelocity;
//create a standard ball
+(BTBall *)newBall;
@end
The BTBall.m with a class method to create new balls
#import "BTBall.h"
@implementation BTBall
const CGFloat BALLSIZE = 80;
+(BTBall *)newBall {
BTBall *ball = [BTBall node];
//look
[ball setPath:CGPathCreateWithEllipseInRect(CGRectMake(-BALLSIZE/2,-BALLSIZE/2,BALLSIZE,BALLSIZE), nil)];
[ball setFillColor:[UIColor redColor]];
[ball setStrokeColor:[UIColor clearColor]];
//physics
SKPhysicsBody *ballBody = [SKPhysicsBody bodyWithCircleOfRadius:BALLSIZE/2.0];
[ball setPhysicsBody:ballBody];
[ball.physicsBody setAllowsRotation:NO];
//ball is not moving at the beginning
ball.lastVelocity = CGVectorMake(0, 0);
return ball;
}
@end
1. A couple of problems (see the comments in the code) are related to the SpriteKit coordinate system. I just cannot get the border of the scene to align with its actual frame, even though I create it with exactly the code Apple gives us in the programming guide. I moved it from initWithSize: to didMoveToView: following a suggestion here on Stack Overflow, but that did not help. It is possible to manually offset the border with hardcoded values, but that does not satisfy me.
2. Does anybody know a debugging tool which colors the physics body of a sprite, so you can see its size and whether it is at the same position as the sprite?
Update: the problems above were solved by using the YMC Physics Debugger.
These lines of code are correct:
[ball setPath:CGPathCreateWithEllipseInRect(CGRectMake(-BALLSIZE/2,-BALLSIZE/2,BALLSIZE,BALLSIZE), nil)];
SKPhysicsBody *ballBody = [SKPhysicsBody bodyWithCircleOfRadius:BALLSIZE/2.0];
Because 0,0 is the center of the physics body, the origin of the path must be translated.
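As a side note on the debugging question: newer SKView versions can draw physics-body outlines themselves, so a third-party debugger is no longer strictly required. One line, assuming skView is your SKView:
skView.showsPhysics = YES; // outlines every physics body on top of the rendered scene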

Translating a UIView after rotating

I'm trying to translate a UIView that has been rotated and/or scaled using touches from the user. I translate it with user input as well:
- (void)handleObjectMove:(UIPanGestureRecognizer *)recognizer
{
static CGPoint lastPoint;
UIView *moveView = [recognizer view];
CGPoint newCoord = [recognizer locationInView:playArea];
// Check if this is the first touch
if( [recognizer state]==UIGestureRecognizerStateBegan )
{
// Store the initial touch so when we change positions we do not snap
lastPoint = newCoord;
}
// Create the frame offsets to use our finger position in the view.
float dX = newCoord.x;
float dY = newCoord.y;
dX-=lastPoint.x;
dY-=lastPoint.y;
// Figure out the translation based on how we are scaled
CGAffineTransform transform = [moveView transform];
CGFloat xScale = transform.a;
CGFloat yScale = transform.d;
dX/=xScale;
dY/=yScale;
lastPoint = newCoord;
[moveView setTransform:CGAffineTransformTranslate( transform, dX, dY )];
[recognizer setTranslation:CGPointZero inView:playArea];
}
But when I touch and move the view, it gets translated in all sorts of weird ways. Can I apply some sort of formula using the rotation values to translate properly?
The best solution I've found, requiring the least amount of math, was to store the original translation, rotation, and scaling values separately and redo the transform whenever one of them changes. My solution was to subclass UIView with the following properties:
@property (nonatomic) CGPoint translation;
@property (nonatomic) CGFloat rotation;
@property (nonatomic) CGPoint scaling;
And the following functions:
- (void)rotationDelta:(CGFloat)delta
{
[self setRotation:[self rotation]+delta];
}
- (void)scalingDelta:(CGPoint)delta
{
[self setScaling:
(CGPoint){ [self scaling].x*delta.x, [self scaling].y*delta.y }];
}
- (void)translationDelta:(CGPoint)delta
{
[self setTranslation:
(CGPoint){ [self translation].x+delta.x, [self translation].y+delta.y }];
}
- (void)transformMe
{
// Start with the translation
CGAffineTransform transform = CGAffineTransformMakeTranslation( [self translation].x, [self translation].y );
// Apply scaling
transform = CGAffineTransformScale( transform, [self scaling].x, [self scaling].y );
// Apply rotation
transform = CGAffineTransformRotate( transform, [self rotation] );
[self setTransform:transform];
}
- (void)setScaling:(CGPoint)newScaling
{
scaling = newScaling;
[self transformMe];
}
- (void)setRotation:(CGFloat)newRotation
{
rotation = newRotation;
[self transformMe];
}
- (void)setTranslation:(CGPoint)newTranslation
{
translation = newTranslation;
[self transformMe];
}
And to use the following in the handlers:
- (void)handleObjectPinch:(UIPinchGestureRecognizer *)recognizer
{
if( [recognizer state] == UIGestureRecognizerStateEnded
|| [recognizer state] == UIGestureRecognizerStateChanged )
{
// Get my stuff
if( !selectedView )
return;
SelectableImageView *view = selectedView;
CGFloat scaleDelta = [recognizer scale];
[view scalingDelta:(CGPoint){ scaleDelta, scaleDelta }];
[recognizer setScale:1.0];
}
}
- (void)handleObjectMove:(UIPanGestureRecognizer *)recognizer
{
static CGPoint lastPoint;
SelectableImageView *moveView = (SelectableImageView *)[recognizer view];
CGPoint newCoord = [recognizer locationInView:playArea];
// Check if this is the first touch
if( [recognizer state]==UIGestureRecognizerStateBegan )
{
// Store the initial touch so when we change positions we do not snap
lastPoint = newCoord;
}
// Create the frame offsets to use our finger position in the view.
float dX = newCoord.x;
float dY = newCoord.y;
dX-=lastPoint.x;
dY-=lastPoint.y;
lastPoint = newCoord;
[moveView translationDelta:(CGPoint){ dX, dY }];
[recognizer setTranslation:CGPointZero inView:playArea];
}
- (void)handleRotation:(UIRotationGestureRecognizer *)recognizer
{
if( [recognizer state] == UIGestureRecognizerStateEnded
|| [recognizer state] == UIGestureRecognizerStateChanged )
{
if( !selectedView )
return;
SelectableImageView *view = selectedView;
CGFloat rotation = [recognizer rotation];
[view rotationDelta:rotation];
[recognizer setRotation:0.0];
}
}
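For completeness, a sketch of how the three handlers above might be wired up (playArea, selectedView and SelectableImageView come from the answer; where exactly you do this setup is up to you):
// ViewController.m - sketch: create the selectable view and attach the recognizers
- (void)viewDidLoad {
    [super viewDidLoad];
    selectedView = [[SelectableImageView alloc] initWithFrame:CGRectMake(40, 40, 120, 120)];
    [playArea addSubview:selectedView];

    [playArea addGestureRecognizer:[[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(handleObjectMove:)]];
    [playArea addGestureRecognizer:[[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(handleObjectPinch:)]];
    [playArea addGestureRecognizer:[[UIRotationGestureRecognizer alloc] initWithTarget:self action:@selector(handleRotation:)]];
}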
Try changing moveView.center instead of setting (x, y) directly or using CGAffineTransformTranslate.
Here is the Swift 4/5 version for a transformable UIView
class TransformableImageView: UIView{
var translation:CGPoint = CGPoint(x:0,y:0)
var scale:CGPoint = CGPoint(x:1, y:1)
var rotation:CGFloat = 0
override init (frame : CGRect) {
super.init(frame: frame)
}
required init?(coder aDecoder: NSCoder) {
fatalError("init(coder:) has not been implemented")
}
func rotationDelta(delta:CGFloat) {
rotation = rotation + delta
}
func scaleDelta(delta:CGPoint){
scale = CGPoint(x: scale.x*delta.x, y: scale.y * delta.y)
}
func translationDelta(delta:CGPoint){
translation = CGPoint(x: translation.x+delta.x, y: translation.y + delta.y)
}
func transform(){
self.transform = CGAffineTransform.identity.translatedBy(x: translation.x, y: translation.y).scaledBy(x: scale.x, y: scale.y ).rotated(by: rotation )
}
}
I'm leaving this here as I also encountered the same problem. Here is how to do it in Swift 2:
Add your top view as subview to your bottom view:
self.view.addSubview(topView)
Then add a PanGesture Recognizer to move on touch:
//Add PanGestureRecognizer to move
let panMoveGesture = UIPanGestureRecognizer(target: self, action: #selector(YourViewController.moveViewPanGesture(_:)))
topView.addGestureRecognizer(panMoveGesture)
And the function to move:
//Move function
func moveViewPanGesture(recognizer:UIPanGestureRecognizer)
{
if recognizer.state == .Changed {
var center = recognizer.view!.center
let translation = recognizer.translationInView(recognizer.view?.superview)
center = CGPoint(x:center.x + translation.x,
y:center.y + translation.y)
recognizer.view!.center = center
recognizer.setTranslation(CGPoint.zero, inView: recognizer.view)
}
}
Basically, you need to translate your view based on the bottom view, which is its superview, not based on the view itself - hence recognizer.view?.superview.
Or, if you also rotate the bottom view, you can add a view that will not have any transformation, add your bottom view to that non-transforming view (the very bottom view), and add your top view to the bottom view as a subview. Then you should translate your top view based on the very bottom view.
