Keeping SKSpriteNode in bounds of screen - ios

I am trying to check whether my SKSpriteNode will remain within the bounds of the screen during a drag gesture. I've gotten to the point where I'm pretty sure my logic for approaching the problem is right, but my implementation is wrong. Basically, before the player is moved by the translation, the program checks whether it's in bounds. Here is my code:
-(CGPoint)checkBounds:(CGPoint)newLocation {
    CGSize screenSize = self.size;
    CGPoint returnValue = newLocation;
    if (newLocation.x <= self.player.position.x) {
        returnValue.x = MIN(returnValue.x, 0);
    } else {
        returnValue.x = MAX(returnValue.x, screenSize.width);
    }
    if (newLocation.y <= self.player.position.x) {
        returnValue.y = MIN(-returnValue.y, 0);
    } else {
        returnValue.y = MAX(returnValue.y, screenSize.height);
    }
    NSLog(@"%@", NSStringFromCGPoint(returnValue));
    return returnValue;
}
-(void)dragPlayer:(UIPanGestureRecognizer *)gesture {
    CGPoint translation = [gesture translationInView:self.view];
    CGPoint newLocation = CGPointMake(self.player.position.x + translation.x, self.player.position.y - translation.y);
    self.player.position = [self checkBounds:newLocation];
}
For some reason, my player is going off screen. I think my use of the MIN & MAX macros may be wrong, but I am not sure.

Exactly: you mixed up MIN and MAX. MIN(x, 0) returns the lower of x and 0, so the result will always be 0 or less.
On one line you're also using -returnValue.y, which makes no sense here.
You can (and, for readability, should) drop the if/else entirely: used correctly, MIN and MAX make it unnecessary.
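For example, a minimal corrected sketch (it clamps the position to the scene's size and ignores the sprite's own width/height, which you may still want to account for):

-(CGPoint)checkBounds:(CGPoint)newLocation {
    CGSize screenSize = self.size;
    CGPoint returnValue = newLocation;
    // Clamp to the screen: never below 0, never beyond the scene's width/height.
    returnValue.x = MAX(0, MIN(returnValue.x, screenSize.width));
    returnValue.y = MAX(0, MIN(returnValue.y, screenSize.height));
    return returnValue;
}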

Related

Why is scaling/zooming-in an SKNode forcing the view into the left of the screen?

I'm following Ray Wenderlich's 'iOS Games by Tutorials' & I got everything in my world set up & working: the entire game is in Landscape mode & there's one SKNode called _worldNode & everything, except _uiNode (in-game UI), is added to it. The player character walks to a touched location & _worldNode moves under him like a treadmill.

However, like all functionality (or as they call it: "juice") addicts, I wanted to add zoom in/out functionality through UIPinchGestureRecognizer by scaling _worldNode, which I did. But now every time I zoom in, the "camera" moves to the bottom left. Zooming out moves the view to the top right of the screen. It's a mess. I need the view to stay centered on the player character & I've tried everything I could come up with & find online. The closest I came was with the technique from SKNode scale from the touched point, but I still get the bloody bottom left/top right mess.

I realized this mess happens only when I update the camera/view (it's really _worldNode.position). Therefore, the 'didSimulatePhysics' and 'didFinishUpdate' methods don't help. In fact, even a one-time button that slightly moves/updates the camera view (_worldNode.position) still gives me the bottom left/top right problem. Here is my code. I hope someone can take a look & tell me what to modify to get things working.
@interface GameScene () <SKPhysicsContactDelegate, UIGestureRecognizerDelegate>
{
    UIPinchGestureRecognizer *pinchGestureRecognizer;
}
//Properties of my GameScene.
@property SKNode *worldNode;
@property etc. etc.

//Called by the -(id)initWithSize:(CGSize)size method & creates the in-game world.
-(void)createWorld
{
    [_worldNode addChild:_backgroundLayer];
    [self addChild:_worldNode];
    self.anchorPoint = CGPointMake(0.5, 0.5); //RW tutorial did it this way.
    _worldNode.position = CGPointMake(-_backgroundLayer.layerSize.width/2, -_backgroundLayer.layerSize.height/2); //Center.
    //Then I add every node to _worldNode from this point on.
}

//Necessary for gesture recognizers.
-(void)didMoveToView:(SKView *)view
{
    pinchGestureRecognizer = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(handleZoomFrom:)];
    [view addGestureRecognizer:pinchGestureRecognizer];
}
//"Camera" follows player character. 'didSimulatePhysics' doesn't help either.
-(void)didFinishUpdate
{
//IF '_pinchingScreen' == YES then screen pinching is in progress. Thus, camera position update will seize for the duration of pinching.
if (!_pinchingScreen)
{
_worldNode.position = [self pointToCenterViewOn:_player.position];
}
}
//Method that is called by my UIPinchGestureRecognizer. Answer from: https://stackoverflow.com/questions/21900614/sknode-scale-from-the-touched-point?lq=1
-(void)handleZoomFrom:(UIPinchGestureRecognizer*)recognizer
{
    CGPoint anchorPoint = [recognizer locationInView:recognizer.view];
    anchorPoint = [self convertPointFromView:anchorPoint];
    if (recognizer.state == UIGestureRecognizerStateBegan)
    {
        // No code needed for zooming...
        _player.movementMode = 2; //Stop character from moving from touches.
        _pinchingScreen = YES; //Notifies 'didFinishUpdate' method that pinching began & camera position updates should stop for now.
    }
    else if (recognizer.state == UIGestureRecognizerStateChanged)
    {
        //Technique from the above Stack Overflow link - Commented out.
        // CGPoint anchorPointInMySkNode = [_worldNode convertPoint:anchorPoint fromNode:self];
        //
        // [_worldNode setScale:(_worldNode.xScale * recognizer.scale)];
        //
        // CGPoint mySkNodeAnchorPointInScene = [self convertPoint:anchorPointInMySkNode fromNode:_worldNode];
        // CGPoint translationOfAnchorInScene = CGPointSubtract(anchorPoint, mySkNodeAnchorPointInScene);
        //
        // _worldNode.position = CGPointAdd(_worldNode.position, translationOfAnchorInScene);
        //
        // recognizer.scale = 1.0;

        //Modified scale: 2.0
        if (recognizer.scale > _previousWorldScale)
        {
            _previousWorldScale = recognizer.scale;
            CGPoint anchorPointInMySkNode = [_worldNode convertPoint:anchorPoint fromNode:self];
            [_worldNode setScale:2.0];
            CGPoint worldNodeAnchorPointInScene = [self convertPoint:anchorPointInMySkNode fromNode:_worldNode];
            CGPoint translationOfAnchorInScene = CGPointSubtract(anchorPoint, worldNodeAnchorPointInScene);
            _worldNode.position = CGPointAdd(_worldNode.position, translationOfAnchorInScene);
            //[_worldNode runAction:[SKAction scaleTo:2.0 duration:0]]; //This works too.
        }
        //Original scale: 1.0
        if (recognizer.scale < _previousWorldScale)
        {
            _previousWorldScale = recognizer.scale;
            CGPoint anchorPointInMySkNode = [_worldNode convertPoint:anchorPoint fromNode:self];
            [_worldNode setScale:1.0];
            CGPoint worldNodeAnchorPointInScene = [self convertPoint:anchorPointInMySkNode fromNode:_worldNode];
            CGPoint translationOfAnchorInScene = CGPointSubtract(anchorPoint, worldNodeAnchorPointInScene);
            _worldNode.position = CGPointAdd(_worldNode.position, translationOfAnchorInScene);
            //[_worldNode runAction:[SKAction scaleTo:1.0 duration:0]]; //This works too.
        }
    }
    else if (recognizer.state == UIGestureRecognizerStateEnded)
    {
        // No code needed here for zooming...
        _pinchingScreen = NO; //Notifies 'didFinishUpdate' method that pinching has stopped & camera position updates should resume.
        _player.movementMode = 0; //Resume character movement.
    }
}
So could anyone please tell me, by looking at the above code, why the camera/view shifts to the bottom left upon zooming in? I've been sitting on this problem for several days & I still can't figure it out.
Thanks to JKallio, who wrote detailed code in his answer to Zoom and Scroll SKNode in SpriteKit, I've been able to find a piece of code that solves the problem. There's a method called 'centerOnNode' that is small, elegant & solves my problem perfectly. Here it is for anyone who just needs it:
-(void)centerOnNode:(SKNode*)node
{
    CGPoint posInScene = [node.scene convertPoint:node.position fromNode:node.parent];
    node.parent.position = CGPointMake(node.parent.position.x - posInScene.x, node.parent.position.y - posInScene.y);
}
Then you call that method inside your 'didSimulatePhysics' or inside 'didFinishUpdate' like so:
//"Camera" follows player character.
-(void)didFinishUpdate
{
//IF '_pinchingScreen' == YES then screen pinching is in progress. Thus, camera position update will seize for the duration of pinching.
if (!_pinchingScreen)
{
if (_previousWorldScale > 1.0) //If _worldNode scale is greater than 1.0
{
[self centerOnNode:_player]; //THIS IS THE METHOD THAT SOLVES THE PROBLEM!
}
else if (_previousWorldScale == 1.0) //Standard _worldNode scale: 1.0
{
_worldNode.position = [self pointToCenterViewOn:_player.position];
}
}
}
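For anyone wondering what pointToCenterViewOn: refers to above: it's the camera helper from the Ray Wenderlich tutorial and isn't shown in the question. A minimal sketch of that kind of clamp (the exact bounds, the Clamp helper & the names here are assumptions on my part, not the book's exact code) looks roughly like this:

static inline CGFloat Clamp(CGFloat value, CGFloat min, CGFloat max)
{
    return value < min ? min : (value > max ? max : value);
}

//Returns the _worldNode position that keeps 'centerOn' in the middle of the screen,
//clamped so the camera never scrolls past the edges of the background layer.
-(CGPoint)pointToCenterViewOn:(CGPoint)centerOn
{
    CGFloat x = Clamp(centerOn.x, self.size.width/2, _backgroundLayer.layerSize.width - self.size.width/2);
    CGFloat y = Clamp(centerOn.y, self.size.height/2, _backgroundLayer.layerSize.height - self.size.height/2);
    return CGPointMake(-x, -y);
}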
P.S. Just remember the question wasn't HOW to zoom in, but how to fix the camera once the world is ALREADY zoomed in.

SpriteKit - Determine Side A Square Collided With

So for the App I'm working on, I have two cubes colliding. I check for this in the standard way. The app tells me when they collide in my "didBeginContact" method.
-(void)didBeginContact:(SKPhysicsContact *)contact {
    if (contact.bodyA.categoryBitMask == WALL_CATEGORY && contact.bodyB.categoryBitMask == CHARACTER_CATEGORY) {
        CGPoint point = contact.contactPoint;
    }
}
So I know where the collision takes place, but because these are two squares it can be at any point along a side, including the corners. So how would I check whether the collision was on the left/right/top/bottom exclusively?
Edit: Correct Answer: Might not be the cleanest way to do it but it works. Hopefully it'll help someone in the future.
m_lNode = [SKNode node];
m_lNode.position = CGPointMake(-(CHARACTER_SIZE / 2), 0);
m_lNode.physicsBody = [SKPhysicsBody bodyWithRectangleOfSize:CGSizeMake(1, m_character.size.height)];
m_lNode.physicsBody.allowsRotation = NO;
m_lNode.physicsBody.usesPreciseCollisionDetection = YES;
m_lNode.physicsBody.categoryBitMask = CHARACTER_L_CATEGORY;
m_rNode = [SKNode node];
m_rNode.position = CGPointMake((CHARACTER_SIZE / 2), 0);
m_rNode.physicsBody = [SKPhysicsBody bodyWithRectangleOfSize:CGSizeMake(1, m_character.size.height)];
m_rNode.physicsBody.allowsRotation = NO;
m_rNode.physicsBody.usesPreciseCollisionDetection = YES;
m_rNode.physicsBody.categoryBitMask = CHARACTER_R_CATEGORY;
m_tNode = [SKNode node];
m_tNode.position = CGPointMake(0, (CHARACTER_SIZE / 2));
m_tNode.physicsBody = [SKPhysicsBody bodyWithRectangleOfSize:CGSizeMake(m_character.size.width , 1)];
m_tNode.physicsBody.allowsRotation = NO;
m_tNode.physicsBody.usesPreciseCollisionDetection = YES;
m_tNode.physicsBody.categoryBitMask = CHARACTER_T_CATEGORY;
m_bNode = [SKNode node];
m_bNode.position = CGPointMake(0, -(CHARACTER_SIZE / 2));
m_bNode.physicsBody = [SKPhysicsBody bodyWithRectangleOfSize:CGSizeMake(m_character.size.width, 1)];
m_bNode.physicsBody.allowsRotation = NO;
m_bNode.physicsBody.usesPreciseCollisionDetection = YES;
m_bNode.physicsBody.categoryBitMask = CHARACTER_B_CATEGORY;
[m_character addChild:m_tNode];
[m_character addChild:m_bNode];
[m_character addChild:m_lNode];
[m_character addChild:m_rNode];
-(void)didBeginContact:(SKPhysicsContact *)contact {
    if (contact.bodyA.categoryBitMask == WALL_CATEGORY) {
        switch (contact.bodyB.categoryBitMask) {
            case CHARACTER_T_CATEGORY:
                NSLog(@"Top");
                m_isHitTop = true;
                break;
            case CHARACTER_B_CATEGORY:
                NSLog(@"Bottom");
                m_isHitBottom = true;
                break;
            case CHARACTER_L_CATEGORY:
                NSLog(@"Left");
                m_isHitLeft = true;
                break;
            case CHARACTER_R_CATEGORY:
                NSLog(@"Right");
                m_isHitRight = true;
                break;
        }
    }
}
Added some relevant code. It's my code, so there are project-specific variables among other things, but you should be able to figure it out.
I think the best way to determine which side is involved in your contact (top, left, bottom, or right) is to:
First, calculate the midpoint of each side (up, down, left, and right), for example for a square sprite:
let halfWidth = self.frame.width/2
let halfHeight = self.frame.height/2
let down = CGPointMake(self.frame.origin.x+halfWidth,self.frame.origin.y)
let up = CGPointMake(self.frame.origin.x+halfWidth,self.frame.origin.y+self.frame.size.height)
let left = CGPointMake(self.frame.origin.x,self.frame.origin.y+halfHeight)
let right = CGPointMake(self.frame.origin.x+self.frame.size.width,self.frame.origin.y+halfHeight)
Then calculate the distance between the contactPoint and each of those side points (up, down, left, and right).
This step can be done with this little function:
func getDistance(p1: CGPoint, p2: CGPoint) -> CGFloat {
    let xDist = (p2.x - p1.x)
    let yDist = (p2.y - p1.y)
    return CGFloat(sqrt((xDist * xDist) + (yDist * yDist)))
}
Finally, the side point with the lowest calculated distance is the one nearest the contactPoint, which tells you which side was involved.
The advantages of this method:
- You don't have to add invisible "ghost" child nodes and their physics bodies.
- It also works inside a dynamic mutable CGPath, not only with known rectangular boundaries.
- It's fast and only a few lines of code, and with a few more lines you can also detect the diagonals and make your algorithm more precise.
The easiest way is to add child sprites to the top, left, right, and bottom of your squares. Add physics bodies to those, and then you can tell where things are colliding. I'd recommend trying this method first unless you have many, many squares on the screen.
If you have a ton of squares on the screen and you're worried about performance, then maybe use contact.contactPoint and convert that point to square coordinates. Given the center point of the square, the angle of the square's rotation, and that point, you should be able to tell where the square collided. That would require some math, and I was wary of writing up that kind of solution when the first one may be all you really need.
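If you do go that route, here's a rough, untested sketch of the idea (assuming, as in your snippet, that the square is bodyB). Converting the contact point into the square's own coordinate space lets convertPoint:fromNode: handle the rotation math for you:

-(void)didBeginContact:(SKPhysicsContact *)contact {
    SKSpriteNode *square = (SKSpriteNode *)contact.bodyB.node;
    //contactPoint is in scene coordinates; bring it into the square's local space,
    //which accounts for the square's position and rotation.
    CGPoint local = [square convertPoint:contact.contactPoint fromNode:square.scene];
    CGFloat halfW = square.size.width / 2;
    CGFloat halfH = square.size.height / 2;
    //Whichever normalized component is larger tells you which side was hit.
    if (fabs(local.x) / halfW > fabs(local.y) / halfH) {
        NSLog(@"%@", local.x > 0 ? @"Right" : @"Left");
    } else {
        NSLog(@"%@", local.y > 0 ? @"Top" : @"Bottom");
    }
}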

Collision detection between objects in Tiled and a Sprite's bounding box in cocos2d

I am trying to make a platform game for the iPhone, using cocos2d and Tiled (for the maps).
All of the tutorials I've seen on the net use Layers in Tiled to do collision detection.
I want to use objects to do that, not Layers.
With objects you can create custom shapes that give the game a better sense of 'reality'.
To give an example of what i mean :
I've drawn the ground as a background and created an object layer on top.
I want to detect player collision with that, instead of the background tile.
Now, using the most famous tutorial out there: http://www.raywenderlich.com/15230/how-to-make-a-platform-game-like-super-mario-brothers-part-1
I am trying to rewrite the checkForAndResolveCollisions method to check collisions against the objects instead.
The problem is that Tiled's coordinate system is different from cocos2d's. Tiled starts from the top-left corner, cocos2d from the bottom-left corner. And not only that: I noticed that the width and height properties of objects in Tiled (probably) don't correspond to the same values on iPhone devices.
The rectangle in my object layer has these properties: its w/h is 480/128 in Tiled (for retina devices), which means it's probably huge inside the map if I keep it like this. My guess is I have to divide this by 2.
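If that's right, the conversion I'm after should look roughly like this (an untested sketch; the /2 retina factor and the y-flip are guesses on my part, and 'map' is the same CCTiledMap used below):

- (CGRect)rectForObject:(NSDictionary *)obj
{
    CGFloat scale = 2.0; //retina TMX pixels -> points (my assumption)
    CGFloat x = [obj[@"x"] floatValue] / scale;
    CGFloat y = [obj[@"y"] floatValue] / scale;
    CGFloat w = [obj[@"width"] floatValue] / scale;
    CGFloat h = [obj[@"height"] floatValue] / scale;
    //Tiled measures y from the top of the map, cocos2d from the bottom, so flip it.
    CGFloat mapHeight = (map.mapSize.height * map.tileSize.height) / scale;
    return CGRectMake(x, mapHeight - y - h, w, h);
}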
So far I've got this:
-(void)checkForAndResolveObjCollisions:(Player *)p {
    CCTiledMapObjectGroup *objectGroup = [map objectGroupNamed:@"Collision"];
    NSArray *tiles = [objectGroup objects];
    CGFloat x, y, wobj, hobj;
    for (NSDictionary *dic in tiles) {
        CGRect pRect = [p collisionBoundingBox]; //3
        x = [[dic valueForKey:@"x"] floatValue];
        y = [[dic valueForKey:@"y"] floatValue];
        wobj = [[dic valueForKey:@"width"] floatValue];
        hobj = [[dic valueForKey:@"height"] floatValue];
        CGPoint position = CGPointMake(x, y);
        CGPoint objPos = [self tileForPosition:position];
        CGRect tileRect = CGRectMake(objPos.x, objPos.y, wobj/2, hobj/2);
        if (CGRectIntersectsRect(pRect, tileRect)) {
            CCLOG(@"INTERSECT");
            CGRect intersection = CGRectIntersection(pRect, tileRect);
            NSUInteger tileIndx = [tiles indexOfAccessibilityElement:dic];
            if (tileIndx == 0) {
                //tile is directly below player
                p.desiredPosition = ccp(p.desiredPosition.x, p.desiredPosition.y + intersection.size.height);
                p.velocity = ccp(p.velocity.x, 0.0);
                p.onGround = YES;
            } else if (tileIndx == 1) {
                //tile is directly above player
                p.desiredPosition = ccp(p.desiredPosition.x, p.desiredPosition.y - intersection.size.height);
                p.velocity = ccp(p.velocity.x, 0.0);
            } else if (tileIndx == 2) {
                //tile is left of player
                p.desiredPosition = ccp(p.desiredPosition.x + intersection.size.width, p.desiredPosition.y);
            } else if (tileIndx == 3) {
                //tile is right of player
                p.desiredPosition = ccp(p.desiredPosition.x - intersection.size.width, p.desiredPosition.y);
            } else {
                if (intersection.size.width > intersection.size.height) {
                    //tile is diagonal, but resolving collision vertically
                    p.velocity = ccp(p.velocity.x, 0.0);
                    float resolutionHeight;
                    if (tileIndx > 5) {
                        resolutionHeight = -intersection.size.height;
                        p.onGround = YES;
                    } else {
                        resolutionHeight = intersection.size.height;
                    }
                    p.desiredPosition = ccp(p.desiredPosition.x, p.desiredPosition.y + resolutionHeight);
                } else {
                    float resolutionWidth;
                    if (tileIndx == 6 || tileIndx == 4) {
                        resolutionWidth = intersection.size.width;
                    } else {
                        resolutionWidth = -intersection.size.width;
                    }
                    p.desiredPosition = ccp(p.desiredPosition.x + resolutionWidth, p.desiredPosition.y);
                }
            }
        }
    }
    p.position = p.desiredPosition; //8
}
- (CGPoint)tileForPosition:(CGPoint)p
{
    NSInteger x = (NSInteger)(p.x / map.tileSize.width);
    NSInteger y = (NSInteger)(((map.mapSize.height * map.tileSize.width) - p.y) / map.tileSize.width);
    return ccp(x, y);
}
I am getting the object's x, y, w, h and trying to convert them to cocos2d coordinates and sizes.
The above translates to this:
Dimens: 480.000000, 128.000000
Coord: 0.000000, 40.000000
Basically it's a mess, and it's not working at all. The player just falls right through.
I am surprised no one has done collision detection based on objects before, unless I am wrong.
Does anyone know if this can be done, or how it can be done?
Kinda like what he does here: https://www.youtube.com/watch?feature=player_detailpage&v=2_KB4tOTH6w#t=30
Sorry for the long post.
Thanks for any answers in advance.

In Cocos2d setting anchor point on a layer for pinch to zoom not working as expected

Right now I'm trying to implement a pinch-to-zoom feature in my Cocos2D game for iOS and I'm encountering really strange behavior. My goal is to use the handler for UIPinchGestureRecognizer to scale one of the CCNodes that represents the game level when a player pinches the screen. This has the effect of zooming.
The issue is that if I set the anchor for zooming to some arbitrary value such as (.5, .5) (the center of the level CCNode), it scales perfectly around the center of the level, but I want to scale around the center of the player's view. Here is what that looks like:
- (void)handlePinchFrom:(UIPinchGestureRecognizer*)recognizer
{
    if (recognizer.state == UIGestureRecognizerStateEnded)
    {
        _isScaling = false;
        _prevScale = 1.0;
    }
    else
    {
        _isScaling = true;
        float deltaScale = 1.0 - _prevScale + recognizer.scale;
        // Obtain the center of the camera.
        CGPoint center = CGPointMake(self.contentSize.width/2, self.contentSize.height/2);
        CGPoint worldPoint = [self convertToWorldSpace:center];
        CGPoint areaPoint = [_area convertToNodeSpace:worldPoint];
        // Now set anchor point to where areaPoint is relative to the whole _area contentSize
        float areaLocationX = areaPoint.x / _area.contentSize.width;
        float areaLocationY = areaPoint.y / _area.contentSize.height;
        [_area moveDebugParticle:areaPoint];
        [_area setAnchorPoint:CGPointMake(areaLocationX, areaLocationY)];
        if (_area.scale*deltaScale <= ZOOM_RADAR_THRESHOLD)
        {
            _area.scale = ZOOM_RADAR_THRESHOLD;
        }
        else if (_area.scale*deltaScale >= ZOOM_MAX)
        {
            _area.scale = ZOOM_MAX;
        }
        else
        {
            // First set the anchor point.
            _area.scale *= deltaScale;
        }
        _prevScale = recognizer.scale;
    }
}
If I set the anchor point to (.5, .5) and print the calculated anchor point (areaLocationX, areaLocationY) using CCLOG, it looks right, but when I actually set the anchor point to these calculated values the layer scales out of control and entirely leaves the player's view. The anchor point takes on crazy values like (-80, 10), although it should generally be relatively close to the range of 0 to 1 for either coordinate.
What might be causing this kind of behavior?
OK, it looks like I solved it. I was continually moving the anchor point during the scaling rather than setting it once at the very beginning. The result was really erratic scaling rather than something smooth and expected. The resolved code looks like this:
- (void)handlePinchFrom:(UIPinchGestureRecognizer*)recognizer
{
    if (recognizer.state == UIGestureRecognizerStateEnded)
    {
        _isScaling = false;
        _prevScale = 1.0;
    }
    else
    {
        if (!_isScaling)
        {
            // Obtain the center of the camera.
            CGPoint center = CGPointMake(self.contentSize.width/2, self.contentSize.height/2);
            CGPoint areaPoint = [_area convertToNodeSpace:center];
            // Now set anchor point to where areaPoint is relative to the whole _area contentSize
            float anchorLocationX = areaPoint.x / _area.contentSize.width;
            float anchorLocationY = areaPoint.y / _area.contentSize.height;
            [_area moveDebugParticle:areaPoint];
            [_area setAnchorPoint:CGPointMake(anchorLocationX, anchorLocationY)];
            CCLOG(@"Anchor Point: (%f, %f)", anchorLocationX, anchorLocationY);
        }
        _isScaling = true;
        float deltaScale = 1.0 - _prevScale + recognizer.scale;
        if (_area.scale*deltaScale <= ZOOM_RADAR_THRESHOLD)
        {
            _area.scale = ZOOM_RADAR_THRESHOLD;
        }
        else if (_area.scale*deltaScale >= ZOOM_MAX)
        {
            _area.scale = ZOOM_MAX;
        }
        else
        {
            _area.scale *= deltaScale;
        }
        _prevScale = recognizer.scale;
    }
}

Reading nodes position after physics simulation

So I'm trying to make an app where you pick up an object with your finger, it moves with your finger, and then you flick it away to throw it. I found some threads for similar games, but not with this particular problem.
Since I couldn't really keep the object attached to my finger by applying force or impulse, I disable the physics on it while I'm "holding" it and measure the velocity, so that when I release it I can apply that vector as an impulse.
This works excellently visually, but when I try to read the position of the object it's not synced up with "reality". I've added a condition so that if the velocity isn't strong enough when you release the object, it is supposed to return to its starting position, but it doesn't move at all (except if you manage to throw it; that works fine). When I output .position to the console it didn't correlate with what I was seeing on the screen.
Any ideas? I know you shouldn't set position and simulate physics at the same time, but I disable the physics while I'm dragging the object. Why won't it keep track of .position just because it is being moved by physics?
Here I try to throw a potion:
- (void)handlePan:(UIPanGestureRecognizer *)recognizer {
    if (recognizer.state == UIGestureRecognizerStateBegan) { //Picked up
        potion.physicsBody.dynamic = NO;
        potion.physicsBody.velocity = CGVectorMake(0.0, 0.0);
        potion.physicsBody.angularVelocity = 0.0;
        potion.zRotation = 0.0;
        CGPoint touchLocation = [recognizer locationInView:recognizer.view];
        touchLocation = [self convertPointFromView:touchLocation];
        [self selectNodeForTouch:touchLocation];
    } else if (recognizer.state == UIGestureRecognizerStateChanged) { //Moved around
        CGPoint translation = [recognizer translationInView:recognizer.view];
        translation = CGPointMake(translation.x, -translation.y);
        [self panForTranslation:translation];
        [recognizer setTranslation:CGPointZero inView:recognizer.view];
        velocity = CGVectorMake(translation.x, translation.y);
    } else if (recognizer.state == UIGestureRecognizerStateEnded) { //Dropped
        _selectedNode = nil;
        potion.physicsBody.dynamic = YES;
        if (potion.position.y < 250 && velocity.dy < 20) { //Returned to starting position
            NSLog(@"Pos:%f,%f", potion.position.x, potion.position.y);
            potion.physicsBody.velocity = CGVectorMake(0.0, 0.0);
            potion.physicsBody.angularVelocity = 0.0;
            potion.zRotation = 0.0;
            potion.position = CGPointMake(gameWidth / 2, 150);
            NSLog(@"Returning potion");
            NSLog(@"Pos:%f,%f", potion.position.x, potion.position.y);
        } else { //Flung away
            [potion.physicsBody applyImpulse:velocity];
            NSLog(@"Add impulse: %f, %f", velocity.dx, velocity.dy);
            NSLog(@"Pos:%f,%f", potion.position.x, potion.position.y);
        }
    }
}
EDIT: After some more testing I've observed that if it has never been thrown (dynamic with an impulse), it always shows (160, 150) as its position no matter where it is. Sometimes it shows something like 160.00031 and I don't know why. After it has been thrown it shows the correct position in the NSLog, but changing the position still doesn't work.
EDIT 2: Just saw that it doesn't show the correct position after a throw either; the values are just different, and they seem offset. I throw objects in the +y direction (up), and when I put the object down in the left corner, position.x is correct but position.y is offset by about 300. So where y should be 0, it's 300.
