Can SKPhysicsJoints be used in a scrolling SpriteKit game? - ios

This seems like a major oversight by Apple: since SKPhysicsJoints have their anchor points set in scene coordinates, doing any kind of scrolling game appears to be impossible.
To simulate a camera in SpriteKit you create a WorldNode which contains all of the gameplay elements, and then pan that around the scene. Unfortunately, doing this causes the Scene coordinates of every object in the game to change on every frame as you pan the world around. In turn, this breaks the joint anchor points, and things go berserk.
There isn't even a way to change a joint's anchor point after it's created, so I can't just update the coordinate every frame. It would seem that using SKPhysicsJoint in a scrolling game is not an option.
Does anyone know of a way around this?

OK, I think I figured out what was going on, and I was totally incorrect in my original assumption. The reason my anchor points looked incorrect is that the convertPoint:fromNode: call was returning scene coordinates that were wrong. After several hours I realized they were off by exactly half the screen dimensions. My scene has an anchorPoint of (0.5, 0.5), which throws off the conversion values. So, if I simply offset the point by width/2, height/2, it's correct:
CGPoint pt = CGPointMake(anchorWorldX, anchorWorldY);
pt = [gGameScene convertPoint:pt fromNode:gGameWorld]; // convert to scene coords, but it's WRONG
pt.x += scene.size.width * scene.anchorPoint.x; // this properly adjusts the value to be correct
pt.y += scene.size.height * scene.anchorPoint.y;
SKPhysicsJointPin* pin = [SKPhysicsJointPin jointWithBodyA:hinge.physicsBody bodyB:door.physicsBody anchor:pt];
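For anyone working in Swift, the same workaround might look roughly like this (a sketch; scene, worldNode, hinge, door and the anchor values mirror the snippet above and are assumed to exist; adding the joint to the physics world is shown for completeness):
// Hedged Swift sketch of the same offset trick; all names are assumed.
var anchor = CGPoint(x: anchorWorldX, y: anchorWorldY)
anchor = scene.convert(anchor, from: worldNode)        // scene coords, still off by the anchorPoint offset
anchor.x += scene.size.width * scene.anchorPoint.x     // compensate for the scene's anchorPoint
anchor.y += scene.size.height * scene.anchorPoint.y

let pin = SKPhysicsJointPin.joint(withBodyA: hinge.physicsBody!,
                                  bodyB: door.physicsBody!,
                                  anchor: anchor)
scene.physicsWorld.add(pin)                            // don't forget to add the joint to the physics world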

Related

How do I convert coordinates in SpriteKit?

I'm trying to grasp coordinate conversion in SpriteKit, but despite having read the documentation and several posts on SO I can't seem to get it right. As far as I understand, there are two coordinate systems that work independently of one another, one for the scene and one for the view, which is why I simply can't use things like UIScreen.main.bounds.maxX to determine screen corners that the node can relate to. Am I getting this right?
Anyway, here's my attempt at converting coordinates:
let mySquare = SKShapeNode(rectOf: CGSize(width: 50, height: 50))
mySquare.fillColor = SKColor.blue
mySquare.lineWidth = 1
let myPoint = CGPoint(x: 200, y: 0)
let newPosition = mySquare.convert(myPoint, from: self)
mySquare.position = newPosition
print(newPosition)
self.addChild(mySquare)
The print returns the exact same position that went in, so obviously I'm not doing this right. I have tried a number of different combinations, but with pretty much no result; the coordinates remain the same. I have also tried let myPoint = CGPoint(x: UIScreen.main.bounds.maxX, y: UIScreen.main.bounds.maxY), but same there; no conversion.
What am I missing? In my head I read the conversion above as "convert myPoint from the view coordinate system and use it for my node mySquare".
There are lots of coordinate systems floating around, and so lots of potential sources of confusion:
Scene coordinates: that's your game's world, and what you usually think about when imagining coordinates and how to position things overall.
Node coordinates: Nodes have their own coordinate system. Once you start building a hierarchy, that matters. Imagine, e.g., an on-screen joystick that has a background showing a graphic of movement directions and a central "knob" that the player can manipulate. You might represent the joystick as a node with two children. One child is a sprite for the background, and the other is a sprite for the knob. The background sprite would naturally be at position (0,0), meaning the center of the overall joystick. The knob would move around, with (0,0) meaning centered and maybe (0,100) meaning up a bit. The overall joystick might sit at (200,300) in the scene. Then the background sprite would show up at (200,300) in the scene and the knob, when up, would be at (200,300)+(0,100) = (200,400) in the scene. The convert(_:from:) and convert(_:to:) methods are for converting within the node hierarchy. You could ask where the knob is in the overall scene's coordinates by knob.convert(.zero, to: scene) or joystick.convert(knob.position, to: scene); there's a short sketch of this below. You very rarely need to do that sort of conversion.
View coordinates: The view is a window on the scene, i.e., what's actually being shown. If you've got a full screen game, the view is basically determined by the screen size in points. How view coordinates map to scene coordinates determines what part of the scene you actually see. If you need to go between view coordinates and scene coordinates, you use the scene's convertPoint(fromView:) and convertPoint(toView:) methods.
If you don't do anything special and have the scene size the same as the view size, then the scene-view mapping will have (0,0) in the scene at the lower left corner of the view. Another common convention is to have (0,0) in the scene at the center of the screen by setting the scene's anchorPoint to (0.5,0.5). Or perhaps you've designed the scene so that the world is 2000x2000 in size and there will be a nontrivial scaling and possible letter-boxing or cropping involved (depending on the setting of the scene's scaleMode). Or if your game has a camera node and, e.g., the camera is set to follow the player around, then the view-to-scene mapping will be changing as the player moves.
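Here's a small sketch of that joystick example in code, showing both kinds of conversion (the asset names, positions, and the DemoScene class are made up for illustration):
import SpriteKit

class DemoScene: SKScene {
    override func didMove(to view: SKView) {
        // A joystick node with a background sprite and a movable knob.
        let joystick = SKNode()
        let background = SKSpriteNode(imageNamed: "joystickBackground") // assumed asset
        let knob = SKSpriteNode(imageNamed: "joystickKnob")             // assumed asset

        background.position = .zero                  // center of the joystick
        knob.position = CGPoint(x: 0, y: 100)        // "up a bit", in joystick coordinates
        joystick.addChild(background)
        joystick.addChild(knob)

        joystick.position = CGPoint(x: 200, y: 300)  // where the joystick sits in the scene
        addChild(joystick)

        // Conversion within the node hierarchy:
        print(joystick.convert(knob.position, to: self))  // (200.0, 400.0)

        // Conversion between view and scene coordinates:
        print(convertPoint(fromView: .zero))  // scene coords of the view's top-left corner
    }
}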
In your code, calling mySquare.convert(from:) doesn't really even make sense since the square hasn't been added to the scene at the time you're doing the "conversion".
Anyway, if you really want to do something like "put a square in the top-left corner of the screen", then you can take the point in the view's frame and convert it to scene coordinates and set the square's position to that.
override func didMove(to view: SKView) {
...
mySquare.position = convertPoint(fromView: CGPoint(x: view.frame.minX, y: view.frame.minY))
addChild(mySquare)
...
}
Edit: I would encourage you though to think mostly in terms of the overall scene, after some initial consideration of how the game should map to devices with screens of different sizes and aspect ratios. Once you're thinking in terms of the scene, then the scene's frame (rather than the view's frame) becomes the most natural reference when you're imagining "at the left edge" or "near the bottom right".
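For instance, inside didMove(to:) you might position the square relative to the scene's own frame (a sketch; the half-width inset is just to keep the whole square visible, since SKShapeNode is centered on its position):
mySquare.position = CGPoint(x: frame.minX + mySquare.frame.width / 2,
                            y: frame.midY)   // flush with the scene's left edge, vertically centered
addChild(mySquare)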

How to tell height of a sprite after zrotation?

I have a sprite like this:
let sprite = SKSpriteNode(imageNamed: "sprite.png")
print("sprite height: \(sprite.size.height)") //results to 150
sprite.zRotation = 90 * degreesToRadians //turns sprite 90 Degrees
Results to this:
However:
print("sprite height: \(sprite.size.height)") //results to 150
Sprite height is still 150 even though it takes far less space height-wise.
Is there a way to get the actual height of a sprite after zRotating it? I know I could easily work around this in the example above, but my real problem is that I have various sprites at various zRotations and I'm trying to make sure that all of them are fully visible on screen.
So basically I have a sprite (red bar), an anchor point at 0,0 (blue dot) and visible screen (black frame).
I zrotate the sprites to random angles using arc4random_uniform but some sprites end up not being completely visible on screen. Basically I would have to know the height of the green arrow or assign it as the anchorPoint after zrotation. Or perhaps there are other ways that I have not thought of. All help appreciated!
0x141E's comment works:
sprite.frame.size.height
returns the correct value.
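To illustrate the difference between size and frame after a rotation (a sketch with made-up dimensions):
import SpriteKit

let sprite = SKSpriteNode(color: .red, size: CGSize(width: 50, height: 150))
sprite.zRotation = .pi / 2              // 90 degrees

print(sprite.size.height)               // 150 (the unrotated texture height)
print(sprite.frame.size.height)         // 50 (height of the axis-aligned bounding box in the parent)

// For an arbitrary angle, the bounding-box height is w*|sin(angle)| + h*|cos(angle)|:
let angle = Double(sprite.zRotation)
let boundingHeight = Double(sprite.size.width) * abs(sin(angle)) +
                     Double(sprite.size.height) * abs(cos(angle))
print(boundingHeight)                   // 50.0 for this rotation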

What is the right way to zoom the camera out from a scene view?

I have a scene where my gameplay happens. I'm looking for a way to slowly 'zoom-out' so more and more space becomes visible as the time passes. But I want the HUD and some other elements to stay the same size. It's like the mouse wheel functionality in top-down games.
I tried to do it with sticking everything into display groups and transitioning the xScale, yScale. It works visually, of course, but the game functionality screws up. For example, I have a spawner that spawns some objects; its graphics get scaled, but the spawner still spawns the objects from its original position, not the new scaled position.
Do I need to keep track of every X, Y position I'm using during gameplay and transition them too? (Which is probably possible but too hard for me since I use a lot of touch events for aiming and path creating etc.) Or is there an easier way to achieve this? Please please say yes :P
I'm looking forward for your answers, thanks in advance! :)
The problem is that you are scaling your image, but not your position coordinates.
You need to convert from 'original coordinates' to 'scaled coordinates': if you scale your map to half size, you should scale the move amounts by half too. In other words, scale your coordinates by the same amount your image is scaled.
For example, let's assume your scale factor is 0.5 and you have an image:
local scaleFactor = 0.5
image.xScale = scaleFactor
image.yScale = scaleFactor
Then you would need to scale your move amounts by this factor
thingThatIsMoved.x = thingThatIsMoved.x + (moveAmount * scaleFactor)
thingThatIsMoved.y = thingThatIsMoved.y + (moveAmount * scaleFactor)
I hope this helps. I am assuming you have a moveAmount variable and are updating the position in the enterFrame() event.
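The same principle expressed with SpriteKit nodes in Swift, in case that's the framework you're on (a sketch; worldNode, marker, and the numbers are placeholders): anything that is not a child of the scaled container has to have its coordinates scaled by the same factor.
import SpriteKit

let worldNode = SKNode()                     // container holding the scaled game world
let marker = SKShapeNode(circleOfRadius: 5)  // an object whose position is kept in unscaled units

let scaleFactor: CGFloat = 0.5
worldNode.setScale(scaleFactor)              // zoom the world out

// Scale the movement by the same factor so the marker stays in sync with the scaled world.
let moveAmount: CGFloat = 10
marker.position.x += moveAmount * scaleFactor
marker.position.y += moveAmount * scaleFactor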

View is rotating "Ugly"

I'm creating a car driving game for iOS, but there's a problem:
I added these lines to my ViewController.m:
_carView.transform = CGAffineTransformMakeRotation(steeringTemp-45);
steeringTemp is a float variable which is changed by the left and right steering buttons.
But when I run the app and press a steering button, the car rotates in an ugly way. It seems like the center point changes every time and the car rotates like a triangle. I tried to set the origin to the center of _carView, which is an image view, but it didn't work.
First of all, you should be using radians, not degrees. You can use this macro:
#define DEGREES_TO_RADIANS(angle) ((angle) * M_PI / 180.0)
(Note the parentheses around angle; without them an argument like steeringTemp-45 would expand incorrectly because of operator precedence.)
And wrap it around your values:
_carView.transform = CGAffineTransformMakeRotation(DEGREES_TO_RADIANS(steeringTemp - 45));
Second, transforms usually go wrong if you:
Are rotating around the wrong anchorPoint (this usually happens with clock hands)
Don't pay attention to the order of transforms
Anchor Point
You need to make sure that your anchor point is set to the place you want. The anchorPoint defaults to the center, but you can set it to any point you like in the layer that backs the view. The anchor point uses relative coordinates, so it goes from 0 to 1: (0,0) is the top-left corner and (1,1) is the bottom-right corner.
You can set it like this:
_carView.layer.anchorPoint = CGPointMake(0, 0);
Transform order is important
Additionally, if you apply multiple transforms you should pay attention to their order: earlier transforms always affect the ones that follow. A rotation followed by a translation is very different from a translation followed by a rotation.
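A quick Swift illustration of why the order matters (the angle and offset are arbitrary; the same holds for CGAffineTransformConcat in Objective-C):
import UIKit

let rotate = CGAffineTransform(rotationAngle: .pi / 4)       // 45 degrees
let translate = CGAffineTransform(translationX: 100, y: 0)

// Rotate first, then translate: (0,0) -> (0,0) -> (100, 0)
let rotateThenTranslate = rotate.concatenating(translate)
// Translate first, then rotate: (0,0) -> (100, 0) -> roughly (70.7, 70.7)
let translateThenRotate = translate.concatenating(rotate)

print(CGPoint.zero.applying(rotateThenTranslate))
print(CGPoint.zero.applying(translateThenRotate))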

What's the relationship between scaleX and anchorPoint

I'm trying to flip a sprite horizontally i.e.
sprite.scaleX = -1;
What I notice is that the sprite is flipped around its bottom-left corner. However, I don't want to mess up the positioning of the sprite (I'd like it to stay in its original place), so I tried setting its anchor point to (1,0):
sprite.anchorPoint = ccp(1,0);
My reasoning is this:
Since the sprite should be flipped around the anchorPoint, if I set the anchorPoint to its bottom-right corner, that corner will become the 'bottom-left' corner of the flipped sprite, and I should be able to move the sprite using that new anchorPoint just as I do with a normal sprite with anchorPoint (0,0).
However apparently it's not working as I expected. What am I missing?
Edit
What I really want to do is flip a sprite and then be able to control its position via the bottom-left corner, that is, the bottom-left corner of the sprite as I see it. I don't think I fully understand how scaleX = -1 is applied in relation to the anchorPoint. If somebody can explain the concepts behind these parameters, that will greatly help me.
I have to correct my earlier assertion that setting the anchorPoint doesn't help. In fact, setting the anchorPoint to (1,0) is exactly the solution to the problem; a bug in my test somehow prevented me from recognizing it.
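For reference, the same idea in SpriteKit terms (a sketch with made-up names; the question is about cocos2d, but the anchor-point behavior is analogous):
import SpriteKit

let sprite = SKSpriteNode(imageNamed: "hero")   // assumed texture name

// Anchor at the bottom-right of the texture, then mirror horizontally.
sprite.anchorPoint = CGPoint(x: 1, y: 0)
sprite.xScale = -1

// After the flip, sprite.position is the visible bottom-left corner,
// so the node can be placed just like an unflipped sprite with anchorPoint (0, 0).
sprite.position = CGPoint(x: 100, y: 50)        // arbitrary example position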
