I'm seeing some weird stuff and not sure why this is happening. I'm adding a sprite with a position of x:490 y:680.
The sprite gets positioned with an inverted y axis for some reason. Naturally, when I register for touch events and get the location of the tap, it reports a completely different CGPoint, something like x:481 y:89.
Any ideas why this is happening?
Edit: The documentation says that sprites follow their parent's coordinate system. I'm not sure how to change this or find out how it is set.
Just noticed that CGPoint(0,0) is at the lower left corner, as opposed to the top left in UIView. Why is this the default?
Sprite Kit has its origin at the lower left corner because this is the default coordinate system for all OpenGL apps.
To convert from UITouch locations to the Sprite Kit coordinate system, use the UITouch Sprite Kit additions.
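For example, a minimal sketch in Swift (the GameScene class name is an assumption; location(in:) is the Sprite Kit addition to UITouch):
import SpriteKit

class GameScene: SKScene {
    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first else { return }
        // location(in:) returns the touch location already converted to this
        // node's coordinate system, with its lower-left origin.
        let locationInScene = touch.location(in: self)
        print(locationInScene)
    }
}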
Suppose the following:
You have a myriad of SKSpriteNodes in the view.
When the user taps the screen, you want whatever sprite is in or near a specific location to do an animation.
Question: How can I figure out which SKSpriteNode is at the specific location without looping through all sprites?
For this, I have implemented an SKSpriteNode, box, which is transparent, has a texture covering the span of the specific location, and is positioned accordingly.
The SKSpriteNode methods contains and intersects seem promising, but require that I pass a point or a sprite respectively.
Question: How can I get an SKSpriteNode to report which sprite, if any, it intersects with? Again, without looping through every sprite. If two sprites intersect with box, return only the one that most prominently intersects box.
Diagram (not reproduced here): this is not my actual use case, but it illustrates the point. There are a lot of sprites (more than shown) and there is an area of interest such that:
if the user touches, and
a sprite is in that area
I want to know what sprite is there.
There is no way to do this without SOMETHING looping through the sprites. That's either:
The physics engine, as Stoneburner suggests
The scene, via update() setting flags on sprites when they're in the region
Your code that handles the touch, searching for sprites in the region
GameplayKit offers some optimisations for doing this sort of thing, for example the GKRTree spatial index (sketched below): https://developer.apple.com/reference/gameplaykit/gkrtree
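A hedged sketch of that idea: index each sprite's bounding box in a GKRTree so the touch handler queries only the area of interest. The names allSprites and region are assumptions, and the tree would need rebuilding when sprites move:
import SpriteKit
import GameplayKit

let tree = GKRTree<SKSpriteNode>(maxNumberOfChildren: 4)
for sprite in allSprites {
    let f = sprite.frame
    tree.addElement(sprite,
                    boundingRectMin: vector_float2(Float(f.minX), Float(f.minY)),
                    boundingRectMax: vector_float2(Float(f.maxX), Float(f.maxY)),
                    splitStrategy: .reduceOverlap)
}

// Query only the region instead of looping over every sprite.
let candidates = tree.elements(inBoundingRectMin: vector_float2(Float(region.minX), Float(region.minY)),
                               rectMax: vector_float2(Float(region.maxX), Float(region.maxY)))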
Attach a UITapGestureRecognizer to the view
On tap state UIGestureRecognizerStateRecognized, get the location of the tap using CGPoint pointInView = [tapper locationInView:mySKScene.view];
Convert from the view's coordinate system to the scene's coordinate system using CGPoint pointInScene = [mySKScene convertPointFromView:pointInView];
Get the node at that point by asking the scene: SKNode *touchedNode = [mySKScene nodeAtPoint:pointInScene];
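The same three steps might look like this in Swift (where the handler lives, and the scene property, are assumptions):
@objc func handleTap(_ tapper: UITapGestureRecognizer) {
    guard tapper.state == .recognized else { return }
    let pointInView = tapper.location(in: scene.view)             // 1: view coordinates
    let pointInScene = scene.convertPoint(fromView: pointInView)  // 2: scene coordinates
    let touchedNode = scene.atPoint(pointInScene)                 // 3: hit-test the scene
    print(touchedNode)
}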
You can use SKPhysicsBodies to detect collisions (overlaps).
Assign physics bodies to all of your SKNodes, dynamically add one body covering the region you want to detect nodes inside, handle the contact callbacks via SKPhysicsContactDelegate, and remove the region body again when you're done. A sketch of this approach follows.
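A hedged sketch in Swift; the category masks, regionCenter, and regionSize are assumptions, the scene's physicsWorld.contactDelegate must be set, and note that SpriteKit only reports contacts when at least one body in the pair is dynamic:
let regionCategory: UInt32 = 0x1 << 0   // assumed bitmask values
let spriteCategory: UInt32 = 0x1 << 1

let regionNode = SKNode()
regionNode.position = regionCenter                               // assumed CGPoint
regionNode.physicsBody = SKPhysicsBody(rectangleOf: regionSize)  // assumed CGSize
regionNode.physicsBody?.isDynamic = false
regionNode.physicsBody?.categoryBitMask = regionCategory
regionNode.physicsBody?.contactTestBitMask = spriteCategory
regionNode.physicsBody?.collisionBitMask = 0                     // overlap only, no pushing
scene.addChild(regionNode)

// SKPhysicsContactDelegate callback: fires when a sprite overlaps the region.
func didBegin(_ contact: SKPhysicsContact) {
    let other = contact.bodyA.categoryBitMask == regionCategory ? contact.bodyB : contact.bodyA
    if let sprite = other.node as? SKSpriteNode {
        // animate `sprite`, then remove regionNode when no longer needed
    }
}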
I'm trying to figure out a method for rotating an SKSpriteNode while it is in flight (being affected by gravity) along an arc path. I'm using SpriteKit and throw the object using applyImpulse. The problem is that the object, despite traveling along an arc through the air, keeps the same orientation the whole way.
Imagine an archer shooting an arrow. The arrow is shot upwards and should point upwards in that direction. Once the arrow starts falling along the arc, it should begin to rotate downwards.
Is there some way to automate this using the SpriteKit physics? Should I throw the arrow a different way instead of using applyImpulse? Do I need to come up with some algorithm myself for the rotation based on the object's velocity?
In didSimulatePhysics or update you can rotate your sprite to follow its velocity vector. I'm not sure there's a way to make this happen automatically.
// Point the sprite along its current velocity (the force unwrap assumes a physics body is attached).
let angle = atan2(mySprite.physicsBody!.velocity.dy, mySprite.physicsBody!.velocity.dx)
mySprite.zRotation = angle
I have a game with a character that can cast fire balls. Right now in my game, when I tap anywhere on the screen, I shoot a fireball at my touch point. For this fireball I'm using an SKEmitterNode where I've created a fireball particle emitter.
The problem I'm running into is, my fireball has an angle set already, but I want that angle to change based on where I tap, so that the trailing flames are behind the fireball, not going up or down or whatever I've set it to in the sks file.
I've never done something like this. Is there something built into Swift already for calculating angles? I can't find much on Google.
There are two options:
1) The first is to move the fireball straight to the touch point with an action, so you can avoid angles entirely (if I understand you correctly):
fireBall.run(SKAction.move(to: touchLocation, duration: duration))
Here touchLocation is the CGPoint of the touch event and duration is however long the flight should take.
2) The second, as was said before, is to compute the angle yourself:
atan2(deltaY, deltaX)
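Put together, a hedged sketch of the second option; fireBall (the SKEmitterNode) and character are names assumed from the question, and touchLocation is the tap point in scene coordinates:
let deltaX = touchLocation.x - character.position.x
let deltaY = touchLocation.y - character.position.y
let angle = atan2(deltaY, deltaX)

fireBall.zRotation = angle             // face the tap point
fireBall.emissionAngle = angle + .pi   // emit the trail behind the fireball (assumed offset)
fireBall.run(SKAction.move(to: touchLocation, duration: 0.5))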
What are the coordinates for the bottom of the screen... or how can I create a "floor" at the bottom of the screen in SpriteKit?
Sorry, but I don't understand screen coordinates that well in SpriteKit.
You need to understand the Sprite Kit coordinate system, as explained in Apple's documentation.
Here's how you create a floor at the bottom of the screen in SpriteKit:
SKNode *floor = [SKNode node];
// A thin edge loop along the bottom edge of the scene acts as the floor.
floor.physicsBody = [SKPhysicsBody bodyWithEdgeLoopFromRect:CGRectMake(CGRectGetMinX(self.frame), CGRectGetMinY(self.frame), CGRectGetWidth(self.frame), 1.0)];
[self addChild:floor];
You need a universal approach for getting the coordinates of the screen corners.
Using the getVisibleScreen code from the linked answer, you can get a CGRect with the necessary information.
Example:
let screenRect = getVisibleScreen(
sceneRect: self.scene!.frame,
viewRect: self.view!.frame )
And then you can use it:
screenRect.minX
screenRect.maxX
screenRect.minY
screenRect.maxY
screenRect.width
screenRect.height
This is more than enough to calculate the coordinates of the "floor" or any other relative position.
The location of the bottom of the screen will depend on what coordinate system you are using for your scene.
Out of the box, the bottom of the screen will be at y coordinate zero, but there are a few things that can happen that will affect that.
For instance, if you are using the scene editor in Xcode, and your scene's anchorPoint property is something other than y=0, then the "origin" of your scene will not be at the bottom of the screen. In the recent Xcode beta, they changed the default behavior to have the scene's origin at the center of the scene instead of the lower left corner, so that would explain why you might be seeing things in the center of the screen when you expect them to be at the bottom.
Also, the "bottom of the screen" will be relative to whatever parenting structure you have in your scene. For instance, if you place a background sprite in your scene, and want to attach a floor sprite to that which is at the bottom of the screen, you'll have to do some computing to figure out where to place it because you are going to inherit the translation and rotation of the floor's parent node (and any parents that node has).
To keep things simple, you can just place everything directly on the stage and manage their z-order manually. This will let you, basically, use the same coordinate system for everything. This is often fine; as long as you're not trying to do anything complex with your sprites, you don't need a complicated "tree" of nodes.
But even with this approach, the metrics of your scene are going to have to be handled dynamically. The width and height of your scene are going to depend on how you approach displaying your scene on different devices with different sizes. For instance, the top right of an iPhone 4 is going to be in a different place than the top right of an iPad Pro. A full discussion of how to deal with that is beyond the scope of your question, but generally, you'll probably want to use a "reference width" or a "reference height" for your scene, use .AspectFit or .AspectFill for the scaleMode, and set your scene's size accordingly. (I.e., inspect the view's frame to get the actual aspect ratio of your scene and set your scene size to match your reference metric on one axis and scale the other axis to match the device's aspect ratio.) This will let you use the same metrics for all devices (although one of your two axes will be fluid).
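For instance, a minimal sketch of the reference-width idea, assuming an SKView named skView and a scene class GameScene (the 768-point reference is an arbitrary choice):
let referenceWidth: CGFloat = 768
let viewSize = skView.bounds.size
// Keep the width fixed on every device; let the height follow the device's aspect ratio.
let sceneSize = CGSize(width: referenceWidth,
                       height: referenceWidth * viewSize.height / viewSize.width)
let scene = GameScene(size: sceneSize)
scene.scaleMode = .aspectFit
skView.presentScene(scene)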
I am developing an iPhone application that uses Cocos3d. I have drawn a scene in the XZ plane ( y = 0 ). Now, I want to rotate the scene around a specified point in the XZ plane, whenever the user touches the screen with two fingers; the rotation point will be the center of the two touch points.
I started by projecting the two touch points onto the 3D scene, by finding the intersection between the CC3Ray (issued from the camera and passing through the touch point) and the XZ plane.
Now that I have the two points in the XZ plane, I can calculate the rotation point (that will be the middle point between these two points).
In order to rotate the scene around this point, I have added it to a parent node. Now all I have to do is translate it by the negation of the coordinates of the middle point, rotate its parent by the angle, and translate it back by the coordinates of the middle point.
Here is the code that I am using (in the ccTouchesMoved method):
// Assuming that the root is a CC3Node and it is the scene that I need to rotate
// and middle refers to the center of rotation
[root translateBy:cc3v(-middle.x, 0, -middle.z)];
[root.parent rotateByAngle:30 aroundAxis:cc3v(0, 1, 0)];
[root translateBy:cc3v(middle.x, 0, middle.z)];
However, I am not able to rotate the scene around the middle point.
Can anyone help me to resolve this problem?
Thank you!
Edit:
I also tried to add these lines of codes in the ccTouchesBegan method:
[root translateBy:CC3VectorNegate(middle)];
[self.cc3Scene.activeCamera translateBy:CC3VectorNegate(middle)];
And in the ccTouchesMoved:
[root rotateByAngle:angle aroundAxis:cc3v(0, 1, 0)];
It works only the first time the user touches the screen; then, whenever he/she touches it again, an unwanted translation happens!
I think the problem is with the ccTouchesBegan method.
On any one node, rotation, translation, and scale transforms are independent of each other. They are each applied to the rest pose of the node, and do not accumulate with each other. This is so that interaction appears natural and expected to the developer controlling the node. In other words, during gameplay, rotating a character after it has been moved rotates the character in place, not around a translated location. Similarly, translating a node translates it regardless of how the node has previously been rotated.
This is different control than taking a single matrix and accumulating transforms into it, which is how the matrix-based rotate-around-a-distant-point technique works.
However, you can effectively rotate a node around a location that is not its origin by:
1. Transform the local rotation location to the global coordinate space.
2. Rotate the node as normal.
3. Transform the (now rotated) local rotation location to the global coordinate space.
4. Align the rotation locations found in steps 1 & 3 by translating the node by the difference between the two locations.
You can perform steps 1 & 3 using the node's globalTransformMatrix. Code for the steps above is as follows:
// Steps 1 & 3: capture the global rotation location before and after rotating.
CC3Vector gblRotLocBefore = [aNode.globalTransformMatrix transformLocation: rotationLoc];
// Step 2: rotate the node as normal.
[aNode rotateByAngle: angle aroundAxis: kCC3VectorUnitYPositive];
CC3Vector gblRotLocAfter = [aNode.globalTransformMatrix transformLocation: rotationLoc];
// Step 4: translate by the difference to pin the rotation location in place.
[aNode translateBy: CC3VectorDifference(gblRotLocBefore, gblRotLocAfter)];
Using this technique, you do not need to involve the node's parent.