CC3Node rotation - Cocos3d - iOS

I am developing an iPhone application that uses Cocos3d. I have drawn a scene in the XZ plane ( y = 0 ). Now, I want to rotate the scene around a specified point in the XZ plane, whenever the user touches the screen with two fingers; the rotation point will be the center of the two touch points.
I started by projecting the two touch points into the 3D scene, by finding the intersection between the CC3Ray (issued from the camera and passing through the touch point) and the XZ plane.
Now that I have the two points in the XZ plane, I can calculate the rotation point (that will be the middle point between these two points).
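For reference, with the plane y = 0 that intersection reduces to a one-line formula. A minimal sketch in plain simd types (for illustration only; in Cocos3d you would work with CC3Ray/CC3Vector instead):
import simd
// Intersect a ray (origin + t * direction) with the plane y = 0.
func hitOnXZPlane(origin: SIMD3<Float>, direction: SIMD3<Float>) -> SIMD3<Float>? {
    guard abs(direction.y) > .ulpOfOne else { return nil }   // ray is parallel to the plane
    let t = -origin.y / direction.y
    return t >= 0 ? origin + t * direction : nil              // nil if the plane is behind the camera
}
// let p1 = hitOnXZPlane(origin: camPos, direction: dir1)!
// let p2 = hitOnXZPlane(origin: camPos, direction: dir2)!
// let middle = (p1 + p2) / 2                                 // rotation point in the XZ plane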
In order to rotate the scene around this point, I have added it to a parent node. Now all I have to do is translate it by the negation of the coordinates of the middle point, rotate its parent by the angle, and translate it back by the coordinates of the middle point.
Here is the code that I am using (in the ccTouchesMoved method):
// Assuming that the root is a CC3Node and it is the scene that I need to rotate
// and middle refers to the center of rotation
[root translateBy:cc3v(-middle.x, 0, -middle.z)];
[root.parent rotateByAngle:30 aroundAxis:cc3v(0, 1, 0)];
[root translateBy:cc3v(middle.x, 0, middle.z)];
However, I am not able to rotate the scene around the middle point.
Can anyone help me to resolve this problem?
Thank you!
Edit:
I also tried to add these lines of code in the ccTouchesBegan method:
[root translateBy:CC3VectorNegate(middle)];
[self.cc3Scene.activeCamera translateBy:CC3VectorNegate(middle)];
And in the ccTouchesMoved:
[root rotateByAngle:angle aroundAxis:cc3v(0, 1, 0)];
It works only the first time the user touches the screen; whenever he/she touches it again, an unwanted translation happens!
I think the problem is with the ccTouchesBegan method.

On any one node, rotation, translation, and scale transforms are independent of each other. They are each applied to the rest pose of the node, and do not accumulate with each other. This is so that interaction appears natural and behaves as the developer controlling the node expects. In other words, during gameplay, rotating a character after it has been moved rotates the character in place, not around a translated location. Similarly, translating a node translates it regardless of how the node has previously been rotated.
This is different from taking a single matrix and accumulating transforms into it, which is how the matrix-based rotate-around-a-distant-point technique works.
However, you can effectively rotate a node around a location that is not its origin by:
Transforming the local rotation location to the global coordinate space.
Rotating the node as normal.
Transforming the (now rotated) local rotation location to the global coordinate space.
Aligning the rotation locations found in steps 1 & 3 by translating the node by the difference between the two locations.
You can perform steps 1 & 3 using the node's globalTransformMatrix. Code for the steps above is as follows:
CC3Vector gblRotLocBefore = [aNode.globalTransformMatrix transformLocation: rotationLoc];
[aNode rotateByAngle: angle aroundAxis: kCC3VectorUnitYPositive];
CC3Vector gblRotLocAfter = [aNode.globalTransformMatrix transformLocation: rotationLoc];
[aNode translateBy: CC3VectorDifference(gblRotLocBefore, gblRotLocAfter)];
Using this technique, you do not need to involve the node's parent.

Related

How to temporarily freeze a node in front of the camera using ARKit, SceneKit in Swift

I built a complete structure as a node (with its child nodes) and the user will walk through it using ARKit.
At some point, if the user cannot continue because of some real obstacle in the real world, I added a "pause" button which should freeze whatever the user currently sees in front of the camera. The user could then move freely to some other open space, and when the user releases the pause button he/she will be able to resume where they left off (only someplace else in the real world).
A while ago I asked about it in the Apple Developer forum and an Apple Frameworks Engineer gave the following reply:
For "freezing" the scene, you could transform the anchor's position (in world coordinates) to camera coordinates, and then anchor your content to the camera. This will give you the effect that the scene is "frozen", i.e., does not move relative to the camera.
I'm currently not using an anchor because I don't necessarily need to find a flat surface. Rather, my node is placed at a certain position relative to where we start at (0,0,0).
My question is how do I exactly do what the Apple engineer told me to do?
I have the following code which I'm still stuck with. When I add the node to the camera (pointOfView, last line of the code below), it does freeze in place, but I can't get it to freeze in the same position and orientation as it was before it was frozen.
@IBAction func pauseButtonClicked(_ sender: UIButton) {
let currentPosition = sceneView.pointOfView?.position
let currentEulerAngles = sceneView.pointOfView?.eulerAngles
var internalNodeTraversal = lastNodeRootPosition - currentPosition! // for now, lastNodeRootPosition is (0,0,0)
internalNodeTraversal.y = lastNodeRootPosition.y + 20 // just so it’s positioned a little higher in front of the camera
myNode?.removeFromParentNode() // remove the node from the Real World view. Looks like this line has no effect and just adding the node as a child to the camera (pointOfView) is enough, but it feels more right to do this anyway.
myNode?.position = internalNodeTraversal // the whole node is moved respectively in the opposite direction from the root to where I’m standing to reposition the camera in my current position inside the node
// myNode?.eulerAngles = (currentEulerAngles! * -1) — this code put the whole node in weird positions so I removed it
myNode?.eulerAngles.y = currentEulerAngles!.y * -1 // opposite orientation of the node so the camera will be oriented in the same direction
myNode?.eulerAngles.x = 0.3 // just tilting it up a little bit to have a better view, more similar to the view as before it was locked to the camera
// I don’t think I need to change the eulerAngles.z
myNode!.convertPosition(internalNodeTraversal, to: sceneView.pointOfView) // I’m not sure I wrote this correctly. Also, this line doesn’t seem to change anything
sceneView.pointOfView?.addChildNode(myNode!) // attaching the node to the camera so it will remain stuck while the user moves around until the button is released
}
So I first calculate where in the node I'm currently standing and then I change the position of the node in the opposite direction so that the camera will now be in that position. That seems to be correct.
Now I need to change the orientation of the node so that it will point in the right direction and here things get funky. I've been trying so many things for days now.
I use the eulerAngles for the orientation. If I set the whole vector multiplied by -1, it would show weird orientations. I ended up only using the eulerAngles.y which is the left/right orientation and I hardcoded the x orientation (up/down).
Ultimately what I have in the code above is the closest that I was able to get. If I'm pointing straight, the freeze will be correct. If I turn just a little bit, the freeze will be pretty close as well. Almost the same as what the user saw before the freeze. But the more I turn, the more the frozen image is off and more slanted. At some point (say I turn 50 or 60 degrees to the side) the whole node is off the camera and cannot be seen.
Somehow I have a feeling that there must be an easier and more correct way to achieve the above.
The Apple engineer wrote to "transform the anchor's position (in world coordinates) to camera coordinates". For that reason I added the "convertPosition" function in my code, but a) I'm not sure I used it correctly and b) it doesn't seem to change anything in my code if I have that line or not.
What am I doing wrong?
Any help would be very much appreciated.
Thanks!
I found the solution!
Actually, the problem I had was not even described, as I didn't think it was relevant. I built the AR nodes 2 meters in front of the origin (-2 for the z-coordinate) while the center of my node was still at the origin. So when I changed the rotation or eulerAngles, everything rotated around the origin, which moved my nodes in a large curve and in fact also changed their position as a result.
The solution was to use a simdPivot. Instead of changing the position and rotation of the node itself, I created a translation matrix and a rotation matrix at the point of the camera (where the user is standing) and multiplied the two matrices. When I then added the node as a child of the camera (pointOfView), this froze the image and in effect showed exactly what the user was seeing before it was frozen, since the position is the same and the rotation is exactly around the user's standing position.
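A minimal Swift sketch of that approach, assuming the node was originally a child of the scene's root node with an identity transform (names such as myNode and sceneView are taken from the question; the exact matrix handling is my reading of the description above, not the asker's verbatim code):
@IBAction func pauseButtonClicked(_ sender: UIButton) {
    guard let node = myNode, let cameraNode = sceneView.pointOfView else { return }

    // Use the camera's current world transform (translation * rotation at the user's
    // standing point) as the node's pivot, so that point becomes the node's effective origin.
    node.simdPivot = cameraNode.simdWorldTransform

    // Re-parent the node to the camera. Because the pivot cancels out the camera's transform
    // at the moment of pausing, the content keeps its apparent position and orientation,
    // and from then on simply follows the camera ("frozen" in view).
    node.removeFromParentNode()
    cameraNode.addChildNode(node)
}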

How do I convert coordinates in SpriteKit?

I'm trying to grasp coordinate conversion in SpriteKit, but despite having read the documentation and several posts on SO I can't seem to get it right. As far as I understand, there are two coordinate systems that work independently of one another, one for the scene and one for the view, which is why I can't simply use things like UIScreen.main.bounds.maxX to determine screen corners that the node can relate to. Am I getting this right?
Anyway, here's my attempt at converting coordinates:
let mySquare = SKShapeNode(rectOf: CGSize(width: 50, height: 50))
mySquare.fillColor = SKColor.blue
mySquare.lineWidth = 1
let myPoint = CGPoint(x: 200, y: 0)
let newPosition = mySquare.convert(myPoint, from: self)
mySquare.position = newPosition
print(newPosition)
self.addChild(mySquare)
The print returns the exact same position as went in, so obviously I'm not doing this right. I have tried a number of different combinations, but with pretty much no result; the coordinates remain the same. I have also tried let myPoint = CGPoint(x: UIScreen.main.bounds.maxX, y: UIScreen.main.bounds.maxY), but same there; no conversion.
What am I missing? In my head I read the conversion above as "convert myPoint from the view coordinate system and use it for my node mySquare".
There are lots of coordinate systems floating around, and so lots of potential sources of confusion:
Scene coordinates: that's your game's world, and what you usually think about when imagining coordinates and how to position things overall.
Node: Nodes have their own coordinate system. Once you start building a hierarchy, that matters. Imagine, e.g., an on-screen joystick that has a background showing a graphic of movement directions and a central "knob" that the player can manipulate. You might represent the joystick as a node with two children. One child is a sprite for the background, and the other is a sprite for the knob. The background sprite would naturally be at position (0,0), meaning the center of the overall joystick. The knob would move around, with (0,0) meaning centered and maybe (0,100) meaning up a bit. The overall joystick might sit at (200,300) in the scene. Then the background sprite would show up at (200,300) in the scene and the knob, when up, would be at (200,300)+(0,100) = (200,400) in the scene. The convert(_:from:) and convert(_:to:) methods are for converting within the node hierarchy. You could ask where the knob is in the overall scene's coordinates by knob.convert(.zero, to: scene) or joystick.convert(knob.position, to: scene); see the sketch after this list. You very rarely need to do that sort of conversion.
View coordinates: The view is a window on the scene, i.e., what's actually being shown. If you've got a full screen game, the view is basically determined by the screen size in points. How view coordinates map to scene coordinates determines what part of the scene you actually see. If you need to go between view coordinates and scene coordinates, you use the scene's convertPoint(fromView:) and convertPoint(toView:) methods.
If you don't do anything special and have the scene size the same as the view size, then the scene-view mapping will have (0,0) in the scene at the lower left corner of the view. Another common convention is to have (0,0) in the scene at the center of the screen by setting the scene's anchorPoint to (0.5,0.5). Or perhaps you've designed the scene so that the world is 2000x2000 in size and there will be a nontrivial scaling and possible letter-boxing or cropping involved (depending on the setting of the scene's scaleMode). Or if your game has a camera node and, e.g., the camera is set to follow the player around, then the view-to-scene mapping will be changing as the player moves.
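To make the node-hierarchy case above concrete, here is a minimal sketch of the joystick example (all names and sizes are illustrative; assume it runs inside an SKScene subclass, e.g. in didMove(to:)):
let joystick = SKNode()
joystick.position = CGPoint(x: 200, y: 300)               // where the joystick sits in the scene

let background = SKSpriteNode(color: .darkGray, size: CGSize(width: 200, height: 200))
background.position = .zero                                // centered on the joystick node

let knob = SKSpriteNode(color: .red, size: CGSize(width: 60, height: 60))
knob.position = CGPoint(x: 0, y: 100)                      // pushed "up" within the joystick

joystick.addChild(background)
joystick.addChild(knob)
addChild(joystick)

// Where is the knob in scene coordinates? (200,300) + (0,100) = (200,400)
let knobInScene = joystick.convert(knob.position, to: self)
// equivalently: knob.convert(.zero, to: self)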
In your code, calling mySquare.convert(from:) doesn't really even make sense since the square hasn't been added to the scene at the time you're doing the "conversion".
Anyway, if you really want to do something like "put a square in the top-left corner of the screen", then you can take the point in the view's frame and convert it to scene coordinates and set the square's position to that.
override func didMove(to view: SKView) {
...
mySquare.position = convertPoint(fromView: CGPoint(x: view.frame.minX, y: view.frame.minY))
addChild(mySquare)
...
}
Edit: I would encourage you though to think mostly in terms of the overall scene, after some initial consideration of how the game should map to devices with screens of different sizes and aspect ratios. Once you're thinking in terms of the scene, then the scene's frame (rather than the view's frame) becomes the most natural reference when you're imagining "at the left edge" or "near the bottom right".
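For example, a minimal sketch of that scene-first positioning (the 50-point inset is arbitrary, and this assumes it runs inside the SKScene subclass, where frame is the scene's own frame):
// place the square near the scene's top-left corner using the scene's frame,
// regardless of how the scene is mapped onto the view
mySquare.position = CGPoint(x: frame.minX + 50, y: frame.maxY - 50)
addChild(mySquare)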

Prevent Location Offset With SpriteKit Camera Node

I have noticed that the centerOnNode: method as shown,
- (void)centerOnNode:(SKNode *)node {
cameraOffset = [node.scene convertPoint:node.position fromNode:node.parent];
node.parent.position = CGPointMake(node.parent.position.x - cameraOffset.x, node.parent.position.y - cameraOffset.y);
}
greatly impacts the relative positioning of child nodes. As soon as this method runs, the child nodes do in fact seem to be impacted. The following image shows the logic with NO movement, then with moving left and slightly down:
I drew a light blue box to estimate the physics body that it seems the paths are referencing instead of the updated frame. The lines and circles that you see represent 2 methods I am using for pathfinding.
The line with the dots are just ray tests that I am doing to see if a ray along the green line intersects the square physics body.
The single point you see at the top corner is from using GameplayKit to construct a path that avoids the black square as an obstacle.
I am struggling to figure out how to avoid the camera repositioning from affecting the positioning of the children in the scene.
FYI: I have tested the pathfinding with moving the character but NOT the camera and it works perfectly (shown below)
Clearly the camera offset is the issue. If anyone can tell me what to do to keep the camera movement and the precision of the pathfinding I would greatly appreciate it.
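For what it's worth, a minimal Swift sketch of the SKCameraNode approach (iOS 9+), which is not the asker's code: moving a camera node pans the view without repositioning any other node, so physics bodies and pathfinding keep their scene coordinates. The node name "character" is illustrative:
override func didMove(to view: SKView) {
    let cameraNode = SKCameraNode()
    addChild(cameraNode)
    camera = cameraNode                         // the scene now renders from this node's viewpoint
}

override func update(_ currentTime: TimeInterval) {
    if let character = childNode(withName: "character") {
        camera?.position = character.position   // follow the character without moving its parent
    }
}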

SpriteKit and combining multiple drags for a single rotation

What I'm attempting to recreate is the draggable arrow that's used in the popular iOS game called "Fragger", complete with adjusting both the rotation AND the strength of the pull at the same time based on a finger drag, but using SpriteKit - I believe they did it in Cocos2D.
I'll start by saying that I've honestly spent many weeks relearning trigonometry (http://www.mathsisfun.com/ is a great resource) and combing through Ray Wenderlich's tutorials (http://www.raywenderlich.com/35866/trigonometry-for-game-programming-part-1) but I can't find a solution for executing a "collective" drag rotation AND strength on a SpriteKit object.
Yes, it's relatively easy to set an anchor point and rotate that object (I'm using an arrow) so that it's pointing at your finger and thanks to Ray, I've got a really smooth action utilizing the following:
float angleRadians = atan2f(firstFingerTouchY - _arrow.position.y, firstFingerTouchX - _arrow.position.x);
float angleDegrees = RADIANS_TO_DEGREES(angleRadians);
float rotateDegreesPerSecond = 180 / 0.5; // Would take 0.5 seconds to rotate 180 degrees, or half a circle
float degreesDiff = (angleCurrent - angleDegrees) * 1.0;
float rotateDuration = fabs(degreesDiff / rotateDegreesPerSecond);
however, that's only part of the equation ...
Again, the first drag is fine, as the angle of rotation is derived from the fulcrum of the arrow (its anchor point) and wherever my finger ends up. However, let's say that due to the limit of the screen size I can't get the angle I desire in a single drag, so I lift my finger and then place it back down to continue the drag. Well, I can't use the above-mentioned code, because as soon as I place my finger down for the second time, the arrow's rotation jumps over to where my finger now starts - again ending the same way.
So I thought the solution would be to not use the anchor point of the arrow at all, but instead use the initial finger touchpoint (as the new anchor) and calculate the arrow's angle based on that touchpoint and wherever my finger moves to. That works in theory; however, when you then try to factor in the "strength draw" aspect that's represented in "Fragger" (where pulling the finger closer or farther away from the arrow adjusts the strength of the grenade throw), then you're not moving your finger towards the arrow to weaken the pull, but instead towards wherever you initially touched - which is not only visually difficult but as a bonus creates a lot of rotational jerkiness as you get your finger closer to the origin.
I've been working on it for about a month now (did I mention that I hate math) trying to create a homogenized method (SIN ... COS ... TAN ... I hate you all!) to do the following:
First pull will rotate the arrow based on the direction of the pull but NOT automatically point to your finger (initial angle)
The ability to lift my finger up, place it back down and drag to ADD TO or SUBTRACT FROM the current rotation of the arrow. (delta angle)
The drag of my finger towards or away from the anchor of the arrow (relative to the current angle of the arrow) will adjust the strength of the "pull" accordingly. For example, if the arrow is already pointing straight up, then continuing the drag straight up would increase the strength and down would decrease the strength respectively. (hypotenuse derived from arrow anchor point).
I also need to be able to use another finger (2 finger touch) to assist in the rotation/strength calculation, so that adds even more chaos into my cluster ...
If you've ever played the 'Fragger' game, you'll recognize the complexity of this single finger action, which is also its functional beauty. I can get pieces of it working - I can rotate it, I can adjust a color mask to indicate strength - but not all of it working together. Perhaps I'm going about it completely wrong, however every example I find online stops at simply rotating an object to point to your finger, with every new touch drag ending with the same results.
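For what it's worth, here is a minimal Swift sketch of one way to cover points 1-3 above for a single finger: keep the arrow's angle and strength as persistent state, and on each touch event apply only the change since the previous touch location, so lifting the finger and placing it back down never snaps the arrow. All names (AimScene, arrowAngle, the "arrow" asset, the /200 sensitivity) are illustrative assumptions, not taken from Fragger.
import SpriteKit

class AimScene: SKScene {
    let arrow = SKSpriteNode(imageNamed: "arrow")   // hypothetical asset
    var arrowAngle: CGFloat = .pi / 2               // persists between drags
    var strength: CGFloat = 0.5                     // 0...1, persists between drags
    private var lastTouch: CGPoint = .zero

    override func didMove(to view: SKView) {
        arrow.position = CGPoint(x: frame.midX, y: frame.midY)
        arrow.zRotation = arrowAngle
        addChild(arrow)
    }

    private func angle(to point: CGPoint) -> CGFloat {
        atan2(point.y - arrow.position.y, point.x - arrow.position.x)
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let p = touches.first?.location(in: self) else { return }
        lastTouch = p            // just remember where this drag starts; do not snap the arrow
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let p = touches.first?.location(in: self) else { return }

        // 1 & 2: add only the change in angle since the last event (normalised to -pi...pi),
        // so a new drag continues from the arrow's current rotation instead of jumping to the finger
        var deltaAngle = angle(to: p) - angle(to: lastTouch)
        deltaAngle = atan2(sin(deltaAngle), cos(deltaAngle))
        arrowAngle += deltaAngle
        arrow.zRotation = arrowAngle

        // 3: project the finger movement onto the arrow's current direction, so dragging
        // "along" the arrow adds strength and dragging back against it removes strength
        let dir = CGVector(dx: cos(arrowAngle), dy: sin(arrowAngle))
        let move = CGVector(dx: p.x - lastTouch.x, dy: p.y - lastTouch.y)
        strength = max(0, min(1, strength + (move.dx * dir.dx + move.dy * dir.dy) / 200))

        lastTouch = p
    }
}
Extending this to a second finger could, for example, average the per-touch angle deltas rather than tracking only touches.first.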

Controlling movement of flying a vehicle with SceneKit

I currently have a scene which contains a central node at the root of the scene with earth-like geometry and a node representing a flying vehicle.
However I cannot find the right approach to control the movement of the vehicle. I need the ability to turn left and right while orbiting at a static altitude and speed.
I have tried many combinations of animations and physics body forces all leading to undesirable results.
The closest I've come is:
Setting the pivot property of the vehicle to the centre of the scene
Then setting an Action like below to control moving forward
[_vehicleNode runAction:[SCNAction repeatActionForever:[SCNAction rotateByX:-1 y:0 z:0 duration:10.0]]];
Then finally applying forces for turning left and right with
[_vehicleNode.physicsBody applyTorque:SCNVector4Make(0, 1, 0, 1) impulse:YES];
However I cannot seem to set the pivot and/or position to the right value to get the desired result.
Edit: It appears the above method would be the solution I'm looking for; however, for some reason, when I add geometry to the vehicle node, its position in the scene graph changes dramatically. When I add hardcoded buttons to change its position to where it belongs, it appears correct for only that single frame, then goes straight back to being in the middle of nowhere.
Edit 2: After replacing all geometry with a primitive sphere for testing, the node is now rotating as intended, but it is unaffected by physics forces, appearing to ignore its declaration as a dynamic body.
If I understand what you are trying to achieve correctly, you can try this:
add a first node to your scene, positioned at (0,0,0) and make it rotate forever along the Y axis using an SCNAction
add your ship node as a child of the first node. Position it at (X,0,0) so that it orbits around the earth
rotate your ship node along the X axis with an action or an animation
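A minimal Swift sketch of those three steps (the orbit radius, durations, and the placeholder sphere geometry are illustrative; scene is assumed to be your SCNScene):
let orbitNode = SCNNode()                                  // 1. node at the centre of the earth
orbitNode.position = SCNVector3(0, 0, 0)
scene.rootNode.addChildNode(orbitNode)
orbitNode.runAction(.repeatForever(.rotateBy(x: 0, y: 1, z: 0, duration: 10)))

let shipNode = SCNNode(geometry: SCNSphere(radius: 0.5))   // 2. ship as a child, offset along X
shipNode.position = SCNVector3(5, 0, 0)                    //    so it orbits at a fixed altitude
orbitNode.addChildNode(shipNode)

// 3. rotate the ship node itself along the X axis with its own action or animation
shipNode.runAction(.repeatForever(.rotateBy(x: 1, y: 0, z: 0, duration: 10)))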
