What is SKNode "origin"?

The docs for SKPhysicsBody bodyWithCircleOfRadius: say:
Creates a circular physics body centered on the owning node’s origin.
So is origin a node position? I couldn't find it defined anywhere.

So is origin a node position?
The origin is the point with coordinates (0, 0) in a given coordinate system. However, you typically deal with a number of different coordinate systems when working with computer graphics, and each one has its own origin. For example, every view in a view hierarchy has its own coordinate system. So, you need some context to know exactly which one you're talking about.
From the SKNode docs:
Every node in a node tree provides a coordinate system to its children. After a child is added to the node tree, it is positioned inside its parent's coordinate system by setting its position properties.
You also commented:
I'm confused because 'origin' is also mentioned in the SKScene docs, and it's different from the frame's origin
SKScene is a subclass of SKNode, and so it's also the case that every scene provides a coordinate system. This isn't surprising -- a scene is just the root node in a tree of nodes.
Don't confuse the origin of a node's frame with the origin of the node's own coordinate system. A CGRect is defined by two things: a CGPoint called origin, and a CGSize called size. A node's frame is the rectangle that defines the node's bounds in the coordinate system of the parent node. The node's origin might happen to be at the same place as its frame's origin, but they're not the same thing. For example, the origin of a scene is at its anchorPoint -- that is, the anchorPoint property of a scene indicates some point in the view that contains the scene, and that point has coordinates (0, 0) in the scene. Neither of these has anything to do with the frame or frame.origin.
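For example, in a minimal Swift sketch (the sizes and the red square are just illustrative), moving a scene's anchorPoint moves where the scene's origin lands in the view, without touching any node's frame:
import SpriteKit

let scene = SKScene(size: CGSize(width: 400, height: 300))
// By default the scene's origin (0, 0) is at the bottom-left corner of the view.
scene.anchorPoint = CGPoint(x: 0.5, y: 0.5)
// Now (0, 0) in scene coordinates is the center of the view.

let child = SKSpriteNode(color: .red, size: CGSize(width: 40, height: 40))
child.position = CGPoint(x: 100, y: 50)   // expressed in the scene's coordinate system
scene.addChild(child)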

Origin is normally a point. However, it is slightly confusing to speak of an SKNode's origin, because that is not a property of the SKNode in the same sense that it is a property of the SKNode's frame. If you do a keyword search of Apple's SKNode docs for the word 'origin', you will find it zero times. However, I agree with the consensus here that, despite this lack of mention in the SKNode docs, the origin of a node is more likely to refer to its position than to its frame's origin. So while you would get the x and y values of the SKNode's frame's origin like this:
node.frame.origin.x
node.frame.origin.y
That does not necessarily describe the node's origin. I hope I have made this answer more accurate and less confusing :)
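To make the difference concrete, here is a small Swift sketch (the numbers are arbitrary): with the default anchorPoint of (0.5, 0.5), the frame's origin sits half a width and half a height away from the node's position.
import SpriteKit

let sprite = SKSpriteNode(color: .blue, size: CGSize(width: 100, height: 100))
sprite.position = CGPoint(x: 200, y: 200)

// The node's position (its own origin, in the parent's coordinates):
print(sprite.position)       // (200.0, 200.0)

// The frame's origin is the bottom-left corner of the node's bounding box:
print(sprite.frame.origin)   // (150.0, 150.0)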

Related

Translate and Rotate specific node in ARSCNView

I am adding nodes to an ARSCNView as child nodes, and cloning nodes depending on what I choose from the menu. The object is placed where I tap on the screen. How can I translate and scale a specific node?
Use the transform property of SCNNode for that:
The transformation is the combination of the node's rotation, position, and scale properties. The default transformation is SCNMatrix4Identity.
When you set the value of this property, the node's rotation, orientation, eulerAngles, position, and scale properties automatically change to match the new transform, and vice versa. SceneKit can perform this conversion only if the transform you provide is a combination of rotation, translation, and scale operations. If you set the value of this property to a skew transformation or to a nonaffine transformation, the values of these properties become undefined. Setting a new value for any of these properties causes SceneKit to compute a new transformation, discarding any skew or nonaffine operations in the original transformation.
You can animate changes to this property's value. See Animating SceneKit Content.
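For instance, in a minimal Swift sketch (not from the question's code, and with arbitrary values), setting transform and setting the individual properties are two views of the same node state:
import SceneKit

let node = SCNNode()

// Setting the combined transform...
node.transform = SCNMatrix4MakeTranslation(1, 0, -2)

// ...is reflected back into the individual properties:
print(node.position)   // (1.0, 0.0, -2.0)

// Going the other way also works: changing position/rotation/scale
// causes SceneKit to recompute the transform matrix.
node.scale = SCNVector3(2, 2, 2)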
Or you can use: position, rotation, eulerAngles, orientation, scale.
You can look at how Apple's sample code does the same thing here, with its VirtualObjectManager and gesture recognizers for scale/translate.
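If you are wiring this up yourself rather than reusing that sample, a rough Swift sketch of pinch-to-scale could look like the following (selectedNode and the view setup are placeholders; a real app would also hit-test to pick the node and handle a pan gesture for translation):
import ARKit
import SceneKit
import UIKit

class ObjectViewController: UIViewController {
    let sceneView = ARSCNView()
    var selectedNode: SCNNode?   // set this to the node the user placed/tapped

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)
        let pinch = UIPinchGestureRecognizer(target: self, action: #selector(didPinch(_:)))
        sceneView.addGestureRecognizer(pinch)
    }

    @objc func didPinch(_ gesture: UIPinchGestureRecognizer) {
        guard let node = selectedNode else { return }
        // Apply the pinch factor to the node's current scale, then reset
        // the gesture's scale so each callback applies an incremental change.
        let factor = Float(gesture.scale)
        node.scale = SCNVector3(node.scale.x * factor,
                                node.scale.y * factor,
                                node.scale.z * factor)
        gesture.scale = 1
    }
}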

SpriteKit scaling affects physics

I've implemented zooming by rescaling a root "world" node housing all the objects in my game. However, with the smaller size of the world node & children, applying forces now has a greater effect.
Even if I scale forces by the same scaling as my world node, they are still huge and objects go flying.
I've seen some pretty hectic solutions around:
Scale the scene (but then overlays are scaled too)
Create a whole new invisible layer that obeys normal physics, then a visible layer on top, which you scale...
Is there a more straightforward approach to somehow just scale the physics world with the world node?
Yes, don't scale in order to zoom. Try resizing instead.
myScene.scaleMode = SKSceneScaleModeAspectFill;
And then while zooming:
myScene.size = CGSizeMake(myScene.size.width + dx, myScene.size.height + dy);
Apple's documentation says:
Set the scaleMode property to SKSceneScaleModeResizeFill. Sprite Kit automatically resizes the scene so that it always matches the view’s size.
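A minimal Swift sketch of that idea, assuming myScene is the presented SKScene and dx/dy are whatever zoom step you derive from your pinch gesture:
import SpriteKit

func zoom(_ myScene: SKScene, dx: CGFloat, dy: CGFloat) {
    // Growing the scene's size shows more of the world (zoom out), shrinking it
    // shows less (zoom in). No node is scaled, so the physics simulation is untouched.
    myScene.size = CGSize(width: myScene.size.width + dx,
                          height: myScene.size.height + dy)
}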

Convert UIKit coordinates to SpriteKit coordinates

I am trying to convert normal UIKit coordinates (top-left origin; x and y increase to the right and down) to SpriteKit coordinates (origin at the center of the node; left and down are negative, right and up are positive).
Does anyone know how to do this? Unfortunately I can't work around it, as far as I know, because I am using QuartzCore for drawing on an SKNode. Without converting, my drawings end up way off.
According to the docs, SKView has methods for Converting Between View and Scene Coordinates.
Remember the scene is itself an SKNode, so if you need to get from scene coordinates to a particular node coordinate system there's also the methods in Converting to and from the Node’s Coordinate System.
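A minimal Swift sketch that does both hops (view point → scene point → node point); skView and someNode are assumed to exist in your code:
import SpriteKit

func convert(viewPoint: CGPoint, in skView: SKView, to someNode: SKNode) -> CGPoint? {
    guard let scene = skView.scene else { return nil }
    // The scene flips the y-axis and accounts for its size/scaleMode and anchorPoint.
    let scenePoint = scene.convertPoint(fromView: viewPoint)
    // Then go from scene coordinates into the target node's own coordinate system.
    return someNode.convert(scenePoint, from: scene)
}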

physicsBody is misaligned with spriteNode

I am creating a sprite node, setting its position, changing its anchorPoint to (0, 0.5), and then creating a physics body.
The physics body thinks my anchorPoint is still at (0.5, 0.5), stupidly.
The same problem is referenced here, but unsolved: Physicsbody doesn't adhere to node's anchor point
The order I am doing things in is correct; it's just that my physics body is stubborn.
The anchorPoint determines where the node's texture is drawn relative to the node's position. It simply does not affect physics bodies because it's a purely visual property (a texture offset).
For physics-driven nodes it is actually counter-productive to change the anchorPoint from its default because that will change the point around which the texture will rotate. And the physics body will usually also change the node's rotation.
So even if you were to move the physics body shape's vertices to match the sprite with a modified anchorPoint, the physics shape will be misaligned with the image as soon as the body starts rotating. And it'll seem to behave weird.
Plus, whatever you want to achieve using anchorPoint, you can achieve more flexibly by using the node hierarchy to your advantage. Use an SKNode as the physics node, and add a non-physics sprite node as a child of that node, offsetting it the way you wanted the image to be offset by changing the sprite's anchorPoint.
You end up having two nodes, one invisible representing the physics body and one (or more) sprite(s) representing the visuals for the body but not necessarily tied to the body's center position.
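A small Swift sketch of that setup (the radius, positions, and texture name are placeholders):
import SpriteKit

// Invisible node that owns the physics body; physics drives its position and rotation.
let bodyNode = SKNode()
bodyNode.physicsBody = SKPhysicsBody(circleOfRadius: 20)
bodyNode.position = CGPoint(x: 200, y: 300)

// Visible sprite, offset however you like via anchorPoint; it simply follows its parent.
let sprite = SKSpriteNode(imageNamed: "ball")
sprite.anchorPoint = CGPoint(x: 0, y: 0.5)
bodyNode.addChild(sprite)

// scene.addChild(bodyNode)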

OpenCV POSIT algorithm - coordinate systems

I know that POSIT calculates the translation and rotation between your camera and a 3D object.
The only problem I have right now is that I have no idea how the coordinate systems of the camera and the object are defined.
So, for example, if I get 90° around the z-axis, in which direction is the z-axis pointing, and is it the object or the camera that rotates around this axis?
Edit:
After some testing and playing around with different coordinate systems, I think this is right:
Definition of the camera coordinate system:
The z-axis points in the direction in which the camera is looking.
The x-axis points to the right while looking in the z-direction.
The y-axis points up while looking in the z-direction.
The object is defined in the same coordinate system, but each point is defined relative to the starting point and not to the coordinate system's origin.
The translation vector you get tells you how point[0] of the object is moved away from the origin of the camera coordinate system.
The rotation matrix tells you how to rotate the object in the camera's coordinate system in order to get the object's starting orientation. So the rotation matrix basically doesn't tell you how the object is rotated right now; it tells you how to reverse its current orientation.
Can anyone confirm this?
Check out this answer.
The Y axis is pointing downward. I don't know what you mean by "starting point". The camera lies at the origin of its coordinate system, and object points are defined in this system.
You are right about the rotation matrix -- well, half right. The rotation matrix tells you how to rotate the coordinate system to make it oriented the same as the coordinate system used to define the model of the object. So it does tell you how the object is oriented with respect to the camera coordinate system.
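In other words, under the usual POSIT convention (p_model being a point expressed in the object's model coordinate system, and R, t being what the algorithm returns), a model point maps into camera coordinates as:
p_camera = R * p_model + t
so t is where the model's origin ends up in the camera frame, and R expresses how the model's axes are oriented in that frame.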
