I am trying to develop a robotic arm tracking system.
I used scenekit to develop the visualization and the control of the system.
The SCNNode hierarchy of my system is:
Shoulder--->Upper_arm--->Fore_arm--->Palm.
I can now rotate each node using its rotation property.
I am now interested in whether there is any existing API to compute the angle between two SCNNodes while the system is moving, e.g. the angle between Upper_arm and Fore_arm.
Try SCNNode.eulerAngles; it gives you an SCNVector3 with these components:
Pitch (the x component) is the rotation about the node’s x-axis.
Yaw (the y component) is the rotation about the node’s y-axis.
Roll (the z component) is the rotation about the node’s z-axis.
Because a node's transform is expressed in its parent's coordinate space, Fore_arm.eulerAngles will give you the rotation angles relative to Upper_arm.
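For example, a minimal sketch (assuming the node names match the hierarchy above and that each bone points along its local +Y axis):

import SceneKit
import simd

// Illustrative lookups; adjust the names to your scene graph.
let upperArm = scene.rootNode.childNode(withName: "Upper_arm", recursively: true)!
let foreArm = upperArm.childNode(withName: "Fore_arm", recursively: false)!

// Because Fore_arm is a child of Upper_arm, its eulerAngles are already
// expressed in Upper_arm's coordinate space:
let elbowAngles = foreArm.eulerAngles   // SCNVector3 of pitch/yaw/roll, in radians

// Alternatively, compute the single angle between the two bones'
// world-space direction vectors:
func boneDirection(_ node: SCNNode) -> simd_float3 {
    let col = node.simdWorldTransform.columns.1      // local +Y axis in world space
    return simd_normalize(simd_float3(col.x, col.y, col.z))
}
let cosAngle = simd_dot(boneDirection(upperArm), boneDirection(foreArm))
let elbowAngle = acos(min(max(cosAngle, -1), 1))     // joint angle, in radians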
I'm new to ARKit, and I need to know what exactly the rotation order for the camera in ARKit is.
I googled a lot for this info, but there is no clear answer.
Please, I need an answer with a printed example of the camera transform matrix.
(Rx, Ry, Rz)
or
(Rz, Ry, Rx) ?
In ARKit, SceneKit and RealityKit, a node's (entity's) default orientation is expressed as:
pitch, or rotation about X
yaw, or rotation about Y
roll, or rotation about Z
Apple Developer Documentation says:
SceneKit applies these rotations relative to the node's pivot property in the reverse order of the components: first roll (Z), then yaw (Y), then pitch (X). The rotation, eulerAngles, and orientation properties all affect the rotational aspect of the node's transform property. Any change to one of these properties is reflected in the others.
Answer
The rotation order for ARKit objects is:
(Rz, Ry, Rx)
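For a printed example, here is a minimal sketch using the ARSessionDelegate callback (the recomposition step is just a check of the Z-Y-X order):

import ARKit

func session(_ session: ARSession, didUpdate frame: ARFrame) {
    let camera = frame.camera
    print(camera.transform)        // 4x4 camera-to-world transform matrix
    print(camera.eulerAngles)      // simd_float3: (pitch, yaw, roll) in radians

    // Recompose the rotation in (Rz, Ry, Rx) order: roll applied first,
    // then yaw, then pitch (column-vector convention):
    let e = camera.eulerAngles
    let rx = simd_quatf(angle: e.x, axis: simd_float3(1, 0, 0))
    let ry = simd_quatf(angle: e.y, axis: simd_float3(0, 1, 0))
    let rz = simd_quatf(angle: e.z, axis: simd_float3(0, 0, 1))
    print(rx * ry * rz)            // should match the rotation part of camera.transform
}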
When I add a new node with ARKit (ARSKView), the object is positioned based on the device camera. So if your phone is facing down or tilted, the object will be oriented that way as well. How can I instead place the object based on the horizon?
For that, right after a new node's creation, use the worldOrientation instance property, which controls the node's orientation relative to the scene's world coordinate space.
var worldOrientation: SCNQuaternion { get set }
This quaternion isolates the rotational aspect of the node's worldTransform matrix, which in turn is the conversion of the node's transform from local space to the scene's world coordinate space. That is, it expresses the difference in axis and angle of rotation between the node and the scene's rootNode.
let worldOrientation = sceneView.scene.rootNode.worldOrientation
yourNode.worldOrientation = worldOrientation /* X, Y, Z, W quaternion components; aligns the node with the world axes */
P.S. (regarding your updated question):
If you're using SpriteKit, the 2D sprites you spawn in ARSKView always face the camera. So, if the camera moves around a fixed point in the real scene, all the sprites are rotated about their pivot points so that they keep facing the camera.
Nothing can prevent you from using SceneKit and SpriteKit together.
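And if you want the same always-facing-the-camera behavior for a SceneKit node, a billboard constraint is one way to do it (a minimal sketch; the plane size is arbitrary):

import SceneKit

let sprite = SCNNode(geometry: SCNPlane(width: 0.1, height: 0.1))
let billboard = SCNBillboardConstraint()
billboard.freeAxes = .Y               // rotate about Y only, so the node stays upright
sprite.constraints = [billboard]      // the node now keeps facing the camera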
I'm using a computer vision algorithm to aid the motion sensor (inertial measurement unit [IMU]) built into the iPhone 6.
It's important to know the difference between the camera and IMU coordinate system definitions.
I'm sure that Apple defines the IMU coordinate system as follows:
But I do not know how they define the x, y, z axes of the camera.
My ultimate goal is to transform the IMU measurements into the camera coordinate system.
The trick here is to view each axis from its positive end and check the sense of rotation against the right-hand rule: if a rotation about the axis appears counterclockwise (CCW), it is positive; if it appears clockwise (CW), it is negative.
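As a concrete sketch of such a transfer (the camera-axis convention here is my assumption, not from Apple's documentation: CoreMotion's device frame is x-right, y-up, z-out-of-screen, and I assume the common vision convention x-right, y-down, z-forward for the rear camera):

import CoreMotion
import simd

// Assumed fixed change-of-basis from the device (IMU) frame to the camera frame.
let deviceToCamera = simd_float3x3(rows: [
    simd_float3(1,  0,  0),    // camera x =  device x
    simd_float3(0, -1,  0),    // camera y = -device y (down instead of up)
    simd_float3(0,  0, -1)     // camera z = -device z (into the scene)
])

func cameraFrame(_ a: CMAcceleration) -> simd_float3 {
    let v = simd_float3(Float(a.x), Float(a.y), Float(a.z))
    return deviceToCamera * v    // the same measurement, expressed in the camera frame
}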
I know that POSIT calculates the translation and rotation between your camera and a 3D object.
The only problem I have right now is that I have no idea how the coordinate systems of the camera and the object are defined.
So, for example, if I get 90° around the z-axis, in which direction is the z-axis pointing, and is the object rotating around this axis, or is the camera rotating around it?
Edit:
After some testing and playing around with different coordinate systems, I think this is right:
Definition of the camera coordinate system:
The z-axis points in the direction the camera is looking.
The x-axis points to the right when looking in the z-direction.
The y-axis points up when looking in the z-direction.
The object is defined in the same coordinate system, but each point is defined relative to the starting point, not relative to the coordinate system's origin.
The translation vector you get tells you how point[0] of the object is moved away from the origin of the camera coordinate system.
The rotation matrix tells you how to rotate the object in the camera's coordinate system in order to get back the object's starting orientation. So the rotation matrix basically doesn't tell you how the object is rotated right now; it tells you how to reverse its current orientation.
Can anyone confirm this?
Check out this answer.
The y-axis is pointing downward. I don't know what you mean by "starting point". The camera lies at the origin of its coordinate system, and the object points are defined in this system.
You are right about the rotation matrix, well, half right. The rotation matrix tells you how to rotate the coordinate system to make it oriented the same as the coordinate system used to define the model of the object. So it does tell you how the object is oriented with respect to the camera coordinate system.
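As a sketch of what that means in code (names are illustrative): a model point p defined in the object's system maps into the camera system as p_cam = R * p + t.

import simd

// R and t are the rotation matrix and translation vector returned by POSIT.
func toCameraFrame(_ modelPoint: simd_float3,
                   R: simd_float3x3,
                   t: simd_float3) -> simd_float3 {
    return R * modelPoint + t    // the point expressed in the camera coordinate system
}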
How do I figure out the new angle and rotation vectors for the most visible side of the cube?
Why: The user can rotate the cube, but when finished I'd like the cube to snap to the side facing the user.
What: I'm currently using CoreAnimation in iOS to do the rotation with CATransform3D. I have the current angle and the rotation vector, so I can do this:
CATransform3DMakeRotation(angle, rotationVector[0], rotationVector[1], rotationVector[2]);
Additional Info: I'm currently using Bill Dudney's Trackball code to generate movement and calculate angle and rotation vector.
Your camera's lookAt vector - probably {0, 0, 1} - determines which side is closest to the user.
You need to create a normal for every side of the cube, then rotate the normals the same way as the cube. After that, calculate the angle between every normal and the camera's lookAt vector using a dot product. Whichever normal has the largest dot product belongs to the side closest to the camera, as in the sketch below.
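A minimal sketch of that test (assuming the cube's current orientation is available as a quaternion; names are illustrative):

import simd

let lookAt = simd_float3(0, 0, 1)                    // camera's lookAt vector
let faceNormals: [simd_float3] = [
    simd_float3( 1, 0, 0), simd_float3(-1, 0, 0),    // +X / -X faces
    simd_float3( 0, 1, 0), simd_float3( 0, -1, 0),   // +Y / -Y faces
    simd_float3( 0, 0, 1), simd_float3( 0, 0, -1)    // +Z / -Z faces
]

func mostVisibleFace(cubeRotation: simd_quatf) -> Int {
    var bestIndex = 0
    var bestDot = -Float.infinity
    for (i, normal) in faceNormals.enumerated() {
        // Rotate the face normal the same way as the cube, then measure
        // its alignment with the camera's lookAt vector.
        let d = simd_dot(cubeRotation.act(normal), lookAt)
        if d > bestDot { bestDot = d; bestIndex = i }
    }
    return bestIndex                                 // index of the side facing the user
}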