CMMotionData to SceneKit SCNNode orientation - augmented-reality

Trying to use CoreMotion to correctly rotate a SceneKit camera. The scene I've built is rather simple: all I do is create a bunch of boxes distributed in an area, and the camera just points down the Z axis.
Unfortunately, the data coming back from device motion doesn't seem to relate to the device's physical position and orientation in any way. It just seems to meander randomly.
As suggested in this SO post, I'm passing the attitude's quaternion directly to the camera node's orientation property.
Am I misunderstanding what data Core Motion is giving me here? Shouldn't the attitude reflect the device's physical orientation? Or is it incremental movement, so that I should be building upon the prior orientation?

This snippet here might help you:
let motionManager = CMMotionManager()
motionManager.deviceMotionUpdateInterval = 1.0 / 60.0
motionManager.startDeviceMotionUpdates(to: .main) { motion, error in
    guard let attitude = motion?.attitude else { return }
    let roll = Float(attitude.roll) + 0.5 * .pi
    let yaw = Float(attitude.yaw)
    let pitch = Float(attitude.pitch)
    self.cameraNode.eulerAngles = SCNVector3(
        x: -roll,
        y: yaw,
        z: -pitch)
}
This setup is for the device in landscape right. You can play around with different orientations by changing the signs and offsets. Don't forget to import CoreMotion.

For anyone who stumbles on this, here's a more complete answer so you can understand the need for negations and pi/2 shifts. You first need to know your reference frame. Spherical coordinate systems define points as vectors angled away from the z- and x-axes. For the earth, let's define the z-axis as the line from the earth's center to the north pole and the x-axis as the line from the center through the equator at the prime meridian (in the Atlantic, off the west coast of Africa).
For (lat, lon, alt), we can then define roll and yaw around the z- and y- axes in radians:
let roll = lon * Float.pi / 180
let yaw = (90 - lat) * Float.pi / 180
I'm pairing roll, pitch, and yaw with z, x, and y, respectively, as defined for eulerAngles.
The extra 90 degrees accounts for the north pole being at 90 degrees latitude instead of zero.
To place my SCNCamera on the globe, I used two SCNNodes: an 'arm' node and the camera node:
let scnCamera = SCNNode()
scnCamera.camera = SCNCamera()
scnCamera.position = SCNVector3(x: 0.0, y: 0.0, z: alt + EARTH_RADIUS)

let scnCameraArm = SCNNode()
scnCameraArm.position = SCNVector3(x: 0, y: 0, z: 0)
scnCameraArm.addChildNode(scnCamera)
The arm is positioned at the center of the earth, and the camera is placed alt + EARTH_RADIUS away, i.e. the camera is now at the north pole. To move the camera on every location update, we can now just rotate the arm node with new roll and yaw values:
scnCameraArm.eulerAngles.z = roll
scnCameraArm.eulerAngles.y = yaw
Without changing the camera's orientation, its virtual lens is always facing the ground and its virtual 'up' direction is pointed westward.
To change the virtual camera's orientation, the CMMotion callback returns a CMAttitude with roll, pitch, and yaw values relative to a different z- and x-axis reference of your choosing. The magnetometer-based ones use a z-axis pointed away from gravity and an x-axis pointed at the north pole. So a phone with zero pitch, roll, and yaw would have its screen facing away from gravity, its back camera pointed at the ground, and the right side of its portrait mode facing north. Notice that this orientation is relative to gravity, not to the phone's portrait/landscape mode (which is also relative to gravity). So portrait/landscape is irrelevant.
If you imagine the phone's camera in this orientation near the north pole on the prime meridian, you'll notice that the CMMotion reference is in a different orientation than the virtual camera (SCNCamera). Both cameras are facing the ground, but their respective y-axes (and x) are 180 degrees apart. To line them up, we need to spin one around its respective z-axis, i.e. add/subtract 180 degrees to the roll ...or, since they're expressed in radians, negate them for the same effect.
Also, as far as I can tell, CMAttitude doesn't explicitly document that its roll value means a rotation about the z-axis coming out of the phone's screen, and from experimenting, it seems that attitude.roll and attitude.yaw have opposite definitions from those in eulerAngles, but maybe this is an artifact of the order in which the rotational transformations are applied in virtual space with eulerAngles (?). Anyway, the callback:
motionManager?.startDeviceMotionUpdates(using: .xTrueNorthZVertical, to: OperationQueue.main, withHandler: { (motion: CMDeviceMotion?, err: Error?) in
    guard let m = motion else { return }
    scnCamera.eulerAngles.z = Float(m.attitude.yaw - Double.pi)
    scnCamera.eulerAngles.x = Float(m.attitude.pitch)
    scnCamera.eulerAngles.y = Float(m.attitude.roll)
})
You can also start with a different reference frame for your virtual camera, e.g. z-axis pointing through the prime meridian at the equator and x-axis pointing through the north pole (i.e. the CMMotion reference), but you'll still need to invert the longitude somewhere.
With this set up, you can build a scene heavily reliant on GPS locations pretty easily.
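To tie the pieces together, here is a minimal sketch of my own (not from the answer above) of a CLLocationManager delegate method driving the arm node; it assumes the scnCameraArm setup above and an EARTH_RADIUS constant in the same scene units as alt:

import CoreLocation

func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
    guard let location = locations.last else { return }
    // Longitude rolls the arm about z; latitude yaws it about y,
    // with the 90-degree shift because the camera starts at the north pole.
    let lon = Float(location.coordinate.longitude)
    let lat = Float(location.coordinate.latitude)
    scnCameraArm.eulerAngles.z = lon * .pi / 180
    scnCameraArm.eulerAngles.y = (90 - lat) * .pi / 180
}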

Related

ArUco Markers, pose estimation - Exactly for which point is the translation and rotation given?

I detected the ArUco marker and estimated the pose. See the image below. However, the Xt (X translation) I get is a positive value. According to the drawAxis function, the positive direction is away from the image center, so I thought it was supposed to be a negative value. Why am I getting a positive value instead?
My camera is about 120 mm away from the imaging surface, but I am getting Zt (Z translation) in the range of 650 mm. Is pose estimation giving the pose of the marker with respect to the physical camera or the image plane center? I didn't get why Zt is so high.
I kept measuring the pose while changing Z, and obtained roll, pitch, and yaw. I noticed the roll (rotation w.r.t. the camera X-axis) changes sign back and forth while its magnitude stays between 166 and 178, but the sign of Xt did not change with the sign change in roll. Any thoughts on why it behaves like that?
Any suggestions for getting more consistent data?
import math

import cv2 as cv
import numpy as np
from scipy.spatial.transform import Rotation as R

image = cv.imread(fname)
arucoDict = cv.aruco.Dictionary_get(cv.aruco.DICT_4X4_1000)
arucoParams = cv.aruco.DetectorParameters_create()
(corners, ids, rejected) = cv.aruco.detectMarkers(image, arucoDict,
                                                  parameters=arucoParams)
print(corners, ids, rejected)

if len(corners) > 0:
    # flatten the ArUco IDs list
    ids = ids.flatten()
    # loop over the detected ArUCo corners
    # for (markerCorner, markerID) in zip(corners, ids):
    #     (markerCorner, markerID) = (corners, ids)
    # extract the marker corners (which are always returned in
    # top-left, top-right, bottom-right, and bottom-left order)
    # corners = corners.reshape((4, 2))
    (topLeft, topRight, bottomRight, bottomLeft) = (corners[0][0][0], corners[0][0][1],
                                                    corners[0][0][2], corners[0][0][3])
    # convert each of the (x, y)-coordinate pairs to integers
    topRight = (int(topRight[0]), int(topRight[1]))
    bottomRight = (int(bottomRight[0]), int(bottomRight[1]))
    bottomLeft = (int(bottomLeft[0]), int(bottomLeft[1]))
    topLeft = (int(topLeft[0]), int(topLeft[1]))
    # draw the bounding box of the ArUCo detection
    cv.line(image, topLeft, topRight, (0, 255, 0), 2)
    cv.line(image, topRight, bottomRight, (0, 255, 0), 2)
    cv.line(image, bottomRight, bottomLeft, (0, 255, 0), 2)
    cv.line(image, bottomLeft, topLeft, (0, 255, 0), 2)
    # compute and draw the center (x, y)-coordinates of the ArUco marker
    cX = int((topLeft[0] + bottomRight[0]) / 2.0)
    cY = int((topLeft[1] + bottomRight[1]) / 2.0)
    cv.circle(image, (cX, cY), 4, (0, 0, 255), -1)
    if topLeft[1] != topRight[1] or topLeft[0] != bottomLeft[0]:
        rot1 = np.degrees(np.arctan((topLeft[0] - bottomLeft[0]) / (bottomLeft[1] - topLeft[1])))
        rot2 = np.degrees(np.arctan((topRight[1] - topLeft[1]) / (topRight[0] - topLeft[0])))
        rot = (np.round(rot1, 3) + np.round(rot2, 3)) / 2
        print(rot1, rot2, rot)
    else:
        rot = 0
    # draw the ArUco marker ID on the image
    rotS = ",rotation:" + str(np.round(rot, 3))
    cv.putText(image, ("position: " + str(cX) + "," + str(cY)),
               (100, topLeft[1] - 15), cv.FONT_HERSHEY_SIMPLEX, 0.5, (255, 0, 80), 2)
    cv.putText(image, rotS,
               (400, topLeft[1] - 15), cv.FONT_HERSHEY_SIMPLEX, 0.5, (255, 0, 80), 2)
    print("[INFO] ArUco marker ID: {}".format(ids))
    d = np.round((math.dist(topLeft, bottomRight) + math.dist(topRight, bottomLeft)) / 2, 3)
    # Get the rotation and translation vectors
    rvecs, tvecs, obj_points = cv.aruco.estimatePoseSingleMarkers(corners, aruco_marker_side_length, mtx, dst)
    # Print the pose for the ArUco marker
    # The pose of the marker is with respect to the camera lens frame.
    # Imagine you are looking through the camera viewfinder,
    # the camera lens frame's:
    #   x-axis points to the right
    #   y-axis points straight down towards your toes
    #   z-axis points straight ahead away from your eye, out of the camera
    # for i, marker_id in enumerate(marker_ids):
    # Store the translation (i.e. position) information
    transform_translation_x = tvecs[0][0][0]
    transform_translation_y = tvecs[0][0][1]
    transform_translation_z = tvecs[0][0][2]
    # Store the rotation information
    rotation_matrix = np.eye(4)
    rotation_matrix[0:3, 0:3] = cv.Rodrigues(np.array(rvecs[0]))[0]
    r = R.from_matrix(rotation_matrix[0:3, 0:3])
    quat = r.as_quat()
    # Quaternion format
    transform_rotation_x = quat[0]
    transform_rotation_y = quat[1]
    transform_rotation_z = quat[2]
    transform_rotation_w = quat[3]
    # Euler angle format in radians
    roll_x, pitch_y, yaw_z = euler_from_quaternion(transform_rotation_x, transform_rotation_y,
                                                   transform_rotation_z, transform_rotation_w)
    roll_x = math.degrees(roll_x)
    pitch_y = math.degrees(pitch_y)
    yaw_z = math.degrees(yaw_z)
Disclaimer: this goes for OpenCV v4.5.5 and corresponding aruco module (contrib repo). They redid a lot of aruco stuff for v4.6.0 and v4.7.0, so best check everything I say here.
Without checking all the code (looks roughly okay), a few basics about OpenCV and aruco:
Both use right-handed coordinate systems. Thumb X, index Y, middle Z.
OpenCV uses X right, Y down, Z far, for screen/camera frames. Origin for screens and pictures is the top left corner. For cameras, the origin is the center of the pinhole model, which would be the center of the aperture. I can't comment on lenses or lens systems. Assume the lens center is the origin. That's probably close enough.
Aruco uses X right, Y far, Z up, if the marker is lying flat on a table. Origin is in the center of the marker. The top left corner of the marker is considered the "first" corner.
The marker can be considered to have its own coordinate system/frame.
The pose given by rvec and tvec is the pose of the marker in the camera frame. That means np.linalg.norm(tvec) gives you the direct distance from the camera to the marker's center. tvec's Z is just the component parallel to optical axis.
If the marker is in the right half of the picture ("half" defined by camera matrix's cx,cy), you'd expect tvec's X to grow. Lower half, Y positive/growing.
Conversely, that transformation transforms marker-local coordinates to camera-local. Try transforming some marker-local points, such as origin or points on the axes. I believe that cv::transform can help with that. Using OpenCV's projectPoints to map 3D space points to 2D image points, you can then draw the marker's axes, or a cube on top of it, or anything you like.
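For instance, here is a rough Python sketch of that idea (my own illustration, reusing image, mtx, dst, rvecs, tvecs and aruco_marker_side_length from the question's code, so not a drop-in):

import cv2 as cv
import numpy as np

rvec, tvec = rvecs[0], tvecs[0]

# direct camera-to-marker distance vs. the Z component alone
print("distance:", np.linalg.norm(tvec), "  tvec Z:", tvec.ravel()[2])

# project marker-local points (the marker's axes) into the image and draw them
axis_len = aruco_marker_side_length * 0.5
obj_pts = np.float32([[0, 0, 0], [axis_len, 0, 0], [0, axis_len, 0], [0, 0, axis_len]])
img_pts, _ = cv.projectPoints(obj_pts, rvec, tvec, mtx, dst)
origin = tuple(int(v) for v in img_pts[0].ravel())
for p, color in zip(img_pts[1:], [(0, 0, 255), (0, 255, 0), (255, 0, 0)]):
    cv.line(image, origin, tuple(int(v) for v in p.ravel()), color, 2)

# or simply let OpenCV draw the axes
cv.drawFrameAxes(image, mtx, dst, rvec, tvec, axis_len)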
Say the marker sits upright and faces the camera dead-on. When you consider the frame triads of the marker and the camera in space ("world" space), both would be X "right", but one's Y and Z are opposite the other's Y and Z, so you'd expect to see a rotation around the X axis by half a turn (rotating Z and Y).
You could imagine the transformation to happen like this:
initially the camera looks through the marker, from the marker's back out into the world. The camera would be "upside down". The camera sees marker-space.
the pose's rotation component rotates the whole marker-local world around the camera's origin. Seen from the world frame (point of reference), the camera rotates, into an attitude you'd find natural.
the pose's translation moves the marker's world out in front of the camera (Z being positive), or equivalently, the camera backs away from the marker.
If you get implausible values, check aruco_marker_side_length and camera matrix. f would be around 500-3000 for typical resolutions (VGA-4k) and fields of view (60-80 degrees).
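As a rough rule-of-thumb check on that (my own arithmetic, not from the answer), f in pixels is approximately image_width / (2 * tan(hfov / 2)):

import math

width_px = 1920   # e.g. a 1920-pixel-wide image
hfov_deg = 70     # with a ~70 degree horizontal field of view
f_px = width_px / (2 * math.tan(math.radians(hfov_deg) / 2))
print(round(f_px))  # ~1371, comfortably inside the 500-3000 range mentioned above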

How to point the camera towards a SCNVector3 point below iOS 11

I just started learning how to use SceneKit yesterday, so I may get some stuff wrong. I am trying to make my cameraNode look at a SCNVector3 point in the scene.
I am trying to make my app available to people below iOS 11.0. However, the look(at:) function is only for iOS 11.0+.
Here is my function where I initialise the camera:
func initCamera() {
    cameraNode = SCNNode()
    cameraNode.camera = SCNCamera()
    cameraNode.position = SCNVector3(5, 12, 10)
    if #available(iOS 11.0, *) {
        cameraNode.look(at: SCNVector3(0, 5, 0)) // Calculate the look angle
    } else {
        // How can I calculate the orientation? <-----------
    }
    print(cameraNode.rotation) // Prints: SCNVector4(x: -0.7600127, y: 0.62465125, z: 0.17941462, w: 0.7226559)
    gameScene.rootNode.addChildNode(cameraNode)
}
The orientation of SCNVector4(x: -0.7600127, y: 0.62465125, z: 0.17941462, w: 0.7226559) in degrees is x: -43.5, y: 35.8, z: 10.3, and I don't understand w. (Also, why isn't z = 0? I thought z was the roll...?)
Here are my workings for recreating what I thought the Y-angle should be:
So I worked it out to be 63.4 degrees, but the returned rotation shows that it should be 35.8 degrees. Is there something wrong with my calculations, do I not fully understand SCNVector4, or is there another method to do this?
I looked at Explaining in Detail the ScnVector4 method for what SCNVector4 is, but I still don't really understand what w is for. It says that w is the 'angle of rotation', which I thought was what X, Y & Z were for.
If you have any questions, please ask!
Although @rickster has explained the properties of the node, I have figured out a method to rotate the node to look at a point using maths (trigonometry).
Here is my code:
// Extension for Float
extension Float {
    /// Convert degrees to radians
    func asRadians() -> Float {
        return self * Float.pi / 180
    }
}
and also:
// Extension for SCNNode
extension SCNNode {
    /// Look at a SCNVector3 point
    func lookAt(_ point: SCNVector3) {
        // Find change in positions
        let changeX = self.position.x - point.x // Change in X position
        let changeY = self.position.y - point.y // Change in Y position
        let changeZ = self.position.z - point.z // Change in Z position
        // Calculate the X and Y angles
        let angleX = atan2(changeZ, changeY) * (changeZ > 0 ? -1 : 1)
        let angleY = atan2(changeZ, changeX)
        // Calculate the X and Y rotations
        let xRot = Float(-90).asRadians() - angleX // X rotation
        let yRot = Float(90).asRadians() - angleY // Y rotation
        self.eulerAngles = SCNVector3(CGFloat(xRot), CGFloat(yRot), 0) // Rotate
    }
}
And you call the function using:
cameraNode.lookAt(SCNVector3(0, 5, 0))
Hope this helps people in the future!
There are three ways to express a 3D rotation in SceneKit:
What you're doing on paper is calculating separate angles around the x, y, and z axes. These are called Euler angles, or pitch, yaw, and roll. You might get results that more resemble your hand-calculations if you use eulerAngles or simdEulerAngles instead of rotation. (Or you might not, because one of the difficulties of an Euler-angle system is that you have to apply each of those three rotations in the correct order.)
simdRotation or rotation uses a four-component vector (float4 or SCNVector4) to express an axis-angle representation of the rotation. This relies on a bit of math that isn't obvious for many newcomers to 3D graphics: the result of any sequence of rotations around different axes can be minimally expressed as a single rotation around a new axis.
For example, a rotation of π/2 radians (90°) around the z-axis (0,0,1) followed by a rotation of π/2 around the y-axis (0,1,0) has the same result as a rotation of 2π/3 around the axis (-1/√3, 1/√3, 1/√3).
This is where you're getting confused about the x, y, z, and w components of a SceneKit rotation vector — the first three components are lengths, expressing a 3D vector, and the fourth is a rotation in radians around that vector.
Quaternions are another way to express 3D rotation (and one that's even further off the beaten path for those of us with the formal math education common to undergraduate computer science curricula, but not crazy advanced, either). These have lots of great features for 3D graphics, like being easy to compose and interpolate between. In SceneKit, the simdOrientation or orientation property lets you work with a node's rotation as a quaternion.
Explaining how quaternions work is too much for one SO answer, but the practical upshot is this: if you're working with a good vector math library (like the SIMD library built into iOS 9 and later), you can basically treat them as opaque — just convert from whichever other rotation representation is easiest for you, and reap the benefits.
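As a small illustration of that last point (my own sketch, not from the answer): compose the two rotations from the axis-angle example above as quaternions and hand the result straight to SceneKit, without doing any Euler or axis-angle math yourself.

import SceneKit
import simd

let aboutZ = simd_quatf(angle: .pi / 2, axis: SIMD3<Float>(0, 0, 1))
let aboutY = simd_quatf(angle: .pi / 2, axis: SIMD3<Float>(0, 1, 0))
let node = SCNNode()
node.simdOrientation = aboutY * aboutZ  // z-rotation first, then y-rotation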

Detect initial position of iOS Device in 3D Space - Core Motion

Using Core Motion we can get the change in the device's orientation from the attitude and its rotation from the gyro. But to know the actual position of the device in 3D space, we would need the device's initial position, so that the userAcceleration and gyro data can be applied to it to get the new actual position after changes. How do I get the initial actual position of the device? I want to detect a position like "45 degrees tilted to the left with face up" or "45 degrees tilted to the right and rotated 30 degrees about the y axis".
You can use the Core Motion accelerometer to estimate the initial tilt of your device with a few equations:
let x = data.acceleration.x
let y = data.acceleration.y
let z = data.acceleration.z

let roll  = atan(y / sqrt(pow(x, 2.0) + pow(z, 2.0)))
let pitch = atan(x / sqrt(pow(y, 2.0) + pow(z, 2.0)))
let yaw   = atan(sqrt(pow(x, 2.0) + pow(z, 2.0)) / z)
However, the yaw is still relative to the starting orientation of your device. To fix that, you should look into using the compass.
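Here is a minimal sketch of my own (not from the answer) for reading the compass heading with CLLocationManager, which you could use to anchor the yaw to north:

import CoreLocation

final class HeadingReader: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()

    override init() {
        super.init()
        manager.delegate = self
        if CLLocationManager.headingAvailable() {
            manager.startUpdatingHeading()
        }
    }

    func locationManager(_ manager: CLLocationManager, didUpdateHeading newHeading: CLHeading) {
        // Degrees: 0 = north, 90 = east. Prefer trueHeading when valid (>= 0).
        let heading = newHeading.trueHeading >= 0 ? newHeading.trueHeading : newHeading.magneticHeading
        print("Device heading: \(heading)")
    }
}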

SceneKit – SCNCamera Top-down view

I'm new to SceneKit, coming from 2D SpriteKit, and was trying to figure out how to adjust the camera so that it's at the top of the world facing down. I have the location part right, however I'm getting stuck on the rotation. If I adjust the X, Y or Z axis, nothing seems to happen, however on the W axis the slightest change (even 0.1 higher or lower) seems to move the camera in an unknown direction. What am I doing wrong?
cameraNode.position = SCNVector3Make(0, 10, 0)
cameraNode.rotation = SCNVector4Make(0, 0, 0, 0.5)
the rotation vector is decomposed as (x_axis, y_axis, z_axis, angle)
Setting a rotation axis with a null angle is the identity (no effective rotation). Setting an angle with a null rotation axis does not actually define a rotation.
As for why a small change of the angle has a huge effect, it's because they are expressed in radians.
A rotation of 90º around the x axis can be achieved as follows
node.rotation = SCNVector4Make(1, 0, 0, M_PI_2)
But you can also use Euler angles (see SCNNode.eulerAngles) if you find it easier:
node.eulerAngles = SCNVector3Make(M_PI_2, 0, 0)
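For the specific top-down view in the question, a minimal sketch (assuming the same cameraNode) could be:

cameraNode.position = SCNVector3Make(0, 10, 0)
// Pitch the camera 90 degrees down so it looks along -Y, straight at the ground.
cameraNode.eulerAngles = SCNVector3Make(-Float.pi / 2, 0, 0)
// or, equivalently, with the axis-angle rotation property:
// cameraNode.rotation = SCNVector4Make(1, 0, 0, -Float.pi / 2)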

How can I get the heading of the device with CMDeviceMotion in iOS 5

I'm developing an AR app using the gyro. I have used an Apple code example, pARk. It uses the rotation matrix to calculate the position of the coordinates and it does this really well, but now I'm trying to implement a "radar" and I need to rotate it as a function of the device heading. I'm using the CLLocationManager heading but it's not correct.
The question is, how can I get the heading of the device using the CMAttitude so that it reflects exactly what I see on the screen?
I'm new to rotation matrices and that kind of thing.
This is part of the code used to calculate the AR coordinates. It updates the cameraTransform with the attitude:
CMDeviceMotion *d = motionManager.deviceMotion;
if (d != nil) {
    CMRotationMatrix r = d.attitude.rotationMatrix;
    transformFromCMRotationMatrix(cameraTransform, &r);
    [self setNeedsDisplay];
}
and then in the drawRect code:
mat4f_t projectionCameraTransform;
multiplyMatrixAndMatrix(projectionCameraTransform, projectionTransform, cameraTransform);

int i = 0;
for (PlaceOfInterest *poi in [placesOfInterest objectEnumerator]) {
    vec4f_t v;
    multiplyMatrixAndVector(v, projectionCameraTransform, placesOfInterestCoordinates[i]);
    float x = (v[0] / v[3] + 1.0f) * 0.5f;
    float y = (v[1] / v[3] + 1.0f) * 0.5f;
I also rotate the view with the pitch angle.
The motion updates are started using true north as the reference:
[motionManager startDeviceMotionUpdatesUsingReferenceFrame:CMAttitudeReferenceFrameXTrueNorthZVertical];
So I think it must be possible to get the "roll"/heading of the device in any position (with any pitch and yaw...), but I don't know how.
There are a few ways to calculate the heading from the rotation matrix returned by CMDeviceMotion. This assumes you use the same definition as Apple's compass, where the +y direction (top of the iPhone) pointing due north returns a heading of 0, and rotating the iPhone to the right increases the heading, so East is 90, South is 180, and so forth.
First, when you start updates, be sure to check to make sure headings are available:
if (([CMMotionManager availableAttitudeReferenceFrames] & CMAttitudeReferenceFrameXTrueNorthZVertical) != 0) {
    ...
}
Next, when you start the motion manager, ask for attitude as a rotation from X pointing true North (or Magnetic North if you need that for some reason):
[motionManager startDeviceMotionUpdatesUsingReferenceFrame: CMAttitudeReferenceFrameXTrueNorthZVertical
                                                   toQueue: self.motionQueue
                                               withHandler: dmHandler];
When the motion manager reports a motion update, you want to find out how much the device has rotated in the X-Y plane. Since we are interested in the top of the iPhone, we'll pick a point in that direction and rotate it using the returned rotation matrix to get the point after rotation:
[m11 m12 m13] [0] [m12]
[m21 m22 m23] [1] = [m22]
[m31 m32 m33] [0] [m32]
The funky brackets are matrices; it's the best I can do using ASCII. :)
The heading is the angle between the rotated point and true North. We can use the X and Y coordinates of the rotated point to extract the arc tangent, which gives the angle between the point and the X axis. This is actually 180 degrees off from what we want, so we have to adjust accordingly. The resulting code looks like this:
CMDeviceMotionHandler dmHandler = ^(CMDeviceMotion *aMotion, NSError *error) {
    // Check for an error.
    if (error) {
        // Add error handling here.
    } else {
        // Get the rotation matrix.
        CMAttitude *attitude = self.motionManager.deviceMotion.attitude;
        CMRotationMatrix rm = attitude.rotationMatrix;

        // Get the heading.
        double heading = M_PI + atan2(rm.m22, rm.m12);
        heading = heading * 180.0 / M_PI;
        printf("Heading: %5.0f\n", heading);
    }
};
There is one gotcha: If the top of the iPhone is pointed straight up or straight down, the direction is undefined. The result is that m12 and m22 are zero, or very close to it. You need to decide what this means for your app and handle the condition accordingly. You might, for example, switch to a heading based on the -Z axis (behind the iPhone) when m12*m12 + m22*m22 is close to zero.
This all assumes you want to rotate about the X-Y plane, as Apple usually does for their compass. It works because you are using the rotation matrix returned by the motion manager to rotate a vector pointed along the Y axis, which is this matrix:
[0]
[1]
[0]
To rotate a different vector--say, one pointed along -Z--use a different matrix, like
[0]
[0]
[-1]
Of course, you also have to take the arc tangent in a different plane, so instead of
double heading = M_PI + atan2(rm.m22, rm.m12);
you would use
double heading = M_PI + atan2(-rm.m33, -rm.m13);
to get the rotation in the X-Z plane.
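Putting the gotcha and the -Z fallback together, a rough sketch of my own (following the answer's approach) of the handler's inner logic might look like:

// Inside the motion handler, after obtaining rm (the rotation matrix):
double planar = rm.m12 * rm.m12 + rm.m22 * rm.m22;
double heading;
if (planar > 1e-6) {
    // Normal case: heading of the +Y axis (top of the iPhone).
    heading = M_PI + atan2(rm.m22, rm.m12);
} else {
    // Top of the iPhone points straight up or down; fall back to the -Z axis.
    heading = M_PI + atan2(-rm.m33, -rm.m13);
}
heading = heading * 180.0 / M_PI;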
