I am using ARKit's ARFaceTrackingConfiguration with ARConfiguration.WorldAlignment.camera alignment, but I found that the documentation (seemingly) does not reflect reality.
Based on the excerpt of the documentation below, I would expect the face anchor's transform to be expressed in a right-handed coordinate system. However, when I tried moving my head, I noticed that the Z coordinate of the face anchor is always negative (i.e. faceAnchor.transform.columns.3.z < 0). Note that moving my head in the X and Y directions gives the expected result (unlike the Z coordinate).
Camera alignment defines a coordinate system based on the native sensor orientation of the device camera. Relative to a AVCaptureVideoOrientation.landscapeRight-oriented camera image, the x-axis points to the right, the y-axis points up, and the z-axis points out the front of the device (toward the user).
I want the transform to behave as per the documentation, i.e. the Z coordinate of the face anchor should be positive, given that the documentation says "the z-axis points out the front of the device (toward the user)". So far it seems the Z axis points out the back of the device…
Am I missing something obvious?
I tried to repair the rotation with the following code, but I am not sure if it's the correct way to fix this:
// Repair the rotation by swapping the X/Y axis components and flipping Z
let oldFaceRotation = simd_quatf(face.transform) // get the rotation as a quaternion from the anchor's transform
let repairedFaceRotation = simd_quatf(ix: oldFaceRotation.axis.y, iy: oldFaceRotation.axis.x, iz: -oldFaceRotation.axis.z, r: oldFaceRotation.real)
// Repair the translation by mirroring the Z coordinate
var repairedPosition = face.transform.columns.3
repairedPosition.z *= -1
// Combine the corrected rotation and translation
var correctedFaceTransform = float4x4(repairedFaceRotation)
correctedFaceTransform.columns.3 = repairedPosition
It seems quite obvious:
When the ARSession is running and the ARCamera begins tracking the environment, it places the world origin axes in front of your face at (x: 0, y: 0, z: 0). Just check it using:
sceneView.debugOptions = [.showWorldOrigin]
So your face's position must be on the positive part of the Z axis of world coordinates.
Thus, the ARFaceAnchor will be placed in the positive Z-axis direction as well.
And when you use ARFaceTrackingConfiguration vs ARWorldTrackingConfiguration, there are two things to consider:
The rear camera moves towards objects along the negative Z axis (the positive X axis points to the right).
The front camera moves towards faces along the positive Z axis (the positive X axis points to the left).
Hence, when you are "looking" through the TrueDepth camera, the 4x4 matrix is mirrored.
Although I still don't know why the face anchor does not behave as described in the documentation, I can at least answer how to correct its left-handed system into the Metal- and SceneKit-friendly right-handed system (X axis to the right, Y axis up, Z axis from the screen towards the user):
func faceAnchorPoseToRHS(_ mat: float4x4) -> float4x4 {
    // Mirror the translation's Z coordinate
    let correctedPos = float4(x: mat.columns.3.x, y: mat.columns.3.y, z: -mat.columns.3.z, w: 1)
    // Flip the rotation by negating the angle and the Z component of the axis
    let quat = simd_quatf(mat)
    let newQuat = simd_quatf(angle: -quat.angle, axis: float3(quat.axis.x, quat.axis.y, -quat.axis.z))
    // Rebuild the pose from the corrected rotation and translation
    var newPose = float4x4(newQuat)
    newPose.columns.3 = correctedPos
    return newPose
}
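For reference, a minimal usage sketch (assuming an ARSCNViewDelegate and a node of your own, here called faceNode, that should follow the corrected pose):

func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let faceAnchor = anchor as? ARFaceAnchor else { return }
    // Convert the anchor's left-handed pose and apply it to our own node
    faceNode.simdTransform = faceAnchorPoseToRHS(faceAnchor.transform)
}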
I'd like to simulate the shift of a tilt-shift/perspective-control lens in SceneKit on macOS.
Imagine the user has the camera facing a tall building at ground level, I'd like to be able to shift the 'lens' so that the projective distortion shifts (see e.g. Wikipedia).
Apple provides lots of physically-based parameters for SCNCamera (sensor height, aperture blade count), but I can't see anything obvious for this. It seems to exist in Unity.
Crucially I'd like to shift the lens so that the object stays in the same position relative to the camera. Obviously I could move the camera to get the effect, but the object needs to stay centred in the viewport (and I can't see a way to modify the viewport either). I've tried to modify the .projectionTransform matrix directly, but it was unsuccessful.
Thanks!
There is no API on SCNCamera that does that out of the box. As you guessed, one has to create a custom projection matrix and set it on the projectionTransform property.
I finally worked out the correct adjustment to the projection matrix. The maths is quite confusing to follow, because it is a 4x4 matrix rather than the 3x4 or 4x3 you'd use for a plain camera projection matrix, which additionally makes it hard to work out whether it is expecting row vectors or column vectors.
Anyway, the correct element is .m32 for the y axis:
let camera = SCNNode()
camera.camera = SCNCamera()
let yShift: CGFloat = 1.0
camera.camera!.projectionTransform.m32 = yShift
Presumably .m31 will shift in the x axis, but I have to admit I haven't tested this.
When I thought about it a bit more, I also realised that the effect I actually wanted involves moving the camera too. Adjusting .m32 simulates moving the sensor, which will appear to move the subject relative to the camera, as if you had a wide angle lens and you were moving the crop. To keep the subject centred in frame, you need to move the camera's position too.
With a bit (a lot) of help from this blog post and in particular this code, I implemented this too:
let distance: CGFloat = 1.0 // calculate distance from subject here
let fovRadians = camera.camera!.fieldOfView * CGFloat.pi / 180.0
let yAdjust = tan(fovRadians / 2) * distance * yShift
// The SCNVector3 - and * operators come from the helper code linked above
camera.position = camera.position - camera.worldUp * yAdjust
(any interested readers could presumably work out the x axis shift from the source above)
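Putting the two pieces together, here is a rough sketch of a single helper (the name applyVerticalLensShift is mine; it assumes macOS, where SCNVector3 and SCNMatrix4 components are CGFloats, and that you pass in the distance to the subject):

import SceneKit

func applyVerticalLensShift(_ yShift: CGFloat, to cameraNode: SCNNode, subjectDistance distance: CGFloat) {
    guard let camera = cameraNode.camera else { return }

    // Shift the projection: simulates moving the sensor/lens vertically
    var projection = camera.projectionTransform
    projection.m32 = yShift
    camera.projectionTransform = projection

    // Move the camera so the subject stays centred in frame
    let fovRadians = camera.fieldOfView * CGFloat.pi / 180.0
    let yAdjust = tan(fovRadians / 2) * distance * yShift
    let up = cameraNode.worldUp
    cameraNode.position = SCNVector3(cameraNode.position.x - up.x * yAdjust,
                                     cameraNode.position.y - up.y * yAdjust,
                                     cameraNode.position.z - up.z * yAdjust)
}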
As I understand OpenCV's coordinate system (as in this diagram), the left camera of a calibrated stereo pair is located at the origin, facing the Z direction.
I have a pair of 2464x2056 pixel cameras that I have calibrated (with a stereo rms of around 0.35), computed the disparity on a pair of images, and reprojected this to get the 3D point cloud. However, I've noticed that the Z axis is not in line with the optical centre of the camera.
This does kind of mess with some of the point cloud manipulation I'm hoping to do. Is this expected, or does it indicate that something has gone wrong along the way?
Below is the point cloud I've generated, plus the axes: the red, green and blue lines indicate the x, y and z axes respectively, coming out from the origin.
As you can see, the Z axis intercepts the point cloud between the head and the post. This corresponds to a pixel coordinate of approximately x = 637, y = 1028 when I fix the principal point during calibration to cx = 1232, cy = 1028. When I remove the CV_FIX_PRINCIPAL_POINT flag, the principal point is calculated as approximately cx = 1310, cy = 1074, and the Z axis intercepts at around x = 310, y = 1050.
Compared to the rectified image here, where the midpoint x = 1232, y = 1028 is marked by a yellow cross and the centre of the image is over the mannequin's head, the intersection of the Z axis is significantly off from where I would expect.
Does anyone have any idea as to why this could be occurring? Any help would be greatly appreciated.
I'm new to ARKit. I want to get the direction from anchor 1 to anchor 2. Currently, I can get the position from transform.columns.3. However, this only works for fixed axes (with the z-axis always pointing toward the user).
How can I compare two anchors with respect to all six degrees of freedom (position plus pitch, yaw, roll)? What should I read to get more detailed information about this?
func showDirection(of object: ARAnchor) { // only works while the axes stay fixed (z-axis toward the user)
if let currentFrame = sceneView.session.currentFrame {
print("diff(x) = \(currentFrame.camera.transform.columns.3.x - object.transform.columns.3.x)")
print("diff(y) = \(currentFrame.camera.transform.columns.3.y - object.transform.columns.3.y)")
print("diff(z) = \(currentFrame.camera.transform.columns.3.z - object.transform.columns.3.z)")
}
}
I think my answer to this other user's question may be helpful. Basically, using SceneKit or ARKit, you can find the orientations of the camera and of your target anchor, and do some quaternion math to find the axis and angle of the relative rotation between them on x, y and z axes. My example assumed a SceneKit/ARKit app, which allows you to use quaternions instead of matrices, but the math should essentially be the same for ARKit transforms. If you use ARKit's simd_float4x4 transform matrices, you could find one matrix in the space of the other (A.inverse * B) and use the resulting matrix to glean relative position and orientation.
Your question was a little hard to follow, as I'm not sure if the orientation of the anchor you're targeting matters in your case, but this should help as far as comparing two anchors with respect to pitch, yaw and roll.
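If you prefer to stay with matrices, here is a minimal sketch of the A.inverse * B idea (the helper name relativeTransform is mine):

import ARKit

// Expresses anchorB's pose in anchorA's local coordinate space: the translation
// column is the offset from A to B along A's own axes, and the rotation part is
// the relative orientation.
func relativeTransform(from anchorA: ARAnchor, to anchorB: ARAnchor) -> simd_float4x4 {
    return simd_mul(anchorA.transform.inverse, anchorB.transform)
}

// Usage sketch:
// let rel = relativeTransform(from: anchor1, to: anchor2)
// let offset = rel.columns.3       // position of anchor 2 relative to anchor 1
// let rotation = simd_quatf(rel)   // relative orientation as a quaternion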
The effect I'm trying to achieve is to have an arrow pointed out from the camera's pointOfView position, aligned with the scene (and gravity) on the x and z axis, but pointing in the same direction as the camera. It might look something like this:
Right now, I have its euler angles x and z set to 0, and its y set to match that of ARSCNView.pointOfView.eulerAngles.y. The problem is that as I rotate the device, eulerAngles.y can end up having the same value at quite different headings. For example, facing the device in one direction, my euler angles are:
x: 2.52045, y: -0.300239, z: 3.12887
Facing it in another direction, the euler angles are:
x: -0.383826, y: -0.305686, z: -0.0239297
Even though these directions are quite far apart, the y value is still pretty much the same. The different x and z values mean the y value doesn't represent which direction the device is facing. As a result, my arrow follows the camera's heading up to some point, and then starts rotating back in the opposite direction. How can I zero out the x and z values in a way that gives me a truthful y value, which I can then use to orient my arrow?
Do not use euler angles to detect 'real' or 'truthful' values.
Those values are always correct. To work with rotation you have to use either matrices or quaternions.
Remember: the same rotation can be described by many different sets of euler angles.
Each euler angle is relative to the previous one.
SceneKit applies these rotations in the reverse order of the
components:
1. first roll
2. then yaw
3. then pitch
So, to calculate that angle, I would suggest computing the camera's forward vector and its projection onto the horizontal plane. This angle is not an euler angle.
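For instance, a minimal sketch of that idea (assuming an ARSCNView whose pointOfView is the camera node, and an arrowNode of your own):

import SceneKit

// Heading (rotation about the world y axis), derived from the camera's forward
// vector projected onto the horizontal plane; unaffected by pitch and roll.
func cameraHeading(of pointOfView: SCNNode) -> Float {
    let forward = pointOfView.simdWorldFront   // the node's -z axis in world space
    return atan2f(-forward.x, -forward.z)
}

// Usage sketch:
// if let pov = sceneView.pointOfView {
//     arrowNode.eulerAngles = SCNVector3(0, cameraHeading(of: pov), 0)
// }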
Let's say I have a pinhole camera with known intrinsic values like the camera matrix and distortion coefficients. Let's say there is a point at a large enough distance from the camera that we can say it is at infinity.
Given the image coordinates of this point in pixels, I would like to calculate the camera rotation relative to the axis that connects the camera and this point (so the rotation is 0,0 if the camera is directed at this point and it sits at the optical center of the image).
How can this be done using OpenCV?
Many thanks!
You need to specify an additional constraint - rotating the camera from its current pose to one that aligns the optical axis with an arbitrary ray leaves the camera free to rotate about the ray itself (i.e. it leaves the "roll" angle unspecified).
Let's assume that you want the roll to be zero, i.e. that you want the motion to be a pure pan-tilt. This has a unique solution as long as the ray you want to align to is not parallel to the vertical image axis (in which case pan and roll are the same motion).
Then the solution is computed as follows. Let's use the OpenCV camera frame: let Z = [0, 0, 1]' (where " ' " means transpose) be the camera focal axis, oriented going out of the lens; Y = [0, 1, 0]' the vertical axis, going down; and X = Y x Z (where 'x' is the cross product) the horizontal camera axis, going toward the right of the image. So "pan" is a rotation about Y, and "tilt" is a rotation about X.
Let U = [u1, u2, u3]', with ||U|| = 1, be the ray you want to rotate to. You want to apply a pan that brings Z onto the plane Puy defined by the vectors U and Y, then apply a tilt that brings Z onto U.
The angle of the first rotation is (angle between Z and Puy) = [90 deg - (angle between Z and Y x U)]. This is because Y x U is orthogonal to Puy. Look up the expressions for computing the angle between vectors on Wikipedia or elsewhere online. Once you have the angle (or its cosine and sine), the rotation about Y can be expressed as a standard rotation matrix Ry.
The angle of the second rotation, about X once Z is on Puy, is the angle between Z and U after Ry has been applied to Z, or equivalently, between Z and inv(Ry) * U. Compute the angle between those vectors, and use it to build a standard rotation matrix about X, Rx.
The final transformation is then Rx * Ry.
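The math above is library-agnostic; as a rough sketch, here it is in Swift's simd (to stay consistent with the code elsewhere on this page; the same steps translate directly to OpenCV matrices). The helper names are mine, signed angles are obtained with atan2, and lens distortion is assumed to have been removed already (e.g. with undistortPoints):

import simd
import Foundation

// Back-project a pixel into a unit ray U in the OpenCV camera frame
// (X right, Y down, Z forward), given the intrinsics fx, fy, cx, cy.
func ray(px: Float, py: Float, fx: Float, fy: Float, cx: Float, cy: Float) -> simd_float3 {
    return simd_normalize(simd_float3((px - cx) / fx, (py - cy) / fy, 1))
}

// Pan (about Y) followed by tilt (about the panned X axis) that takes the
// optical axis Z = [0, 0, 1]' onto the unit ray u.
func panTiltRotation(toward u: simd_float3) -> simd_float3x3 {
    let pan  = atan2f(u.x, u.z)                                  // signed pan angle about Y
    let tilt = atan2f(-u.y, simd_length(simd_float2(u.x, u.z)))  // signed tilt angle about X
    let ry = simd_float3x3(simd_quatf(angle: pan,  axis: simd_float3(0, 1, 0)))
    let rx = simd_float3x3(simd_quatf(angle: tilt, axis: simd_float3(1, 0, 0)))
    // With column vectors, tilting about the already-panned X axis composes as ry * rx,
    // so (ry * rx) * simd_float3(0, 0, 1) comes out equal to u.
    return ry * rx
}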