Get reference frame with Y axis pointing to magnetic north - iOS

With CoreMotion, is it possible to get a reference frame with the Y axis pointing to magnetic north? I would like to make the readings similar to the ones from Android, which has the Y axis pointing to magnetic north. Thanks in advance.
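There is no built-in CoreMotion reference frame with Y toward magnetic north; the closest is CMAttitudeReferenceFrame.xMagneticNorthZVertical (X toward magnetic north, Z vertical). Below is a minimal sketch of one way to get Android-like readings from it, assuming the remapping amounts to a fixed 90° yaw offset about the vertical axis; the sign of that offset is something to verify, not a documented API.

```swift
import CoreMotion
import simd

let motionManager = CMMotionManager()

// Closest built-in frame: X -> magnetic north, Z -> vertical.
// Requires the magnetometer and may trigger a calibration prompt.
motionManager.deviceMotionUpdateInterval = 1.0 / 60.0
motionManager.startDeviceMotionUpdates(using: .xMagneticNorthZVertical,
                                       to: .main) { motion, _ in
    guard let motion = motion else { return }

    // Attitude of the device relative to the X-north / Z-up frame.
    let q = motion.attitude.quaternion
    let attitude = simd_quatd(ix: q.x, iy: q.y, iz: q.z, r: q.w)

    // To mimic an Android-style frame (Y toward magnetic north, Z up),
    // compose with a fixed 90° rotation about the vertical axis.
    // ASSUMPTION: the sign (+90° vs -90°) and multiplication order depend on
    // the convention you need, so verify against a known heading.
    let yawOffset = simd_quatd(angle: -.pi / 2, axis: simd_double3(0, 0, 1))
    let androidLikeAttitude = yawOffset * attitude

    print(androidLikeAttitude)
}
```

Checking the result against a compass heading (e.g. CLLocationManager's heading updates) is a reasonable way to confirm the sign of the offset.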

Related

What is the point of reference / origin for coordinates obtained from a stereo set-up? (OpenCV)

I set up a stereo-vision system to triangulate a 3D point given two 2D points from two views (corresponding to the same point). I have some questions about the interpretability of the results.
My calibration squares are 25 mm on a side. After triangulating and normalizing the homogeneous coordinates (dividing the array of points by the fourth coordinate), I multiply all of them by 25 mm and divide by 10 (to get cm) to get the actual distance from the camera set-up.
For example, the final coordinates that I got were something like [-13.29, -5.94, 68.41]. So how do I interpret this? 68.41 is the distance in the z direction, -5.94 is the position in y, and -13.29 is the position in x. But what is the origin here? By convention, is it the left camera? Or is it the center of the epipolar baseline? I am using OpenCV for reference.
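For reference, here is the scaling described above written out, assuming the calibration board's square size was entered as one unit during calibration (so one triangulated unit corresponds to 25 mm):

```latex
\begin{pmatrix} X \\ Y \\ Z \end{pmatrix}
= \frac{1}{W}\begin{pmatrix} X_h \\ Y_h \\ Z_h \end{pmatrix}
\quad \text{[calibration-square units]},
\qquad
\begin{pmatrix} X \\ Y \\ Z \end{pmatrix}_{\!\text{cm}}
= \frac{25\ \text{mm}}{10\ \text{mm/cm}}\begin{pmatrix} X \\ Y \\ Z \end{pmatrix}
= 2.5\begin{pmatrix} X \\ Y \\ Z \end{pmatrix}
```

In other words, the units of the reconstructed points are whatever units the square size was specified in at calibration time; the ×25 / ÷10 step only converts those units to centimetres.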

OpenCV Stereo Photogrammetry - why is my Z axis not in line with the principal point?

As I understand OpenCV's coordinate system (as in this diagram), the left camera of a calibrated stereo pair is located at the origin facing the Z direction.
I have calibrated a pair of 2464x2056-pixel cameras (with a stereo RMS of around 0.35), computed the disparity on a pair of images, and reprojected this to get the 3D point cloud. However, I've noticed that the Z axis is not in line with the optical centre of the camera.
This does kind of mess with some of the point cloud manipulation I'm hoping to do - is this expected, or does it indicate that something has gone wrong along the way?
Below is the point cloud I've generated, plus the axes - the red, green and blue lines indicate the x, y and z axes respectively, coming out from the origin.
As you can see, the Z axis intercepts the point cloud between the head and the post - this corresponds to a pixel coordinate of approximately x = 637, y = 1028 when I fix the principal point during calibration to cx = 1232, cy = 1028. When I remove the CV_FIX_PRINCIPAL_POINT flag, this is calculated as approximately cx = 1310, cy = 1074, and the Z axis intercepts at around x = 310, y = 1050.
Compared to the rectified image here, where the midpoint x = 1232, y = 1028 is marked by a yellow cross and the centre of the image is over the mannequin's head, the intersection of the Z axis is significantly off from where I would expect.
Does anyone have any idea as to why this could be occurring? Any help would be greatly appreciated.
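One thing worth checking is which principal point the Z axis should actually pass through. After stereoRectify, reprojectImageTo3D works in the rectified left camera's frame, and its Q matrix carries a rectified principal point that generally differs from the cx, cy fixed during calibration. A sketch of the reprojection, assuming the standard Q produced by stereoRectify:

```latex
Q = \begin{pmatrix}
1 & 0 & 0 & -c_x \\
0 & 1 & 0 & -c_y \\
0 & 0 & 0 & f \\
0 & 0 & -1/T_x & (c_x - c_x^{\text{right}})/T_x
\end{pmatrix},
\qquad
\begin{pmatrix} X \\ Y \\ Z \\ W \end{pmatrix} = Q \begin{pmatrix} u \\ v \\ d \\ 1 \end{pmatrix}
\;\Rightarrow\;
\frac{X}{W} = \frac{Y}{W} = 0 \iff u = c_x,\ v = c_y
```

Here (c_x, c_y) is the principal point of the rectified left camera, chosen by stereoRectify. So the point cloud's Z axis should pass through that pixel of the rectified image, which can be well away from the image centre or from the pre-rectification principal point.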

Euler Angles Y value is affected by X and Z values - how do I zero them out to get a "truthful" y value?

The effect I'm trying to achieve is to have an arrow pointing out from the camera's pointOfView position, aligned with the scene (and gravity) on the x and z axes, but pointing in the same direction as the camera. It might look something like this:
Right now, I have its euler angles x and z set to 0, and its y set to match that of ARSCNView.pointOfView.eulerAngles.y. The problem is that as I rotate the device, eulerAngles.y can end up having the same value for quite different directions. For example, facing the device in one direction, my euler angles are:
x: 2.52045, y: -0.300239, z: 3.12887
Facing it in another direction, the eulerAngles are:
x: -0.383826, y: -0.305686, z: -0.0239297
Even though these directions are quite far apart, eulerAngles.y is still pretty much the same. The different x and z values mean the y value doesn't represent which direction the device is facing. As a result, my arrow follows the camera's heading up to some point, and then starts rotating back in the opposite direction. How can I zero out the x and z values in a way that gives me a truthful y value, which I can then use to orient my arrow?
Do not use Euler angles to detect 'real' or 'truthful' values.
Those values are always correct. To work with rotations you have to use either matrices or quaternions.
Remember: it is possible to define many different sets of Euler angles from the same rotation matrix.
Each Euler angle is relative to the previous one.
SceneKit applies these rotations in the reverse order of the
components:
1. first roll
2. then yaw
3. then pitch
So, to calculate the angle you want, I would suggest computing the direction vector and its projection onto the horizontal (Oxy) plane. This angle is not an Euler angle.
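A minimal Swift sketch of that suggestion, assuming an ARKit/SceneKit setup where SCNNode.simdWorldFront gives the camera's forward direction and Y is the up axis:

```swift
import ARKit
import Foundation

/// Heading (rotation about the world Y axis) of a node, ignoring pitch and roll.
/// The result is in radians and can be used directly as another node's eulerAngles.y.
func horizontalHeading(of pointOfView: SCNNode) -> Float {
    // Forward direction of the camera in world space (the node's -Z axis).
    let forward = pointOfView.simdWorldFront

    // Project onto the horizontal X-Z plane by dropping the Y component,
    // then take the angle there. Unlike the raw eulerAngles.y, this stays
    // continuous as pitch and roll change (it only degenerates when the
    // camera points almost straight up or down).
    return atan2(-forward.x, -forward.z)
}

// Usage (assuming `sceneView` is your ARSCNView and `arrowNode` is the arrow):
// if let pov = sceneView.pointOfView {
//     arrowNode.eulerAngles = SCNVector3(0, horizontalHeading(of: pov), 0)
// }
```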

Do different iphones have the same accelerometer axes?

I was wondering if two different iPhones would have the same accelerometer axes. I know the Z axes would point in the same direction, because the accelerometer measures the acceleration of gravity, but would the X and Y axes point the same way respectively? For example, if iPhone 1 accelerated east and the corresponding accelerometer reading was in the X direction, would iPhone 2's acceleration north be reported in the Y direction? Or would it be in whatever direction the iPhone is calibrated to read?
The axes use the device as a reference; see the docs for UIAcceleration: x points to the right of the screen, y toward the top of the device, and z out of the front of the screen. The readings are not calibrated to compass directions, so two iPhones only report the same axis for the same world direction if they happen to be held in the same orientation.
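If you want readings that two phones can compare in world terms, one route is CMDeviceMotion, which pairs the device-frame acceleration with an attitude you can rotate it by. A rough sketch, with the caveat that the row/column convention of rotationMatrix below is an assumption and should be checked (e.g. by confirming gravity ends up along the vertical axis) rather than taken from this example:

```swift
import CoreMotion

let motionManager = CMMotionManager()

// Raw CMAccelerometerData is always in the device's own axes
// (+X right of the screen, +Y toward the top, +Z out of the front).
// Device motion lets you rotate those readings into a shared reference frame.
motionManager.startDeviceMotionUpdates(using: .xMagneticNorthZVertical,
                                       to: .main) { motion, _ in
    guard let motion = motion else { return }

    let a = motion.userAcceleration            // device frame, gravity removed
    let r = motion.attitude.rotationMatrix     // attitude w.r.t. the reference frame

    // Apply the attitude matrix to express the acceleration in the reference frame.
    // ASSUMPTION: this uses r as row-major applied to the device-frame vector;
    // if gravity does not map to roughly (0, 0, -1), use the transpose instead.
    let worldX = r.m11 * a.x + r.m12 * a.y + r.m13 * a.z
    let worldY = r.m21 * a.x + r.m22 * a.y + r.m23 * a.z
    let worldZ = r.m31 * a.x + r.m32 * a.y + r.m33 * a.z

    print(worldX, worldY, worldZ)
}
```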

The angle between an object and Kinect's optic axis

Here's my Setup: Kinect mounted on an actuator for horizontal movement.
Here's a short demo of what I am doing. http://www.youtube.com/watch?v=X1aSMvDQhDM
Here's my Scenario:
Please refer to the figure above. Assume the distance between the center of the actuator, 'M', and the center of the optic axis of the Kinect, 'C', is 'dx' (millimeters). The depth information 'D' (millimeters) obtained from the Kinect is relative to the optic axis. Since I now have an actuator mounted at the center of the Kinect, the actual depth between the object and the Kinect is 'Z'.
X is the distance between the optical axis and the object, in pixels. Theta2 is the angle between the optic axis and the object. 'dy' can be ignored.
Here's my Problem.
To obtain Z, I can simply use the distance equation in Figure 2. However, I do not know the real-world value of X in mm. If I had the angle between the object and the optical axis, 'theta2', I could use D*sin(theta2) to obtain X in mm. However, theta2 is also unknown: if X (in mm) is known, I can get theta2, and if theta2 is known, I can get X. So how should I obtain either the X value in mm or the angle between the optic axis and object P?
Here's what I've tried:
Since I know the maximum horizontal field of view of the Kinect is 57 degrees, and its maximum horizontal resolution is 640 pixels, I can say that 1 degree covers about 11.228 (640/57) pixels. However, through experiments I've found that this results in an error of at least 2 degrees. I suspect it's due to lens distortion on the Kinect, but I don't know how to compensate for or normalize it.
Any ideas or help would be greatly appreciated.
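For what it's worth, the constant degrees-per-pixel assumption is itself a source of error even before lens distortion, because a pinhole camera maps pixels to angles through a tangent rather than linearly. A worked version under a simple pinhole model, using the 57° field of view and 640-pixel width from the question (the derived focal length is only a nominal estimate):

```latex
f_x \approx \frac{640/2}{\tan(57^\circ/2)} = \frac{320}{\tan 28.5^\circ} \approx 589\ \text{px},
\qquad
\theta_2 = \arctan\!\left(\frac{x - c_x}{f_x}\right) \quad (c_x \approx 320),
\qquad
X \approx D\sin\theta_2 \;\;\text{(or } Z\tan\theta_2 \text{ if the depth is measured along the optic axis)}.
```

A proper intrinsic calibration (e.g. with a checkerboard) would replace the nominal f_x and c_x with measured values and also provide distortion coefficients, which should account for most of the remaining 2-degree error.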
