I was wondering if two different iPhones would have the same accelerometer axes. I know the Z axes would point in the same direction because the accelerometer measures the acceleration of gravity, but would the X and Y axes point the same way respectively? For example, if iPhone 1 accelerated east and the corresponding accelerometer reading was in the X direction, would iPhone 2's acceleration north be in the Y direction? Or would it be in whatever direction the iPhone is calibrated to read?
The axes use the device itself as the frame of reference, not the world; per the UIAcceleration docs, x points toward the right edge of the device, y toward the top, and z out of the front of the screen. So what an east or north acceleration looks like depends entirely on how each phone is held, not on any calibration.
I'm new to OpenCV and computer vision. I want to find the R and t matrices between two camera poses, so I generally follow the Wikipedia article:
https://en.wikipedia.org/wiki/Essential_matrix#Determining_R_and_t_from_E
I find a set of corresponding pixel locations of the same points in the two images, compute the essential matrix, run the SVD, and print the two possible R and the two possible t.
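For reference, that SVD step boils down to roughly the following (a numpy sketch of the Wikipedia recipe, not the exact code used here):

import numpy as np

def decompose_essential(E):
    # Two candidate rotations and the translation direction (up to sign and
    # scale) from an essential matrix, per the SVD recipe on the Wikipedia page.
    U, _, Vt = np.linalg.svd(E)
    # Keep both factors proper (determinant +1) so R comes out as a rotation.
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    R1 = U @ W @ Vt
    R2 = U @ W.T @ Vt
    t = U[:, 2]   # direction only; the true scale cannot be recovered
    return R1, R2, t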
[What runs as expected]
If I change the rotation alone (one of roll, pitch, yaw) or the translation alone (one of x, y, z), it works perfectly. For example, if I increase pitch to 15 degrees, I get an R whose delta pitch is +14.9 degrees. If I only increase x by 10 cm, the t vector is something like [0.96, -0.2, -0.2].
[What goes wrong]
However, if I change both the rotation and the translation, the R and t are nonsense. For example, if I increase x by 10 cm and increase pitch to 15 degrees, the delta angles come out as something like [-23, 8, 0.5] degrees and the t vector is something like [0.7, 0.5, 0.5].
[Question]
I'm wondering why I can't get a good result when I change the rotation and the translation at the same time. It is also confusing why the unrelated rotation and translation components (roll, yaw, y, z) change so much.
Would anyone be willing to help me figure this out? Thanks.
[Solved and the reason]
OpenCV uses a right-handed coordinate system (in the camera frame, x points right, y down, and z forward along the optical axis), while our system uses a left-handed one. So as soon as the change involves the z-axis, the result is nonsense.
This was solved by accounting for the difference between the coordinate systems in use. OpenCV uses a right-handed coordinate system while ours is left-handed, so any change that involved the z-axis came out as nonsense.
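If the mismatch between the two conventions is just one flipped axis, the pose OpenCV gives back can be mapped into the other frame with a reflection. A minimal sketch, assuming the difference is a flipped z axis (adjust S to whichever axis actually differs in your system):

import numpy as np

S = np.diag([1.0, 1.0, -1.0])   # reflection that flips the z axis (assumed mismatch)

def to_left_handed(R, t):
    # Change of basis with a reflection: conjugate the rotation, reflect the translation.
    return S @ R @ S, S @ t

# R1, R2, t can also come from cv2.decomposeEssentialMat(E) instead of a manual SVD.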
I'm using a computer vision algorithm to aid the motion sensors (the inertial measurement unit, IMU) built into the iPhone 6.
It's important to know the difference between the camera and IMU coordinate system definitions.
I'm sure of how Apple defines the IMU coordinate system, but I do not know how they define the x, y, z axes of the camera.
My ultimate goal is to transform the IMU measurements into the camera coordinate system.
The trick here is to view the axes from the top, reference them against a right-handed rotation, and watch the rotational movement of each axis. If an axis doesn't rotate, it's positive. If it rotates, check the direction of the rotation: clockwise means negative, counter-clockwise means positive.
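Whatever the exact axis mapping turns out to be, the transfer itself is just a change of basis. A minimal numpy sketch, where the mapping used (camera frame = device frame rotated 180 degrees about x, so y and z flip sign) is purely an assumed example and must be replaced with the mapping you determine for your own device:

import numpy as np

# Assumed, illustrative mapping from the device (IMU) frame to the camera frame:
# a 180-degree rotation about x, i.e. y and z change sign.
R_cam_from_imu = np.diag([1.0, -1.0, -1.0])

accel_imu = np.array([0.02, -0.01, -0.98])   # example accelerometer sample, in g
accel_cam = R_cam_from_imu @ accel_imu       # the same measurement in camera coordinates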
I’m working on an iOS augmented reality application.
It is location-based, not marker-based.
I use the GPS, compass and accelerometers to get latitude, longitude, altitude and the 3 Euler angles: yaw, pitch and roll. I know from NSLog() output that those 6 variables contain valid data.
My application shows some 3d objects over the camera view.
It works fine as long as I use everything but the roll angle.
If I add that third angle, the rotation applied to my OpenGL world is wrong. I do it this way in the main OpenGL draw method:
glRotatef(pitch, 1, 0, 0);   // rotate about the x axis
glRotatef(yaw, 0, 1, 0);     // then about the y axis
//glRotatef(roll, 0, 0, 1);  // adding roll here breaks the scene
I think there is something wrong with this approach, but I am certainly not a specialist. Maybe I should create some sort of unique rotation matrix rather than 3 separate ones?
Maybe that's not easily possible? After all, most desktop video games, FPS and the like, just let the user change the yaw and the pitch using the mouse, so only 2 angles, not 3. But unlike the mouse, which is a 2D device, a phone used for augmented reality can be rotated to any orientation.
But then again, none of the AR tutorials I have seen online handle 'roll' properly: 'rolling' your phone would either completely mess up the AR scene or, with some roll-compensation strategy, do nothing at all.
So my question is, assuming I have my 3 Euler angles from the phone sensors, how should I apply them to my 3D OpenGL view?
I think you're likely running into gimbal lock.
The essence of the problem is that if you rotate with Euler angles then there's always a sequence to it. For example, you rotate around x, then around y, then z. But one axis can always become ambiguous because a preceding rotation can move it onto a different axis.
Suppose the rotation were 0 degrees around x, 90 degrees around y, then 20 degrees around z. So you do the x rotation and nothing has changed. You do the y rotation and everything moves 90 degrees. But now you've moved the z axis onto where the x axis was previously. So the z rotation will appear to be around x.
No matter what most people's instincts tell them, there's no way to avoid the problem. The kneejerk reaction is that you'll always rotate around the global axes rather than the local ones. That doesn't resolve the problem, it just reverses the order: the z rotation could then turn the y rotation, which has already occurred, into an x rotation.
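A quick numeric illustration of that collapse (numpy, using the x-then-y-then-z order from the example above):

import numpy as np

def rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def ry(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

y90, z20 = np.radians(90), np.radians(20)
# After the 90-degree y rotation, a further 20-degree z rotation is
# indistinguishable from a prior x rotation (the sign depends on convention):
print(np.allclose(rz(z20) @ ry(y90), ry(y90) @ rx(-z20)))   # True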
You're right that you should aim to create a unique description of rotation separated from measuring angles.
For augmented reality it's actually not all that difficult.
The accelerometer tells you which way down is. The compass tells you which way north is. The two may not be orthogonal though: the compass reading varies from being exactly at a right angle to gravity at the equator to being exactly parallel to the accelerometer at the poles.
So:
just accept the accelerometer vector as down;
get the cross product of down and the compass vector to get your side vector; it should point east or west, along a line of latitude;
then get the cross product of your side vector and your down vector to get a north vector that is suitably perpendicular.
You could equally use the dot product to remove the portion of the compass vector that lies in the direction of gravity, and take the cross product from there.
You'll want to normalise everything.
That gives you three basis vectors, so just put them directly into a matrix. No further work required.
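A minimal numpy sketch of that recipe; the made-up sensor values and the row layout of the final matrix are illustrative only, and you may prefer columns or an 'up' vector depending on your rendering convention:

import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def orientation_basis(accel, compass):
    down = normalize(accel)                    # accept the accelerometer vector as "down"
    side = normalize(np.cross(down, compass))  # side vector, roughly east/west
    north = normalize(np.cross(side, down))    # north, now exactly perpendicular to down
    # Put the three basis vectors straight into a matrix (rows here; transpose,
    # or use up = -down, if your convention expects a right-handed column basis).
    return np.vstack([side, north, down])

# Example with made-up readings (device roughly flat, top edge pointing north):
R = orientation_basis(np.array([0.0, 0.0, -1.0]), np.array([0.1, 0.9, -0.4]))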
Here's my Setup: Kinect mounted on an actuator for horizontal movement.
Here's a short demo of what I am doing. http://www.youtube.com/watch?v=X1aSMvDQhDM
Here's my Scenario:
Please refer to the figure above. Assume the distance between the center of the actuator, 'M', and the center of the optical axis of the Kinect, 'C', is 'dx' (millimeters). The depth information 'D' (millimeters) obtained from the Kinect is relative to the optical axis. Since I now have an actuator mounted at the center of the Kinect, the actual depth between the object and the Kinect is 'Z'.
X is the distance between the optical axis and the object, in pixels. Theta2 is the angle between the optical axis and the object. 'dy' can be ignored.
Here's my Problem.
To obtain Z, I can simply use the distance equation in Figure 2. However, I do not know the real-world value of X in mm. If I had the angle between the object and the optical axis, 'theta2', I could use D*sin(theta2) to obtain X in mm, but theta2 is also unknown: if X (in mm) is known I can get theta2, and if theta2 is known I can get X. So how should I obtain either the X value in mm or the angle between the optical axis and the object P?
Here's what I've tried:
Since I know the maximum horizontal field of view of the Kinect is 57 degrees and its maximum horizontal resolution is 640 pixels, I can say that 1 degree covers 11.228 (640/57) pixels. However, through experiments I discovered that this results in an error of at least 2 degrees. I suspect it's due to lens distortion on the Kinect, but I don't know how to compensate for or normalize it.
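For what it's worth, the constant degrees-per-pixel assumption is itself an approximation: for an ideal pinhole camera the angle grows with the arctangent of the pixel offset, not linearly, and lens distortion adds further error on top of that. A sketch of the pinhole relationship using the 57-degree / 640-pixel figures above (the focal length should really come from a proper calibration):

import math

IMAGE_WIDTH_PX = 640
HFOV_DEG = 57.0

# Focal length in pixels for an ideal (distortion-free) pinhole camera
f_px = (IMAGE_WIDTH_PX / 2.0) / math.tan(math.radians(HFOV_DEG / 2.0))

def angle_from_pixel(u):
    # Angle between the optical axis and the ray through pixel column u.
    return math.degrees(math.atan((u - IMAGE_WIDTH_PX / 2.0) / f_px))

theta2 = angle_from_pixel(480.0)   # example pixel column
D = 2000.0                         # example depth reading, in mm
# If D is measured along the optical axis (typical for Kinect depth maps),
# the lateral offset is D * tan(theta2); with D as a straight-line range,
# the D * sin(theta2) relation from the question applies instead.
X_mm = D * math.tan(math.radians(theta2))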
Any ideas/helps are greatly appreciated.
I want to write an app that displays the angle of the phone, in degrees, measured from some reference (the bottom of the phone).
For example, if I'm holding the phone at a 45-degree angle, I want to display 45 degrees on the screen. If the user holds the phone at 45 degrees and rotates it around the axis running from the earpiece to the home button, I want to display that angle (between 0 and 180 degrees).
I've implemented the accelerometer and I get the x, y, z values; however, how do I convert them? I know they are in g's (e.g. 1 g, 0.9 g, -0.5 g on the respective axes), but what's the conversion? Am I even on the right track? Should I be using the gyroscope instead?
Thanks.
This question has an example: you can use atan2(y, x) and convert from radians to degrees by multiplying by 180/M_PI.
For any real arguments x and y not both equal to zero, atan2(y, x) is the angle in radians between the positive x-axis of a plane and the point given by the coordinates (x, y) on it.
- Wikipedia article on atan2
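A small sketch of that calculation; which two accelerometer components you feed in depends on the axis the phone rotates about, and the (y, x) pair below simply follows the answer above:

import math

def angle_degrees(a, b):
    # Angle of the gravity vector in the plane spanned by two accelerometer axes.
    return math.degrees(math.atan2(a, b))

print(angle_degrees(0.707, 0.707))   # ~45 degrees, phone tilted halfway
print(angle_degrees(1.0, 0.0))       # 90 degrees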
If you can rely on gyroscope support, I'd recommend using it, because you can get the (Euler) angles directly without any calculations. See iOS - gyroscope sample and follow the links inside.
Don't use UIAccelerometer because it will be deprecated soon. The newer CoreMotion framework is always the better choice, even for old devices.