This is a small background and introduction to the problem:
I have some functionality in my motion- and location-based iOS app that needs a rotation matrix as an input. Some graphical output depends on this matrix, and the graphical output changes with every movement of the device. Here is the part of the code that does that:
[motionManager startDeviceMotionUpdatesUsingReferenceFrame:CMAttitudeReferenceFrameXTrueNorthZVertical
                                                   toQueue:motionQueue
                                               withHandler:^(CMDeviceMotion *motion, NSError *error) {
                                                   // get and process matrix data
                                               }];
With this method only four reference frames are available:
XArbitraryZVertical
XArbitraryCorrectedZVertical
XMagneticNorthZVertical
XTrueNorthZVertical
I need a different reference, e.g. a gyroscope-based value instead of North, and these frames cannot offer exactly what I want.
To reach my goal, I use the following structure instead:
[motionManager startDeviceMotionUpdatesUsingReferenceFrame:CMAttitudeReferenceFrameXArbitraryCorrectedZVertical
                                                   toQueue:motionQueue
                                               withHandler:^(CMDeviceMotion *motion, NSError *error) {
                                                   // get Euler angles and transform them into a rotation matrix
                                               }];
You may ask why I do not use the built-in rotation matrix. The answer is simple: I need to build a kind of reference frame of my own, and I can do that by feeding in modified angle values.
The problem:
To get a rotation matrix from Euler angles, we build a matrix for each angle and then multiply them. In the 3D case there is one matrix per axis (three of them), and the result depends on the order of multiplication: XYZ is not equal to ZYX. Wikipedia tells me there are 12 variants, and I do not know which one the iOS implementation uses. I need to know in which order to multiply the matrices and, in addition, which angle corresponds to which axis, for example X - roll, Y - pitch, Z - yaw.
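To make the order dependence concrete, here is a minimal numpy sketch (my own illustration, nothing iOS-specific, with arbitrary test angles) showing that composing the per-axis matrices in XYZ order is not the same as ZYX:

# Minimal sketch: the same three per-axis rotations composed in two different
# orders give two different rotation matrices.
import numpy as np

def Rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def Ry(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def Rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

roll, pitch, yaw = 0.1, 0.4, 1.2          # arbitrary angles in radians

R_xyz = Rx(roll) @ Ry(pitch) @ Rz(yaw)
R_zyx = Rz(yaw) @ Ry(pitch) @ Rx(roll)
print(np.allclose(R_xyz, R_zyx))          # False: the order matters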
Actually, Apple solved this problem years ago, but I do not have access to the .m files and I do not know which multiplication order is the right one for an iOS device.
A similar question was published here, but the order from the math example in that solution does not work for me.
Regarding which angles relate to which axis, see:
https://developer.apple.com/documentation/coremotion/getting_processed_device-motion_data/understanding_reference_frames_and_device_attitude
Regarding the rotation order for calculating the rotation matrix from Euler angles (pitch, roll, yaw):
Short Answer: ZXY is the rotation order on iOS.
I kept searching for this answer too and got tired. Not sure why this is not documented somewhere easy to look up. I decided to collect empirical data and test which rotation order best matches the reported values. My results are below.
Methodology:
Wrote a small iPhone App to return quaternion values & corresponding pitch, roll, yaw angles
Computed pitch, roll, yaw values from the quaternions for various rotation orders (XYZ, XZY, YZX, YXZ, ZYX, ZXY)
Calculated the RMS error with respect to the pitch, yaw, roll values reported by iOS device motion, and identified the rotation order with the least error.
Results:
Rotation orders: ZYX and ZXY both returned values very close to the iOS-reported values. However, the error for ZXY was ~46-597x lower than for ZYX in every case. Hence I believe ZXY is the rotation order.
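For reference, a hedged numpy sketch of what that result implies (my own illustration, not Apple code): compose yaw about z, then pitch about x, then roll about y, and compare the result element-wise with the attitude.rotationMatrix that CMDeviceMotion reports for the same sample. Depending on whether you treat the rotations as intrinsic or extrinsic you may need the reverse product, so verify against your own logged data.

# ZXY composition sketch; on iOS, pitch is rotation about x, roll about y,
# and yaw about z. Verify against a logged attitude.rotationMatrix sample.
import numpy as np

def Rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def Ry(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def Rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def rotation_zxy(pitch, roll, yaw):
    return Rz(yaw) @ Rx(pitch) @ Ry(roll)

# Example with made-up angles (radians); substitute values from a logged sample.
print(rotation_zxy(pitch=0.12, roll=-0.05, yaw=1.57))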
Related
I have been working on orientation estimation and need to estimate the correct heading while walking in a straight line. After facing some roadblocks, I started from the basics again.
I have implemented a complementary filter from here, which uses the gravity vector obtained from Android (not raw acceleration), raw gyro data and raw magnetometer data. I am also applying a low-pass filter to the gyro and magnetometer data and using that as input.
The output of the complementary filter is Euler angles, and I am also recording TYPE_ROTATION_VECTOR, which outputs the device orientation as a 4D quaternion.
So I thought I would convert the quaternions to Euler angles and compare them with the Euler angles obtained from the complementary filter. The Euler angle output when the phone is kept stationary on a table is shown below.
As can be seen, the yaw values are off by a huge margin.
What am I doing wrong in this simple case where the phone is stationary?
Then I walked around my living room and got the following output.
The shape of the complementary filter output looks very good and is very close to that of Android, but the values are off by a huge margin.
Please tell me what I am doing wrong.
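For reference, a typical quaternion-to-Euler conversion looks conceptually like the numpy sketch below (a math illustration rather than actual Android code; it assumes a ZYX yaw-pitch-roll convention and an (x, y, z, w) component order, both of which must match what the complementary filter uses, otherwise the comparison is already apples to oranges):

# Math sketch: convert a unit quaternion (x, y, z, w), e.g. extracted from
# TYPE_ROTATION_VECTOR, into yaw/pitch/roll using a ZYX convention.
import numpy as np

def quaternion_to_euler_zyx(x, y, z, w):
    yaw = np.arctan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))
    pitch = np.arcsin(np.clip(2.0 * (w * y - z * x), -1.0, 1.0))
    roll = np.arctan2(2.0 * (w * x + y * z), 1.0 - 2.0 * (x * x + y * y))
    return yaw, pitch, roll        # radians

print(quaternion_to_euler_zyx(0.0, 0.0, 0.0, 1.0))   # identity -> (0.0, 0.0, 0.0)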
I don't see any need to apply a low-pass filter to the gyro. Since you're integrating the gyro to get rotation, filtering it could mess everything up.
Be aware that TYPE_GRAVITY is a composite sensor reading synthesized from gyro and accel inside Android's own sensor fusion algorithm. Which is to say that this has already been passed through a Kalman filter. If you're going to use Android's built-in sensor fusion anyway, why not just use TYPE_ROTATION_VECTOR?
Your angles are in radians by the looks of it, and the error in the first set wasn't too far from 90 degrees. Perhaps you've swapped X and Y in your magnetometer inputs?
Here's the approach I would take: first write a test that takes accel and mag and synthesizes Euler angles from them. Ignore the gyro for now. Walk around the house and confirm that it does the right thing, but is jittery.
Next, slap an aggressive low-pass filter on your algorithm, e.g.
yaw0 = yaw;
yaw = computeFromAccelMag(); // yaw in radians
factor = 0.2; // between 0 and 1; experiment
yaw = yaw * factor + yaw0 * (1-factor);
Confirm that this still works. It should be much less jittery but also sluggish.
Finally, add gyro and make a complementary filter out of it.
dt = time_since_last_gyro_update;
yaw += gyroData[2] * dt; // test: might need to subtract instead of add
yaw0 = yaw;
yaw = computeFromAccelMag(); // yaw in radians
factor = 0.2; // between 0 and 1; experiment
yaw = yaw * factor + yaw0 * (1-factor);
The key thing is to test every step of the way as you develop your algorithm, so that when a mistake happens, you'll know what caused it.
I have been using solvePnP() to calculate the rotation and translation matrices, but the Euler angles calculated from the resulting rotation matrix gave very erratic values. To track down the problem, I took a set of 2D projection points for my marker and kept the other parameters of solvePnP() constant.
Example values:
2D points
[219.67473, 242.78395; 363.4151, 238.61298; 503.04855, 234.56117; 501.70917, 628.16742; 500.58069, 959.78564; 383.1756, 972.02679; 262.8746, 984.56982; 243.17044, 646.22925]
The Euler angle theta(x) calculated from the output rotation matrix of solvePnP() was -26.4877.
Next, I incremented only the x value of the first point (i.e. 219.67473) by 0.1 to check the variation of the theta(x) Euler angle (keeping the remaining points and the other parameters constant) and ran solvePnP() again. For that very small change, the values decreased from -19 degrees to -18 degrees (for x coord = 223.074), then suddenly jumped to 27 degrees for a while (for x coord = 223.174 to 226.974), then came down to 1.3 degrees (for x coord = 227.074).
I cannot understand this behaviour at all. Could somebody please explain?
My Euler angle calculation from the rotation matrix uses this procedure.
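For reference, a common ZYX-style extraction of Euler angles from a 3x3 rotation matrix looks like the sketch below (this is one convention among several, not necessarily the exact procedure behind the link; mixing conventions is one way to end up with seemingly erratic numbers):

# One common extraction: roll about x, pitch about y, yaw about z,
# assuming R = Rz(yaw) * Ry(pitch) * Rx(roll).
import numpy as np

def euler_from_matrix_zyx(R):
    theta_x = np.arctan2(R[2, 1], R[2, 2])                        # roll
    theta_y = np.arctan2(-R[2, 0], np.hypot(R[2, 1], R[2, 2]))    # pitch
    theta_z = np.arctan2(R[1, 0], R[0, 0])                        # yaw
    return theta_x, theta_y, theta_z                              # radians

print(euler_from_matrix_zyx(np.eye(3)))    # (0.0, 0.0, 0.0)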
Try Rodrigues() for the conversion between rotation matrix and rotation vector to make sure everything is clean and right. The non-RANSAC version can be very sensitive to outliers that create a huge error in the parameters and thus bias the solution. Using the RANSAC version of solvePnP may make it more stable to outliers. For example, adding too much to one of the point coordinates will eventually make it an outlier, and it won't influence the solution after that.
If everything fails, write a series of unit tests: create an artificial set of 3D points (possibly non-planar), apply a simple translation first, apply only a rotation in a second variant, and apply both in a third test. Project using your camera matrix and then plug your 2D points, 3D points and projection matrix into your code to find the pose. If the result deviates from the inverse of the translations and rotations you applied to the points, look for a bug in how you feed parameters to PnP.
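Something along these lines, a rough sketch using OpenCV's Python bindings with made-up intrinsics and a made-up ground-truth pose (all the numeric values are placeholders):

# Synthetic round-trip test: project known 3D points with a known pose,
# then check that solvePnP recovers that pose.
import cv2
import numpy as np

K = np.array([[800.0, 0.0, 320.0],      # assumed camera matrix
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                      # assume no lens distortion
rvec_true = np.array([[0.1], [-0.2], [0.05]])   # Rodrigues rotation vector
tvec_true = np.array([[0.3], [-0.1], [5.0]])

obj_pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0.5],
                    [0.5, 0.5, 1.0], [1, 0, 1]], dtype=np.float64)  # non-planar

img_pts, _ = cv2.projectPoints(obj_pts, rvec_true, tvec_true, K, dist)
ok, rvec_est, tvec_est = cv2.solvePnP(obj_pts, img_pts, K, dist)

print(ok,
      np.allclose(rvec_est, rvec_true, atol=1e-6),
      np.allclose(tvec_est, tvec_true, atol=1e-6))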
It seems the coordinate systems are different. OpenCV uses a right-handed coordinate system with Y pointing downwards. At nghiaho.com it says the calculations are based on this, and if you look at the axes they don't seem to match. I guess you are using Rodrigues for the matrix computation? Try comparing the rotation vectors as well.
I’m working on an iOS augmented reality application.
It is location-based, not marker-based.
I use the GPS, compass and accelerometers to get latitude, longitude, altitude and the 3 Euler angles: yaw, pitch and roll. I know from NSLog() that those 6 variables contain valid data.
My application shows some 3d objects over the camera view.
It works fine as long as I use everything but the roll angle.
If I add that third angle, the rotation applied to my OpenGL world is not right. I do it this way in the main OpenGL draw method:
glRotatef(pitch, 1, 0, 0);
glRotatef(yaw, 0, 1, 0);
//glRotatef(roll, 0, 0, 1);
I think there is something wrong with this approach but am certainly not a specialist. Maybe I should create some sort of unique rotation matrix rather than 3 different ones?
Maybe that's not easily possible? After all, most desktop video games (FPS and the like) just let the user change the yaw and the pitch using the mouse, so only 2 angles, not 3. But unlike the mouse, which is a 2D device, a phone used for augmented reality can rotate about any axis.
But then again, none of the AR tutorials I have seen online handle 'roll' properly. 'Rolling' your phone would either completely mess up the AR overlay or, with some roll-compensation strategy, do nothing at all.
So my question is: assuming I have my 3 Euler angles from the phone sensors, how should I apply them to my 3D OpenGL view?
I think you're likely talking about gimbal lock.
The essence of the problem is that if you rotate with Eulers there's always a sequence to it. For example, you rotate around x, then around y, then z. But one axis can always become ambiguous, because a preceding rotation can move it onto a different axis.
Suppose the rotation were 0 degrees around x, 90 degrees around y, then 20 degrees around z. So you do the x rotation and nothing has changed. You do the y rotation and everything moves 90 degrees. But now you've moved the z axis onto where the x axis was previously. So the z rotation will appear to be around x.
No matter what most people's instincts tell them, there's no way to avoid the problem. The kneejerk reaction is to always rotate around the global axes rather than the local ones. That doesn't resolve the problem, it just reverses the order: the z rotation could then turn the y rotation, which has already occurred, into an x rotation.
You're right that you should aim to create a unique description of rotation separated from measuring angles.
For augmented reality it's actually not all that difficult.
The accelerometer tells you which way down is. The compass tells you which way north is. The two may not be orthogonal though: the compass reading should vary from being exactly at a right angle to the accelerometer vector at the equator to being exactly parallel to it at the poles.
So:
just accept the accelerometer vector as down;
get the cross product of down and the compass vector to get your side vector, which should point east-west, along a line of latitude;
then get the cross product of your side vector and your down vector to get a north vector that is suitably perpendicular.
You could equally use the dot product to remove the portion of the compass vector that lies in the direction of gravity and take the cross product from there.
You'll want to normalise everything.
That gives you three basis vectors, so just put them directly into a matrix. No further work required.
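A minimal numpy sketch of that recipe, assuming accel and compass are raw sensor vectors (the ordering and signs of the rows depend on your OpenGL conventions, so expect to experiment):

# Build an orthonormal basis from the accelerometer (down) and compass vectors
# and pack it into a 3x3 matrix.
import numpy as np

def normalize(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

def basis_from_sensors(accel, compass):
    down = normalize(accel)
    side = normalize(np.cross(down, normalize(compass)))  # east-west direction
    north = np.cross(side, down)                          # perpendicular "north"
    return np.vstack([side, north, down])                 # rows of the matrix

# Example with made-up readings: gravity roughly along -z, compass tilted.
print(basis_from_sensors(accel=[0.0, 0.0, -9.8], compass=[0.3, 0.9, -0.4]))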
The iPhone gyroscope provides rotation data relative to some reference attitude, and that reference doesn't change (unless you multiply it). Let's say I face the wall using my iPhone camera and rotate 45 degrees left (roll += PI/4).
Now, if I lift the phone towards the ceiling, both yaw and pitch change, since the coordinate space is fixed (the world coordinate space doesn't move or rotate with the phone). Is there a way to determine this angle (the one between the floor plane and the camera direction vector), given roll, yaw and pitch?
Edit: Instead of opening another question I'll try here. Luc's solution works. But how to get the other two angles of rotation? I've read the info on the posted link but it's been years since I studied linear algebra. This might be more math than a programming question, actually.
I don't really code for iPhone so I'll trust you on the "real world coordinates" frame.
In that case, you want the dot product between the two z-axis vectors. That will give you the cosine of the angle you're looking for, which is pretty much what you need. Since an angle between planes only really makes sense as a value between 0° and 90°, you actually have all the information you need in that cosine.
There is no LaTeX formatting here, otherwise I'd go into a bit more detail; read this page if you're interested. I'll just include the final result here: the rotation matrix for your three rotations.
Now the z-axis vector of the horizontal plane is (0,0,1) (read this as a column vector), and rotating it with this matrix simply gives you the matrix's third column.
So we want the dot product between that third column and our (0,0,1) vector, which gives cos(β)cos(γ), i.e. cos(pitch)*cos(roll).
In conclusion, the angle between your planes is arccos(cos(pitch)*cos(roll)). This value tells you how much your iPhone is inclined, though not in which direction. But you can work that out from the values of the vector (the rightmost column of the matrix) we spoke of.
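Putting that together, a hedged numpy sketch (my own illustration, using a Z-Y-X composition with α = yaw about z, β = pitch about y, γ = roll about x, as on the linked page):

# Rotate the vertical vector (0, 0, 1) by Rz(yaw) * Ry(pitch) * Rx(roll) and
# take its dot product with (0, 0, 1); the arccos of that is the inclination.
import numpy as np

def inclination(yaw, pitch, roll):
    ca, sa = np.cos(yaw), np.sin(yaw)
    cb, sb = np.cos(pitch), np.sin(pitch)
    cg, sg = np.cos(roll), np.sin(roll)
    Rz = np.array([[ca, -sa, 0], [sa, ca, 0], [0, 0, 1]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rx = np.array([[1, 0, 0], [0, cg, -sg], [0, sg, cg]])
    rotated_z = (Rz @ Ry @ Rx) @ np.array([0.0, 0.0, 1.0])   # third column of R
    return np.arccos(np.clip(rotated_z[2], -1.0, 1.0))

# The z component of the rotated axis is cos(pitch)*cos(roll),
# so the two lines print the same value.
yaw, pitch, roll = 0.7, 0.3, 0.2
print(inclination(yaw, pitch, roll))
print(np.arccos(np.cos(pitch) * np.cos(roll)))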
I'm trying to estimate the relative camera pose using OpenCV. The cameras in my case are calibrated (I know the intrinsic parameters of the camera).
Given the images captured at two positions, I need to find the relative rotation and translation between the two cameras. The typical translation is about 5 to 15 meters and the yaw rotation between the cameras ranges between 0 and 20 degrees.
To achieve this, the following steps are adopted:
a. Finding point corresponding using SIFT/SURF
b. Fundamental Matrix Identification
c. Estimation of the Essential Matrix by E = K'FK and modifying E to enforce the singularity constraint
d. Decomposition of the Essential Matrix to get the rotation, R = UWVt or R = UW'Vt (U and Vt are obtained from the SVD of E)
e. Obtaining the real rotation angles from the rotation matrix
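For context, a rough OpenCV/numpy sketch of steps (b)-(d), assuming pts1 and pts2 are already-matched pixel coordinates (Nx2 float arrays) from step (a) and K holds the shared intrinsics:

# Sketch of steps (b)-(d): fundamental matrix, essential matrix with the
# singularity constraint enforced, and the two rotation candidates from its SVD.
import cv2
import numpy as np

def rotation_candidates(pts1, pts2, K):
    F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)   # (b)

    E = K.T @ F @ K                                                      # (c)
    U, S, Vt = np.linalg.svd(E)
    E = U @ np.diag([1.0, 1.0, 0.0]) @ Vt      # force two equal singular values

    U, _, Vt = np.linalg.svd(E)                                          # (d)
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    if np.linalg.det(R1) < 0:                  # guard against reflections
        R1, R2 = -R1, -R2
    t = U[:, 2]                                # translation up to sign and scale
    return R1, R2, t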
Experiment 1: Real Data
For the real-data experiment, I captured images by mounting a camera on a tripod. I captured images at Position 1, then moved the camera to another aligned position, changed the yaw angle in steps of 5 degrees, and captured images at Position 2.
Problems/Issues:
The sign of the estimated yaw angles does not match the ground-truth yaw angles. Sometimes 5 deg is estimated as 5 deg, but 10 deg as -10 deg, and then 15 deg as 15 deg again.
In the experiment only the yaw angle is changed; however, the estimated roll and pitch angles have nonzero values close to 180/-180 degrees.
Precision is very poor: in some cases the error between the estimated and ground-truth angles is around 2-5 degrees.
How to find out the scale factor to get the translation in real world measurement units?
The behavior is the same on simulated data as well.
Has anybody experienced similar problems? Any clue on how to resolve them?
Any help would be highly appreciated.
(I know there are already many posts on similar problems; going through all of them has not saved me. Hence I am posting one more time.)
In chapter 9.6 of Hartley and Zisserman, they point out that, for a particular essential matrix, if one camera is held in the canonical position/orientation, there are four possible solutions for the second camera matrix: [UWV' | u3], [UWV' | -u3], [UW'V' | u3], and [UW'V' | -u3].
The difference between the first and third (and second and fourth) solutions is that the orientation is rotated by 180 degrees about the line joining the two cameras, called a "twisted pair", which sounds like what you are describing.
The book says that in order to choose the correct combination of translation and orientation from the four options, you need to test a point in the scene and make sure that the point is in front of both cameras.
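If your OpenCV version has it, cv2.recoverPose essentially performs that test for you; a hedged sketch, again assuming matched pixel coordinates pts1/pts2 and a shared camera matrix K:

# Cheirality check via OpenCV: among the four (R, t) combinations, keep the one
# that puts the triangulated inliers in front of both cameras.
import cv2
import numpy as np

def recover_relative_pose(pts1, pts2, K):
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    n_inliers, R, t, pose_mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t     # t is a unit vector; the absolute scale is not recoverable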
For problems 1 and 2,
Look for "Euler angles" on Wikipedia or any good math site like Wolfram MathWorld. You will find the different possible Euler angle conventions there. I am sure you can figure out from that reading why you are getting sign changes in your results.
For problem 3,
It most likely has to do with the accuracy of your individual camera calibration.
For problem 4,
Not sure. How about measuring the distance to a point from the camera with a tape measure and comparing it with the translation norm to get the scale factor?
Possible reasons for bad accuracy:
1) There is a difference between getting reasonable accuracy and precise accuracy in camera calibration. See this thread.
2) The accuracy with which you are moving the tripod. How are you ensuring that there is no rotation of the tripod around an axis perpendicular to the surface during the change in position?
I did not get your simulation concept, but I would suggest the test below.
Take images without moving the camera or the object. If you now calculate the relative camera pose, the rotation should be the identity matrix and the translation should be a null vector. Due to numerical inaccuracies and noise, you might see a rotation deviation of a few arc minutes.