Convert world to object coordinates - iOS

The iPhone gyroscope reports rotation data relative to some reference attitude, and that reference doesn't change (unless you multiply it). Let's say I face the wall using my iPhone camera and rotate 45 degrees left (roll += PI/4).
Now, if I lift the phone towards the ceiling, both yaw and pitch change, since the coordinate space is fixed (world coordinate space, which doesn't move or rotate with the phone). Is there a way to determine this angle (the one between the floor plane and the camera direction vector), given roll, yaw and pitch?
Edit: Instead of opening another question I'll try here. Luc's solution works. But how do I get the other two angles of rotation? I've read the info on the posted link, but it's been years since I studied linear algebra. This might be more of a math question than a programming question, actually.

I don't really code for iPhone so I'll trust you on the "real world coordinates" frame.
In that case, you want the dot product between the two z-axis vectors. That gives you the cosine of the angle you're looking for, which is essentially what you need. Since an angle between planes only really makes sense as a value between 0° and 90°, that cosine actually carries all the information you need.
There is no LaTeX formatting here, otherwise I'd go into a bit more detail, but read this page if you're interested. I'll just include the final result here, the rotation matrix for your three rotations (yaw α, pitch β, roll γ):

    | cos α cos β    cos α sin β sin γ − sin α cos γ    cos α sin β cos γ + sin α sin γ |
    | sin α cos β    sin α sin β sin γ + cos α cos γ    sin α sin β cos γ − cos α sin γ |
    |   −sin β                cos β sin γ                          cos β cos γ          |

Now the z-axis vector of the horizontal plane is (0,0,1) (read this as a column vector), and rotated by this matrix it simply becomes the matrix's third column.
So we take the dot product between that third column and our (0,0,1) vector, which gives cos(β)cos(γ), i.e. cos(pitch)*cos(roll).
In conclusion, the angle between your planes is arccos(cos(pitch)*cos(roll)). This value tells you how much your iPhone is inclined, not in which direction, of course. But you can work that direction out from the other values of the vector (the rightmost column of the matrix) we spoke of.
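To make the arithmetic concrete, here is a small numpy sketch (my own illustration, not part of the original answer) that builds the yaw-pitch-roll rotation matrix above and computes the inclination angle. The angle names, conventions and example values are assumptions, chosen to match the cos(pitch)*cos(roll) result:

```python
import numpy as np

def rotation_zyx(yaw, pitch, roll):
    """Rotation matrix R = Rz(yaw) @ Ry(pitch) @ Rx(roll)."""
    ca, sa = np.cos(yaw), np.sin(yaw)
    cb, sb = np.cos(pitch), np.sin(pitch)
    cg, sg = np.cos(roll), np.sin(roll)
    Rz = np.array([[ca, -sa, 0], [sa, ca, 0], [0, 0, 1]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rx = np.array([[1, 0, 0], [0, cg, -sg], [0, sg, cg]])
    return Rz @ Ry @ Rx

yaw, pitch, roll = 0.3, 0.5, np.pi / 4        # arbitrary example angles (radians)
R = rotation_zyx(yaw, pitch, roll)

# The device z axis expressed in world coordinates is the third column of R.
device_z = R[:, 2]

# Its dot product with the world z axis (0,0,1) is cos(pitch)*cos(roll).
cos_angle = device_z @ np.array([0.0, 0.0, 1.0])
tilt = np.arccos(np.clip(cos_angle, -1.0, 1.0))

assert np.isclose(cos_angle, np.cos(pitch) * np.cos(roll))
print(np.degrees(tilt))                        # how much the phone is inclined
```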

Related

Compose Rotation Matrix from XYZ (Gravity/Acceleration)

I've been playing around with both Matlab & Apple's documentation regarding CMRotationMatrix for weeks.
I've found that I could easily re-create CMRotationMatrix by calculating it with Roll, Yaw & Pitch.
However, I've found no resources/documentation on how to create a Rotation Matrix from XYZ rotations from either gravity or userAcceleration.
All I found was how they create a 4x4 matrix in their VideoSnake demo.
So my question is, does anyone have any input of how to create a 3x3 matrix from XYZ rotations?
To begin with, the rotation matrix has vast applications in physics, geometry and computer graphics, according to Wikipedia. Looking at it from that angle in relation to your question, where you mention gravity and userAcceleration, the connection to the physics is direct: both readings arrive as raw X, Y, Z components along the device's own axes.
Now, getting to the meat of the matter on XYZ rotations in relation to a rotation matrix: those raw XYZ figures are referenced to the origin of the device axes, with no particular angle attached to them as a starting point.
This is the part you have to understand: since these are abstract, device-relative figures, you need to convert the XYZ readings into direction vectors (for example, normalised gravity as "down") which can then be understood in real-world coordinates.
Only then will you be able to relate the rotation matrix and the XYZ readings to each other.
Now, to conclude:
The essence of using these direction vectors is that, once they are expressed consistently with a rotation matrix (one direction per column), they can be used to convert between world directions and the platform-local coordinates.
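As a concrete, hypothetical illustration of that conversion, here is a minimal numpy sketch (not Apple API code) that builds a 3x3 rotation matrix from a gravity reading alone. The function name and the choice of reference direction are my own assumptions; gravity by itself only fixes "up/down", so the heading encoded in this matrix is arbitrary:

```python
import numpy as np

def rotation_from_gravity(gravity):
    """Build a 3x3 rotation matrix from a gravity reading (gx, gy, gz).

    Heading is unobservable from gravity alone, so a world reference
    direction is picked arbitrarily; only "down" is physically meaningful.
    """
    down = np.asarray(gravity, dtype=float)
    down /= np.linalg.norm(down)                  # unit vector pointing down

    # Pick any reference not parallel to 'down' to span the horizontal plane.
    ref = np.array([1.0, 0.0, 0.0])
    if abs(down @ ref) > 0.9:
        ref = np.array([0.0, 1.0, 0.0])

    east = np.cross(down, ref)
    east /= np.linalg.norm(east)                  # first horizontal axis
    north = np.cross(east, down)                  # second horizontal axis, unit length

    # Columns are orthonormal direction vectors -> a valid rotation matrix.
    return np.column_stack((east, north, -down))  # x, y, z = east, north, up

# Example: device lying flat, gravity along -z in device coordinates.
print(rotation_from_gravity([0.0, 0.0, -1.0]))
```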

iPhone augmented reality Euler angles rotation – roll issue

I’m working on an iOS augmented reality application.
It is location-based, not marker-based.
I use the GPS, compass and accelerometers to get latitude, longitude, altitude and the 3 euler angles: yaw, pitch and roll. I know using NSLog() that those 6 variables contain valid data.
My application shows some 3d objects over the camera view.
It works fine as long as I use everything but the roll angle.
If I add that third angle, the rotation applied to my OpenGL world is wrong. I do it this way in the main OpenGL draw method:
glRotatef(pitch, 1, 0, 0);
glRotatef(yaw, 0, 1, 0);
//glRotatef(roll, 0, 0, 1);
I think there is something wrong with this approach but am certainly not a specialist. Maybe I should create some sort of unique rotation matrix rather than 3 different ones?
Maybe that's not easily possible? After all, most desktop video games, FPS and the like, just let the user change the yaw and the pitch using the mouse, so only 2 angles, not 3. But unlike the mouse, which is a 2D device, a phone used for augmented reality can rotate freely about all three axes.
But then again, none of the AR tutorials I have seen online handle 'roll' properly. 'Rolling' your phone would either completely mess the AR overlay up or, with some roll-compensation strategy, do nothing at all.
So my question is, assuming I have my 3 Euler angles using the phone sensors, how should I apply them to my 3d opengl view?
I think you're likely talking about gimbal lock.
The essence of the problem is that if you rotate with Eulers then there's always a sequence to it. For example, you rotate around x, then around y, then z. But then one axis can become ambiguous, because a preceding rotation can move it onto a different axis.
Suppose the rotation were 0 degrees around x, 90 degrees around y, then 20 degrees around z. So you do the x rotation and nothing has changed. You do the y rotation and everything moves 90 degrees. But now you've moved the z axis onto where the x axis was previously. So the z rotation will appear to be around x.
No matter what most people's instincts tell them, there's no way to avoid the problem. The kneejerk reaction is that you'll always rotate around the global axes rather than the local ones. That doesn't resolve the problem, it just reverses the order: the z rotation could then turn the y rotation (which has already occurred) into an x rotation.
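A quick numerical check of that example (my own sketch, not from the answer): after a 90-degree rotation about y, a further rotation about the body z axis is exactly a rotation about the original x axis.

```python
import numpy as np

def rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def ry(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

y90 = ry(np.radians(90))

# After the 90 degree y rotation, the body z axis points along world x ...
print(y90 @ np.array([0.0, 0.0, 1.0]))                       # ~ [1, 0, 0]

# ... so a further rotation about the body z axis looks like a world x rotation.
print(np.allclose(y90 @ rz(np.radians(20)),
                  rx(np.radians(20)) @ y90))                  # True
```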
You're right that you should aim to create a unique description of rotation separated from measuring angles.
For augmented reality it's actually not all that difficult.
The accelerometer tells you which way down is. The compass tells you which way north is. The two may not be orthogonal though: the compass vector varies from lying parallel to the floor (at right angles to gravity) at the equator to pointing straight along the accelerometer's gravity direction at the poles.
So:
just accept the accelerometer vector as down;
get the cross product of down and the compass vector to get your side vector; it should point east or west, along a line of latitude;
then get the cross product of your side vector and your down vector to get a north vector that is suitably perpendicular.
You could equally use the dot product to remove the portion of the compass vector that lies along the direction of gravity, and take the cross product from there.
You'll want to normalise everything.
That gives you three basis vectors, so just put them directly into a matrix. No further work required.
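A minimal numpy sketch of those steps (my own illustration, not iOS API code; it assumes the accelerometer and compass vectors are already in the same device coordinate frame):

```python
import numpy as np

def basis_from_sensors(accel, compass):
    """Build an orthonormal basis from an accelerometer and a compass reading.

    accel   : raw accelerometer vector (roughly gravity when the device is still)
    compass : raw magnetometer vector (roughly magnetic north, usually not
              perpendicular to gravity)
    """
    down = np.asarray(accel, dtype=float)
    down /= np.linalg.norm(down)                 # accept the accelerometer as "down"

    side = np.cross(down, np.asarray(compass, dtype=float))
    side /= np.linalg.norm(side)                 # east/west, perpendicular to down

    north = np.cross(side, down)                 # perpendicular to both, unit length

    # Three orthonormal basis vectors: stack them as rows so the matrix maps
    # device coordinates into the (side, north, up) frame.
    return np.vstack((side, north, -down))

# Example: gravity along -z, compass tilted halfway between north and down.
print(basis_from_sensors([0, 0, -9.8], [0, 0.7, -0.7]))
```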

Relative Camera Pose Estimation using OpenCV

I'm trying to estimate the relative camera pose using OpenCV. The cameras in my case are calibrated (I know the intrinsic parameters of the camera).
Given the images captured at two positions, I need to find out the relative rotation and translation between the two cameras. The typical translation is about 5 to 15 meters, and the yaw angle rotation between cameras ranges between 0 and 20 degrees.
For achieving this, the following steps are adopted:
a. Finding point correspondences using SIFT/SURF
b. Fundamental Matrix identification
c. Estimation of the Essential Matrix by E = K'FK and modifying E for the singularity constraint
d. Decomposition of the Essential Matrix to get the rotation, R = UWVt or R = UW'Vt (U and Vt are obtained from the SVD of E)
e. Obtaining the real rotation angles from the rotation matrix
Experiment 1: Real Data
For the real-data experiment, I captured images by mounting a camera on a tripod. Images were captured at Position 1; then I moved to another, aligned position, changed the yaw angle in steps of 5 degrees and captured images for Position 2.
Problems/Issues:
The sign of the estimated yaw angles does not match the ground-truth yaw angles. Sometimes 5 deg is estimated as 5 deg, but 10 deg as -10 deg, and again 15 deg as 15 deg.
In the experiment only the yaw angle is changed, yet the estimated roll and pitch angles have nonzero values close to 180/-180 degrees.
Precision is very poor; in some cases the error between estimated and ground-truth angles is around 2-5 degrees.
How do I find the scale factor to get the translation in real-world measurement units?
The behavior is the same on simulated data as well.
Has anybody experienced similar problems? Any clue on how to resolve them?
Any help from anybody would be highly appreciated.
(I know there are already many posts on similar problems; going through all of them has not saved me. Hence posting one more time.)
In chapter 9.6 of Hartley and Zisserman, they point out that, for a particular essential matrix, if one camera is held in the canonical position/orientation, there are four possible solutions for the second camera matrix: [UWV' | u3], [UWV' | -u3], [UW'V' | u3], and [UW'V' | -u3].
The difference between the first and third (and second and fourth) solutions is that the orientation is rotated by 180 degrees about the line joining the two cameras, called a "twisted pair", which sounds like what you are describing.
The book says that in order to choose the correct combination of translation and orientation from the four options, you need to test a point in the scene and make sure that the point is in front of both cameras.
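If you are on a reasonably recent OpenCV, a sketch along these lines handles the four-solution ambiguity for you: cv2.recoverPose performs exactly that in-front-of-both-cameras (cheirality) test when choosing among the decompositions. The point arrays and intrinsics below are placeholders so the snippet runs, not values from the question; substitute your SIFT/SURF correspondences and calibration:

```python
import numpy as np
import cv2

# pts1, pts2: matched pixel coordinates from the two images, shape (N, 2).
# K: 3x3 camera intrinsic matrix from your calibration.
pts1 = np.random.rand(50, 2).astype(np.float64) * 640      # placeholder matches
pts2 = pts1 + np.random.rand(50, 2).astype(np.float64) * 10  # placeholder matches
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Estimate the essential matrix directly (the singularity constraint is enforced).
E, inlier_mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)

# Pick the one of the four (R, t) decompositions with points in front of both cameras.
n_inliers, R, t, pose_mask = cv2.recoverPose(E, pts1, pts2, K, mask=inlier_mask)

print(R)           # relative rotation
print(t.ravel())   # relative translation direction (unit norm; the scale is unobservable)
```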
For problems 1 and 2,
Look for "Euler angles" in wikipedia or any good math site like Wolfram Mathworld. You would find out the different possibilities of Euler angles. I am sure you can figure out why you are getting sign changes in your results based on literature reading.
For problem 3,
It mostly has to do with the accuracy of your individual camera calibrations.
For problem 4,
Not sure. How about measuring the distance to a scene point from the camera with a tape and comparing it with the translation norm to get the scale factor?
Possible reasons for bad accuracy:
1) There is a difference between getting reasonable and precise accuracy in camera calibration. See this thread.
2) The accuracy with which you are moving the tripod. How are you ensuring that there is no rotation of the tripod around an axis perpendicular to the surface during the change in position?
I did not get your simulation concept, but I would suggest the test below.
Take images without moving the camera or object. Now if you calculate the relative camera pose, the rotation should be an identity matrix and the translation should be a null vector. Due to numerical inaccuracies and noise, you might see rotation deviations of a few arc minutes.

Is there a reverse function of lookat for glMatrix?

I am using glMatrix to code WebGL and want to get the eye position, focal point and up direction from the existing projection and view matrix (kinda like the reverse of the lookAt function). Is there any way to do this?
I didn't implement one, no. I'm not even sure that you could decompose it into the original vectors, for that matter. The lookAt point could be anywhere along a ray from the origin, and how would you determine what the appropriate up vector was? I'm thinking this is a one-way algorithm (just too lazy to prove it!)
Beyond that, however, I question whether you would want to do this even if there were a method for it. I'd be willing to bet that it's almost always more beneficial to track the values you're using and manipulate them, rather than to try and pull them back and forth from matrix to vectors and back.
Yes and no: yes, you can invert the model view transformation, and no, you will not get exactly the same three vectors back.
The model view transformation of lookAt is very similar to the connectTo operation as used in CSG models. It is mounting your scene in front of your camera. This is done by a translation and three axis rotations. The eye point is translated to (0,0,0) and all further rotation is done around it. You can easily derive the eye point by transforming (0,0,0) with the inverse matrix.
But the center point is only used for adjusting the axis of view along the -Z axis. In OpenGL the eye is facing towards -Z. The distance between center and eye is lost. So you can easily get a center point along your axis of view if you define the distance yourself. Let's say we want a distance of d. Then we just need to transform (0,0,-d) with the inverse matrix and we get a valid center point, though not exactly the original one. The center point defines only two rotation angles, the camera pan and tilt.
Even worse is the reconstruction of the up vector. It is only used for the roll angle of the camera, and thus contributes only one scalar value. For the inverse transformation you could therefore choose not just any positive value along the Y axis, but any point in the YZ plane with a positive Y value. To get an up vector perfectly normal to the viewing axis and of size 1, we just transform (0,1,0) with the inverse matrix. Remember to transform it as a vector this time (not as a point).
Now we have eye, center and up reconstructed in a way that gives exactly the same result from lookAt next time. But since this matrix contains only 6 values of information (translation, pan, tilt, roll), we had to choose the 3 values that were lost (distance from center to eye, and the size and angle of the up vector in the camera's YZ plane).
The model view matrix can of course do other transformation (any affine) but the lookAt function is using this matrix only for translation and rotation. It is adjusting the scene in front of the camera without distorting it.
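A small numpy sketch of that reconstruction (my own illustration; `view` stands for the 4x4 view matrix a lookAt call would produce, and `d` is the eye-to-center distance you have to choose yourself):

```python
import numpy as np

def decompose_view(view, d=1.0):
    """Reconstruct eye, center and up from a 4x4 view (lookAt) matrix.

    d is the assumed eye-to-center distance, which lookAt does not preserve.
    """
    inv = np.linalg.inv(view)

    eye = (inv @ np.array([0.0, 0.0, 0.0, 1.0]))[:3]     # transform the origin as a point
    center = (inv @ np.array([0.0, 0.0, -d, 1.0]))[:3]   # a point d along the -Z view axis
    up = (inv @ np.array([0.0, 1.0, 0.0, 0.0]))[:3]      # transform (0,1,0) as a vector

    return eye, center, up
```

Feeding these back into lookAt should reproduce the original matrix, even though the reconstructed center and up are generally not the ones you started with.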

XNA rotation over given vector

I'm a newbie in XNA, so sorry about the simple question, but I can't find any solution.
I've got a simple model (similar to a flat cuboid), which I cannot change (the model itself). I would like to create a rotation animation. In this particular problem, my model is just the cover of a piano. However, the axis I'm rotating around runs through the middle of the cover. As a result, my model rotates like a turbine, instead of opening and closing.
I would like to rotate my object around a given "line". I found the Matrix.CreateLookAt(currentPosition, dstPosition, Vector.Up); method, but still don't know how to combine a rotation with such a matrix.
Matrix.CreateLookAt is meant for use in a camera, not for manipulating models (although I'm sure some clever individuals who understand what sort of matrix it creates have done so).
What you are wanting to do is rotate your model around an arbitrary axis in space. It's not an animation (those are created in 3D modeling software, not the game), it's a transformation. Transformations are methods by which you can move, rotate and scale a model, and are obviously the crux of 3D game graphics.
For your problem, you want to rotate this flat piece around its edge, yes? To do this, you will combine translation and axis rotation.
First, you want to move the model so the edge you want to rotate around intersects with the origin. So, if the edge was a straight line in the Z direction, it would be perfectly aligned with the Z axis and intersecting 0,0,0. To do this you will need to know the dimensions of your model. Once you have those, create a Matrix:
Matrix originTranslation = Matrix.CreateTranslation(new Vector3(-modelWidth / 2f, 0, 0))
(This assumes a square model. Manipulate the Vector3 until the edge you want is intersecting the origin)
Now, we want to do the rotating. This depends on the angle of your edge. If your model is a square and thus the edge is straight forward in the Z direction, we can just rotate around Vector3.Forward. However, if your edge is angled (as I imagine a piano cover to be), you will have to determine the angle yourself and create a Unit Vector with that same angle. Now you will create another Matrix:
Matrix axisRotation = Matrix.CreateFromAxisAngle(myAxis, rotation)
where myAxis is the unit vector which represents the angle of the edge, and rotation is a float for the number of radians to rotate.
That last bit is the key to your 'animation'. What you are going to want to do is vary that float amount depending on how much time has passed to create an 'animation' of the piano cover opening over time. Of course you will want to clamp it at an upper value, or your cover will just keep rotating.
Now, in order to actually transform your cover model, you must multiply its world matrix by the two above matrices, in order.
pianoCover.World *= originTranslation * axisRotation;
then if you wish you can translate the cover back so that its center is at the origin (by multiplying by a Transform Matrix with the Vector3 values negative of what you first had them), and then subsequently translate your cover to wherever it needs to be in space using another Transform Matrix to that point.
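If it helps to see the composition outside XNA, here is a small numpy sketch (my own, with made-up dimensions) of the same translate, rotate-about-an-axis, translate-back idea. Note that numpy composes column-vector transforms right to left, the opposite reading order from XNA's row-vector convention:

```python
import numpy as np

def translation(v):
    m = np.eye(4)
    m[:3, 3] = v
    return m

def axis_rotation(axis, angle):
    """Rotation about a unit axis through the origin (Rodrigues' formula), as a 4x4."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    kx, ky, kz = axis
    K = np.array([[0, -kz, ky], [kz, 0, -kx], [-ky, kx, 0]])
    m = np.eye(4)
    m[:3, :3] = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)
    return m

model_width = 2.0                        # made-up cover width
hinge_axis = [0.0, 0.0, 1.0]             # hinge runs along Z in this sketch
angle = np.radians(30)                   # how far the cover is open

# Move the hinge edge to the origin, rotate about the axis, then move back.
world = (translation([model_width / 2, 0, 0])
         @ axis_rotation(hinge_axis, angle)
         @ translation([-model_width / 2, 0, 0]))

# A corner on the hinge edge stays put; a corner on the far edge swings around it.
print(world @ np.array([model_width / 2, 0, 0, 1]))     # hinge corner: unchanged
print(world @ np.array([-model_width / 2, 0, 0, 1]))    # far corner: rotated about the hinge
```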
So, note how matrices are used in 3D games. A matrix is created using the appropriate Matrix method in order to create the quality you desire (translation, rotation around an axis, scale, etc.). You make a matrix for each of these properties. Then you multiply them in a specific order (order matters in matrix multiplication) to transform your model as you wish. Often, as seen here, these transformations are intermediate steps needed to get the desired effect (we could not simply move the cover to where we wanted it and then rotate it around its edge; we had to move the edge to the origin, rotate, move it back, etc.).
Working with matrices in 3D is pretty tough. In fact, I may not have gotten all that right (I hope by now I know that well enough, but...). The more practice you get, the better you can judge how to perform tasks like this. I would recommend reading tutorials on the subject. Any tutorial that covers 3D in XNA will have this topic.
In closing, though, if you know 3D Modeling software well enough, I would probably suggest you just make an actual animation of a piano and cover opening and closing and use that animated model in your game, instead of using models for both the piano and cover and trying to keep them together.
