Converting between Arc Definitions - ellipse

I'm struggling to figure out the math to convert between two different definitions of an arc.
The source definition includes a start point, end point and control point on the arc as well as the eccentricity of the ellipse and the angle of rotation of the major axis.
I need to convert this into an Arc definition I can use to initialize an ArcSegment, which needs a start and end point, the semimajor and semiminor axes (given as a Size structure), and the angle of rotation of the major axis.
I believe the start/end points and angle of major axis rotation transfer nicely but I'm not sure how to get the semimajor and semiminor axes given the eccentricity and control point in the source definition of the arc.
Any geometry experts able to help?
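For what it's worth, here is a minimal sketch (not from the original question, and not a full answer) of the one relation that can be stated with confidence: once the ellipse's center and major-axis rotation are known, a single on-curve point plus the eccentricity fixes both semi-axes, because b = a * sqrt(1 - e^2). The center, rotation and point values below are hypothetical placeholders; finding the center from the three given points is the remaining open part of the question.

// Minimal sketch (hypothetical values): given the ellipse center, the rotation of the
// major axis, the eccentricity e, and one point known to lie on the arc, recover the
// semimajor and semiminor axes using b = a * sqrt(1 - e^2).
#include <cmath>
#include <cstdio>

int main() {
    double cx = 0.0, cy = 0.0;   // ellipse center (assumed known here)
    double phi = 0.3;            // rotation of the major axis, in radians
    double e = 0.6;              // eccentricity
    double px = 4.0, py = 2.0;   // a point on the arc, e.g. the control point

    // Move the point into the ellipse's own frame: translate, then un-rotate.
    double dx = px - cx, dy = py - cy;
    double x =  std::cos(phi) * dx + std::sin(phi) * dy;
    double y = -std::sin(phi) * dx + std::cos(phi) * dy;

    // x^2/a^2 + y^2/b^2 = 1 with b^2 = a^2 * (1 - e^2), solved for a.
    double a = std::sqrt(x * x + y * y / (1.0 - e * e));
    double b = a * std::sqrt(1.0 - e * e);

    std::printf("semimajor a = %f, semiminor b = %f\n", a, b);
    return 0;
}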

Related

Why angle of parallel lines is not same? opencv c++

I detected lane lines in OpenCV and calculated their angles (the lines are drawn in red in the image). Although they look like almost the same angle, the angles calculated by the program show quite a difference, with the left line always greater than the right.
I am using arctan(slope) to find angles.
Is it due to the fact that the y-axis of the Mat matrix is inverted?
I am trying to use the difference between the lane line angles to distinguish turns from a straight road. How can I achieve this? I cannot right now, because the lines do not have the same (but opposite) angle on a straight road.
Below is the image.
[Image: detected lane lines drawn in red]
The difference between the two angles is not close to zero because the lines are not parallel in 2D, simple as that. You are comparing angles of 2D lines in the image plane!
What you want to do is check how close the sum of the angles is to zero, i.e. fabs(angle1 + angle2). You probably also want to check that fabs(angle1) and fabs(angle2) are within a specific range.
Furthermore, you shouldn't use slopes, as the slope of a vertical line is infinite. You probably have 2D direction vectors for each line at some point. Either use atan2(dy, dx) to compute the angle of each line, or stick with the direction vectors; in the latter case, add the normalized direction vectors and compare the angle of the result with the vector (0, 1), which is the vertical direction.
Be aware that all this assumes the camera points in the direction of the (straight) lane.
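For illustration, a minimal C++ sketch of that suggestion (the line endpoints and the threshold below are hypothetical): compute each line's angle with atan2 instead of arctan(slope), fold it into (-pi/2, pi/2] so it describes an undirected line, and then test how close the sum of the two angles is to zero.

#include <cmath>
#include <cstdio>

const double PI = 3.14159265358979323846;

// Angle of the undirected line through (x1, y1) and (x2, y2).
double lineAngle(double x1, double y1, double x2, double y2) {
    double angle = std::atan2(y2 - y1, x2 - x1);   // also works for vertical lines
    if (angle > PI / 2.0)   angle -= PI;           // fold into (-pi/2, pi/2]
    if (angle <= -PI / 2.0) angle += PI;
    return angle;
}

int main() {
    // Hypothetical pixel endpoints of the detected left and right lane lines.
    double a1 = lineAngle(200, 600, 350, 300);     // left line
    double a2 = lineAngle(600, 600, 450, 300);     // right line

    // On a straight road the angles are roughly opposite, so their sum is near zero.
    bool straight = std::fabs(a1 + a2) < 0.05;     // threshold chosen for illustration
    std::printf("a1 = %.3f rad, a2 = %.3f rad, straight = %d\n", a1, a2, straight);
    return 0;
}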

Robotics: Homogeneous Transformation Matrix for DH parameters

I'm studying Introduction to Robotics and found that there are different equations to determine the position and orientation of a robot's end effector using the DH-parameter transformation matrix. They are:
1.
Translate by d_i along the z_i-axis.
Rotate counterclockwise by theta_i about the z_i-axis.
Translate by a_{i-1} along the x_{i-1}-axis.
Rotate counterclockwise by alpha_{i-1} about the x_{i-1}-axis.
2.
Rotate by theta_i about the z_i-axis.
Translate by d_i along the z_i-axis.
Translate by a_{i-1} along the x_{i-1}-axis.
Rotate by alpha_{i-1} about the x_{i-1}-axis.
3.
Rotate by alpha_{i-1} about the x_{i-1}-axis.
Translate by a_{i-1} along the x_{i-1}-axis.
Rotate by theta_i about the z_i-axis.
Translate by d_i along the z_i-axis.
What is the difference between them? Will the result be different?
Which one should I use when calculating the position and orientation?
As far as I know there is no difference. They should all give you the same end result, but be consistent: pick one form and stick with it.
The main problem comes when you try to reverse the process. Using method 1 to go from time t to t+1 is fine, but if you want to go from t+1 back to t you need to use method 1 as well. Using another method for the reverse transform (though it should technically work) usually doesn't, because of nonlinearities in the modelling and rounding errors in the rotation (cos and sin) terms.
This isn't really surprising, though; it's the same issue you encounter when going from a local reference frame (with respect to the robot) to a global one. The order of translations and rotations must be maintained for the forward and backward transformations.
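To make the composition concrete, here is a minimal sketch (not from the original answer; parameter values are hypothetical) that builds one link's homogeneous transform by multiplying the four elementary transforms in the order of variant 3 above. The other variants use the same elementary matrices, just composed in their own order.

#include <array>
#include <cmath>
#include <cstdio>

using Mat4 = std::array<std::array<double, 4>, 4>;

Mat4 identity() {
    Mat4 m{};
    for (int i = 0; i < 4; ++i) m[i][i] = 1.0;
    return m;
}

Mat4 mul(const Mat4& a, const Mat4& b) {
    Mat4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r[i][j] += a[i][k] * b[k][j];
    return r;
}

Mat4 rotX(double t) {                    // rotation about the x-axis
    Mat4 m = identity();
    m[1][1] = std::cos(t); m[1][2] = -std::sin(t);
    m[2][1] = std::sin(t); m[2][2] =  std::cos(t);
    return m;
}

Mat4 rotZ(double t) {                    // rotation about the z-axis
    Mat4 m = identity();
    m[0][0] = std::cos(t); m[0][1] = -std::sin(t);
    m[1][0] = std::sin(t); m[1][1] =  std::cos(t);
    return m;
}

Mat4 transX(double a) { Mat4 m = identity(); m[0][3] = a; return m; }
Mat4 transZ(double d) { Mat4 m = identity(); m[2][3] = d; return m; }

int main() {
    // Hypothetical DH parameters for one link.
    double alpha = 0.1, a = 0.5, theta = 0.7, d = 0.2;

    // Variant 3: Rx(alpha_{i-1}) * Tx(a_{i-1}) * Rz(theta_i) * Tz(d_i)
    Mat4 T = mul(mul(rotX(alpha), transX(a)), mul(rotZ(theta), transZ(d)));

    for (int i = 0; i < 4; ++i)
        std::printf("%8.4f %8.4f %8.4f %8.4f\n", T[i][0], T[i][1], T[i][2], T[i][3]);
    return 0;
}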

Understanding the output of solvepnp?

I have been using solvePnP() to calculate the rotation and translation matrices, but the Euler angles computed from the resulting rotation matrix gave very erratic values. To track down the problem, I took a set of 2D projection points for my marker and kept the other parameters of solvePnP() constant.
Eg values:
2D points
[219.67473, 242.78395; 363.4151, 238.61298; 503.04855, 234.56117; 501.70917, 628.16742; 500.58069, 959.78564; 383.1756, 972.02679; 262.8746, 984.56982; 243.17044, 646.22925]
The Euler angle theta(x) calculated from the output rotation matrix of solvePnP() was -26.4877.
Next, I incremented only the x value of the first point (i.e. 219.67473) by 0.1 to check the variation of the theta(x) Euler angle (keeping the remaining points and the other parameters constant) and ran solvePnP() again. For that very small change, the values moved through -19 degrees and -18 degrees (for x coord = 223.074), then suddenly jumped to 27 degrees for a while (for x coord = 223.174 to 226.974), then came down to 1.3 degrees (for x coord = 227.074).
I cannot understand this behaviour at all. Could somebody please explain?
My euler angle calculation from the rotation matrix uses this procedure.
Try Rodrigues() for the conversion between rotation matrix and rotation vector to make sure everything is clean and right. The non-RANSAC version can be very sensitive to outliers that create a huge error in the parameters and thus bias the solution. Using the RANSAC version of solvePnP may make it more stable to outliers; for example, adding too much to one of the point coordinates will eventually make it an outlier, and it won't influence the solution after that.
If everything fails, write a series of unit tests: create an artificial set of points in 3D (possibly non-planar), apply a simple translation first, in a second variant apply rotation only, and in a third test apply both. Project the points using your camera matrix, then plug your 2D points, 3D points and projection matrix into your code to find the pose. If the result deviates from the inverse of the translations and rotations you applied to the points, look for a bug in how you feed parameters to PnP.
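A minimal sketch of that unit test (the 3D points, pose and camera intrinsics below are made up for illustration): project a synthetic, non-planar point set with a known pose and check that solvePnP recovers it.

#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <cstdio>
#include <vector>

int main() {
    // Hypothetical non-planar 3D points in the object frame.
    std::vector<cv::Point3f> objectPoints = {
        {0, 0, 0}, {1, 0, 0}, {0, 1, 0}, {1, 1, 0.5f}, {0.5f, 0.5f, 1}, {1, 0.5f, 1.5f}
    };

    // Known ground-truth pose: rotation as a Rodrigues vector, plus a translation.
    cv::Mat rvecTrue = (cv::Mat_<double>(3, 1) << 0.1, -0.2, 0.05);
    cv::Mat tvecTrue = (cv::Mat_<double>(3, 1) << 0.3, -0.1, 5.0);

    // Hypothetical pinhole intrinsics, no lens distortion.
    cv::Mat K = (cv::Mat_<double>(3, 3) << 800, 0, 320, 0, 800, 240, 0, 0, 1);
    cv::Mat dist = cv::Mat::zeros(4, 1, CV_64F);

    // Project the 3D points with the known pose to get synthetic 2D points.
    std::vector<cv::Point2f> imagePoints;
    cv::projectPoints(objectPoints, rvecTrue, tvecTrue, K, dist, imagePoints);

    // Recover the pose from the 2D/3D correspondences and compare with the truth.
    cv::Mat rvec, tvec;
    cv::solvePnP(objectPoints, imagePoints, K, dist, rvec, tvec);

    std::printf("rvec error = %f, tvec error = %f\n",
                cv::norm(rvec, rvecTrue), cv::norm(tvec, tvecTrue));
    return 0;
}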
It seems the coordinate systems are different. OpenCV uses a right-handed coordinate system with Y pointing downwards. At nghiaho.com it says the calculations are based on this, and if you look at the axes they don't seem to match. I guess you are using Rodrigues for the matrix computation? Try comparing the rotation vectors as well.

iPhone augmented reality Euler angles rotation – roll issue

I’m working on an iOS augmented reality application.
It is location-based, not marker-based.
I use the GPS, compass and accelerometers to get latitude, longitude, altitude and the 3 euler angles: yaw, pitch and roll. I know using NSLog() that those 6 variables contain valid data.
My application shows some 3d objects over the camera view.
It works fine as long as I use everything but the roll angle.
If I add that third angle, the rotation applied to my OpenGL world is not right. I do it this way in the main OpenGL draw method:
glRotatef(pitch, 1, 0, 0);
glRotatef(yaw, 0, 1, 0);
//glRotatef(roll, 0, 0, 1);
I think there is something wrong with this approach but am certainly not a specialist. Maybe I should create some sort of unique rotation matrix rather than 3 different ones?
Maybe that's not possible easily? After all, most desktop video games, FPS and the like, just let the user change the yaw and the pitch using the mouse, so only 2 angles, not 3. But unlike the mouse, which is a 2D device, a phone used for augmented reality can rotate through any angles.
But then again, none of the AR tutorials I have seen online handle 'roll' properly. 'Rolling' your phone would either completely mess up the AR overlay or, with some roll-compensation strategy, do nothing at all.
So my question is, assuming I have my 3 Euler angles using the phone sensors, how should I apply them to my 3d opengl view?
I think you're likely talking about gimbal lock.
The essence of the problem is that if you rotate with Euler angles then there's always a sequence to it. For example, you rotate around x, then around y, then z. But one axis can always become ambiguous, because a preceding rotation can move it onto a different axis.
Suppose the rotation were 0 degrees around x, 90 degrees around y, then 20 degrees around z. So you do the x rotation and nothing has changed. You do the y rotation and everything moves 90 degrees. But now you've moved the z axis onto where the x axis was previously. So the z rotation will appear to be around x.
No matter what most people's instincts tell them, there's no way to avoid the problem. The knee-jerk reaction is to always rotate around the global axes rather than the local ones. That doesn't resolve the problem, it just reverses the order. The z rotation could then turn the y rotation — which has already occurred — into an x rotation.
You're right that you should aim to create a unique description of rotation separated from measuring angles.
For augmented reality it's actually not all that difficult.
The accelerometer tells you which way down is. The compass tells you which way north is. The two may not be orthogonal though — the compass reading should vary from being exactly at a right angle to the accelerometer vector at the equator to being exactly parallel to it at the poles.
So:
just accept the accelerometer vector as down;
get the cross product of down and the compass vector to get your side vector — it should point along a line of latitude;
then get the cross product of your side vector and your down vector to get a north vector that is suitably perpendicular.
You could equally use the dot product to remove the portion of the compass vector that lies in the direction of gravity and take the cross products from there.
You'll want to normalise everything.
That gives you three basis vectors, so just put them directly into a matrix. No further work required.
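A minimal sketch of that recipe in plain C++ (the sensor readings below are hypothetical; on the phone they would come from the accelerometer and magnetometer): build the side and north vectors with cross products, normalise everything, and write the three basis vectors into a matrix.

#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}

Vec3 normalize(Vec3 v) {
    double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

int main() {
    // Hypothetical sensor readings.
    Vec3 down    = normalize({0.02, -0.98, 0.10});   // accelerometer: which way is down
    Vec3 compass = normalize({0.60,  0.30, 0.74});   // magnetometer: roughly north, with dip

    // Step 1: side vector = down x compass (points roughly east/west).
    Vec3 side = normalize(cross(down, compass));
    // Step 2: a north vector perpendicular to both side and down.
    Vec3 north = normalize(cross(side, down));

    // The three basis vectors form the rows (or columns, depending on your
    // convention) of the rotation matrix you load into OpenGL.
    double m[9] = { side.x,  side.y,  side.z,
                    down.x,  down.y,  down.z,
                    north.x, north.y, north.z };
    for (int i = 0; i < 9; i += 3)
        std::printf("%7.3f %7.3f %7.3f\n", m[i], m[i + 1], m[i + 2]);
    return 0;
}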

Is there a reverse function of lookat for glMatrix?

I am using glMatrix to write WebGL code and want to get the eye position, focal point and up direction from the existing projection and view matrix (kind of like the reverse of the lookAt function). Is there any way to do this?
I didn't implement one, no. I'm not even sure that you could decompose it into the original vectors, for that matter. The lookAt point could be anywhere along a ray from the origin, and how would you determine what the appropriate up vector was? I'm thinking this is a one-way algorithm (just too lazy to prove it!)
Beyond that, however, I question whether you would want to do this even if there were a method for it. I'd be willing to bet that it's almost always more beneficial to track the values you're using and manipulate them directly rather than to try to pull them back and forth between vectors and matrices.
Yes and no: yes, you can invert the model-view transformation, and no, you will not get back exactly the same three vectors.
The model-view transformation of lookAt is very similar to the connectTo operation used in CSG models. It mounts your scene in front of your camera. This is done by a translation and three axis rotations. The eye point is translated to (0,0,0) and all further rotation is done around it. You can easily derive the eye point by transforming (0,0,0) with the inverse matrix.
But the center point is only used to align the axis of view with the -Z axis; in OpenGL the eye faces along -Z. The distance between center and eye is lost. So you can easily get a center point along your axis of view if you define the distance yourself. Let's say we want a distance of d. Then we just need to transform (0,0,-d) with the inverse matrix and we get a valid center point, though not exactly the original one. The center point defines only two rotation angles, the camera pan and tilt.
Even worse is the reconstruction of the up vector. It is only used for the roll angle of the camera and thus contributes only one scalar value. So for the inverse transformation you could not only choose any positive value along the Y axis, you could choose any point in the YZ plane with a positive Y value. To get an up vector exactly normal to the viewing axis and of length 1, we just transform (0,1,0) with the inverse matrix. Remember to transform it as a vector this time (not as a point).
Now we have eye, center and up reconstructed in a way that makes lookAt produce exactly the same matrix next time. But since this matrix contains only 6 values of information (translation, pan, tilt, roll), we had to choose the 3 values that were lost (the distance from center to eye, and the size and angle of the up vector in the camera's YZ plane).
The model-view matrix can of course encode other transformations (any affine transform), but the lookAt function uses it only for translation and rotation. It places the scene in front of the camera without distorting it.
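For completeness, a minimal sketch of that reconstruction, written in C++ rather than glMatrix to keep it self-contained (in glMatrix you could do the same with its matrix-inversion and vector-transform helpers). It assumes the view matrix is a pure rotation plus translation, which is exactly what lookAt produces, so its inverse is R^T with translation -R^T * t. The example matrix and the distance d are made up.

#include <array>
#include <cstdio>

using Mat4 = std::array<double, 16>;   // column-major, like OpenGL / glMatrix

struct Vec3 { double x, y, z; };

// Invert a rigid-body view matrix (rotation + translation only).
Mat4 invertRigid(const Mat4& m) {
    Mat4 r{};
    // Transpose the 3x3 rotation block.
    for (int col = 0; col < 3; ++col)
        for (int row = 0; row < 3; ++row)
            r[col * 4 + row] = m[row * 4 + col];
    // New translation = -R^T * t.
    r[12] = -(r[0] * m[12] + r[4] * m[13] + r[8]  * m[14]);
    r[13] = -(r[1] * m[12] + r[5] * m[13] + r[9]  * m[14]);
    r[14] = -(r[2] * m[12] + r[6] * m[13] + r[10] * m[14]);
    r[15] = 1.0;
    return r;
}

Vec3 transformPoint(const Mat4& m, Vec3 p) {
    return { m[0] * p.x + m[4] * p.y + m[8]  * p.z + m[12],
             m[1] * p.x + m[5] * p.y + m[9]  * p.z + m[13],
             m[2] * p.x + m[6] * p.y + m[10] * p.z + m[14] };
}

Vec3 transformVector(const Mat4& m, Vec3 v) {   // no translation for vectors
    return { m[0] * v.x + m[4] * v.y + m[8]  * v.z,
             m[1] * v.x + m[5] * v.y + m[9]  * v.z,
             m[2] * v.x + m[6] * v.y + m[10] * v.z };
}

int main() {
    Mat4 view = { 1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,-5,1 };  // hypothetical lookAt result
    Mat4 inv  = invertRigid(view);

    double d = 10.0;                                      // chosen view distance
    Vec3 eye    = transformPoint(inv, {0, 0, 0});         // camera position
    Vec3 center = transformPoint(inv, {0, 0, -d});        // a point on the view axis
    Vec3 up     = transformVector(inv, {0, 1, 0});        // unit up vector

    std::printf("eye    = (%g, %g, %g)\n", eye.x, eye.y, eye.z);
    std::printf("center = (%g, %g, %g)\n", center.x, center.y, center.z);
    std::printf("up     = (%g, %g, %g)\n", up.x, up.y, up.z);
    return 0;
}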
