Draw arc through three points - OpenCV / Emgu CV

I have three points A(a1, a2), B(b1, b2), C(c1, c2). How can I draw an arc through the three points and calculate the arc angle?
Thanks all.
[HERE] http://photo.ssc.vn/view.php?filename=374df.png

In the event that you choose a quadratic, the curve has the form
y = a*x^2 + b*x + c
Substituting the three points A(x1, y1), B(x2, y2), C(x3, y3) gives the linear system
y1 = a*x1^2 + b*x1 + c
y2 = a*x2^2 + b*x2 + c
y3 = a*x3^2 + b*x3 + c
which can be solved for a, b and c.
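As a quick illustration of solving that system (Python with NumPy, used here purely for the numerics; nothing Emgu CV specific):

import numpy as np

def fit_quadratic(p1, p2, p3):
    # Fit y = a*x^2 + b*x + c through three (x, y) points.
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    M = np.array([[x1**2, x1, 1.0],
                  [x2**2, x2, 1.0],
                  [x3**2, x3, 1.0]])
    rhs = np.array([y1, y2, y3])
    return np.linalg.solve(M, rhs)  # fails if two of the x values coincide

# Example: the parabola y = x^2 passes through these three points
print(fit_quadratic((0, 0), (1, 1), (2, 4)))  # -> [1. 0. 0.], i.e. a=1, b=0, c=0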
In the event that you are using a circle, use
Emgu.CV.PointCollection.MinEnclosingCircle
This will give you an object of type CircleF, which has a property Center of type PointF.
Find the vectors between the points and the center.
Va = A - Center
Vb = B - Center
Vc = C - Center
Find the angles between these vectors; you are looking for the largest acute angle.
You can use the dot product to calculate each angle.
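A minimal sketch of that dot-product step (plain Python/NumPy for illustration; in Emgu CV the centre would be the Center property of the CircleF returned above, and A, B, C the original points):

import numpy as np

def angle_between(v1, v2):
    # Angle in radians between two 2-D vectors, via the dot product.
    cos_t = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.arccos(np.clip(cos_t, -1.0, 1.0))  # clip guards against rounding error

center = np.array([0.0, 0.0])             # illustrative centre
A = np.array([1.0, 0.0])
B = np.array([0.0, 1.0])
va, vb = A - center, B - center
print(np.degrees(angle_between(va, vb)))  # 90.0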

Related

Getting angle between 2 lines in swift [duplicate]

I have two points in my coordinate system (x, y), and I want to know the angle between the line joining them and the x-axis.
I am using Swift to solve this, but I can't get the angle.
I need this angle in radians to use it in the following equation:
(x0 + r cos theta, y0 + r sin theta)
r : radius of circle
If you have two points, (x0, y0) and (x1, y1), then the angle of the line joining them (relative to the X axis) is given by:
theta = atan2((y1 - y0), (x1 - x0))
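For concreteness, a small sketch of that formula (Python is used here only for illustration; the same atan2 is available in Swift):

import math

def angle_to_x_axis(p0, p1):
    # Angle (radians) of the line from p0 to p1, measured from the positive x-axis.
    (x0, y0), (x1, y1) = p0, p1
    return math.atan2(y1 - y0, x1 - x0)

theta = angle_to_x_axis((0.0, 0.0), (1.0, 1.0))
print(theta)                                   # ~0.785 rad, i.e. 45 degrees
# Point on a circle of radius r around (x0, y0) at that angle:
r, x0, y0 = 2.0, 0.0, 0.0
print(x0 + r * math.cos(theta), y0 + r * math.sin(theta))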
The angle between a line (call it A, defined by two points p1 = (x1, y1) and p2 = (x2, y2)) and the x-axis is related to the slope, or gradient, of line A.
To solve a problem you sometimes have to simplify it and then work up to the full solution.
Let's start by obtaining the gradient of the line A.
The gradient of line A:
slope = (y2 - y1)/(x2 - x1)
for a straight line, that makes an angle theta with the x-axis
tan(theta) = slope = (change in y) / (change in x)
Therefore theta = arctan(slope), which in code is
theta = atan(slope)
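A quick check (Python, with made-up sample points) that the slope/arctan route agrees with the atan2 form from the previous answer:

import math

x1, y1, x2, y2 = 1.0, 2.0, 4.0, 6.0        # arbitrary sample points
slope = (y2 - y1) / (x2 - x1)              # undefined when x2 == x1 (vertical line)
print(math.atan(slope))                    # 0.927... rad
print(math.atan2(y2 - y1, x2 - x1))        # same value here; atan2 also covers x2 == x1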

Track a spot on a turning circle

I have a question about tracking a spot on a turning circle. As you can see in the image, I am trying to calculate x2, and the only known parameters are θ1, L and x1. The challenge is to track that spot on each turn of the circle, where each step size is θ1.
The calculation which gives an approximately correct answer is:
x2 = x1 - (L/2 - L/2 * cos(θ1))
[Image: Spot Tracking]
The problem is that, as the circle turns, x1 deviates more and more from the correct answer. Is there any way to calculate θ2 as the circle turns?
Hint:
The spot motion is described by
X = Xc + r cos Θ
Y = Yc + r sin Θ
Hence the angle seen from the origin,
φ = arctan((Yc + r sin Θ)/(Xc + r cos Θ)).
Notice that your problem is indeterminate, as the center of the circle is free to move at distance L from the origin, giving different intersections with the vertical at x1.
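A small numeric sketch of the hint (Python; the Xc, Yc, r and Θ values below are illustrative only, not taken from the question):

import math

def spot_angle(xc, yc, r, theta):
    # Angle of the spot as seen from the origin (the phi in the hint).
    # atan2 is used rather than a plain arctan so the quadrant comes out right.
    x = xc + r * math.cos(theta)
    y = yc + r * math.sin(theta)
    return math.atan2(y, x)

print(math.degrees(spot_angle(5.0, 0.0, 1.0, math.radians(30))))  # ~4.9 degrees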

triangulate points using epipolar geometry

In OpenCV, I'm using the method
triangulatePoints(P1,P2,x1,x2)
to get the 3D coordinates of a point by its image points x1/x2 in the left/right image and the projection matrices P1/P2.
I've already studied epipolar geometry and know most of the maths behind it. But how does this algorithm mathematically obtain the 3D coordinates?
Here are just some ideas which, to the best of my knowledge, should at least work in theory.
Using the camera equation ax = PX, we can express the two image point correspondences as
ap = PX
bq = QX
where p = [p1 p2 1]' and q = [q1 q2 1]' are the matching image points to the 3D point X = [X Y Z 1]' and P and Q are the two projection matrices.
We can expand these two equations and rearrange the terms to form an Ax = b system as shown below
p11.X + p12.Y + p13.Z - a.p1 + b.0 = -p14
p21.X + p22.Y + p23.Z - a.p2 + b.0 = -p24
p31.X + p32.Y + p33.Z - a.1 + b.0 = -p34
q11.X + q12.Y + q13.Z + a.0 - b.q1 = -q14
q21.X + q22.Y + q23.Z + a.0 - b.q2 = -q24
q31.X + q32.Y + q33.Z + a.0 - b.1 = -q34
from which we get
A = [p11 p12 p13 -p1   0;
     p21 p22 p23 -p2   0;
     p31 p32 p33  -1   0;
     q11 q12 q13   0 -q1;
     q21 q22 q23   0 -q2;
     q31 q32 q33   0  -1]
x = [X Y Z a b]'
b = -[p14 p24 p34 q14 q24 q34]'
Now we can solve for x to find the 3D coordinates.
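A sketch of setting up and solving that system (Python/NumPy; this is just the algebra above written out, not a claim about how triangulatePoints is implemented internally):

import numpy as np

def triangulate_ab(P, Q, p, q):
    # P, Q: 3x4 projection matrices; p, q: matching image points (x, y).
    # Unknown vector is [X, Y, Z, a, b].
    A = np.zeros((6, 5))
    rhs = np.zeros(6)
    A[:3, :3] = P[:, :3]
    A[3:, :3] = Q[:, :3]
    A[:3, 3] = -np.array([p[0], p[1], 1.0])   # coefficients of a
    A[3:, 4] = -np.array([q[0], q[1], 1.0])   # coefficients of b
    rhs[:3] = -P[:, 3]
    rhs[3:] = -Q[:, 3]
    sol, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return sol[:3]                            # X, Y, Z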
Another approach is to use the fact, from the camera equation ax = PX, that x and PX are parallel. Therefore, their cross product must be a zero vector. So using
p x PX = 0
q x QX = 0
we can construct a system of the form Ax = 0 and solve for x.
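And a sketch of that cross-product route: each view contributes two independent rows, and the homogeneous system Ax = 0 is solved with an SVD (this is the standard linear/DLT triangulation; it is close in spirit to what triangulatePoints does, but not claimed to be its exact implementation):

import numpy as np

def triangulate_dlt(P, Q, p, q):
    # Two rows per view from the cross-product constraint; X is the null vector of A.
    A = np.vstack([
        p[0] * P[2] - P[0],
        p[1] * P[2] - P[1],
        q[0] * Q[2] - Q[0],
        q[1] * Q[2] - Q[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]              # homogeneous 3D point
    return X[:3] / X[3]     # dehomogenize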

Formula for CGPointApplyAffineTransform function in iOS

I am trying to implement an affine rotation matrix in Android, based on iOS code.
iOS has two functions, CGAffineTransformMakeRotation and CGPointApplyAffineTransform, used for the calculation.
Step 1: CGAffineTransformMakeRotation();
Input:
2.2860321998596191
Result:
a = -0.65579550461444569,
b = 0.75493857771840255,
c = -0.75493857771840255,
d = -0.65579550461444569,
tx = 0, ty = 0
Formula:
double A = Math.cos(RadiansRotated);
double B = -Math.sin(RadiansRotated);
double C = Math.sin(RadiansRotated);
double D = Math.cos(RadiansRotated);
I am able to calculate a, b, c, d for Step 1 using the above formula.
Step 2: CGPointApplyAffineTransform()
Input :
x = 612.55191924649432,
y = -391.95960729287646
And Matrix return from Step 1
Result:
x = -105.80336653205421,
y = 719.48442314773808
Does anyone know the formula used in CGPointApplyAffineTransform? I need help with Step 2.
I have tried Android's Matrix class and Java's AffineTransform, but neither gave the expected result.
The math behind the CGAffineTransform functions is described in “The Math Behind the Matrices” in the Quartz 2D Programming Guide.
The formulae for transforming a point using an affine transform are given as:
x' = ax + cy + tx
y' = bx + dy + ty
Incidentally, in your step 1, you have reversed the signs of b and c, which has the effect of reversing the direction of rotation.
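Plugging the numbers from the question into those formulae reproduces the Step 2 result (Python is used here only as a calculator):

import math

def apply_affine(x, y, a, b, c, d, tx=0.0, ty=0.0):
    # Quartz point transform: x' = a*x + c*y + tx,  y' = b*x + d*y + ty
    return a * x + c * y + tx, b * x + d * y + ty

# Matrix from CGAffineTransformMakeRotation(2.2860321998596191):
angle = 2.2860321998596191
a, b = math.cos(angle), math.sin(angle)      # note: b = +sin, c = -sin
c, d = -math.sin(angle), math.cos(angle)

print(apply_affine(612.55191924649432, -391.95960729287646, a, b, c, d))
# -> approximately (-105.803..., 719.484...), matching the Step 2 result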

Revert a function to get a specific value

I have this function which returns an x and y position; by just adding up degrees, it makes objects move around in circular paths, like a satellite around a planet.
In my case it moves in an ellipse, because I added +30 to dist.
-(CGPoint)circularMovement:(float)degrees moonDistance:(CGFloat)dist
{
    if (degrees >= 360) degrees = 0;
    float x = _moon.position.x + (dist + 30 + _moon.size.height/2) * cos(degrees);
    float y = _moon.position.y + (dist + _moon.size.height/2) * sin(degrees);
    CGPoint position = CGPointMake(x, y);
    return position;
}
What I would like is to reverse this function, giving the x and y position of an object and getting back the dist value.
Is this possible?
If so, how would I go about achieving it?
If you have an origin and a target, where the origin has coordinates (x1, y1) and the target has coordinates (x2, y2), the distance between them is found using the Pythagorean theorem.
The distance between the points is the square root of the sum of the squared differences: sqrt((x2 - x1)^2 + (y2 - y1)^2).
In most languages this would look something like this:
x = x2 - x1;
y = y2 - y1;
distance = Math.SquareRoot(x * x + y * y);
Where Math is your language's math library.
float x = _moon.position.x + (dist + 30 + _moon.size.height/2) * cos(degrees);
float y = _moon.position.y + (dist + _moon.size.height/2) * sin(degrees);
is how you originally calculated the values, so the inverse formula would be:
dist = ((y - _moon.position.y) / (sin(degrees))) - _moon.size.height/2
You could calculate it based on x as well, but there is no point; it is simpler based on y.
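A small sanity check of that inverse (Python; the moon position and size below are made-up values standing in for _moon):

import math

def dist_from_y(y, degrees, moon_y, moon_height):
    # Inverts y = moon_y + (dist + moon_height/2) * sin(degrees) for dist.
    # Breaks down when sin(degrees) == 0, i.e. the spot is level with the moon's centre.
    return (y - moon_y) / math.sin(degrees) - moon_height / 2

moon_y, moon_height, degrees, dist = 100.0, 20.0, 1.2, 50.0   # illustrative values
y = moon_y + (dist + moon_height / 2) * math.sin(degrees)
print(dist_from_y(y, degrees, moon_y, moon_height))           # ~50.0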
