Converting between coordinate systems WGS 72 and WGS 84 - geolocation

I'm working on rasterisation of the GSHHS database, basically converting shoreline polygons and river lines to raster.
http://www.soest.hawaii.edu/pwessel/gshhg/
The rivers and shores databases are two different files.
I noticed a misalignment of hundreds of meters between rivers and shores at points where they clearly should be aligned. One thing I noticed in the README is that the shorelines database uses WGS 84 coordinates, while the river database was generated from another source using WGS 72. The difference should be a shift of the prime meridian and a difference in the primary axis dimensions of the Earth.
I've searched the internet for a conversion between the two systems and couldn't find one.
Answers I need:
How can I convert between them?
Or alternatively
How do I solve misalignment in GSHHS database?

Here are the formulas and parameters to transform WGS 72 coordinates to WGS 84 coordinates
FORMULAS
Δφ" = (4.5 cos φ) / (a sin 1") + (Δf sin 2φ) / (sin 1") (Unit = Arc Seconds)
Δλ" = 0.554 (Unit = Arc Seconds)
Δh = 4.5 sin φ + a Δf sin² φ - Δa + Δr (Unit = Meters)
Δφ = Δφ" x 1" (Unit = radians)
Δλ = Δλ" x 1" (Unit = radians)
PARAMETERS
Δf = 0.3121057 × 10⁻⁷
a = 6378135 m
Δa = 2.0 m
Δr = 1.4 m
1" = pi / (3600.0 * 180.0) rad
To obtain WGS 84 coordinates, add the Δφ, Δλ, Δh changes calculated using WGS 72 coordinates to the WGS 72 coordinates (φ, λ, h, respectively). Latitude is positive north and longitude is positive east (0° to 180°).
RESULT
φ WGS84 = φ + Δφ
λ WGS84 = λ + Δλ
h WGS84 = h + Δh
Documentation [p105-106]
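A direct transcription of the formulas above (a sketch, with the arc-second/degree conversions made explicit; the struct and function names are mine):

```cpp
#include <cmath>
#include <cassert>

struct Geo { double lat, lon, h; };  // degrees, degrees, meters

// Sketch of the DMA WGS 72 -> WGS 84 shift given above.
Geo wgs72_to_wgs84(const Geo &p) {
    const double pi = 3.14159265358979323846;
    const double arcsec = pi / (3600.0 * 180.0);  // 1" in radians
    const double df = 0.3121057e-7;               // Δf
    const double a  = 6378135.0;                  // WGS 72 semi-major axis, m
    const double da = 2.0, dr = 1.4;              // Δa, Δr in meters

    double phi = p.lat * pi / 180.0;
    double dphi_sec = (4.5 * std::cos(phi)) / (a * arcsec)
                    + (df * std::sin(2.0 * phi)) / arcsec;  // Δφ" in arc-seconds
    double dlam_sec = 0.554;                                // Δλ" in arc-seconds
    double dh = 4.5 * std::sin(phi)
              + a * df * std::sin(phi) * std::sin(phi)
              - da + dr;                                    // Δh in meters

    // Add the shifts to the WGS 72 coordinates (arc-seconds -> degrees).
    return { p.lat + dphi_sec / 3600.0,
             p.lon + dlam_sec / 3600.0,
             p.h + dh };
}
```

At the equator this gives a latitude shift of roughly 0.15" and a fixed longitude shift of 0.554", i.e. on the order of tens of meters, which is consistent with the misalignment described in the question.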

You can use proj4: https://trac.osgeo.org/proj/wiki/man_proj. Many ellipsoid identifiers are supported; you can get a list with the -le option switch. To convert from WGS 72 you can use the "towgs84" option switch.

One authoritative source is the National Geospatial-Intelligence Agency (NGA). They publish the open-source GEOTRANS program at http://earth-info.nga.mil/GandG/geotrans/. It provides a GUI and batch processing, and I expect there is an internal API you could call as a developer.

Related

Line Follower based on Image Processing

I'm starting work on a line follower project, but I'm required to use image processing techniques. I have a few ideas to consider, but I would like some input, as there are some doubts I would like to clarify. This is my approach to the problem: I first read the image, then apply thresholding to detect the object (the line). I do color filtering and then edge detection. After this I start image classification to detect all the lines, then extrapolate those lines to output/detect only parallel lines (like a lane detection algorithm). With these parallel lines I can calculate the center to keep my vehicle centered and the angle to make turns.
I do not know the angles in the path, so the system must be able to turn at any angle; that's why I will calculate the angle. I have included a picture of a line with a turn; this is the kind of turn I will be dealing with. I have managed to implement almost everything. My main problem is the change of angle, basically the turns. After I have detected the parallel lines, how can I make my system know when it is time to make a turn? The question might be kind of confusing, but basically the vehicle will be moving forward as long as the angle is near zero. But when the vehicle approaches a turn, it might detect two sets of parallel lines. Maybe I can define a length of the detected lines that will determine whether or not the vehicle must move forward?
Any ideas would be appreciated.
If you have two lines (the center line of each path):
y1 = m1 * x + b1
y2 = m2 * x + b2
They intersect when you choose an x such that y1 and y2 are equal (provided they are not parallel, of course, so m1 != m2):
m1 * x + b1 = m2 * x + b2
(do a bunch of algebra)
x = (b2 - b1) / (m1 - m2)
(y should be the same for both line formulas)
When you are near this point, switch lines.
NOTE: This won't handle the case of perfectly vertical lines, because they have infinite slope, and no y-intercept -- for that see the parametric form of lines. You will have 2 equations per line:
x = f1(t1)
y = f2(t1)
and
x = f3(t2)
y = f4(t2)
Set f1(t1) == f3(t2) and f2(t1) == f4(t2) to find the intersection of non-parallel lines. Then plug t1 into the first line formula to find (x, y)
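A minimal sketch of the slope-intercept case above (the struct and function names are mine):

```cpp
#include <cassert>
#include <cmath>

struct Point { double x, y; };

// Intersection of y = m1*x + b1 and y = m2*x + b2,
// assuming the lines are not parallel (m1 != m2).
Point intersect(double m1, double b1, double m2, double b2) {
    double x = (b2 - b1) / (m1 - m2);
    double y = m1 * x + b1;  // the same result as m2 * x + b2
    return { x, y };
}
```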
Basically, the answer by Lou Franco explains how to get the intersection of the two center lines of the paths, and that intersection is a good point to start your turn.
I would add a suggestion on how to compute the center line of a path.
In my experience, when working with floating point representation of lines extracted from images, the lines are really never parallel, they just intersect usually at a point that falls out of the image (maybe far away).
The following C++ function bisector_of_lines is inspired by the method bisector_of_linesC2 found at CGAL source code.
A line is expressed as a*x+b*y+c=0, the following function
constructs the bisector of the two lines p and q.
line p is pa*x+pb*y+pc=0
line q is qa*x+qb*y+qc=0
The a, b, c of the bisector line are the last three parameters of the function: a, b and c.
In the general case, the bisector has the direction of the vector which is the sum of the normalized directions of the two lines, and which passes through the intersection of p and q. If p and q are parallel, then the bisector is defined as the line which has the same direction as p, and which is at the same distance from p and q (see the official CGAL documentation for CGAL::Line_2<Kernel> CGAL::bisector).
#include <cmath>

void
bisector_of_lines(const double &pa, const double &pb, const double &pc,
                  const double &qa, const double &qb, const double &qc,
                  double &a, double &b, double &c)
{
    // We normalize the equations of the 2 lines, and then add them.
    double n1 = sqrt(pa*pa + pb*pb);
    double n2 = sqrt(qa*qa + qb*qb);
    a = n2 * pa + n1 * qa;
    b = n2 * pb + n1 * qb;
    c = n2 * pc + n1 * qc;
    // Care must be taken for the case when this produces a degenerate line.
    if (a == 0 && b == 0) { // it may be best to replace == with an epsilon comparison, see https://stackoverflow.com/questions/19837576/comparing-floating-point-number-to-zero
        a = n2 * pa - n1 * qa;
        b = n2 * pb - n1 * qb;
        c = n2 * pc - n1 * qc;
    }
}

DICOM: alignment

I am relatively new to working with DICOM files. Thanks in advance.
I have 2 dicom files of the same patient taken at different intervals.
They are not exactly the same dimensions.
The first one, cube1, has dimensions 104×163×140, and the second one, cube2, has dimensions 107×164×140. I would like to align both cubes at the origin and compare them.
The ImagePositionPatient of the first file is: [-207.4748, -151.3715, -198.7500]
The ImagePositionPatient of the second file is: [-207.4500, -156.3500, -198.7500]
Both files have the same ImageOrientationPatient - [ 1 0 0 0 1 0]
Any chance someone could please show me an example? I am not sure how to map the physical plane back to the image plane.
Thanks a lot in advance,
Ash
===============================================================
Added: 23/2/17
I have used the matrix formula below based on the link where in my case :
IPP (Sxyz) of cube 1 = [-207.4748, -151.3715, -198.7500]
Xxyz (IOP) = [1,0,0]
Yxyz (IOP) = [0,1,0]
delta_i = 2.5
delta_j = 2.5
So for values of i = 0: 103 and j = 0:162 of cube1, I should compute the values of Pxyz?
What is the next step? Sorry, I do not see how this will help me align the two cubes with different IPPs to the image plane.
Sorry for the newbie question ...
I did not verify the matrix you built. But if it is calculated correctly, you can transform from the volume coordinate system (VCS) (x1, y1, z1), where x1 = column, y1 = row and z1 = slice number, to the patient coordinate system (PCS) (x2, y2, z2) - these coordinates define the point within the patient in millimeters.
By inverting the matrix, you can transform back from PCS to VCS.
Let's say the transformation matrix for volume 1 is M1 and the transformation matrix for volume 2 is M2. Then you can transform a point p1 from volume 1 to the corresponding point p2 in volume 2 by transforming it to the PCS using M1 and transforming from the PCS into volume 2 using M2' (the inverse of M2).
By multiplying M1 and M2', you can calculate a matrix transforming directly from volume1 to volume2.
So:
p2 = (M2' * M1) * p1
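A sketch of this mapping for the axis-aligned case in the question (IOP = [1 0 0 0 1 0]), where the full matrix reduces to a per-axis scale and offset. The 2.5 mm in-plane spacing comes from the question; the slice spacing is assumed equal here purely for illustration:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

// VCS -> PCS: with an identity orientation the mapping is
// P = IPP + (i*di, j*dj, k*dk). This plays the role of M1 (or M2).
Vec3 vcs_to_pcs(const Vec3 &ipp, double di, double dj, double dk,
                double i, double j, double k) {
    return { ipp.x + i * di, ipp.y + j * dj, ipp.z + k * dk };
}

// PCS -> VCS: the inverse mapping, playing the role of M2'.
Vec3 pcs_to_vcs(const Vec3 &ipp, double di, double dj, double dk,
                const Vec3 &p) {
    return { (p.x - ipp.x) / di, (p.y - ipp.y) / dj, (p.z - ipp.z) / dk };
}

// Map a voxel index from volume 1 to the corresponding (fractional)
// voxel index in volume 2, i.e. p2 = M2' * (M1 * p1).
Vec3 map_volume1_to_volume2(const Vec3 &ipp1, const Vec3 &ipp2,
                            double di, double dj, double dk,
                            double i, double j, double k) {
    Vec3 p = vcs_to_pcs(ipp1, di, dj, dk, i, j, k);
    return pcs_to_vcs(ipp2, di, dj, dk, p);
}
```

Note that the result is generally a fractional index, so comparing intensities between the two cubes requires interpolation (e.g. nearest-neighbour or trilinear).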

Uniform discretization of Bezier curve

I need to discretise a 3rd order Bezier curve with points equally distributed along the curve. The curve is defined by four points p0, p1, p2, p3, and a generic point p(t) with 0 <= t <= 1 is given by:
point_t = (1 - t) * (1 - t) * (1 - t) * p0 + 3 * (1 - t) * (1 - t) * t * p1 + 3 * (1 - t) * t * t * p2 + t * t * t * p3;
My first idea was to discretise t = 0, t_1, ..., t_n, ..., 1. This doesn't work because, in general, we don't end up with a uniform distance between the discretised points.
To sum up, what I need is an algorithm to discretise the parametric curve so that:
|| p(t_n) - p(t_{n+1}) || = d
I thought about recursively halving the Bezier curve with de Casteljau's algorithm up to the required resolution, but this would require a lot of distance calculations.
Any idea on how to solve this problem analytically?
What you are looking for is also called "arc-length parametrisation".
In general, if you subdivide a Bezier curve at fixed intervals of the default parametrisation, the resulting curve segments will not have the same arc length. Here is one way to do it: http://pomax.github.io/bezierinfo/#tracing.
A while ago, I was playing around with a bit of code (curvature flow) that needed the points to be as uniformly separated as possible. Here is a comparison (without proper labeling on the axes! ;)) using linear interpolation and monotone cubic interpolation from the same set of quadrature samples (I used 20 samples per curve, each evaluated using 24-point Gauss-Legendre quadrature) to reparametrise a cubic curve.
[Please note that, this is compared with another run of the algorithm using a lot more nodes and samples taken as ground truth.]
Here is a demo using monotone cubic interpolation to reparametrise a curve. The function Curve.getLength is the quadrature function.
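A simpler alternative to quadrature-based reparametrisation is a chord-length lookup table: sample the curve densely, accumulate segment lengths, then invert the cumulative length by linear interpolation. A sketch (struct and function names are mine):

```cpp
#include <cmath>
#include <vector>
#include <cassert>

struct P2 { double x, y; };

// Cubic Bezier point, matching the formula from the question.
P2 bezier(const P2 &p0, const P2 &p1, const P2 &p2, const P2 &p3, double t) {
    double u = 1.0 - t;
    double b0 = u*u*u, b1 = 3*u*u*t, b2 = 3*u*t*t, b3 = t*t*t;
    return { b0*p0.x + b1*p1.x + b2*p2.x + b3*p3.x,
             b0*p0.y + b1*p1.y + b2*p2.y + b3*p3.y };
}

// Return n_out points approximately equally spaced in arc length.
std::vector<P2> uniform_points(const P2 &p0, const P2 &p1,
                               const P2 &p2, const P2 &p3,
                               int n_out, int n_samples = 1000) {
    // Dense samples and cumulative chord length at each sample.
    std::vector<P2> s(n_samples + 1);
    std::vector<double> len(n_samples + 1, 0.0);
    for (int i = 0; i <= n_samples; ++i)
        s[i] = bezier(p0, p1, p2, p3, (double)i / n_samples);
    for (int i = 1; i <= n_samples; ++i)
        len[i] = len[i-1] + std::hypot(s[i].x - s[i-1].x, s[i].y - s[i-1].y);

    // Emit a point every total/(n_out - 1) of arc length by walking
    // the table and interpolating linearly between bracketing samples.
    std::vector<P2> out;
    double step = len[n_samples] / (n_out - 1);
    int k = 1;
    for (int j = 0; j < n_out; ++j) {
        double target = j * step;
        while (k < n_samples && len[k] < target) ++k;
        double r = (target - len[k-1]) / (len[k] - len[k-1]);
        out.push_back({ s[k-1].x + r * (s[k].x - s[k-1].x),
                        s[k-1].y + r * (s[k].y - s[k-1].y) });
    }
    return out;
}
```

The accuracy is controlled by n_samples; this trades the exactness of an analytic solution (which does not exist in closed form for cubic Bezier arc length) for simplicity and predictable cost.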

Using accelerometer and gyrometer to track the phone movement independently from the orientation

I have been experimenting with the Core Motion APIs, with the goal of transforming coordinates from the phone reference system to the "earth" reference system, i.e. the one stored in the CMAttitude object.
What I tried till now is by getting the CMRotationMatrix
CMRotationMatrix r = motionManager.deviceMotion.attitude.rotationMatrix;
and do a matrix multiplication with
CMAcceleration a = motionManager.deviceMotion.gravity;
in the following way (typedef float vec3f[3])
vec3f accelerationInReferenceSystem;
accelerationInReferenceSystem[0] = a.x * r.m11 + a.y * r.m12 + a.z * r.m13;
accelerationInReferenceSystem[1] = a.x * r.m21 + a.y * r.m22 + a.z * r.m23;
accelerationInReferenceSystem[2] = a.x * r.m31 + a.y * r.m32 + a.z * r.m33;
But no luck. I am not that versed in 3D graphics, and there's not much documentation on how to use the various CMRotationMatrix, quaternion, etc. types. Before going this way, I tried to log the values of d.attitude.pitch, d.attitude.roll and d.attitude.yaw. From what I have read, that's not the way to go, but besides that, I found that the value returned by d.attitude.pitch spans just PI radians, so I would need to include the yaw in the computation of the pitch angle from 0 to 2*PI in order to understand where the head of the phone is heading. It seemed to me that using the rotation matrix would be the right way to go. Also, I wonder if I need to use the inverse of the current rotation matrix in order to take the coordinates into the system identified by r.attitude. Thank you if you can help!
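For reference, here is a generic sketch of applying a 3x3 rotation matrix and its transpose (which, for a rotation matrix, is its inverse). Whether R or its transpose takes device-frame vectors into the reference frame depends on Core Motion's convention, which is easiest to settle empirically by testing with the device held in a known pose; the struct names below are mine, not Core Motion's:

```cpp
#include <cassert>
#include <cmath>

struct Mat3 { double m[3][3]; };
struct V3 { double x, y, z; };

// v' = R * v (rows of R dotted with v, as in the question's code).
V3 mul(const Mat3 &r, const V3 &v) {
    return { r.m[0][0]*v.x + r.m[0][1]*v.y + r.m[0][2]*v.z,
             r.m[1][0]*v.x + r.m[1][1]*v.y + r.m[1][2]*v.z,
             r.m[2][0]*v.x + r.m[2][1]*v.y + r.m[2][2]*v.z };
}

// v' = R^T * v. For a rotation matrix the transpose is the inverse,
// so this undoes mul() without an explicit matrix inversion.
V3 mulTranspose(const Mat3 &r, const V3 &v) {
    return { r.m[0][0]*v.x + r.m[1][0]*v.y + r.m[2][0]*v.z,
             r.m[0][1]*v.x + r.m[1][1]*v.y + r.m[2][1]*v.z,
             r.m[0][2]*v.x + r.m[1][2]*v.y + r.m[2][2]*v.z };
}
```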
I'm not sure I understand what you want to do. But if by the 'earth' reference system you mean a fixed reference system, and by 'coordinates' you mean x, y, z coordinates, then it's impossible to do what you're trying to do (or at least, impossible to do accurately).
Because you could use these to calculate the displacement of the device, which is impossible to do satisfactorily - see this for example: How can I find distance traveled with a gyroscope and accelerometer?

Determine if a point is within the range of two other points that create infinitely extending lines from an origin

If I have three points that create an angle, what would be the best way to determine if a fourth point resides within the angle created by the previous three?
Currently, I determine the angle of the line from the origin point to all three points, and then check whether the test angle is between the two other angles, but I'm trying to figure out if there's a better way to do it. The function runs tens of thousands of times per update, and I'm hoping there's a better way to achieve what I'm trying to do.
Let's say you have angle DEF (E is the "pointy" part), ED is the left ray and EF is the right ray.
* D (Dx, Dy)
/
/ * P (Px, Py)
/
/
*---------------*
E (Ex, Ey) F (Fx, Fy)
Step 1. Build line equation for line ED in the classic Al * x + Bl * y + Cl = 0 form, i.e. simply calculate
Al = Dy - Ey // l - for "left"
Bl = -(Dx - Ex)
Cl = -(Al * Ex + Bl * Ey)
(Pay attention to the subtraction order.)
Step 2. Build line equation for line FE (reversed direction) in the classic Ar * x + Br * y + Cr = 0 form, i.e. simply calculate
Ar = Ey - Fy // r - for "right"
Br = -(Ex - Fx)
Cr = -(Ar * Ex + Br * Ey)
(Pay attention to the subtraction order.)
Step 3. For your test point P calculate the expressions
Sl = Al * Px + Bl * Py + Cl
Sr = Ar * Px + Br * Py + Cr
Your point lies inside the angle if and only if both Sl and Sr are positive. If one of them is positive and other is zero, your point lies on the corresponding side ray.
That's it.
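The three steps can be written out directly; this sketch also includes the orientation fix-up described in Note 1 below, and uses integer inputs so the sign tests are exact:

```cpp
#include <cassert>

// Returns true if P is strictly inside angle DEF (E is the vertex).
bool insideAngle(long long Dx, long long Dy, long long Ex, long long Ey,
                 long long Fx, long long Fy, long long Px, long long Py) {
    // Step 1: line ED (left side).
    long long Al = Dy - Ey, Bl = -(Dx - Ex), Cl = -(Al * Ex + Bl * Ey);
    // Step 2: line FE (right side, reversed direction).
    long long Ar = Ey - Fy, Br = -(Ex - Fx), Cr = -(Ar * Ex + Br * Ey);
    // Step 2.5: ensure D -> F is clockwise; if not, flip all signs.
    if (Al * Fx + Bl * Fy + Cl < 0) {
        Al = -Al; Bl = -Bl; Cl = -Cl;
        Ar = -Ar; Br = -Br; Cr = -Cr;
    }
    // Step 3: P is inside iff both signed expressions are positive.
    long long Sl = Al * Px + Bl * Py + Cl;
    long long Sr = Ar * Px + Br * Py + Cr;
    return Sl > 0 && Sr > 0;
}
```

As noted at the end of the answer, the products can overflow for large coordinates, so a wider type (or a big-integer library) may be needed depending on the input range.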
Note 1: For this method to work correctly, it is important to make sure that the left and right rays of the angle are indeed the left and right rays. I.e. if you think about ED and EF as clock hands, the direction from D to F should be clockwise. If that is not guaranteed for your input, then some adjustments are necessary. For example, it can be done as an additional step of the algorithm, inserted between steps 2 and 3:
Step 2.5. Calculate the value of Al * Fx + Bl * Fy + Cl. If this value is negative, invert signs of all ABC coefficients:
Al = -Al, Bl = -Bl, Cl = -Cl
Ar = -Ar, Br = -Br, Cr = -Cr
Note 2: The above calculations are made under assumption that we are working in a coordinate system with X axis pointing to the right and Y axis pointing to the top. If one of your coordinate axes is flipped, you have to invert the signs of all six ABC coefficients. Note, BTW, that if you perform the test described in step 2.5 above, it will take care of everything automatically. If you are not performing step 2.5 then you have to take the axis direction into account from the very beginning.
As you can see, this is a precise integer method (no floating-point calculations, no divisions). The price of that is the danger of overflow. Use appropriately sized types for the multiplications.
This method has no special cases with regard to line orientations or the value of the actual non-reflex angle: it works immediately for acute, obtuse, zero and straight angles. It can easily be used with reflex angles (just perform a complementary test).
P.S. The four possible combinations of +/- signs for Sl and Sr correspond to four sectors, into which the plane is divided by lines ED and EF.
* D
/
(-,+) / (+,+)
/
-------*------------* F
/ E
(-,-) / (+,-)
/
By using this method you can perform the full "which sector the point falls into" test. For an angle smaller than 180 you just happen to be interested in only one of those sectors: (+, +). If at some point you'll need to adapt this method for reflex angles as well (angles greater than 180), you will have to test for three sectors instead of one: (+,+), (-,+), (+,-).
Describe your origin point O and the other two points A and B; then your angle is given as AOB. Now consider your test point and call it C, as in the diagram.
Now consider that we can get a vector equation of C by taking some multiple of vector OA and some multiple of OB. Explicitly
C = K1 * OA + K2 * OB
for some K1, K2 that we need to calculate. Set O to the origin by subtracting it (vectorially) from all other points. If the coordinates of A are (a1,a2), B = (b1,b2) and C = (c1,c2), we have in matrix terms
[ a1 b1 ] [ K1 ]   [ c1 ]
[ a2 b2 ] [ K2 ] = [ c2 ]
So we can solve for K1 and K2 using the inverse of the matrix to give
[ K1 ]         1          [  b2  -b1 ] [ c1 ]
[ K2 ] = -------------- * [ -a2   a1 ] [ c2 ]
          (a1b2 - b1a2)
which reduces to
K1 = (b2c1 - b1c2)/(a1b2 - b1a2)
K2 = (-a2c1 + a1c2)/(a1b2 - b1a2)
Now, IF the point C lies within your angle, the multiples of the vectors OA and OB will BOTH be positive. If C lies 'under' OB, then we need a negative amount of OA to get to it, and similarly for the other direction. So your condition is satisfied when both K1 and K2 are greater than (or equal to) zero.
You must take care of the case where a1b2 = b1a2, as this corresponds to a singular matrix and division by zero. Geometrically it means that OA and OB are parallel, and hence there is no solution. The algebra above probably needs verifying for any slight typo, but the methodology is correct. It may be long-winded, but you get everything simply from point coordinates, and it saves you calculating inverse trig functions to get angles.
The above applies to angles < 180 degrees, so if your angle is greater than 180 degrees, you should check instead for
!(K1 >= 0 && K2 >= 0)
as this is the exterior of the sector less than 180 degrees. Remember that for 0 and 180 degrees you will have a divide-by-zero error, which must be checked for (ensure a1b2 - b1a2 != 0).
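The K1/K2 test above can be sketched as follows (function and parameter names are mine; the caller is expected to reject the singular case a1b2 - b1a2 = 0 before relying on the result):

```cpp
#include <cassert>
#include <cmath>

// True if C lies within angle AOB (angle < 180 degrees), i.e. if
// C = K1*OA + K2*OB with both K1 and K2 non-negative.
// Precondition: OA and OB are not parallel (non-zero determinant).
bool insideAngleK(double ox, double oy, double ax, double ay,
                  double bx, double by, double cx, double cy) {
    double a1 = ax - ox, a2 = ay - oy;  // vector OA
    double b1 = bx - ox, b2 = by - oy;  // vector OB
    double c1 = cx - ox, c2 = cy - oy;  // vector OC
    double det = a1 * b2 - b1 * a2;     // singular (division by zero) when OA || OB
    double k1 = ( b2 * c1 - b1 * c2) / det;
    double k2 = (-a2 * c1 + a1 * c2) / det;
    return k1 >= 0 && k2 >= 0;
}
```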
Yes, I meant the smallest angle in my comment above. Look at this thread for an extensive discussion on cheap ways to find the measure of the angle between two vectors. I have used the lookup-table approach on many occasions with great success.
Triangle O B C has to be positively oriented, and so does triangle O C A. To calculate the orientation, just use the shoelace formula. Both values have to be positive.
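The shoelace test mentioned here can be sketched as below (a positive value means the triangle is counter-clockwise in a Y-up coordinate system; function names are mine):

```cpp
#include <cassert>

// Twice the signed area of triangle ABC (shoelace / cross-product formula).
double signedArea2(double ax, double ay, double bx, double by,
                   double cx, double cy) {
    return (bx - ax) * (cy - ay) - (cx - ax) * (by - ay);
}

// C is inside angle AOB when both triangles O B C and O C A
// are positively oriented, as stated above.
bool insideByOrientation(double ox, double oy, double ax, double ay,
                         double bx, double by, double cx, double cy) {
    return signedArea2(ox, oy, bx, by, cx, cy) > 0
        && signedArea2(ox, oy, cx, cy, ax, ay) > 0;
}
```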