How to calculate minimum distance between point and line in 3D space - geolocation

I need to calculate the minimum distance between a 3D point (latitude, longitude, elevation) and a line (defined by two such points).
The elevation is not necessarily at ground level; I need to consider flying objects.
I only found an article that explains how to do that in a generic Cartesian space, but my points are defined with lat/lon/altitude (meters).
Thank you for pointing me in the right direction. In my case I need to do this in JavaScript, but I couldn't find any library that takes the altitude into consideration.
Point-Line Distance--3-Dimensional
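The formula on that MathWorld page assumes all points live in one Cartesian frame, so the missing step for lat/lon/altitude data is a coordinate conversion. Here is a minimal sketch (in Python rather than JavaScript, with helper names of my own invention) that converts each point to Earth-centered, Earth-fixed (ECEF) coordinates on the WGS84 ellipsoid and then applies the MathWorld cross-product formula:

import math

A = 6378137.0            # WGS84 semi-major axis (meters)
E2 = 6.69437999014e-3    # WGS84 first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, alt_m):
    # Convert latitude/longitude/altitude to Earth-centered XYZ (meters).
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)  # prime vertical radius
    x = (n + alt_m) * math.cos(lat) * math.cos(lon)
    y = (n + alt_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - E2) + alt_m) * math.sin(lat)
    return (x, y, z)

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def norm(u):
    return math.sqrt(u[0] ** 2 + u[1] ** 2 + u[2] ** 2)

def point_line_distance(p0, p1, p2):
    # MathWorld: d = |(x2 - x1) x (x1 - x0)| / |x2 - x1|, everything in ECEF.
    d21 = tuple(b - a for a, b in zip(p1, p2))
    d10 = tuple(b - a for a, b in zip(p0, p1))
    return norm(cross(d21, d10)) / norm(d21)

Note that this treats the line as the infinite straight line through the two ECEF points; for a finite segment you would clamp the projection to the endpoints, and over long distances the straight chord diverges from the great-circle path.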

If you want to compare a 3D point to a 2D line, I suppose you mean a "line" on the Earth's surface, at elevation 0. Take a look at ST_Distance in PostGIS.
If I understand you correctly, that'll give you what you want.
https://postgis.net/docs/ST_Distance.html

Related

what does the "wag" manim mobject method do

I was just looking at the documentation for mobjects and saw a method named wag among the mobject methods listed here. I can't really decipher what it does, and I can't find any other references to it.
Grant has used it twice in his old videos -- and indeed, the method is a bit weird. If I had to write a docstring for it, it would look something like this:
def wag(self, direction=RIGHT, axis=DOWN, wag_factor=1.0):
"""Distorts a mobject along and in specified directions.
Parameters
----------
direction
The direction in which points of the mobject are shifted.
axis
The direction that determines the magnitude of the shift.
The more the points of this mobject lie in this direction,
the more they get shifted. Points on the opposite end of the
direction specified by ``axis`` are not shifted at all, and
those in the direction of ``axis`` are shifted by
the unscaled direction vector. The magnitude for points in
between is linearly interpolated between these two extremes.
wag_factor
The power to which the linearly interpolated shifting
magnitudes are raised.
"""
But honestly, this is a fairly specific and obscure method that should probably just be removed.
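For what it's worth, the behavior that docstring describes comes down to a few lines of NumPy. This is a standalone sketch of the effect, not manim's actual source:

import numpy as np

RIGHT = np.array([1.0, 0.0, 0.0])
DOWN = np.array([0.0, -1.0, 0.0])

def wag_points(points, direction=RIGHT, axis=DOWN, wag_factor=1.0):
    # points: (n, 3) array of a mobject's points.
    # Position of each point along ``axis``...
    alphas = points @ axis
    # ...normalized to [0, 1]: 0 at the end opposite ``axis``, 1 at the ``axis`` end.
    alphas = (alphas - alphas.min()) / (alphas.max() - alphas.min())
    # Raise the linearly interpolated magnitudes to ``wag_factor``.
    alphas = alphas ** wag_factor
    # Shift each point by its magnitude times the unscaled direction vector.
    return points + alphas[:, np.newaxis] * direction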

how to convert projected coordinate system worldfile to geographic coordinate system worldfile?

I have an image with the worldfile like this:
0.298582141739
0.000000000000
0.000000000000
-0.298582141739
1283836.327077804830
6134835.890168172310
I think it is in a projected coordinate system. I want to convert it to a geographic coordinate system, because I want to know the latitude/longitude of each pixel of the image.
How do I convert it? I checked worldfileread in MATLAB; it can read both, but I did not find how to convert.
Does anyone know about that? Thank you.
I'd say you need to know the coordinate system of the map. The world file simply describes a generic affine transformation in the plane, which lets you convert from image pixels to map coordinates and vice versa. Getting from there to geographic coordinates depends on the map projection.
If you don't know that projection, you need to ask the creator of the image or make some wild guesses. If you know what the image depicts, you can rule out some map projections: for example, if the image is of a city in the US, then the UTM zones for France probably won't apply. But chances are that there are a number of rather similar projections and coordinate systems in use for the depicted location, and deciding between these will be tough.
One thing you can already rule out: the world file does not directly map to geographic coordinates measured in degrees (in an equirectangular projection). The upper left corner of the image would have an easting of 1283836.327077804830 and a northing of 6134835.890168172310, which makes no sense as degrees; it could well be meters or something like that. There is another hint: the first and fourth lines have the same absolute value, so your pixels are square in map units. A degree-based grid only looks that way close to the equator, since a degree of longitude spans less ground than a degree of latitude everywhere else. So unless your picture shows something near the equator or looks highly distorted in terms of horizontal vs. vertical extents, I'd say this is not a geographic coordinate system.
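Once you do know (or can guess) the projection, the conversion itself is mechanical: apply the world file's affine transform to go from pixel to map coordinates, then reproject to WGS84. A sketch in Python with pyproj, assuming purely for illustration that the source projection turns out to be EPSG:3857; substitute whatever CRS you actually identify:

from pyproj import Transformer

# World file lines, in order: A, D, B, E, C, F
A, D, B, E, C, F = (0.298582141739, 0.000000000000, 0.000000000000,
                    -0.298582141739, 1283836.327077804830, 6134835.890168172310)

def pixel_to_lonlat(col, row, src_crs="EPSG:3857"):
    # Affine step: pixel (col, row) -> projected map coordinates.
    x = A * col + B * row + C
    y = D * col + E * row + F
    # Projection step: projected coordinates -> WGS84 longitude/latitude.
    to_wgs84 = Transformer.from_crs(src_crs, "EPSG:4326", always_xy=True)
    return to_wgs84.transform(x, y)

print(pixel_to_lonlat(0, 0))  # lon/lat of the upper-left pixel center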

Convert world to object coordinates

The iPhone gyroscope reports rotation data relative to some reference attitude, and that reference doesn't change (unless multiplied). Let's say I face the wall using my iPhone camera and rotate 45 degrees left (roll += PI/4).
Now, if I lift the phone towards the ceiling, both yaw and pitch change, since the coordinate space is fixed (world coordinate space; it doesn't move or rotate with the phone). Is there a way to determine this angle (the one between the floor plane and the camera direction vector), given roll, yaw and pitch?
Edit: Instead of opening another question I'll try here. Luc's solution works, but how do I get the other two angles of rotation? I've read the info on the posted link, but it's been years since I studied linear algebra. This might be more of a math question than a programming one, actually.
I don't really code for iPhone so I'll trust you on the "real world coordinates" frame.
In that case, you want the dot product between the two z-axis vectors. That'll give you the cosine of the angle you're looking for, so you're pretty close. Since an angle between planes only really makes sense as a value between 0° and 90°, you actually have all the information you need in that cosine.
There is no LaTeX formatting here, otherwise I'd go into a bit more detail, but read this page if you're interested. I'll just include the final result, the rotation matrix for your three rotations (yaw α about z, pitch β about y, roll γ about x):

| cos α cos β    cos α sin β sin γ - sin α cos γ    cos α sin β cos γ + sin α sin γ |
| sin α cos β    sin α sin β sin γ + cos α cos γ    sin α sin β cos γ - cos α sin γ |
| -sin β         cos β sin γ                        cos β cos γ                     |

Now the z-axis vector of the horizontal plane is (0,0,1) (read this as a column vector); rotated with this matrix, it is simply the matrix's third column.
So we take the dot product between that third column and our (0,0,1) vector, which gives cos(β)cos(γ), i.e. cos(pitch)*cos(roll).
In conclusion, the angle between your planes is arccos(cos(pitch)*cos(roll)). This value tells you how much your iPhone is inclined, though not in which direction. But you can work that out from the values of the vector (the rightmost column of the matrix) we spoke of.
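In code the inclination itself is a one-liner. A quick sketch (radians throughout; the clamp guards against floating-point error pushing the cosine slightly outside [-1, 1]):

import math

def inclination(pitch, roll):
    # Angle between the device plane and the horizontal plane, in radians.
    c = math.cos(pitch) * math.cos(roll)
    return math.acos(max(-1.0, min(1.0, c)))

print(math.degrees(inclination(math.pi / 4, 0.0)))  # 45.0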

What earth radius should I use to calculate distances near the Poles?

I'm monitoring a GPS unit which is on its way from Cape Discovery in Canada to the North Pole. I need to keep track of the distance travelled and the distance remaining each day, so I'm using the haversine formula, which I'm told is very accurate for smaller distances.
I'm really bad at math, but I have a sneaking suspicion that the accuracy depends greatly on the radius of the Earth, and since the universe decided to make the Earth an oblate spheroid, I have a choice of approximations for the Earth's radius.
Since I'm monitoring coordinates very close to the North Pole, I'm wondering whether anyone knows which radius is going to provide the most accuracy.
Mean equatorial: 6,378.1370 km
Mean polar: 6,356.7523 km
Authalic/volumetric: 6,371 km
Meridional: 6,367 km
Or any other kind of radius that anyone knows about?
I'm hoping some maths or cartography whizz might know the answer to this one.
You could approximate the actual radius at the point(s) where you're measuring the distance (provided that you calculate a sequence of relatively small distances).
Assuming the Earth is an ellipsoid with the major axis a being the mean equatorial radius and the minor axis b being the mean polar radius, you can calculate the point on the ellipse represented by these two axes using the current latitude. The calculation is shown and explained here.
(Note: this ellipse can be thought of as a cross section of the Earth through the poles and the point where you want to calculate the distance.)
This gives you a point q = (qx, qy); the radius at this point is r = sqrt(qx^2 + qy^2). That's what I'd use in the haversine formula.
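A sketch of that idea in Python. The geocentric_radius function is the usual closed form for the radius at a given (geodetic) latitude, which is what the q = (qx, qy) construction produces; the haversine then uses the local radius at each leg's mid-latitude. All angles are in radians:

import math

A = 6378.1370   # mean equatorial radius (km)
B = 6356.7523   # mean polar radius (km)

def geocentric_radius(lat):
    # r = sqrt(qx^2 + qy^2) for the ellipse point at this latitude;
    # equals A at the equator and B at the poles.
    ca, sb = A * math.cos(lat), B * math.sin(lat)
    return math.sqrt(((A * ca) ** 2 + (B * sb) ** 2) / (ca ** 2 + sb ** 2))

def haversine(lat1, lon1, lat2, lon2):
    # Great-circle distance (km) using the local radius at the mid-latitude.
    r = geocentric_radius((lat1 + lat2) / 2.0)
    h = (math.sin((lat2 - lat1) / 2.0) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2.0) ** 2)
    return 2.0 * r * math.asin(math.sqrt(h))

Near the pole this local radius sits within a kilometre or so of the mean polar value, which agrees with the other answer's suggestion to just use the polar radius.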
It doesn't really matter - they are all going to be wrong if you just treat the Earth as a sphere. I would probably use the polar radius, since you are mostly going north.

Given a set of points to define a shape, how can I contract this shape like Photoshop's Selection>Contract

I have a set of points to define a shape. These points are in order and essentially are my "selection".
I want to be able to contract this selection by an arbitrary amount to get a smaller version of my original shape.
In a basic example with a triangle, the points are simply moved along their normals, each normal being defined by the points to the left and right of the point in question.
Eventually all 3 points will meet and form a single point, but until then they will make a smaller and smaller triangle.
For more complex shapes, when moving the individual points inward, they may pass through the outer edge of the shape, resulting in weird artifacts. Obviously I'll need to cull these points and remove them from the array.
Any help in exactly how I can do that would be greatly appreciated.
Thanks!
This is just an idea, but couldn't you find the center of mass of the object, create a vector from the center to each point, and move each point along this vector?
To find the center of mass, average the x and y coordinates of all the points. Getting a vector is as simple as subtracting the center point from the point in question. Normalizing and scaling are common vector operations that can be found with Google.
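A sketch of that idea in Python, moving each point a fixed distance along the vector toward the centroid (it will blow up if a point coincides with the centroid, and as the question notes, it can misbehave on complex shapes):

import math

def shrink_toward_center(points, distance):
    # Move each (x, y) point ``distance`` units toward the centroid.
    cx = sum(x for x, y in points) / len(points)
    cy = sum(y for x, y in points) / len(points)
    result = []
    for x, y in points:
        dx, dy = cx - x, cy - y
        length = math.hypot(dx, dy)
        result.append((x + dx / length * distance,
                       y + dy / length * distance))
    return result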
EDIT
Another way to interpret what you're asking is that you want to erode your collection of points, as in morphological erosion. This is typically applied to binary images, but you can slightly modify the concept to work with a collection of points. Essentially, you need to write a function that, given a point, returns true (black) or false (white) depending on whether that point is inside or outside the shape defined by your points. You'd have to look up how to do that for shapes that aren't convex (it's harder, but not impossible).
Now, obviously, every single one of your actual points will return false because they're all on the border (by definition). However, you now have a matrix of points around your point of interest that defines where is "inside" and where is "outside". Average all of the "inside" points and move your actual point along the vector from itself towards this average. You could play with different erosion kernels to see what works best.
You could even work with a kernel of floating-point weights instead of either/or values, which will affect your average calculation in proportion to the weights. With this, you could approximate a circular kernel with a low number of points. Try the simpler method first.
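The inside/outside function mentioned above is usually implemented with the ray-casting test, which also works for non-convex outlines. A minimal sketch:

def inside(point, poly):
    # Cast a horizontal ray from ``point`` and count edge crossings:
    # an odd count means the point is inside the polygon.
    x, y = point
    result = False
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            result = not result
    return result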
1. Find the selection center (as suggested by colithium).
2. Map the selection points to a coordinate system with the selection center at (0,0). For example, if the selection center is at (150,150) and a given selection point is at (125,75), the mapped position of the point becomes (-25,-75).
3. Scale the mapped points (multiply X and Y by something in the range 0.0..1.0).
4. Remap the points back to the original coordinate system.
Only simple maths required, no need to muck about with normalizing vectors.
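Those four steps fit in a few lines of Python (the helper name is mine):

def contract(points, factor):
    # points: ordered (x, y) outline; factor: 0.0..1.0, smaller = tighter.
    cx = sum(x for x, y in points) / len(points)
    cy = sum(y for x, y in points) / len(points)
    # Map to center-origin coordinates, scale, and remap in one expression.
    return [(cx + (x - cx) * factor, cy + (y - cy) * factor)
            for x, y in points]

smaller = contract([(100, 100), (200, 100), (150, 200)], 0.5)

Note this shrinks proportionally toward the center rather than by a fixed pixel distance, so it only approximates Photoshop's Contract on shapes whose outline is roughly equidistant from the center.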
