Finding the geometric mean of points on a globe - ruby-on-rails

I'm starting a project where I am mapping a set of points on the Earth using Google Maps. I want to find the point on the globe which is the "average" in the sense of having the shortest total distance to all points, but I'm unsure how to handle it considering the distance may be shorter going the other way around the Earth (-178° and 178° longitude are only 4° of longitude apart, not 356°). What is the best way to approach this, either via an API call or from a mathematical perspective?

I highly doubt there is a slick geometric argument giving a closed form expression for the desired point. Nonetheless here's a simple-minded algorithm which gives an answer to within any desired precision:
https://gist.github.com/amitkgupta/5019163
If you want a mathematically more satisfying solution, I recommend asking over at http://math.stackexchange.com, or if they don't avail you, escalate it to http://mathoverflow.net.
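In the same coarse-to-fine spirit as the gist, here's a minimal Python sketch (not the gist itself; the function names are mine): scan a lat/lon grid for the candidate with the smallest total great-circle distance, then repeatedly halve the grid spacing around the best candidate. A sufficiently scattered set of points could still trap it in a local minimum.

import math

def great_circle(lat1, lon1, lat2, lon2):
    # Central angle between two points (haversine); multiply by the
    # Earth's radius if you want a distance in km.
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    a = math.sin((p2 - p1) / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    return 2 * math.asin(math.sqrt(min(1.0, a)))

def total_distance(lat, lon, points):
    return sum(great_circle(lat, lon, plat, plon) for plat, plon in points)

def geometric_median(points, precision=1e-5):
    # Coarse global scan, then shrink the search window around the best cell.
    best = min(((lat, lon) for lat in range(-90, 91, 30)
                for lon in range(-180, 181, 30)),
               key=lambda p: total_distance(p[0], p[1], points))
    step = 30.0
    while step > precision:
        step /= 2
        candidates = [(best[0] + i * step, best[1] + j * step)
                      for i in (-2, -1, 0, 1, 2) for j in (-2, -1, 0, 1, 2)]
        # Clamp latitude and wrap longitude so candidates stay on the globe.
        candidates = [(max(-90.0, min(90.0, lat)), (lon + 180.0) % 360.0 - 180.0)
                      for lat, lon in candidates]
        best = min(candidates, key=lambda p: total_distance(p[0], p[1], points))
    return best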

I can suggest a simple and fast solution (though it doesn't answer the exact initial task). Find the center of gravity of the points; then there are two possible situations:
it's located at the center of the sphere - I don't know what to do in that case (if the initial points are distributed close to each other, this will not happen)
otherwise, consider the vector whose start point is the center of the sphere and whose finish point is the center of mass; find where that vector intersects the surface of the sphere - that point is the answer.
So you'll get a point somewhat like a 'mid-point', but only in cases where the surface under consideration is very small (maybe all the points lie within the same city). It also has nothing to do with minimizing the average distance from the result to the initial points.
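For completeness, that center-of-gravity construction is straightforward to code. A minimal Python sketch (function name is mine): convert each point to a 3D unit vector, average, and project the mean back onto the sphere.

import math

def spherical_centroid(points):
    # points: list of (lat, lon) in degrees.
    x = y = z = 0.0
    for lat, lon in points:
        lat_r, lon_r = math.radians(lat), math.radians(lon)
        x += math.cos(lat_r) * math.cos(lon_r)
        y += math.cos(lat_r) * math.sin(lon_r)
        z += math.sin(lat_r)
    n = len(points)
    x, y, z = x / n, y / n, z / n
    norm = math.sqrt(x * x + y * y + z * z)
    if norm < 1e-9:
        # Mean vector sits at the sphere's center: the degenerate case above.
        raise ValueError("points balance out; surface centroid is undefined")
    return math.degrees(math.asin(z / norm)), math.degrees(math.atan2(y, x))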

Related

How to find hinge point or axis of rotation point from top view using image processing?

I have a problem at hand where I need to detect/predict the coordinates of the hinge point or axis of rotation point using image processing. The image is as shown below:
I've used a method where I started by tracking the circular movement (in an arc) of a few feature points in an RoI around the default hinge coordinates (entered manually in a configuration file). This circular motion of the tracked points happens around the vertical axis which passes through the hinge point. I tracked these points from their initial position until the connecting bar made a particular angle (15°/20°) with the y-axis, drew secants between the start and end positions of each point, and drew their perpendicular bisectors, which should ideally pass through the centre of the (concentric) circles - that centre being the ideal hinge point.
E.g., y-intercepts calculated for each point:
H0 (322, 42)
H1 (322, 64) (within tolerance, closest to GT)
H2 (322, 48)
H_avg (322, 52)
H_groundtruth (x, y): (322, 61)
We need an accuracy or tolerance of +/- 3 pixels.
Now, the issue we faced in moving from this ideal scenario to a practical working version is:
Different tracked points give different potential hinge points (different dots on the vertical yellow line), a few of which are very close to the ground truth (yellow circle), but their weighted average (big green circle) goes off the mark. Quite frankly, this is a problem of plenty: one of the candidates is potentially the closest to the ground truth, but we can't tell which one, since we're not allowed to use the default hinge coordinates (entered manually) from the config file.
One solution could be to use frameworks already implemented for image registration, such as elastix. If you configure it for a rigid registration, you can get the transformation matrix and therefore the center of rotation.
The problem here is that only one part of your image is moving. Before doing the registration, I would simply mask the region of interest by calculating a mask from the subtraction of the two images, to keep only the part where something actually moved.
Such an approach could get sub-pixel accuracy. You could also repeat it for multiple angles and average the results. Alternatively to averaging, you could use the RANSAC algorithm to find which hinge points are off (outliers) and exclude them.
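The masking step described above might look like this (a minimal OpenCV sketch; the file names and the threshold value are placeholders I made up):

import cv2
import numpy as np

# Two frames of the mechanism at different angles (hypothetical file names).
before = cv2.imread("frame_a.png", cv2.IMREAD_GRAYSCALE)
after = cv2.imread("frame_b.png", cv2.IMREAD_GRAYSCALE)

# Keep only the region where something actually moved.
diff = cv2.absdiff(before, after)
_, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
mask = cv2.dilate(mask, np.ones((15, 15), np.uint8))  # close gaps in the mask
moving_part = cv2.bitwise_and(after, after, mask=mask)
cv2.imwrite("mask.png", mask)  # feed this mask to the registration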
Here is an example how to do a simple rigid transformation with elastix.
I hope this helps!
I intended this as only a comment, but it ended up significantly over the character limit:
The problem from an accuracy perspective (sorry, couldn't resist) seems to be that you're trying to use a planar Euclidean geometry technique to solve a projective geometry problem.
Those feature tracks are only circular arcs in 3D world space. They're actually (noisy) elliptical arcs in 2D image pixel space due to the projection.
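To make that concrete: with at least five tracked positions of a single feature, you could fit an ellipse instead of a circle and take its centre as the rotation-centre estimate. A minimal OpenCV sketch with made-up coordinates:

import cv2
import numpy as np

# Hypothetical pixel positions of one tracked feature across frames.
track = np.array([[310, 120], [318, 112], [327, 107], [337, 105], [347, 106]],
                 dtype=np.float32)

# fitEllipse needs at least 5 points; it returns centre, axes and rotation.
(cx, cy), (major, minor), angle = cv2.fitEllipse(track)
print(f"estimated rotation centre in pixel space: ({cx:.1f}, {cy:.1f})")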
Your hinge rotation axis isn't a single pixel either, unless your camera's optical axis is directly aligned with the hinge axis. If that's not the case (as the perspective in the photo you added suggests), then your hinge axis is actually a line in pixel space, not a point, and different heights for the different tracks in model space will be 'centered' around different pixels on that line. So asking for +/- 3 pixel hinge 'point' accuracy is unclear, and so is measuring angles in pixel space in general in a way that doesn't account for perspective.
I only mention these details because you seem focused on measuring accurately. Often, those kinds of 2D approximations are fine for many applications, but high accuracy and precision from a single camera (if that's really what you need) requires better 3D scene understanding. (Or you could train a deep network with a bunch of labeled ground truth images and let it figure out the mappings.)
Now maybe you don't need such high accuracy for your application after all. In that case, simple affine geometry techniques like the one mentioned in the other answer might work well enough.

Calculate angle between three GPS coordinates

Perhaps this is a simple question.
I have 3 GPS coordinates (one is the current user location). What I want now is to calculate the angle between the user location and the two GPS coordinates. Imagine the user location in the center between the two other points; the three points can be seen as a triangle. And I want to calculate the angle at the user location.
I hope someone can help me because I have no idea how to do this with spherical coordinates like the GPS coordinates I have.
THX - nekro
For short distances (less than 100 km, say) you can safely ignore the spherical nature of the calculation and treat the problem as a 2D Cartesian coordinate problem. For large distances the spherical geometry gets pretty gnarly. I could probably figure it out, but I don't want to think that hard right now.
Edit:
All you need to do is convert both coordinates to km and then treat it as a Cartesian problem. (At a small scale, you can ignore the curved nature of the "lines" and treat them as normal Cartesian grid lines, since the curvature is small enough to ignore at that scale.)
The distance per degree of latitude is constant. The distance for a degree of longitude changes based on latitude.
Do a Google search on "km per degree of longitude" and find a link that explains it clearly. Here's one: http://www.colorado.edu/geography/gcraft/warmup/aquifer/html/distance.html
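Putting that together, a minimal Python sketch of the flat-Earth approximation (the 111.32 km/degree figure is the usual approximation; the function name is mine):

import math

def angle_at_user(user, p1, p2):
    # All inputs are (lat, lon) in degrees; valid only for short distances.
    def to_xy(p):
        # Scale longitude degrees by cos(latitude) so both axes are in km.
        dx = (p[1] - user[1]) * 111.32 * math.cos(math.radians(user[0]))
        dy = (p[0] - user[0]) * 111.32
        return dx, dy
    x1, y1 = to_xy(p1)
    x2, y2 = to_xy(p2)
    # Angle between the two direction vectors at the user's location.
    angle = abs(math.atan2(y1, x1) - math.atan2(y2, x2)) % (2 * math.pi)
    return math.degrees(min(angle, 2 * math.pi - angle))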
You could use Thiessen polygons and calculate the geometry on those from a strictly GIS perspective. If you have QGIS or ArcGIS this should be fairly simple. These packages offer APIs which might suit your needs.
You're essentially doing two calculations (the bearing from your current position to each of the two other positions), not a cross-track calculation (the distance from a great-circle line between two other points).
However, both can be found in Ed Williams' Aviation Formulary, which has the most comprehensive collection of formulas for spherical calculations I've found.
You would be looking for "Course between points" which is listed as:
tc1 = mod(atan2(sin(lon1-lon2)*cos(lat2),
                cos(lat1)*sin(lat2) - sin(lat1)*cos(lat2)*cos(lon1-lon2)),
          2*pi)
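A direct Python transcription of that formula (note the formulary measures west longitude as positive, so negate east-positive longitudes before calling; the example coordinates below are made up):

import math

def course_between(lat1, lon1, lat2, lon2):
    # Initial great-circle course from point 1 to point 2, in degrees.
    lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
    tc = math.atan2(math.sin(lon1 - lon2) * math.cos(lat2),
                    math.cos(lat1) * math.sin(lat2)
                    - math.sin(lat1) * math.cos(lat2) * math.cos(lon1 - lon2))
    return math.degrees(tc % (2 * math.pi))

# The angle at the user is the difference between the two courses.
b1 = course_between(48.0, 11.0, 49.0, 12.0)
b2 = course_between(48.0, 11.0, 47.5, 10.0)
angle = abs(b1 - b2) % 360
angle = min(angle, 360 - angle)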

Perspective Compensation when Measuring Distances on an Image with a known reference distance

I am trying to calculate the real world distance of an arbitrary line drawn along the field of view from a one point perspective, single camera setup.
I will have a known distance running parallel. How can I find the compensation factor I need to apply to the pixel length of the measuring line?
Do I have to take into account the distance from the vanishing point, as the length per pixel increases the nearer you get to the vanishing point? Do I need to use the gradient of the known line to give me a rate of change?
A good study of this and similar problems can be found in Antonio Criminisi's papers and Ph.D. thesis on single-view metrology; his single-view metrology paper is a good place to start.
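To give a flavour of the technique: if the measuring line passes through the vanishing point, a single known segment on that line pins down the 1D projective map from pixel position to real-world distance. A sketch under those assumptions (all numbers are hypothetical):

def fit_line_map(v, t_a, t_b, L):
    # Map pixel position t along the line to real distance s, using
    #   t = v + k / (s - s0)  <=>  s = s0 + k / (t - v),
    # which sends s -> infinity to the vanishing point v. The reference
    # segment (t_a at s = 0, t_b at s = L) determines s0 and k.
    s0 = L * (t_b - v) / (t_b - t_a)
    k = -s0 * (t_a - v)
    return lambda t: s0 + k / (t - v)

v = 640.0                # vanishing point's pixel coordinate along the line
t_a, t_b = 100.0, 250.0  # endpoints of the known reference segment
L = 5.0                  # its real length, say in metres
to_world = fit_line_map(v, t_a, t_b, L)
t_c, t_d = 300.0, 420.0  # endpoints of the segment to measure
print(abs(to_world(t_d) - to_world(t_c)))

Consistent with your intuition, equal pixel lengths map to longer real lengths the closer they sit to the vanishing point, and this map captures that rate of change.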

area calculation using lat/long in UIMapview

I am trying to find the area of an MKPolygonView object added to a MapView. Apple's documentation has the method distanceFromLocation: to find the distance between the edges of an MKPolygonView object, but I could not find anything to calculate the area of the overlay.
Does Apple have any documented method for finding area?
Concerning the comments on the question post, the Earth is not a perfect sphere either. In fact, it's not a perfect anything, so "correct" answers aren't possible. What matters is how accurate of an approximation you need. Also, are you interested in a mean sea level type measurement, or do you want the actual contours of the ground (for example if your polygon is put over a mountain, then the same exact size polygon is put over some plains, should the result you calculate be the same or different)?
Depending on how big your polygon is, and which measurement you're looking for, a 2D approximation can be pretty accurate (the smaller the polygon, the closer you'll get). Something to keep in mind, if you want your area in something like square feet, the distance between two longitudinal lines is not constant (63 deg west and 62 deg west are closer (in feet) somewhere in Alaska than they are at the equator). You might have to do a unit conversion to handle this depending on how big your polygon is (or if your polygon could be placed anywhere). If you can't do the 2D approximation, I'm not even sure how you'd do that.
When I did this, I did the 2D approx, and I had to do the unit conversion. If that's the way you go, I can try to dig up some of my old notes and the links I used to get you started.
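If it helps, here's roughly what that 2D approximation with the unit conversion looks like in Python (my own sketch, not the notes mentioned above): project the vertices with an equirectangular projection centred on the polygon's mean latitude, then apply the shoelace formula.

import math

EARTH_RADIUS_M = 6371000.0

def polygon_area_m2(coords):
    # coords: list of (lat, lon) vertices in degrees, in order, not closed.
    mean_lat = math.radians(sum(lat for lat, _ in coords) / len(coords))
    pts = [(math.radians(lon) * EARTH_RADIUS_M * math.cos(mean_lat),
            math.radians(lat) * EARTH_RADIUS_M) for lat, lon in coords]
    # Shoelace formula on the projected vertices.
    area = 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0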

How to use the A* path finding algorithm on a gridless 2D plane?

How can I implement the A* algorithm on a gridless 2D plane with no nodes or cells? I need the object to maneuver around a relatively high number of static and moving obstacles in the way of the goal.
My current implementation creates eight points around the object and treats them as the centers of imaginary adjacent squares that might be potential positions for the object. Then I calculate the heuristic function for each and select the best. I calculate the distances between the starting point and the movement point, and between the movement point and the goal, the normal way with the Pythagorean theorem. The problem is that this way the object often ignores all obstacles, and even more often gets stuck moving back and forth between two positions.
I realize how silly my question might seem, but any help is appreciated.
Create an imaginary grid at whatever resolution is suitable for your problem: as coarse-grained as possible for good performance, but fine-grained enough to find (desirable) gaps between obstacles. Your grid might relate to a quadtree with your obstacle objects as well.
Execute A* over the grid. The grid may even be pre-populated with useful information like proximity to static obstacles. Once you have a path along the grid squares, post-process that path into a sequence of waypoints wherever there's an inflection in the path. Then travel along the lines between the waypoints.
By the way, you do not need the actual distance (cf. your mention of the Pythagorean theorem): A* works fine with an estimate of the distance. Manhattan distance is a popular choice: |dx| + |dy|. If your grid allows diagonal movement (or the grid is "fake"), simply max(|dx|, |dy|) is probably sufficient.
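A minimal Python sketch of that idea (the blocked() callback and the grid step are assumptions of mine; the heuristic is the max(|dx|, |dy|) one mentioned above):

import heapq

def a_star(start, goal, blocked, step=1.0):
    # A* over an implicit 8-connected grid; start and goal are (x, y) tuples
    # assumed to be aligned to multiples of `step`.
    def h(p):
        return max(abs(p[0] - goal[0]), abs(p[1] - goal[1]))
    def neighbours(p):
        for dx in (-step, 0, step):
            for dy in (-step, 0, step):
                if dx or dy:
                    yield (p[0] + dx, p[1] + dy)
    open_heap = [(h(start), 0.0, start)]
    came_from, g = {}, {start: 0.0}
    while open_heap:
        _, cost, current = heapq.heappop(open_heap)
        if current == goal:
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return path[::-1]  # post-process into waypoints at inflections
        for n in neighbours(current):
            d = ((n[0] - current[0]) ** 2 + (n[1] - current[1]) ** 2) ** 0.5
            if not blocked(n) and cost + d < g.get(n, float("inf")):
                g[n] = cost + d
                came_from[n] = current
                heapq.heappush(open_heap, (g[n] + h(n), g[n], n))
    return None  # no path found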
Uh. The first thing that comes to my mind is that at each point you need to calculate the gradient, or a vector, to find the direction to go in the next step. Then you move by a small epsilon and redo.
This basically creates a grid for you; you could vary the cell size by choosing a small epsilon. By doing this instead of using a fixed grid, you should be able to take steps at finer angles -- smaller than the 45° from your 8-point example.
Theoretically you might be able to solve the formulas symbolically (eps against 0), which could lead to an optimal solution... just a thought.
How are the obstacles represented? Are they polygons? You can then use the polygon vertices as nodes. If the obstacles are not represented as polygons, you could generate some sort of convex hull around them and use its vertices for navigation. EDIT: I just realized you mentioned that you have to navigate around a relatively high number of obstacles. Using the obstacle vertices might be infeasible with too many obstacles.
I do not know about moving obstacles; I believe A* doesn't find an optimal path with moving obstacles.
You mention that your object moves back and forth - A* should not do this. A* visits each movement point only once. This could be an artifact of generating movement points on the fly, or of the moving obstacles.
I remember encountering this problem in college, but we didn't use an A* search. I can't remember the exact details of the math but I can give you the basic idea. Maybe someone else can be more detailed.
We're going to create a potential field out of your playing area that an object can follow.
Take your playing field and tilt or warp it so that the start point is at the highest point, and the goal is at the lowest point.
Poke a potential well down into the goal, to reinforce that it's a destination.
For every obstacle, create a potential hill. For non-point obstacles, which yours are, the potential field can increase asymptotically at the edges of the obstacle.
Now imagine your object as a marble. If you placed it at the starting point, it should roll down the playing field, around obstacles, and fall into the goal.
The hard part, the math I don't remember, is the equations that represent each of these bumps and wells. If you figure that out, add them together to get your final field, then do some vector calculus to find the gradient (just like towi said) and that's the direction you want to go at any step. Hopefully this method is fast enough that you can recalculate it at every step, since your obstacles move.
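Here's a rough Python sketch of that idea (the field shapes are my guesses - an attractive slope toward the goal and 1/distance hills at point obstacles - not the equations from that college course):

import math

def potential(p, goal, obstacles, k_goal=1.0, k_obs=100.0):
    # Attractive slope toward the goal plus a 1/distance hill per obstacle.
    value = k_goal * math.hypot(p[0] - goal[0], p[1] - goal[1])
    for ox, oy in obstacles:
        value += k_obs / max(math.hypot(p[0] - ox, p[1] - oy), 1e-6)
    return value

def descend(start, goal, obstacles, step=0.5, eps=0.01, max_iters=10000):
    # Roll the "marble" downhill along the numerical negative gradient.
    p = start
    for _ in range(max_iters):
        if math.hypot(p[0] - goal[0], p[1] - goal[1]) < step:
            return p  # reached the goal well
        dx = (potential((p[0] + eps, p[1]), goal, obstacles)
              - potential((p[0] - eps, p[1]), goal, obstacles)) / (2 * eps)
        dy = (potential((p[0], p[1] + eps), goal, obstacles)
              - potential((p[0], p[1] - eps), goal, obstacles)) / (2 * eps)
        norm = math.hypot(dx, dy) or 1.0
        p = (p[0] - step * dx / norm, p[1] - step * dy / norm)
    return p  # may be stuck in a local minimum between obstacles

Since the obstacles move, you would rebuild the obstacle list and recompute the gradient at every step, which this formulation allows.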
Sounds like you're implementing the Wumpus game based on Russell and Norvig's discussion of A* in Artificial Intelligence: A Modern Approach, or something very similar.
If so, you'll probably need to incorporate obstacle detection as part of your heuristic function (hence you'll need to have sensors that alert your agent to the signs of obstacles, as seen here).
To solve the back-and-forth issue, you may need to store the traveled path so you can tell if you've already been to a location, and have the heuristic function examine the past N moves (say 4) and use that as a tie-breaker (i.e. if I can go north and east from here, and my last 4 moves have been east, west, east, west, go north this time).
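That tie-breaker memory could be as simple as this (a sketch; the class and method names are mine):

from collections import deque

class MoveMemory:
    # Remember the last n positions and penalise revisiting them, so equal
    # heuristic scores break toward unexplored directions.
    def __init__(self, n=4):
        self.recent = deque(maxlen=n)
    def penalty(self, position):
        return 1.0 if position in self.recent else 0.0
    def record(self, position):
        self.recent.append(position)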
