Check if user is near route checkpoint with GPS - geolocation

Here's the situation:
I have a predetermined GPS route that the user will run. The route has some checkpoints, and the user should pass near all of them (think of them as racing-game checkpoints that prevent the user from taking shortcuts). I need to ensure that the user passes through all the checkpoints.
I want to determine an area that will be considered inside a checkpoint's radius, but I don't want it to be just a radial area; it should take the shape of the path into account.
Didn't understand it? Neither did I. Look at this poorly drawn image to understand it better:
The black lines represent the predetermined path, the blue ball is the checkpoint, and the blue polygon is the wanted area. The green line is a more precise user, and the red line is a less accurate user (a drunk guy driving, maybe? lol). Both lines should stay inside the polygon, but a user who skips the route entirely shouldn't.
I already saw a function here to check if the user is inside a polygon like this, but I need to know how to calculate the polygon.
Any suggestions?
EDIT:
I'm considering using the simple distanceTo() function to just draw an imaginary circle and check if the user is inside it. That's good because it is so much simpler to implement and understand, and bad because, to make sure even the most erratic user passes within the checkpoint, I would need a big radius, which makes an accurate user enter the checkpoint area sooner than expected.
And just so you guys understand the situation better: this is for an app that is meant to be used in traffic (car or bus), and the checkpoints should be landmarks or spots that divide your route, for example somewhere a traffic jam starts or stops.

You could just check the distance between the two, assuming you know the GeoLocation of the checkpoint.
Use the distanceTo function and set up a threshold of however many meters the user needs to be from the checkpoint to continue on.
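For instance, a minimal sketch using Android's Location.distanceTo (assuming the checkpoint is already available as a Location; RADIUS_METERS is whatever threshold you choose):
import android.location.Location;

public class CheckpointCheck {
    // Tune this to your GPS accuracy and use case.
    private static final float RADIUS_METERS = 100f;

    /** True when the user's current fix is within the checkpoint radius. */
    public static boolean isAtCheckpoint(Location user, Location checkpoint) {
        return user.distanceTo(checkpoint) <= RADIUS_METERS;
    }
}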
Edit
Since you want to avoid distanceTo, here is a small function I wrote a while back to check if a point is in a polygon:
public boolean PIP(Point point, List<Point> polygon) {
    boolean nodePolarity = false;
    int sides = polygon.size();
    int j = sides - 1;
    for (int i = 0; i < sides; i++) {
        // Does the edge (i, j) straddle the horizontal ray through the point?
        if ((polygon.get(i).y < point.y && polygon.get(j).y >= point.y)
                || (polygon.get(j).y < point.y && polygon.get(i).y >= point.y)) {
            // X coordinate where the edge crosses the ray (double math avoids integer truncation).
            double crossX = polygon.get(i).x
                    + (double) (point.y - polygon.get(i).y)
                    / (polygon.get(j).y - polygon.get(i).y)
                    * (polygon.get(j).x - polygon.get(i).x);
            if (crossX < point.x) {
                nodePolarity = !nodePolarity;
            }
        }
        j = i;
    }
    return nodePolarity; // false = outside, true = inside
}
List<Point> polygon is a list of the points that make up a polygon.
This uses the Ray casting algorithm to determine how many intersections a ray makes through the polygon.
All you would need to do is create the 'boundary' around the area you need with GeoPoints being translated into pixels using the toPixels method.
Store those points into a List<> of points, and you should be all set.
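For example, a minimal usage sketch (the pixel coordinates are made up; in practice they would come from projecting your GeoPoints with toPixels):
import android.graphics.Point;
import java.util.Arrays;
import java.util.List;

// Assumes the PIP(...) method above is in scope.
void checkUser(Point projectedUser) {
    // Corners of the checkpoint area, already projected to pixel coordinates.
    List<Point> checkpointArea = Arrays.asList(
            new Point(10, 10),
            new Point(60, 10),
            new Point(60, 40),
            new Point(10, 40));

    boolean inside = PIP(projectedUser, checkpointArea);
    // inside is true when the user's projected position falls within the polygon
}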

Check out a few algorithms for doing this at the link below:
http://geospatialpython.com/2011/01/point-in-polygon.html

I know this is an old question, but maybe it would be useful for someone.
This is a simpler method with much less computation needed. It does not trigger the first time the user comes inside the threshold area; it only registers the closest point at which the user has passed near the checkpoint AND has come close enough.
The idea is to maintain a three-item list of distances for every checkpoint, holding the last three distances (so it would be [d(t), d(t-1), d(t-2)]). This list should be rotated on every distance calculation.
If on any distance calculation the previous distance d(t-1) is smaller than both the current one d(t) and the preceding one d(t-2) (i.e. it is a local minimum), then the moving point has passed the checkpoint. Whether this was a real pass or only a glitch can be decided by checking the actual distance d(t-1).
import java.util.Collections;
import java.util.Comparator;
import java.util.LinkedHashMap;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
import java.util.Map.Entry;

private long DISTANCE_THRESHOLD = 2000;

private Checkpoint calculateCheckpoint(Map<Checkpoint, List<Double>> checkpointDistances)
{
    Map<Checkpoint, Double> candidates = new LinkedHashMap<Checkpoint, Double>();
    for (Checkpoint checkpoint : checkpointDistances.keySet())
    {
        List<Double> distances = checkpointDistances.get(checkpoint);
        if (distances == null || distances.size() < 3)
            continue;
        // d(t-1) is a local minimum below the threshold => the checkpoint was passed
        if (distances.get(0) > distances.get(1) && distances.get(1) < distances.get(2)
                && distances.get(1) < DISTANCE_THRESHOLD) // TODO: make this depend on current speed
            candidates.put(checkpoint, distances.get(1));
    }
    List<Entry<Checkpoint, Double>> list = new LinkedList<Entry<Checkpoint, Double>>(candidates.entrySet());
    Collections.sort(list, comp);
    if (list.size() > 0)
        return list.get(0).getKey();
    else
        return null;
}

Comparator<Entry<Checkpoint, Double>> comp = new Comparator<Entry<Checkpoint, Double>>()
{
    @Override
    public int compare(Entry<Checkpoint, Double> o1, Entry<Checkpoint, Double> o2)
    {
        return o1.getValue().compareTo(o2.getValue());
    }
};
The function gets one parameter - a Map<Checkpoint, List<Double>> with the checkpoints and the list of the last three distances. It outputs the closest Checkpoint passed or null (if there were none).
The DISTANCE_THRESHOLD should be chosen wisely.
The Comparator is just to be able to sort the Checkpoints based on their distance to the user to get the closest one.
Naturally this has some minor flaws: e.g. if the moving point is moving criss-cross, or if the error from GPS precision is comparable to the actual speed of the user, then this would give multiple pass marks, but that would trip up almost any algorithm.
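For completeness, here is a rough sketch of the caller side that keeps those three-item distance lists up to date on every GPS fix (route, Checkpoint#getLocation() and onLocationUpdate are made-up names for illustration; Checkpoint must also be usable as a map key):
import android.location.Location;
import java.util.HashMap;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;

private final Map<Checkpoint, List<Double>> checkpointDistances =
        new HashMap<Checkpoint, List<Double>>();

private void onLocationUpdate(Location userLocation) {
    for (Checkpoint checkpoint : route.getCheckpoints()) {
        List<Double> distances = checkpointDistances.get(checkpoint);
        if (distances == null) {
            distances = new LinkedList<Double>();
            checkpointDistances.put(checkpoint, distances);
        }
        // Rotate: newest distance first, so the list reads [d(t), d(t-1), d(t-2)].
        distances.add(0, (double) userLocation.distanceTo(checkpoint.getLocation()));
        if (distances.size() > 3) {
            distances.remove(3); // forget anything older than d(t-2)
        }
    }
    Checkpoint passed = calculateCheckpoint(checkpointDistances);
    if (passed != null) {
        // the user has just passed this checkpoint
    }
}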

Related

Events changing visual geometries

I'm trying to visualize collisions and other events, and am searching for the best way to update color or other visual-element properties after registration with RegisterVisualGeometry.
I've found the GeometryInstance class, which seems like a promising point for changing mutable illustration properties, but I have yet to find an example where an instance is retrieved from the plant (from a GeometryId, via something like GetVisualGeometriesForBody?) and its properties are changed.
As a basic example, I want to change the color of a box's visual geometry when two seconds have passed. I register the geometry pre-finalize with
// box         : Body added to plant
// X_WA        : identity transform
// FLAGS_box_l : box side length
geometry::GeometryId box_visual_id = plant.RegisterVisualGeometry(
    box, X_WA,
    geometry::Box(FLAGS_box_l, FLAGS_box_l, FLAGS_box_l),
    "BoxVisualGeometry",
    Eigen::Vector4d(0.7, 0.5, 0, 1));
Then, I have a while loop to create a timed event at two seconds where I would like the box to change its color.
double current_time = 0.0;
const double time_delta = 0.008;
bool changed(false);
while (current_time < FLAGS_duration) {
  if (current_time > 2.0 && !changed) {
    std::cout << "Change color for id " << box_visual_id.get_value() << "\n";
    // Change color of box using its GeometryId
    changed = true;
  }
  simulator.StepTo(current_time + time_delta);
  current_time = simulator_context.get_time();
}
Eventually I'd like to call something like this with a more specific trigger like proximity to another object, or velocity, but for now I'm not sure how I would register a simple visual geometry change.
Thanks for the details. This is sufficient for me to provide a meaningful answer of the current state of affairs as well as the future (both near- and far-term plans).
Taking your question as a representative example, changing a visual geometry's color can mean one of two things:
1. The color of the object changes in an "attached" visualizer (drake_visualizer being the prime example).
2. The color of the object changes in a simulated RGB camera (what is currently dev::RgbdCamera, but imminently RgbdSensor).
Depending on what other properties you might want to change mid simulation, there might be additional subtleties/nuances. But using the springboard above, here are the details:
A. Up until recently (drake PR 11796), changing properties after registration wasn't possible at all.
B. PR 11796 was the first step in enabling that. However, it only enables it for changing ProximityProperties. (ProximityProperties are associated with the role geometry plays in proximity queries -- contact, signed distance, etc.)
C. Changing PerceptionProperties is a TODO in that PR and will follow in the next few months (single-digit months, unless a more pressing need arises to bump it up in priority). (PerceptionProperties are associated with the properties geometry has in simulated sensors -- how they appear, etc.)
D. Changing IllustrationProperties is not supported and it is not clear what the best/right way to do so may be. (IllustrationProperties are what get fed to an external visualizer like drake_visualizer.) This is the trickiest, due to the way the LCM communication is currently articulated.
So, when we compare possible implications of changing an object's color (1 or 2, above) with the state of the art and near-term art (C & D, above), we draw the following conclusions:
In the near future, you should be able to change it in a synthesized RGB image.
No real plan for changing it in an external visualizer.
(Sorry, it seems the answer is more along the lines of "oops...you can't do that".)

Get distance from user's location to nearest point on GMSPath

I am working with the Google Maps API and have drawn GMSPolylines on the map. I know which "node" (lat and long position for a turning point) on the map is closest to the user's location, and I know the next upcoming nodes. Given that information, how could one obtain the distance from the user's current location to the nearest point on the closest path? In the diagram below, how could we get x when we know the three GPS coordinates?
Try something like this (Swift 4.0):
func isApproaching(_ to: CLLocation, from: CLLocation? = nil, byDistanceInMeters: Double) -> Bool {
    guard let baseLocation = from ?? ThisIsMyDefaultUserLocationVariable.location else { return false }
    let delta = to.distance(from: baseLocation) as Double
    if delta < byDistanceInMeters { return true }
    return false
}
and this is how to call it:
if isApproaching(lastLocation, byDistanceInMeters: 50.0) {
    // this is where you are within the 50 m perimeter
} else {
    // here you are outside
}
Looking at this as a geometry problem instead of a programming problem might make this a little easier. There probably exists a library or API that does this with little more than a couple lines of code, but this approach should still yield a result with little to no overhead.
Disclaimer: This approach only works with straight lines.
You have two points that are on your path and one point that is not on the path. Using some basic algebra, you can take the slope of the path segment, form the perpendicular line (inverted slope) through the user's location, and then find the intersection of that perpendicular with the path; the segment between the user's location and that intersection is the shortest line between the user and the path.
One thing to note: the larger the distance between nodes, the less accurate the Euclidean distance will be. This should be negligible for nodes closer than ~100 miles.
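If it helps, here is a rough Java sketch of that idea in plain planar coordinates (project your lat/lng values to a local x/y in meters first; the class and method names are made up for illustration):
public final class SegmentDistance {

    /**
     * Shortest distance from point p to the segment (a, b); all coordinates
     * must be in the same planar system (e.g. lat/lng projected to meters).
     */
    public static double distanceToSegment(double px, double py,
                                           double ax, double ay,
                                           double bx, double by) {
        double dx = bx - ax;
        double dy = by - ay;
        double lengthSquared = dx * dx + dy * dy;
        if (lengthSquared == 0) {
            return Math.hypot(px - ax, py - ay); // a and b are the same point
        }
        // Projection of p onto the line a->b, clamped so it stays on the segment.
        double t = ((px - ax) * dx + (py - ay) * dy) / lengthSquared;
        t = Math.max(0, Math.min(1, t));
        double closestX = ax + t * dx;
        double closestY = ay + t * dy;
        return Math.hypot(px - closestX, py - closestY);
    }
}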

XNA Farseer. Raycast is going through shapes

I'm making a game in XNA.
I'm doing a raycast from the enemy to the player to determine if the enemy can see the player.
Here's the code:
private float RayCallBack(Fixture fixture, Vector2 point, Vector2 normal, float fraction)
{
    rayhit = fixture.Body.UserData.ToString();
    if (fixture.Body.UserData == "player")
    {
        //AIawake = true;
    }
    return 0f;
}
_world.RayCast(RayCallBack, _blocklist[0]._floor.Position, ConvertUnits.ToSimUnits(playerpos));
My problem is that in the situation in the picture where I have procedurally generated caves made out of blocks the rays seem to go through the blocks so the enemy can see through the walls.
--
UPDATE
OK, the following code works! But... I have no idea why!! :/
private float RayCallBack(Fixture fixture, Vector2 point, Vector2 normal, float fraction)
{
    rayhit = fixture.Body.UserData.ToString();
    if (fixture.Body.UserData == "player")
    {
        return fraction;
    }
    else
    {
        return 0f;
    }
}
and then, in a separate update method in this class, I have the code to awaken the enemy:
if (rayhit == "player") AIawake = true;
I obviously do not understand how the raycast and the callback work. If someone could explain why this method works, that'd be great. I am planning on doing a lot more raycasting to stop the enemies crashing into stuff and so on.
There are 4 scenarios. If you:
return -1 in your callback, then the fixture is going to be ignored and the ray will continue
return 0 the ray will terminate
return fraction then the ray will stop at this point
return 1 then the ray will continue until it ends
The first scenario is used when you want to ignore certain fixtures.
Second scenario is used when you want to know if your ray hit anything (not necessarily the closest object).
Third scenario is used to find the nearest object.
And the fourth scenario is used when you wish to know about all objects in the ray's path.
In general, your callback logic would have a part that checks for fixtures that should be ignored (returning -1 for them), followed by a code block that returns either 0, fraction, or 1.
The difference between the second and third scenarios is possibly the hardest to understand. The way I understand it is that if you return 0, none of the other potential hits are going to be evaluated. But if you return a fraction (which clips the ray), other potential hits within the new (clipped) ray length are still evaluated; in other words, your callback will be executed for all of them, and the ray might be clipped even further, which eventually yields the nearest object.
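To make that clipping behaviour concrete, here is a rough model of the bookkeeping written in Java (the Hit type and the loop are made up for illustration; this is not Farseer's actual API):
import java.util.Arrays;
import java.util.List;

class Hit {
    final String userData; // e.g. "player" or "block"
    final float fraction;  // 0..1 along the full ray
    Hit(String userData, float fraction) { this.userData = userData; this.fraction = fraction; }
}

public class RayClipDemo {
    public static void main(String[] args) {
        // Hits are reported in no particular order, just like in Box2D/Farseer.
        List<Hit> reported = Arrays.asList(new Hit("player", 0.9f), new Hit("block", 0.4f));

        float maxFraction = 1f; // current (possibly clipped) ray length
        Hit nearest = null;

        for (Hit hit : reported) {
            if (hit.fraction > maxFraction) continue; // beyond the clipped ray: not evaluated
            // "return fraction" from the callback means: clip the ray here and keep going.
            maxFraction = hit.fraction;
            nearest = hit;
        }
        // Whatever order the hits arrive in, the last accepted one is the nearest.
        System.out.println("nearest = " + (nearest == null ? "none" : nearest.userData));
    }
}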
See the Box2D documentation on ray casting.
One thing that documentation does say is that the order of evaluation is not guaranteed. In your case it might be that the player was being evaluated before the wall blocks.
tl;dr: return fraction, as you did in your second code sample.

Mahout Recommender: What relative preference values are suitable for a GenericUserBasedRecommender?

In Mahout, I'm setting up a GenericUserBasedRecommender; pretty straightforward for now, typical settings.
In generating a "preference" value for an item, we have the following 5 data points:
Positive interest
User converted on item (highest possible sign of interest)
Normal like (user expressed interest, e.g. like buttons)
Indirect expression of interest (clicks, cursor movements, measuring "eyeballs")
Negative interest
Indifference (items the user ignored when active on other items, a vague expression of disinterest)
Active dislike (thumbs down, remove item from my view, etc)
Over what range should I express these different attributes? Let's use a 1-100 scale for discussion.
Should I be keeping the 'Active dislike' and 'Indifference' clustered close together, for example at 1 and 5 respectively, with all the likes clustered in the 90-100 range?
Should 'Indifference' and 'Indirect expressions of interest' be closer to the center? As in 'Indifference' in the 20-35 range and 'Indirect like' in the 60-70 range?
Should 'User conversion' blow the scale away and be head and shoulders above the others? As in: 'User conversion' at 100, 'lesser likes' at ~65, 'dislikes' clustered in the 1-10 range?
On a scale of 1-100, is 50 effectively "null", or equivalent to no data point at all?
I know the final answer lies in trial and error and in the meaning of our data, but as far as the algorithm goes, I'm trying to understand at what point I need to tip the scales between interest and disinterest for the algorithm to function properly.
The actual range does not matter, not for this implementation. 1-100 is OK, 0-1 is OK, etc. The relative values are all that really matters here.
These values are estimated by a simple (linearly) weighted average. Therefore the response ought to be "linear". It ought to match an intuition that if action X gets a score 2x higher than action Y, then X should be an indicator of twice as much interest in real life.
A decent place to start is to simply size them relative to their frequency. If click-to-conversion rate is 2%, you might make a click worth 2% of a conversion.
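For example, a tiny sketch of that frequency-based sizing (the event names and rates are made up; plug in whatever your own data says):
public class PreferenceWeights {
    public static void main(String[] args) {
        // Hypothetical observed rates relative to conversions.
        double conversionsPerClick = 0.02; // 2% click-to-conversion rate
        double conversionsPerLike = 0.10;  // 10% like-to-conversion rate

        // Anchor the strongest signal at 1.0 and size the rest by frequency.
        double conversionWeight = 1.0;
        double clickWeight = conversionsPerClick * conversionWeight; // 0.02
        double likeWeight = conversionsPerLike * conversionWeight;   // 0.10

        System.out.printf("conversion=%.2f like=%.2f click=%.2f%n",
                conversionWeight, likeWeight, clickWeight);
    }
}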
I would ignore the "Indifference" signal you propose. It is likely going to be too noisy to be of use.

how to construct a RTree using given data points

I need to construct an R-tree from given data points. I have searched for implementations of R-trees, but all the implementations I found construct the tree when given the coordinates of rectangles as input. I need to construct an R-tree when given the data points themselves (they can be 1-dimensional). The code should take care of creating the rectangles that enclose these data points and then construct the R-tree.
Use MBRs (minimum bounding rectangles) with min = max = coordinate. They all do it this way. Good implementations will, however, store point data with approximately twice the capacity in the leaves compared to directory nodes.
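In other words, every point just gets a degenerate bounding rectangle. A minimal sketch of the idea (the Mbr class is made up purely for illustration, not any particular library's API):
/** Minimal MBR holder, showing how a point becomes a degenerate rectangle. */
class Mbr {
    final double minX, minY, maxX, maxY;

    Mbr(double minX, double minY, double maxX, double maxY) {
        this.minX = minX; this.minY = minY;
        this.maxX = maxX; this.maxY = maxY;
    }

    /** A point is stored with min == max on every axis. */
    static Mbr ofPoint(double x, double y) {
        return new Mbr(x, y, x, y);
    }
}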
If you're looking for a C++ implementation, the one contained in Boost.Geometry (currently Boost 1.57) is able to store Points, Boxes and Segments. The obvious advantage is that the data in the leaves of the tree is not duplicated, which means that less memory is used, caching is better, etc. The usage looks like this:
#include <boost/geometry.hpp>
#include <boost/geometry/geometries/geometries.hpp>
#include <boost/geometry/index/rtree.hpp>
#include <vector>

namespace bg = boost::geometry;
namespace bgi = boost::geometry::index;

int main()
{
    typedef bg::model::point<float, 2, bg::cs::cartesian> point;
    typedef bg::model::box<point> box;

    // a container of points
    std::vector<point> points;

    // create the rtree
    bgi::rtree< point, bgi::linear<16> > rtree(points.begin(), points.end());

    // insert some additional points
    rtree.insert(point(/*...*/));
    rtree.insert(point(/*...*/));

    // find points intersecting a box
    std::vector<point> query_result;
    rtree.query(bgi::intersects(box(/*...*/)), std::back_inserter(query_result));

    // do something with the result
}
I guess that using an R-tree to store points seems like a misuse. Although this kind of structure is intended for storing spatial data, after some research I found out it is best suited for storing non-zero-area regions (the R in the name stands for Region or Rectangle). Creating a simple table with a good index should offer better performance for both updating and searching data. Consider my example below:
CREATE TABLE locations (id, latitude, longitude);
CREATE INDEX idx_locations ON locations (latitude, longitude);
is preferable over
CREATE VIRTUAL TABLE locations USING rtree( id, minLatitude, maxLatitude, minLongitude, maxLongitude);
if you are just planning to repeat the same value for minLatitude/maxLatitude and minLongitude/maxLongitude in every row so as to represent points rather than rectangles. Although the latter approach will work as expected, R-trees are suited to indexing rectangular areas, and using them to store points is a misuse with worse performance. Prefer a compound index as above.
Further reading: http://www.deepdyve.com/lp/acm/r-trees-a-dynamic-index-structure-for-spatial-searching-ZH0iLI4kb0?key=acm
