I am developing an app that displays the user's location on a map alongside other users.
I want to ensure that all users have a bit of privacy when it comes to their location being shown openly to other users, so I am hoping to set their location with a specified offset (let's say 1 mile) and display the "edited" location to all other users, while still showing the "exact" location to the current user.
Example: if I am looking at the map, I want my "user location" (the blue dot) to be essentially exact, while all other players will see my location slightly offset from the real location.
What is the best way to achieve this?
I think the question you actually want the answer to is this:
How do I convert the user's location into an "approximate location" in a way that preserves the user's privacy?
It's not an easy problem:
Offsetting by a specific distance doesn't work:
There's a trivial attack if the direction is fixed.
If the direction does not change often enough, then the attacker only needs to wait to identify what looks like a road.
If the direction changes too often, then the reported locations will tend to form a 1-mile circle around the target's house/work.
Offsetting by a random distance/direction doesn't work either; the attacker just needs to collect enough samples, and the clusters will likely be centered on the target's home/work (see the quick simulation below).
Quantizing to a grid naively (e.g. "X is within this grid square") will tell you when the target crosses a grid boundary. This is especially bad if the target lives on a grid boundary.
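To make the random-offset failure concrete, here is a quick simulation sketch (the coordinates, offset size and sample count are made up, and longitude scaling is ignored); averaging enough randomly offset reports recovers the hidden location:

```swift
import Foundation

// Hypothetical "true" location that the app is trying to hide.
let trueLat = 51.5007
let trueLon = -0.1246

// Roughly 1 mile expressed in degrees of latitude (about 69 miles per degree).
let offsetDegrees = 1.0 / 69.0

// Collect many "privacy-preserving" reports, each offset by 1 mile in a random direction.
var sumLat = 0.0
var sumLon = 0.0
let samples = 10_000
for _ in 0..<samples {
    let bearing = Double.random(in: 0..<(2 * Double.pi))
    sumLat += trueLat + offsetDegrees * cos(bearing)
    sumLon += trueLon + offsetDegrees * sin(bearing)
}

// The attacker simply averages the reports: the random offsets cancel out
// and the estimate converges on the true location.
print("Estimated location:", sumLat / Double(samples), sumLon / Double(samples))
```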
Here's something that works a little better, but will still (eventually) give away the user's location:
Pick an (approximately) 1-mile grid. For a "square" grid, you could use the Peirce quincuncial projection (there are four points of infinite distortion, but you can put those all at sea; it looks like you can limit distortion on land to a factor of 2). There are also projections onto a cube and, for a triangular grid, an icosahedron.
When you first need to report the user's location, give the nearest point on the grid. Also pick a threshold distance between 1 and 2 grid "squares", or so.
While the user is within the threshold distance of the center of the grid square, continue to report the same grid square. Otherwise, repeat.
It'll still eventually be obvious if the user happens to live on a grid boundary. There are various ways to attempt to fix this problem (e.g. a bias to reporting grid squares you've reported before), but these will eventually fail.
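Here is a minimal sketch of that reporting rule, assuming a plain latitude/longitude grid of roughly one mile rather than the equal-area projection suggested above (distances are compared in degrees for brevity, so this is illustration-quality only):

```swift
import CoreLocation

// Grid spacing of roughly one mile, expressed in degrees of latitude.
let gridSize = 1.0 / 69.0

// Threshold for picking a new grid point: between 1 and 2 grid "squares".
let threshold = 1.5 * gridSize

// The grid point we last reported, if any.
var reportedPoint: CLLocationCoordinate2D?

/// Returns the approximate location to show to other users.
func approximateLocation(for actual: CLLocationCoordinate2D) -> CLLocationCoordinate2D {
    // While the user stays within the threshold of the last reported point,
    // keep reporting that same point.
    if let previous = reportedPoint {
        let dLat = actual.latitude - previous.latitude
        let dLon = actual.longitude - previous.longitude
        if (dLat * dLat + dLon * dLon).squareRoot() < threshold {
            return previous
        }
    }
    // Otherwise snap to the nearest grid point and remember it.
    let snapped = CLLocationCoordinate2D(
        latitude: (actual.latitude / gridSize).rounded() * gridSize,
        longitude: (actual.longitude / gridSize).rounded() * gridSize)
    reportedPoint = snapped
    return snapped
}
```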
This seems a lot like trying to remove a digital watermark (the user's actual location) by using lossy compression (the approximation process) while producing an output image/audio (approximate location) that sounds/looks like the original. (The analogy works a little better if you treat the "watermark" as the user's daily habits, which will be visible in the output unless you know exactly what those habits are and can remove them.)
Or in signal processing terms: A low SNR simply means you have to listen for longer to extract the signal.
Are you showing everyone else as a pin? It might be strange if you show a pin at an exact location but the other user isn't there; for example, if someone was a mile north and you showed their pin at the same location as the current user. Maybe you should display the other users with an MKOverlay circle, and then use some calculation based on a userID to shift it slightly off centre, so that people can't work out that it is always shifted 500m east and thereby easily see where people are.
Whether or not you change the display, the code you seek is here: Get the GPS coordinate given the current location, bearing and distance
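For completeness, here is a hedged sketch of that calculation (the standard destination-point formula on a sphere), combined with the user-ID idea above; deriving the bearing from the ID keeps each user's offset stable without it being the same direction for everyone. The function names and the 800 m distance are made up:

```swift
import Foundation
import CoreLocation

/// Destination coordinate given a start point, a bearing (radians) and a distance (meters),
/// using the standard great-circle destination formula on a spherical Earth.
func coordinate(from start: CLLocationCoordinate2D,
                bearing: Double,
                distanceMeters: Double) -> CLLocationCoordinate2D {
    let earthRadius = 6_371_000.0
    let angular = distanceMeters / earthRadius
    let lat1 = start.latitude * Double.pi / 180
    let lon1 = start.longitude * Double.pi / 180

    let lat2 = asin(sin(lat1) * cos(angular) + cos(lat1) * sin(angular) * cos(bearing))
    let lon2 = lon1 + atan2(sin(bearing) * sin(angular) * cos(lat1),
                            cos(angular) - sin(lat1) * sin(lat2))
    return CLLocationCoordinate2D(latitude: lat2 * 180 / Double.pi,
                                  longitude: lon2 * 180 / Double.pi)
}

/// A deterministic, user-specific bearing derived from the user ID, so a given user's
/// circle is always shifted the same way but different users are shifted differently.
func offsetCentre(for userID: String, actual: CLLocationCoordinate2D) -> CLLocationCoordinate2D {
    let seed = userID.unicodeScalars.reduce(UInt64(0)) { $0 &+ UInt64($1.value) }
    let bearing = Double(seed % 360) * Double.pi / 180
    return coordinate(from: actual, bearing: bearing, distanceMeters: 800)
}
```

You could then place an MKCircle at the returned coordinate instead of a pin, so the display only ever claims "somewhere in this area".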
I'm doing a robot navigation and path planning project based on simulating the turtlebot3 and driving it with an A* planner.
I'm using a program that allows the robot to plan a route only through 'cells' with no cost value (i.e. free space with an inflation cost of 0). However, when adjusting the inflation radius I find that the robot either cannot pass through narrow openings when the inflation is high, or drives along walls when it is low.
I'm wondering if there is a way to control the inflation or something? My desired outcome is for the robot to drive in the center of wide hallways, not along walls, but still be able to pass through doors that it should be able to fit through.
Thank you.
I managed to solve this problem thanks to some advice that made me look outside the box.
The problem was intrinsic to my planner: whether a cell could be navigated through was binary (i.e. either perfectly allowed to drive through or not allowed at all).
Instead, I changed it so that all space other than the walls counts as allowed space, and added each cell's cost (from the cost map) to the search. That way, instead of a binary decision about whether the robot can or can't drive in a cell, both the distance and the cost of the cells affect the robot's path, making it try to keep away from walls, yet still able to pass through places of high cost if there's no other way.
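As a rough sketch of that change (not the actual navigation-stack code; the costmap representation and cell type here are invented for illustration), the key point is that the costmap value is added to the step cost instead of being used as a hard allow/deny filter:

```swift
// costmap[y][x]: 255 marks a wall (lethal); anything lower is traversable,
// with higher values meaning "closer to a wall, so more expensive".
typealias Costmap = [[Int]]

struct Cell: Hashable {
    let x: Int
    let y: Int
}

/// Dijkstra-style grid search: every non-wall cell is allowed, but its costmap value
/// is added to the path cost. The planner therefore prefers the middle of wide
/// corridors, yet can still squeeze through doorways where every cell carries some cost.
func plan(from start: Cell, to goal: Cell, costmap: Costmap) -> [Cell]? {
    var dist: [Cell: Double] = [start: 0]
    var cameFrom: [Cell: Cell] = [:]
    var frontier: Set<Cell> = [start]

    while let current = frontier.min(by: { dist[$0]! < dist[$1]! }) {
        frontier.remove(current)
        if current == goal { break }

        for (dx, dy) in [(1, 0), (-1, 0), (0, 1), (0, -1)] {
            let next = Cell(x: current.x + dx, y: current.y + dy)
            guard next.y >= 0, next.y < costmap.count,
                  next.x >= 0, next.x < costmap[next.y].count,
                  costmap[next.y][next.x] < 255 else { continue } // only walls are forbidden

            // Step cost = distance (1) + inflation cost of the cell being entered.
            let newDist = dist[current]! + 1 + Double(costmap[next.y][next.x])
            if newDist < dist[next, default: .infinity] {
                dist[next] = newDist
                cameFrom[next] = current
                frontier.insert(next)
            }
        }
    }

    // Reconstruct the path, if the goal was reached.
    guard dist[goal] != nil else { return nil }
    var path = [goal]
    while let previous = cameFrom[path.last!] { path.append(previous) }
    return Array(path.reversed())
}
```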
Ok, I've done some reading around the subject and have an idea of how I'd tackle my problem, but I want to find out if this is the most efficient way, or if I'm missing something simple.
I have a line diagram of a section of railway that I'd like to plot the users location onto (the user being someone on a train moving up/down the railway).
Now, I initially went down the route of geo-referencing, but quickly realised this probably wasn't the way to go, as my image is not a true reflection of the area, and I want the line diagram to be what the user sees.
OK, my thought process of how I will tackle it:
I know the physical area so I could extract the coordinates along the railway, every x meters (my line diagram has a resolution of around 5m). Stick this into an array. Can anyone suggest a tool to do this?!
Allocate my line diagram a start and end, then match the image coordinates with the physical coordinates for the entire line.
Read in the user's position and update where to draw it based on the closest match in the array?
Does this sound doable, and would it give me decent results?
If you have more sophisticated answers, please do share.
It sounds reasonable in general. Since the user is supposed to be on a train, a simpler option may work: just keep track of the physical distance moved and use that as a percentage distance along the line. This is a lot simpler to manage and could be backed up with some coordinate checkpoints to make sure you don't accumulate a drifting error. I'd aim for a simpler implementation if you can.
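A minimal sketch of that distance-along-the-line approach (the line length, the 50 m checkpoint tolerance and the types are assumptions for illustration); the returned fraction maps directly to a pixel position along your diagram:

```swift
import CoreLocation

/// A known point on the railway: its real-world coordinate and its
/// cumulative distance (meters) from the start of the line.
struct Checkpoint {
    let coordinate: CLLocationCoordinate2D
    let distanceFromStart: CLLocationDistance
}

let lineLength: CLLocationDistance = 25_000    // total length of the line, in meters
var distanceTravelled: CLLocationDistance = 0  // accumulated from location updates
var lastLocation: CLLocation?

/// Call this from locationManager(_:didUpdateLocations:).
/// Returns the fraction (0...1) of the way along the line diagram.
func fractionAlongLine(for location: CLLocation, checkpoints: [Checkpoint]) -> Double {
    // Accumulate the distance moved since the previous fix.
    if let last = lastLocation {
        distanceTravelled += location.distance(from: last)
    }
    lastLocation = location

    // Correct drift whenever we pass close to a known checkpoint.
    for checkpoint in checkpoints {
        let checkpointLocation = CLLocation(latitude: checkpoint.coordinate.latitude,
                                            longitude: checkpoint.coordinate.longitude)
        if location.distance(from: checkpointLocation) < 50 {
            distanceTravelled = checkpoint.distanceFromStart
            break
        }
    }

    // Clamp to the line and return the fraction along it.
    return min(max(distanceTravelled / lineLength, 0), 1)
}
```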
So basically, the scenario is as follows:
For what I have right now, my application can zero in on the user's location, showing the blue dot on the iPhone. However, what I'm also looking to do is have the application zoom in on the user's location closely enough that they can see the streets, avenues, etc. that surround them, while at the same time "blurring out" anything outside a certain specified radius. So, for the latter part, you would be able to see every detail within, say, a 5-mile radius of your location clearly, but outside that radius everything would be blurred out.
I sincerely don't know if the latter portion of the above paragraph can be accomplished, but at the very least, can someone help me out on the zooming portion and, if it is possible, the blurring out portion as well? Thank you very much!
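I don't know of a built-in way to blur the map outside a radius (you would probably have to draw your own dimming overlay for that), but the zooming part is straightforward. A minimal sketch, assuming a view controller that owns the map view and using roughly 8 km per side to approximate a 5-mile view:

```swift
import UIKit
import MapKit

class MapViewController: UIViewController, MKMapViewDelegate {
    @IBOutlet var mapView: MKMapView!

    override func viewDidLoad() {
        super.viewDidLoad()
        mapView.delegate = self
        mapView.showsUserLocation = true
    }

    // Called whenever the map view gets a new fix for the blue dot.
    func mapView(_ mapView: MKMapView, didUpdate userLocation: MKUserLocation) {
        // Centre on the user and zoom so roughly 8 km is visible in each direction.
        let region = MKCoordinateRegion(center: userLocation.coordinate,
                                        latitudinalMeters: 8_000,
                                        longitudinalMeters: 8_000)
        mapView.setRegion(region, animated: true)
    }
}
```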
I am working on a trails/maps app that has custom trails mapped out in a region and will help the user navigate around some trails in a "foresty" area.
Currently, I am using MKMapView to get the user data/location and loading the trails as overlays from a KML file. The problem I am having is that while testing the app, I noticed that in some situations the blue dot representing the user's position goes off the trail overlays, which is expected since GPS (especially on phones) is not that great, plus whatever error crept in when collecting the values for the trails to put in the KML file.
I apologize if all of that is a bit confusing. My question is: is it possible to "snap" the user location (the blue dot that we all love) to a trail/overlay/path placed on the map, with a specific tolerance? For example, if the blue dot appears to be a few pixels off the trail, it would be placed right in the middle of the trail; if it is far off, then the user probably walked off the trail, and no snapping would happen to the user's location.
First off I wouldn't bother. If they are just a few pixels off they won't care, but if they are further away then it's important that they know where they are as accurately as possible. They could be lost in the snow and looking for trail markings.
If you do go ahead with it, you'll have to abandon the userLocation dot and build your own. Using a CLLocationManager you can be told every time the device gets new location information and move your custom annotation dot to where you think the user should be. More trouble than it's worth IMHO.
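That said, if you do try it, here is a hedged sketch of the snapping step (the trail is assumed to be an array of coordinates taken from the KML, the 25 m tolerance is made up, and it snaps to the nearest trail vertex rather than the nearest point on a segment):

```swift
import CoreLocation

/// Snaps a raw fix to the nearest trail vertex if it is within `tolerance` meters;
/// otherwise returns the raw coordinate unchanged (the user has probably left the trail).
func snap(_ raw: CLLocation,
          toTrail trail: [CLLocationCoordinate2D],
          tolerance: CLLocationDistance = 25) -> CLLocationCoordinate2D {
    var nearest: (coordinate: CLLocationCoordinate2D, distance: CLLocationDistance)?
    for vertex in trail {
        let vertexLocation = CLLocation(latitude: vertex.latitude, longitude: vertex.longitude)
        let distance = raw.distance(from: vertexLocation)
        if nearest == nil || distance < nearest!.distance {
            nearest = (vertex, distance)
        }
    }
    if let nearest = nearest, nearest.distance <= tolerance {
        return nearest.coordinate
    }
    return raw.coordinate
}
```

You would feed this from the CLLocationManager updates mentioned above and move your custom annotation to whatever it returns.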
I made a very simple app in which I can drop a pin right onto the location I am standing at (just a beginner's practice exercise). But I found a problem.
I swear I wasn't moving, and the device didn't think I was moving either. I use the geolocation directly to set the pin, but the pin and the current-location blue dot are hundreds of meters apart.
(By the way, the blue dot showed my real location at the time.)
This is a well-known problem with Google Maps on iOS in China. Putting aside the complicated issue of so-called national security, what I want help with is what we should do as developers. Technically, is there a way, in code, to figure out exactly what the offset is and correct it?
Does anyone have any idea?
At what time did you place the pin? iOS has up to three sources of location data (cell tower triangulation, Wifi sniffing and GPS) and will keep you up to date with the most accurate. So often you get a not very accurate location, then a more accurate location, then an even more accurate location.
If you have a MKMapView open then something you can do is key-value observe on its userLocation property rather than starting any sort of CLLocationManager. That way you'll always be updated with whatever the map view has decided is the current location, meaning that you don't need to try to match your logic to its.
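As a sketch of that idea (shown here with the map view's delegate callback, which fires on the same updates you would observe on userLocation; the single-pin setup is an assumption):

```swift
import UIKit
import MapKit

class PinDropViewController: UIViewController, MKMapViewDelegate {
    @IBOutlet var mapView: MKMapView!
    private let pin = MKPointAnnotation()

    override func viewDidLoad() {
        super.viewDidLoad()
        mapView.delegate = self
        mapView.showsUserLocation = true
        mapView.addAnnotation(pin)
    }

    // Take the coordinate from the map view itself, so the pin is placed with
    // exactly the same (possibly shifted) location that drives the blue dot.
    func mapView(_ mapView: MKMapView, didUpdate userLocation: MKUserLocation) {
        guard let location = userLocation.location else { return }
        pin.coordinate = location.coordinate
    }
}
```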
I did some research on the offset, but haven't gotten a satisfying result yet. The added offset is deterministic, i.e. given a location, the deviated location is fixed. So my goal is to get the deviation function, f(p)=p', where both p and p' are 2D points. You can check here if you are interested.