I made a very simple app in which I can drop a pin right at the location I am standing (just a beginner's practice exercise). But I found a problem.
I swear I was not moving, and the device did not think I was moving either. I use the geolocation directly to set the pin, yet the pin and the current-location blue dot are hundreds of meters apart.
(By the way, the blue dot showed my real location at the time.)
This is a well-known problem with Google Maps on iOS in China. Setting aside the complicated issue of so-called national security, what I want help with is what we should do as developers. Technically, is there a way, in code, to figure out exactly what the offset is and correct it?
Does anyone have any idea?
At what time did you place the pin? iOS has up to three sources of location data (cell tower triangulation, Wi-Fi sniffing, and GPS) and will keep you up to date with the most accurate one available. So you often get a not-very-accurate location first, then a more accurate location, then an even more accurate one.
If you have a MKMapView open then something you can do is key-value observe on its userLocation property rather than starting any sort of CLLocationManager. That way you'll always be updated with whatever the map view has decided is the current location, meaning that you don't need to try to match your logic to its.
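In Swift, a minimal sketch of that observation pattern might look like the following (I believe MKUserLocation's location property is KVO-compliant; the mapView(_:didUpdate:) delegate callback is an alternative route to the same data):

```swift
import MapKit

// Sketch: drop the pin wherever the map view itself says the user is,
// so the pin and the blue dot always come from the same source.
final class PinDropper: NSObject {
    private var observation: NSKeyValueObservation?

    func startObserving(_ mapView: MKMapView) {
        mapView.showsUserLocation = true
        observation = mapView.observe(\.userLocation.location, options: [.new]) { mapView, _ in
            guard let location = mapView.userLocation.location else { return }
            let pin = MKPointAnnotation()
            pin.coordinate = location.coordinate
            mapView.addAnnotation(pin)
        }
    }
}
```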
I did some research on the offset, but haven't gotten a satisfying result yet. The added offset is deterministic, i.e., given a location, the deviated location is fixed. So my goal is to find the deviation function f(p) = p', where both p and p' are 2D points. You can check here if you are interested.
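For what it's worth, the offset in mainland China comes from the GCJ-02 datum. The exact algorithm is not published, but a widely circulated open-source approximation of f (as found in projects like eviltransform) looks like the sketch below; the constants are the commonly shared ones, not official values:

```swift
import Foundation

// Community reverse-engineered approximation of the WGS-84 -> GCJ-02
// ("Mars coordinates") transform. Not an official algorithm.
private let a = 6378245.0                // semi-major axis used by the approximation
private let ee = 0.00669342162296594323  // eccentricity squared

private func transformLat(_ x: Double, _ y: Double) -> Double {
    var ret = -100.0 + 2.0 * x + 3.0 * y + 0.2 * y * y + 0.1 * x * y + 0.2 * sqrt(abs(x))
    ret += (20.0 * sin(6.0 * x * .pi) + 20.0 * sin(2.0 * x * .pi)) * 2.0 / 3.0
    ret += (20.0 * sin(y * .pi) + 40.0 * sin(y / 3.0 * .pi)) * 2.0 / 3.0
    ret += (160.0 * sin(y / 12.0 * .pi) + 320.0 * sin(y * .pi / 30.0)) * 2.0 / 3.0
    return ret
}

private func transformLon(_ x: Double, _ y: Double) -> Double {
    var ret = 300.0 + x + 2.0 * y + 0.1 * x * x + 0.1 * x * y + 0.1 * sqrt(abs(x))
    ret += (20.0 * sin(6.0 * x * .pi) + 20.0 * sin(2.0 * x * .pi)) * 2.0 / 3.0
    ret += (20.0 * sin(x * .pi) + 40.0 * sin(x / 3.0 * .pi)) * 2.0 / 3.0
    ret += (150.0 * sin(x / 12.0 * .pi) + 300.0 * sin(x / 30.0 * .pi)) * 2.0 / 3.0
    return ret
}

/// f(p) = p': maps a WGS-84 point to its deviated GCJ-02 counterpart.
func wgsToGcj(lat: Double, lon: Double) -> (lat: Double, lon: Double) {
    let dLat0 = transformLat(lon - 105.0, lat - 35.0)
    let dLon0 = transformLon(lon - 105.0, lat - 35.0)
    let radLat = lat / 180.0 * .pi
    var magic = sin(radLat)
    magic = 1 - ee * magic * magic
    let sqrtMagic = sqrt(magic)
    let dLat = (dLat0 * 180.0) / ((a * (1 - ee)) / (magic * sqrtMagic) * .pi)
    let dLon = (dLon0 * 180.0) / (a / sqrtMagic * cos(radLat) * .pi)
    return (lat + dLat, lon + dLon)
}
```

The inverse (GCJ-02 back to WGS-84) has no closed form and is usually approximated by iterating this forward transform.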
I am working on an iOS app with tracking. I have implemented Kalman smoothing in order to present a pleasing path. This is working pretty well at this point.
I am having a bit of trouble dealing with the user-not-moving case, though. When the user IS moving, we get very good readings back from the CLLocationManager. And even when a reading is a bit off, the Kalman algorithm takes care of it.
When standing still, the CLLocationManager delegate still receives "accurate" locations: they report good accuracy and no unbelievable speeds. But looking at the screen with human eyes, it's clear that the user is standing still while the points are scattered around, some very close together and a few far out.
I have tried setting the CLLocationManager property pausesLocationUpdatesAutomatically, but it doesn't seem to work that well. It doesn't always stop when it should, and it has been difficult to restart the tracking once the antennas are powered down.
So I'm looking to keep the tracking on the whole time but filter out the jitter in post-processing: determine programmatically that the user has stopped, and discard (or ignore) all locations until the user is moving again.
I'm not really sure how to go about this, what algorithm is appropriate to achieve something like this?
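For illustration, one simple approach is a sliding window over recent fixes: call the user stationary when the window's spread stays under a radius threshold, and discard points until it grows again. A minimal sketch (the window size and threshold are assumptions you would tune):

```swift
import CoreLocation

// Sketch of a stop detector: keep a short sliding window of recent fixes
// and report "stationary" while they all cluster tightly together.
final class StopDetector {
    private var window: [CLLocation] = []
    private let windowSize = 10
    private let radiusThreshold: CLLocationDistance = 15 // metres, tune per use case

    /// Returns true when the recent fixes all stay near their centroid.
    func isStationary(after location: CLLocation) -> Bool {
        window.append(location)
        if window.count > windowSize { window.removeFirst() }
        guard window.count == windowSize else { return false }

        // Centroid of the window (fine over such short distances).
        let meanLat = window.map { $0.coordinate.latitude }.reduce(0, +) / Double(window.count)
        let meanLon = window.map { $0.coordinate.longitude }.reduce(0, +) / Double(window.count)
        let centroid = CLLocation(latitude: meanLat, longitude: meanLon)

        // Stationary if no fix strays far from the centroid.
        return window.allSatisfy { $0.distance(from: centroid) < radiusThreshold }
    }
}
```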
I downloaded the IndoorAtlas iPhone SDK and generated path maps and test paths for my venue. The SDK navigates me perfectly when I am moving from one place to another, but when I stop moving it generates scattered output with a position radius of 10 to 25. I am expecting precise coordinates in both of the above cases in my project.
Is there any way to get more precision?
IndoorAtlas technology uses the history of magnetic field observations to compute a precise location. This means that the device needs to move some distance in order to collect enough data to converge to a correct location estimate, i.e., to get a location fix. We are constantly improving our service to decrease the time needed for the first location fix.
If you experience your position moving after you've already stopped walking, please contact support@indooratlas.com with details of your application and the venue where this is experienced, and we'll look into it. Thanks!
After coming across this question, I am concerned that there will not be an answer to it, but I will hope anyway.
I have set up a few geofences (most small, and one large). I am using the simulator; I have printed the radius of the large CLRegion, and it tells me that the radius is 10881.98 m around a certain coordinate. But when I simulate the geolocation to a point 11281.86 m away from that same coordinate, it does not trigger the locationManager:didExitRegion: delegate method for the large region.
While the large region will not trigger locationManager:didExitRegion:, I have confirmed that the smaller regions will trigger the delegate method every time. Is there a reason why this is not firing? Is there a distance buffer around a region? Is it documented somewhere?
Any help would be great.
EDIT: From testing, I need to cut down the radius by around 45.28% in order to have the geofence trigger. Obviously this is not a great solution, as it is very imprecise and it goes against the whole idea of geofencing.
My guess is that this is an issue unique to the simulator. While CLRegion does not technically have a buffer or padding, the OS takes substantially longer to determine that you have physically left the geofence area. On fences of that size, I would imagine it could take even longer. On smaller regions of 100-200 m, I've seen it take several minutes of driving, and easily 300-400 m of travel, before triggering an event. From what an Apple engineer told me at WWDC 2013, the OS takes its time in determining that you left. It is also harder for the system to determine that you have left because of its reliance on cell tower triangulation and known Wi-Fi networks: it needs to go well beyond the known networks before it can safely trigger the exit event.
I know it isn't an exact answer, but hopefully you'll understand a bit more how they work under the hood and what Apple's expectation of them is. Good luck.
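One practical debugging aid, regardless of simulator quirks: ask Core Location directly which side of the fence it currently thinks you are on, instead of waiting for the exit event. A minimal sketch (the coordinate and identifier are illustrative):

```swift
import CoreLocation

// Sketch: register the large region, then query its state explicitly
// via requestState(for:) rather than waiting for didExitRegion.
final class GeofenceDebugger: NSObject, CLLocationManagerDelegate {
    let manager = CLLocationManager()

    override init() {
        super.init()
        manager.delegate = self
        manager.requestAlwaysAuthorization() // region monitoring needs Always

        let center = CLLocationCoordinate2D(latitude: 37.33, longitude: -122.03)
        let region = CLCircularRegion(center: center, radius: 10_881.98, identifier: "large")
        region.notifyOnExit = true
        manager.startMonitoring(for: region)
    }

    func locationManager(_ manager: CLLocationManager, didStartMonitoringFor region: CLRegion) {
        // Ask for the current inside/outside state immediately.
        manager.requestState(for: region)
    }

    func locationManager(_ manager: CLLocationManager,
                         didDetermineState state: CLRegionState, for region: CLRegion) {
        switch state {
        case .inside:  print("Inside \(region.identifier)")
        case .outside: print("Outside \(region.identifier)")
        default:       print("Unknown state for \(region.identifier)")
        }
    }

    func locationManager(_ manager: CLLocationManager, didExitRegion region: CLRegion) {
        print("Exited \(region.identifier)")
    }
}
```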
I have a requirement mentioned below:
Already have a floor plan map image
First detect current location on floor
Then select the destination location using floor plan map image
Now the application should provide direction & distance for that source-to-destination path
This is like how Google Directions works, but it requires an in-house map.
For example,
- Current position of user is: At his desk
- Where is Meeting Room #11
- So the application should provide direction and distance updates on the map/floor plan image.
Any kind of suggestions/help would be great.
Thanks in advance
Couple of points...
You could create various audio files and play them as waypoints based on routing. Same principle as 'turn right at the next light'.
Definitely set your accuracy to kCLLocationAccuracyBest (see the configuration sketch after this list). But this will still probably only get you accuracy of around +/- 10 meters at best.
Do a floor plan overlay using an MKOverlayView.
If you are indoors, the iPhone uses cell towers or Wi-Fi for a location fix. This might be a problem for you, because if you are looking to map multiple floors, only GPS can give you altitude readings: ground floor, second floor, etc.
I don't want to pour cold water on your idea, but I have not heard of anyone successfully doing an indoor navigation app on an iPhone using standard stuff. If you really want to move forward on this project, your best accuracy might come from using indoor Bluetooth transmitters as navigational beacons...?
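For the accuracy point above, a minimal configuration sketch (nothing here will get you past the roughly +/- 10 m hardware limit indoors):

```swift
import CoreLocation

// Minimal Core Location configuration for best-effort accuracy.
let manager = CLLocationManager()
manager.desiredAccuracy = kCLLocationAccuracyBest
manager.distanceFilter = kCLDistanceFilterNone  // report every movement
manager.requestWhenInUseAuthorization()         // needs the usual Info.plist usage string
// manager.delegate = yourDelegate             // assign your delegate before starting
manager.startUpdatingLocation()
```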
What you want is path planning on the map, is that right? If so, there are lots of algorithms you can use. Choose a block size based on your map and resolution needs, divide the map into blocks, and mark each block as navigable or not. Then, starting from the first block and moving in the direction of the destination block, check whether each neighboring block is blocked or not, and keep going until you reach the destination block (or determine that it is unreachable). A sketch of this idea follows below.
That's a pseudo-implementation; you have several options for doing it, if I understand your needs.
(I don't know your hardware. As said by others, with plain GPS and indoor navigation, assuming a 15 m resolution is a good balance between optimistic and pessimistic signal estimates. If it's for robot navigation, GPS is not a good approach, but the algorithm still is.)
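To make the block idea concrete, here is a minimal sketch of a breadth-first search over an occupancy grid (A* with a distance heuristic would be the usual refinement; how you fill the grid from your floor plan is up to you):

```swift
// Breadth-first search over a boolean occupancy grid (true = blocked).
// Returns the cell path from start to goal, or nil if unreachable.
struct Cell: Hashable {
    let row: Int
    let col: Int
}

func shortestPath(grid: [[Bool]], from start: Cell, to goal: Cell) -> [Cell]? {
    let rows = grid.count
    let cols = grid.first?.count ?? 0
    var cameFrom: [Cell: Cell] = [:]   // parent links for path reconstruction
    var visited: Set<Cell> = [start]
    var frontier = [start]

    while !frontier.isEmpty {
        var next: [Cell] = []
        for cell in frontier {
            if cell == goal {
                // Walk the parent links back to the start.
                var path = [goal]
                while let prev = cameFrom[path.last!] { path.append(prev) }
                return Array(path.reversed())
            }
            for (dr, dc) in [(0, 1), (0, -1), (1, 0), (-1, 0)] {
                let n = Cell(row: cell.row + dr, col: cell.col + dc)
                guard n.row >= 0, n.row < rows, n.col >= 0, n.col < cols,
                      !grid[n.row][n.col], !visited.contains(n) else { continue }
                visited.insert(n)
                cameFrom[n] = cell
                next.append(n)
            }
        }
        frontier = next
    }
    return nil
}
```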
I am developing an app that uses the user's location to be displayed on a map with other users.
I want to ensure that all users have a bit of privacy when it comes to their location being displayed openly to other users, so I am hoping to set their location with a specified offset (let's say 1 mile) and display the "edited" location to all other users, while still showing the "exact" location to the current user.
Example: if I am looking at the map, I want my "user location" (the blue dot) to be fairly exact, while all other users will see my location slightly offset from the real location.
What is the best way to achieve this?
I think the question you actually want the answer to is this:
How do I convert the user's location into an "approximate location" in a way that preserves the user's privacy?
It's not an easy problem:
Offsetting by a specific distance doesn't work:
There's a trivial attack if the direction is fixed.
If the direction does not change often enough, then the attacker only needs to wait to identify what looks like a road.
If the direction changes too often, then they'll tend to form a 1-mile circle around the target's house/work.
Offsetting by a random distance/direction doesn't work; the attacker just needs to collect enough samples; the clusters will likely be centered on the target's home/work.
Quantizing to a grid naively (e.g. "X is within this grid square") will tell you when the target crosses a grid boundary. This is especially bad if the target lives on a grid boundary.
Here's something that works a little better, but will still (eventually) give away the user's location:
Pick an (approximately) 1-mile grid. For a "square" grid, you could use the Peirce quincuncial projection (there are four points of infinite distortion, but you can place them all at sea; it looks like you can limit distortion on land to a factor of 2). There are also projections onto a cube and, for a triangular grid, an icosahedron.
When you first need to report the user's location, give the nearest point on the grid. Also pick a threshold distance between 1 and 2 grid "squares", or so.
While the user is within the threshold distance of the center of the grid square, continue to report the same grid square. Otherwise, repeat.
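A minimal sketch of that reporting rule, using a plain latitude/longitude grid instead of the Peirce quincuncial projection for brevity (the grid size, threshold factor, and class name are all assumptions, and the cell size varies with latitude):

```swift
import CoreLocation

// Sketch of hysteresis-based location quantization: snap to a grid point,
// then keep reporting the same point until the user moves well clear of it.
final class ApproximateLocationReporter {
    private let gridDegrees = 0.015    // roughly a mile of latitude
    private let thresholdFactor = 1.5  // between 1 and 2 grid "squares"
    private var reportedCell: (lat: Double, lon: Double)?

    func report(for location: CLLocation) -> CLLocationCoordinate2D {
        // Snap to the nearest grid point.
        let snappedLat = (location.coordinate.latitude / gridDegrees).rounded() * gridDegrees
        let snappedLon = (location.coordinate.longitude / gridDegrees).rounded() * gridDegrees

        if let cell = reportedCell {
            // Hysteresis: keep reporting the old cell while the user stays
            // within the threshold distance of its centre.
            let centre = CLLocation(latitude: cell.lat, longitude: cell.lon)
            let threshold = thresholdFactor * gridDegrees * 111_000 // ~metres per degree of latitude
            if location.distance(from: centre) < threshold {
                return CLLocationCoordinate2D(latitude: cell.lat, longitude: cell.lon)
            }
        }
        reportedCell = (snappedLat, snappedLon)
        return CLLocationCoordinate2D(latitude: snappedLat, longitude: snappedLon)
    }
}
```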
It'll still eventually be obvious if the user happens to live on a grid boundary. There are various ways to attempt to fix this problem (e.g. a bias to reporting grid squares you've reported before), but these will eventually fail.
This seems a lot like trying to remove a digital watermark (the user's actual location) by using lossy compression (the approximation process) while producing an output image/audio (approximate location) that sounds/looks like the original. (The analogy works a little better if you treat the "watermark" as the user's daily habits, which will be visible in the output unless you know exactly what those habits are and can remove them.)
Or in signal processing terms: A low SNR simply means you have to listen for longer to extract the signal.
Are you showing everyone else as a pin? It might be strange to show a pin at an exact location when the other user isn't actually there, for example if someone was a mile north of you but you showed their pin at the same location as the current user. Maybe you should display the other users with an MKOverlay circle, and then use some calculation based on a userID to shift it slightly off-centre, so that people can't work out that it is always shifted 500m east and thus easily see where people are.
Whether or not you change the display, the code you seek is here: Get the GPS coordinate given the current location, bearing and distance
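That linked answer boils down to the standard great-circle "destination point" formula; a hedged Swift sketch:

```swift
import CoreLocation

// Offsets a coordinate by a bearing (radians, clockwise from north) and a
// distance (metres), assuming a spherical Earth.
func coordinate(from start: CLLocationCoordinate2D,
                bearing: Double,
                distanceMeters: Double) -> CLLocationCoordinate2D {
    let earthRadius = 6_371_000.0              // mean Earth radius in metres
    let angular = distanceMeters / earthRadius // angular distance
    let lat1 = start.latitude * .pi / 180
    let lon1 = start.longitude * .pi / 180

    let lat2 = asin(sin(lat1) * cos(angular) + cos(lat1) * sin(angular) * cos(bearing))
    let lon2 = lon1 + atan2(sin(bearing) * sin(angular) * cos(lat1),
                            cos(angular) - sin(lat1) * sin(lat2))
    return CLLocationCoordinate2D(latitude: lat2 * 180 / .pi, longitude: lon2 * 180 / .pi)
}

// Example: shift a point ~1 mile (1609 m) on a per-user pseudo-random bearing.
// let shifted = coordinate(from: realLocation, bearing: userBearing, distanceMeters: 1_609)
```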