Indoor positioning on iOS with Core Location - not accurate?

Using the sample code provided from WWDC, I've been trying to write a simple proof-of-concept app that performs indoor positioning in my office building. I have a floor plan image and replaced the standard image in the demo code. I've also done the requisite mapping of GPS coordinates to pixels for the two anchor points.
When I run the app in the simulator and specify static GPS coordinates, I see the position updated as expected in the simulator. When I run it on my phone, however, the experience isn't nearly as seamless as Apple advertised in the video. On my iPhone 5s, the positioning is all over the place, and rarely anywhere close to accurate. Even sitting next to a window with a clear line-of-sight to the sky I still get very inaccurate results.
I would assume that this might have something to do with our physical layout, WiFi topology, or other such parameters. However, I also noticed that Apple has a portal where you can register your facility for use with indoor positioning. Does this have something to do with the poor results in my app? I can't imagine how Apple would be able to help with such a scenario, but thought it might have something to do with it.
Are there other steps I should take to increase the accuracy of my app? Is there a way to leverage iBeacons for improved positioning indoors? I haven't found any documentation indicating so, but thought maybe someone here would know.

You're right, Apple has the portal available at https://mapsconnect.apple.com
At this portal you can add your venue and Apple will guide you on setting it up. However, your venue must have all of the following attributes:
Accessible to the general public
Annual visitors in excess of 1 million per year
Availability of complete, accurate, and scaled reference maps
Wi-Fi throughout the area
Associated app that's authorized by venue owner
If your venue has all the required attributes, then you also will need to answer these questions about your usage:
How are you planning to use indoor positioning? (Ads, Navigation, Delivering content)
How many venues would you like to enable with indoor positioning?
What type of venue do you have? (Airport, Hospital, Museum, Mall, Office)
What type of floor plans do you have? (CAD, BIM, GeoJSON, AI, PDF, PNG, etc)
Are the venues equipped with Wi-Fi and/or iBeacon?
Name of the largest venue
Address of the largest venue
Once you have completed the entire form and jumped through the last hoop, you will be brought to a page that confirms the details. Once done, it's all in their hands and they will contact you.

Indoor positioning does not work well without additional devices like iBeacons.
There is no usable GPS reception inside buildings; the reflected signals often give position errors far worse than 50 m.
GPS might work indoors in a single-floor building with a thin roof, but that is rarely the case for the buildings where indoor positioning is needed.
The only thing that works well is to buy some iBeacons and mount them at various locations in the office.
You have to manage the locations of those beacons yourself: they only broadcast an identifier, and (maybe?) a distance estimate. (Please check whether you actually get a distance to the beacon - see the sketch below.)
But the iOS location service will not use those iBeacons for its own position fixes.
So either use iBeacons or forget the project. There is no well-working general solution for indoor positioning. Some systems use magnetic fields - there is even an app for that - but it requires measuring your whole office in detail.
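(If you do go the iBeacon route, a minimal sketch of what ranging looks like with Core Location is below. The UUID is a placeholder for whatever your own beacons advertise; the accuracy field is where you would see the rough distance estimate, if any.)

import CoreLocation

class BeaconRanger: NSObject, CLLocationManagerDelegate {
    let manager = CLLocationManager()
    // Placeholder UUID: replace with the proximity UUID your own beacons broadcast.
    let region = CLBeaconRegion(proximityUUID: UUID(uuidString: "E2C56DB5-DFFB-48D2-B060-D0F5A71096E0")!,
                                identifier: "office-beacons")

    func start() {
        manager.delegate = self
        manager.requestWhenInUseAuthorization()   // needs NSLocationWhenInUseUsageDescription in Info.plist
        manager.startRangingBeacons(in: region)
    }

    func locationManager(_ manager: CLLocationManager,
                         didRangeBeacons beacons: [CLBeacon],
                         in region: CLBeaconRegion) {
        for beacon in beacons {
            // accuracy is a rough distance estimate in meters (-1 if unknown);
            // proximity is a coarse bucket (immediate / near / far / unknown).
            print("major \(beacon.major), minor \(beacon.minor): ~\(beacon.accuracy) m, proximity \(beacon.proximity.rawValue)")
        }
    }
}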

You could also try one of the third-party indoor positioning SDKs that can be integrated into iOS applications. Also make sure you request an appropriate accuracy level when you use the Core Location API.
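(For the second point, "giving the accuracy level" with Core Location looks roughly like this sketch; whether it actually improves things indoors is another question.)

import CoreLocation

let manager = CLLocationManager()
manager.desiredAccuracy = kCLLocationAccuracyBest   // ask for the highest accuracy available
manager.distanceFilter = kCLDistanceFilterNone      // deliver every update, however small the movement
manager.requestWhenInUseAuthorization()
manager.startUpdatingLocation()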

Related

Best practice for dealing with geolocation in offline mode

I have developed a mobile application allowing field agents to collect data based on geolocation.
Each agent is assigned to a specific point of sale where he has to go, and when he arrives he has to check in at this point of sale before starting the collection.
To detect that the agent is in the store, I use a radius of 20 meters to allow for errors. The problem is that position detection isn't reliable in offline mode, and the user has to keep retrying before a position is captured, even though the GPS chip doesn't need an internet connection to obtain a location.
My question is mostly a request for advice and best practices, if you have experience with this kind of application.
On Android phones in particular I've increased the location precision in the settings, but I'm having the same issues.
Note: everything works fine when online.
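(For what it's worth, the 20 m presence check itself doesn't need any connectivity; a minimal sketch with Core Location only, where the point-of-sale coordinates are placeholder values loaded from local storage, might look like this:)

import CoreLocation

class PresenceChecker: NSObject, CLLocationManagerDelegate {
    // Placeholder coordinates for the assigned point of sale, loaded from local storage.
    let storePosition = CLLocation(latitude: 48.8566, longitude: 2.3522)

    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        guard let current = locations.last else { return }
        // distance(from:) is computed on-device; no network connection is required.
        if current.distance(from: storePosition) <= 20 {
            // The agent is considered present at the point of sale.
        }
    }
}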

ARKit with multiplayer experience to share same planes [duplicate]

What is the best way, if any, to use Apple's new ARKit with multiple users/devices?
It seems that each device gets its own scene understanding individually. My best guess so far is to use the raw feature point positions and try to match them across devices to glue together the different points of view, since ARKit doesn't offer any absolute shared reference frame.
===Edit1, Things I've tried===
1) Feature points
I've played around with the exposed raw feature points and I'm now convinced that in their current state they are a dead end:
they are not really raw feature points: they only expose positions, none of the attributes typically found in tracked feature points
their instantiation doesn't carry over from frame to frame, nor are the positions exactly the same
the reported feature points often change a lot even when the camera input is barely changing, with many of them appearing or disappearing
So overall I think it's unreasonable to try to use them in any meaningful way, since you can't do any kind of reliable point matching within one device, let alone across several.
An alternative would be to implement my own feature point detection and matching, but that would be more like replacing ARKit than leveraging it.
2) QR code
As #Rickster suggested, I've also tried identifying an easily recognizable object like a QR code and getting the relative reference-frame change from that fixed point (see this question). It's a bit difficult and required me to use some OpenCV to estimate the camera pose. But more importantly, it's very limiting.
As some newer answers have added, multiuser AR is a headline feature of ARKit 2 (aka ARKit on iOS 12). The WWDC18 talk on ARKit 2 has a nice overview, and Apple has two developer sample code projects to help you get started: a basic example that just gets 2+ devices into a shared experience, and SwiftShot, a real multiplayer game built for AR.
The major points:
ARWorldMap wraps up everything ARKit knows about the local environment into a serializable object, so you can save it for later or send it to another device. In the latter case, "relocalizing" to a world map saved by another device in the same local environment gives both devices the same frame of reference (world coordinate system).
Use the networking technology of your choice to send the ARWorldMap between devices: AirDrop, cloud shares, carrier pigeon, etc all work, but Apple's Multipeer Connectivity framework is one good, easy, and secure option, so it's what Apple uses in their example projects.
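For example, the sending side can be as small as this sketch (mcSession is assumed to be an already-connected MCSession; this is not Apple's exact sample code):

import ARKit
import MultipeerConnectivity

func shareWorldMap(from session: ARSession, over mcSession: MCSession) {
    session.getCurrentWorldMap { worldMap, error in
        guard let map = worldMap else { return }   // mapping may not be ready yet
        // ARWorldMap conforms to NSSecureCoding, so it can be archived into Data and sent.
        guard let data = try? NSKeyedArchiver.archivedData(withRootObject: map,
                                                           requiringSecureCoding: true) else { return }
        try? mcSession.send(data, toPeers: mcSession.connectedPeers, with: .reliable)
    }
}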
All of this gives you only the basis for creating a shared experience — multiple copies of your app on multiple devices all using a world coordinate system that lines up with the same real-world environment. That's all you need to get multiple users experiencing the same static AR content, but if you want them to interact in AR, you'll need to use your favorite networking technology some more.
Apple's basic multiuser AR demo shows encoding an ARAnchor and sending it to peers, so that one user can tap to place a 3D model in the world and all others can see it. The SwiftShot game example builds a whole networking protocol so that all users get the same gameplay actions (like firing slingshots at each other) and synchronized physics results (like blocks falling down after being struck). Both use Multipeer Connectivity.
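The anchor-sharing part boils down to something like the following sketch (the ARSession and MCSession are assumed to come from your own setup, and the anchor name is arbitrary):

import ARKit
import MultipeerConnectivity

// Sender: place an anchor locally and broadcast it to the peers.
func place(anchorAt transform: simd_float4x4, in session: ARSession, over mcSession: MCSession) {
    let anchor = ARAnchor(name: "placedModel", transform: transform)
    session.add(anchor: anchor)
    if let data = try? NSKeyedArchiver.archivedData(withRootObject: anchor, requiringSecureCoding: true) {
        try? mcSession.send(data, toPeers: mcSession.connectedPeers, with: .reliable)
    }
}

// Receiver (called from your MCSessionDelegate when data arrives): add the same anchor locally.
func handleReceived(_ data: Data, in session: ARSession) {
    if let anchor = try? NSKeyedUnarchiver.unarchivedObject(ofClass: ARAnchor.self, from: data) {
        session.add(anchor: anchor)
    }
}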
(BTW, the second and third points above are where you get the "2 to 6" figure from #andy's answer — there's no limit on the ARKit side, because ARKit has no idea how many people may have received the world map you saved. However, Multipeer Connectivity has an 8 peer limit. And whatever game / app / experience you build on top of this may have latency / performance scaling issues as you add more peers, but that depends on your technology and design.)
Original answer below for historical interest...
This seems to be an area of active research in the iOS developer community — I met several teams trying to figure it out at WWDC last week, and nobody had even begun to crack it yet. So I'm not sure there's a "best way" yet, if even a feasible way at all.
Feature points are positioned relative to the session, and aren't individually identified, so I'd imagine correlating them between multiple users would be tricky.
The session alignment mode gravityAndHeading might prove helpful: that fixes all the directions to a (presumed/estimated to be) absolute reference frame, but positions are still relative to where the device was when the session started. If you could find a way to relate that position to something absolute — a lat/long, or an iBeacon maybe — and do so reliably, with enough precision... Well, then you'd not only have a reference frame that could be shared by multiple users, you'd also have the main ingredients for location based AR. (You know, like a floating virtual arrow that says turn right there to get to Gate A113 at the airport, or whatever.)
Another avenue I've heard discussed is image analysis. If you could place some real markers — easily machine recognizable things like QR codes — in view of multiple users, you could maybe use some form of object recognition or tracking (a ML model, perhaps?) to precisely identify the markers' positions and orientations relative to each user, and work back from there to calculate a shared frame of reference. Dunno how feasible that might be. (But if you go that route, or similar, note that ARKit exposes a pixel buffer for each captured camera frame.)
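If you do try the marker route, the per-frame pixel buffer can be handed straight to Vision's barcode detector; a rough sketch (this only finds the marker in the image, it is not a full pose estimation):

import ARKit
import Vision

func detectMarkers(in frame: ARFrame) {
    // frame.capturedImage is the raw camera pixel buffer ARKit exposes for every frame.
    let request = VNDetectBarcodesRequest { request, _ in
        guard let codes = request.results as? [VNBarcodeObservation] else { return }
        for code in codes {
            // payloadStringValue identifies the marker; boundingBox locates it in normalized image coordinates.
            // Estimating the marker's pose relative to the camera is the harder, separate step.
            print(code.payloadStringValue ?? "unknown", code.boundingBox)
        }
    }
    let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage, options: [:])
    try? handler.perform([request])
}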
Good luck!
Now, after the release of ARKit 2.0 at WWDC 2018, it's possible to make games for 2 to 6 users.
For this, you need to use the ARWorldMap class. By saving world maps and using them to start new sessions, your iOS application can now add new augmented reality capabilities: multiuser and persistent AR experiences.
AR Multiuser experiences. You can now create a shared frame of reference by sending archived ARWorldMap objects to a nearby iPhone or iPad. With several devices simultaneously tracking the same world map, you can build an experience where all users (up to 6) can share and see the same virtual 3D content (use Pixar's USDZ file format for 3D in Xcode 10+ and iOS 12+).
// On the device sharing its map: capture the current world map and send it to the peer.
session.getCurrentWorldMap { worldMap, error in
    guard let worldMap = worldMap else {
        showAlert(error)        // mapping isn't available yet or failed
        return
    }
    // archive worldMap and send it to the other device here
}

// On the receiving device, once the world map has arrived, relocalize to it:
let configuration = ARWorldTrackingConfiguration()
configuration.initialWorldMap = worldMap
session.run(configuration)
AR Persistent experiences. If you save a world map and your iOS application then becomes inactive, you can easily restore it on the next launch of the app, in the same physical environment. You can use ARAnchors from the resumed world map to place the same virtual 3D content (in USDZ or DAE format) at the same positions as in the previously saved session.
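A small sketch of that save/restore cycle (the file name is an arbitrary example, and error handling is omitted):

import ARKit

let mapURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    .appendingPathComponent("office.worldmap")   // arbitrary file name

// Save the world map before the app goes inactive.
func saveWorldMap(from session: ARSession) {
    session.getCurrentWorldMap { worldMap, _ in
        guard let map = worldMap,
              let data = try? NSKeyedArchiver.archivedData(withRootObject: map, requiringSecureCoding: true)
        else { return }
        try? data.write(to: mapURL)
    }
}

// Restore it on the next launch, in the same physical environment.
func restoreSession(into session: ARSession) {
    guard let data = try? Data(contentsOf: mapURL),
          let map = try? NSKeyedUnarchiver.unarchivedObject(ofClass: ARWorldMap.self, from: data)
    else { return }
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = map
    session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}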
These are not bulletproof answers, more like workarounds, but maybe you'll find them helpful.
All of them assume the players are in the same place.
DIY: ARKit sets up its world coordinate system quickly after the AR session has been started. So if all players can, one after another, put and align their devices at the same physical location and start the session there, there you go. Imagine the inside edges of an L-shaped square ruler fixed to whatever is available. Or any flat surface with a hole: hold the phone against the surface, looking through the hole with the camera, and (re)initialize the session.
Medium: Instead of having the player align the phone manually, detect a real-world marker with image analysis, just like #Rickster described.
Involved: Train a Core ML model to recognize iPhones and iPads and their camera location, like it's done with human faces and eyes. Aggregate the data on a server, then turn off the ML to save power. Note: make sure your model is cover-proof. :)
I'm in the process of updating my game controller framework (https://github.com/robreuss/VirtualGameController) to support a shared controller capability, so all devices would receive input from the control elements on the screens of all devices. The purpose of this enhancement is to support ARKit-based multiplayer functionality. I'm assuming developers will use the first approach mentioned by diviaki, where the general positioning of the virtual space is defined by starting the session on each device from a common point in physical space, a shared reference, and specifically I have in mind being on opposite sides of a table. All the devices would launch the game at the same time and utilize a common coordinate space relative to physical size, and using the inputs from all the controllers, the game would remain theoretically in sync on all devices. Still testing. The obvious potential problem is latency or disruption in the network and the sync falls apart, and it would be difficult to recover except by restarting the game. The approach and framework may work for some types of games fairly well - for example, straightforward arcade-style games, but certainly not for many others - for example, any game with significant randomness that cannot be coordinated across devices.
This is a hugely difficult problem - the most prominent startup that is working on it is 6D.ai.
"Multiplayer AR" is the same problem as persistent SLAM, where you need to position yourself in a map that you may not have built yourself. It is the problem that most self driving car companies are actively working on.

How does indoor navigation work in iOS using iBeacon?

I am working on a project that requires indoor navigation using iBeacons. I have been searching a lot, but I've only found paid SDKs and other tools. I know how iBeacons are used for indoor navigation, but there is a problem: I want to move the user's location from the first beacon to another, but only along a specific path. Right now, when the user moves, the reported location doesn't follow the path I defined.
Please let me know. Thanks in advance!!
While it is possible to build an indoor navigation system using beacons, it is not a trivial exercise. Beacons only provide a very small building block needed to create the overall system. Think of beacons as being a brick used to build a house. Are you up for building a house from scratch out of a pile of bricks and many other components?
You may be better off using an off-the-shelf SDK, even if it is paid, rather than building this yourself. If you do want to build it from scratch, there are several components you must build:
Beacon location configuration: You need a system to register the location of each beacon in latitude/longitude and get this configuration into the mobile app.
Position determination: Based on detecting the closest beacon(s), you must build a module that determines the position of the user's mobile phone based on the configuration above.
Map rendering engine
Coordinate system conversion from the beacon location configuration reference frame to the map coordinate frame.
Wayfinding module: Based on configured routes on the map, the wayfinding module would determine where to direct the user along these routes to get to a destination.
I worked on a team that built a beacon-based indoor nav system for the Consumer Electronics Show. It took multiple team members a few months to build the system from scratch using hundreds of beacons and low-level tools. Don't underestimate the effort involved.
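As a toy illustration of the first two components (the coordinates and the key format are made-up placeholders, and real systems do far better than snapping to the nearest beacon):

import CoreLocation

// Beacon location configuration: each beacon's position, keyed by its major/minor values.
let beaconPositions: [String: CLLocationCoordinate2D] = [
    "1-1": CLLocationCoordinate2D(latitude: 36.1215, longitude: -115.1696),
    "1-2": CLLocationCoordinate2D(latitude: 36.1216, longitude: -115.1694),
]

// Position determination, crudest possible version: snap to the closest ranged beacon.
func estimatePosition(from rangedBeacons: [CLBeacon]) -> CLLocationCoordinate2D? {
    let nearest = rangedBeacons
        .filter { $0.accuracy > 0 }                  // drop beacons with an unknown distance estimate
        .min { $0.accuracy < $1.accuracy }
    guard let beacon = nearest else { return nil }
    return beaconPositions["\(beacon.major)-\(beacon.minor)"]
}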
This answer assumes the user has no other location services (GPS, etc.); it could be achieved using multiple iBeacons.
All possible routes between the start and the end destination would need to have iBeacons on them.
Register that the user has arrived at the first beacon, and display to them your chosen route.
If you detect them getting close to any beacons which are not on your chosen route, then you know they're probably not following it.
So with enough beacons, you can accurately plot the user's location in an indoor environment (provided you know the exact location of the iBeacons beforehand).
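A sketch of that off-route check (the set of on-route beacon identifiers is a placeholder, and rangedBeacons would come from your didRangeBeacons delegate callback):

import CoreLocation

// Major/minor pairs of the beacons that lie on the chosen route.
let routeBeacons: Set<String> = ["1-1", "1-2", "1-3"]

func checkRoute(with rangedBeacons: [CLBeacon]) {
    for beacon in rangedBeacons where beacon.proximity == .immediate || beacon.proximity == .near {
        let key = "\(beacon.major)-\(beacon.minor)"
        if !routeBeacons.contains(key) {
            // The user is close to a beacon that isn't on the route, so they have probably left it.
            print("Off route, near beacon \(key)")
        }
    }
}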

Indoor mapping in iOS

I have a map of my office room, and I am trying to implement indoor mapping inside the office for iOS. I watched the WWDC 2014 video on Core Location and indoor mapping, and I also have the sample code from it. I am not sure what exactly they mean by "floor plan pixel". I have an image of the office; how can I use that image as the floor plan? I would really appreciate it if somebody could guide me, or point me to GitHub projects or other resources that do indoor mapping and tracking in iOS.
Thank you
You will need to apply for your venue to be mapped on Apple's Maps Connect website. You will have to declare that you are the manager of the venue, and instructions will follow. This involves providing Apple with blueprints of the venue and the locations of Wi-Fi base stations and (possibly) iBeacons. You will have to use Apple's dedicated app (found in the App Store) to survey the venue. When the whole process is done, you will be online: you should be able to see your venue in Apple Maps and do whatever you need to do.
Having said that, Apple still seems focused on venues that have at least one million visitors per year. If your venue is smaller, you are stuck with iBeacons and your own implementation of a positioning / proximity algorithm. Take a look at Open Tagger as an example in Swift; it will give you an idea of the task and hopefully a very good starting point.
https://github.com/PaoloLongato/open-tagger/tree/github-master
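On the original "floor plan pixel" question: it simply means a pixel position on your floor plan image. The idea in the WWDC sample is that once two anchor points are known in both coordinate systems (latitude/longitude and image pixels), every other coordinate can be projected onto the image. A simplified sketch of that conversion, assuming the image is not mirrored relative to the map:

import MapKit

struct FloorPlanAnchor {
    let coordinate: CLLocationCoordinate2D    // real-world position of the anchor
    let pixelX: Double, pixelY: Double        // matching pixel on the floor plan image
}

func floorPlanPixel(for coordinate: CLLocationCoordinate2D,
                    anchorA: FloorPlanAnchor, anchorB: FloorPlanAnchor) -> (x: Double, y: Double) {
    // Work in MKMapPoints so distances behave linearly in both axes.
    let a = MKMapPoint(anchorA.coordinate)
    let b = MKMapPoint(anchorB.coordinate)
    let p = MKMapPoint(coordinate)

    // Scale and rotation that carry the map-space anchor vector onto the pixel-space anchor vector.
    let (mapDX, mapDY) = (b.x - a.x, b.y - a.y)
    let (pixDX, pixDY) = (anchorB.pixelX - anchorA.pixelX, anchorB.pixelY - anchorA.pixelY)
    let scale = (pixDX * pixDX + pixDY * pixDY).squareRoot() / (mapDX * mapDX + mapDY * mapDY).squareRoot()
    let rotation = atan2(pixDY, pixDX) - atan2(mapDY, mapDX)

    // Apply the same similarity transform to the vector from anchor A to the queried coordinate.
    let (vx, vy) = (p.x - a.x, p.y - a.y)
    let rx = vx * cos(rotation) - vy * sin(rotation)
    let ry = vx * sin(rotation) + vy * cos(rotation)
    return (anchorA.pixelX + rx * scale, anchorA.pixelY + ry * scale)
}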

Really accurate speedometer on iPhone

I would like to develop a mobile app for iPhones that calculates the time needed to reach a given velocity. For example: I'm in my car, I open the app and choose 100 km/h, and when I accelerate the app should start counting time, stopping at the exact moment I reach 100 km/h. It should be very accurate.
I have heard about two solutions. The first is to use the accelerometer/gyroscope, but some people told me it's a bad idea because I wouldn't be able to calculate the time accurately over longer distances. The second option is to use GPS, but on the other hand it may not be as accurate as I want it to be.
So I need suggestions, which option is better and why.
My targets are iPhones 4s and newer.
If you want to be more precise than GPS, you will need some sort of external sensor. Most similar apps and concepts use a receiver that plugs into the car and that the iPhone can connect to. This has the benefit of making all of the car's sensors available to you. Here is an example: https://www.automatic.com/how-automatic-works/
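To make the GPS limitation concrete: with Core Location alone you would be timing off speed samples that arrive roughly once per second, something like the sketch below, which is why a dedicated plug-in receiver gives better resolution.

import CoreLocation

class AccelerationTimer: NSObject, CLLocationManagerDelegate {
    let targetSpeed: CLLocationSpeed = 100.0 / 3.6   // 100 km/h expressed in m/s
    var startTime: Date?

    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        for location in locations where location.speed >= 0 {    // a negative speed means "invalid"
            if startTime == nil, location.speed > 0.5 {
                startTime = location.timestamp                    // the car has started moving
            }
            if let start = startTime, location.speed >= targetSpeed {
                print("0-100 km/h in \(location.timestamp.timeIntervalSince(start)) s")
                startTime = nil
            }
        }
    }
}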
