Indoor navigation using iBeacon: accuracy is changing rapidly (iOS)

I am building an indoor navigation application using iBeacons. For that I am using the accuracy (distance estimate) reported for each beacon, but it fluctuates rapidly. Because the value keeps changing, the X and Y coordinates I calculate for the user's location also vary, even when I am standing still. Please help me make the accuracy stable when I am not moving.
Thanks in advance

I suggest you read the following article about experience with two positioning algorithms, trilateration and non-linear regression: R/GA Tech Blog
You will find the complete iOS app from these guys, implementing both algorithms, on GitHub.
The app is very helpful for understanding the difficulties of indoor navigation and for experimenting with it.
Also please note: Apple announced indoor positioning with the Core Location framework in iOS 8 at WWDC 2014, but a couple of months later they stopped the program. There was a lot of buzz about the new feature; Apple then decided to offer the program only to big companies. You can register for it here.
It is important to understand Apple's strategy: iBeacon technology is for proximity and advertising, in contrast to the Core Location indoor positioning features in iOS 8. The former is an addition to the latter, not a replacement.
There is also an interesting article on the Estimote blog about the physics of beacon tech. The useful part for you begins with the sentence "When we started building it, we were experimenting with a method called trilateration."
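As a rough illustration of the trilateration idea from those articles, here is a minimal 2-D sketch in Swift. It solves the linear system obtained by subtracting the circle equations pairwise; the beacon coordinates and distances are placeholders you would feed from your own setup:

    struct Beacon2D {
        let x: Double, y: Double   // known beacon position (metres)
        let d: Double              // estimated distance to the user (metres)
    }

    /// Solves for (x, y) given three beacons; returns nil when the
    /// beacons are (nearly) collinear and no unique fix exists.
    func trilaterate(_ p1: Beacon2D, _ p2: Beacon2D, _ p3: Beacon2D) -> (x: Double, y: Double)? {
        let ax = 2 * (p2.x - p1.x), ay = 2 * (p2.y - p1.y)
        let bx = 2 * (p3.x - p1.x), by = 2 * (p3.y - p1.y)
        let c = p1.d * p1.d - p2.d * p2.d + p2.x * p2.x - p1.x * p1.x + p2.y * p2.y - p1.y * p1.y
        let e = p1.d * p1.d - p3.d * p3.d + p3.x * p3.x - p1.x * p1.x + p3.y * p3.y - p1.y * p1.y
        let det = ax * by - bx * ay
        guard abs(det) > 1e-9 else { return nil }
        return ((c * by - e * ay) / det, (ax * e - bx * c) / det)   // Cramer's rule
    }

With noisy iBeacon distances the three circles rarely intersect in a single point, which is exactly why the article moves on to regression and why smoothing (see below) matters.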

Indoor positioning using beacons is extremely hard, precisely because of the fluctuations in the distance (accuracy) estimates. You could try some averaging and smoothing algorithms, but that's just the beginning of implementing reliable, beacon-based indoor positioning.
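As a first step, here is a minimal sliding-window average over the per-beacon accuracy values CoreLocation reports; the window size of 10 is an arbitrary choice to tune:

    import CoreLocation

    /// Smooths the `accuracy` (distance estimate, in metres) reported
    /// for a single beacon across successive ranging callbacks.
    final class DistanceSmoother {
        private var samples: [Double] = []
        private let windowSize: Int

        init(windowSize: Int = 10) {
            self.windowSize = windowSize
        }

        /// Feed each CLBeacon received in the ranging delegate callback.
        func add(_ beacon: CLBeacon) -> Double? {
            // A negative accuracy means CoreLocation had no estimate.
            guard beacon.accuracy >= 0 else { return average }
            samples.append(beacon.accuracy)
            if samples.count > windowSize { samples.removeFirst() }
            return average
        }

        var average: Double? {
            samples.isEmpty ? nil : samples.reduce(0, +) / Double(samples.count)
        }
    }

A moving average trades responsiveness for stability; a Kalman or exponential low-pass filter is the usual next step.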
Estimote is working on a ready-made library for indoor location with beacons: https://github.com/Estimote/iOS-Indoor-SDK; you might want to give it a try. It only works with Estimote beacons, though.

Related

How does the Google Measure app work on Android?

I can see that it can measure horizontal and vertical distances with +/-5% accuracy. I have a use case in which I am trying to formulate an algorithm to detect distances between two points in an image or video. Any pointers to how it could be working would be very useful to me.
I don't think the source is available for the Android Measure app, but it is ARCore-based, and I would expect it uses a combination of triangulation and knowledge it reads from the 'scene' (to use the Google ARCore term) it is viewing.
Like a human estimating the distance to a point by basic triangulation between two eyes and the point being looked at, a measurement app can look at multiple views of the scene and measure, using its sensors, how far the device has moved between the different views. Even a small movement allows the same triangulation techniques to be used.
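As a rough sketch of that triangle geometry (illustrative only, not the Measure app's actual algorithm): if the device moves a known baseline between two views and measures the bearing to the same point from each end, the distance follows directly:

    import Foundation

    /// Distance from the baseline to a point, given the bearing to the
    /// point (in radians, measured from the baseline) at each end.
    func distanceToPoint(baseline: Double, angleA: Double, angleB: Double) -> Double {
        // Height of the triangle: h = baseline / (cot A + cot B)
        return baseline / (1 / tan(angleA) + 1 / tan(angleB))
    }

    // 10 cm of device movement, bearings of 80 and 85 degrees:
    let d = distanceToPoint(baseline: 0.10,
                            angleA: 80 * .pi / 180,
                            angleB: 85 * .pi / 180)   // ~0.38 m

ARCore does this at scale, fusing many views with IMU data, which is precisely the information missing from a bare image or video file.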
The reason for mentioning all this is to highlight that you do not have the same tools or information available if you are analysing image or video files without any position or sensor data. Hence, the Google Measure app may not be the best template for your particular problem.

Track user translation movement on iOS using sensors for a VR game?

I'm starting to experiment with VR game development on iOS. I learned a lot from the Google Cardboard SDK. It can track the user's head orientation, but it cannot track the user's translation. This limitation means the user can only look at the virtual environment from a fixed location (I know I can add auto-walk to the game, but it's just not the same).
Searching around the internet, some say translation tracking just can't be done with sensors alone, but it seems that by also using the magnetometer you can track the user's movement path, like this example.
I also found a different method called SLAM, which uses the camera and OpenCV to do feature tracking, then uses the feature point information to calculate translation. Here are some examples from 13th Lab. And Google has the Tango project, which is more advanced but requires hardware support.
I'm quite new to this topic, so I'm wondering: if I want to track not only head orientation but also head (or body) translation in my game, which method should I choose? SLAM seems pretty good, but it's also pretty difficult, and I think it will have a big impact on the CPU.
If you are familiar with this topic, please give me some advice. Thanks in advance!
If high accuracy is not important, you can try using the accelerometer to detect walking movement (basically a pedometer) and multiply the step count by an average human step length. Direction can be determined by the compass/magnetometer.
High-accuracy tracking would likely require complex algorithms such as SLAM, though many such algorithms have already been implemented in VR/AR libraries such as Vuforia or Kudan.
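A minimal sketch of that pedometer-plus-compass idea, assuming Core Motion's CMPedometer and an arbitrary 0.7 m step length (both of which are real accuracy killers in practice):

    import Foundation
    import CoreMotion
    import CoreLocation

    /// Dead reckoning: step counts from CMPedometer, heading from
    /// CLLocationManager. Drift accumulates quickly; treat as a toy.
    final class DeadReckoner: NSObject, CLLocationManagerDelegate {
        private let pedometer = CMPedometer()
        private let locationManager = CLLocationManager()
        private let stepLength = 0.7          // metres, assumed average
        private var heading = 0.0             // radians from magnetic north
        private var lastSteps = 0
        private(set) var x = 0.0, y = 0.0     // position relative to start

        func start() {
            locationManager.delegate = self
            locationManager.startUpdatingHeading()
            pedometer.startUpdates(from: Date()) { [weak self] data, _ in
                guard let self = self,
                      let steps = data?.numberOfSteps.intValue else { return }
                let distance = Double(steps - self.lastSteps) * self.stepLength
                self.lastSteps = steps
                self.x += distance * sin(self.heading)
                self.y += distance * cos(self.heading)
            }
        }

        func locationManager(_ manager: CLLocationManager,
                             didUpdateHeading newHeading: CLHeading) {
            heading = newHeading.magneticHeading * .pi / 180
        }
    }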
I disagree with you, Zhiquiang Li.
Look at this video made with Kudan: the tracking is quite stable, and moreover my smartphone is a fairly old phone.
https://youtu.be/_7zctFw-O0Y

Does altitude variance affect geofencing?

I need to implement an auto check-in feature for managing office attendance. Will the accuracy of geofencing be affected if the office is situated on the 50th floor? That is, does altitude variance affect the accuracy of geofencing?
Altitude is not an issue (geofencing is based on lat/long coordinates, which are the same regardless of the floor). The issue is going to be GPS accuracy. Because GPS signals do not penetrate walls well, the GPS coordinates will nearly always place you outside the building, not inside it, let alone identify which office room someone has entered.
If this accuracy level is acceptable for your project, then plain GPS works. If you need to know that the person has actually entered the building or a specific office, you will need to use beacons.
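For reference, this is all a lat/long geofence amounts to in CoreLocation; the coordinates and radius below are placeholders, and note that there is no altitude or floor parameter at all:

    import CoreLocation

    let manager = CLLocationManager()
    manager.requestAlwaysAuthorization()   // region monitoring needs Always

    // Hypothetical office location; radius in metres.
    let officeCenter = CLLocationCoordinate2D(latitude: 37.3349, longitude: -122.0090)
    let office = CLCircularRegion(center: officeCenter, radius: 100, identifier: "office")
    office.notifyOnEntry = true
    office.notifyOnExit = true
    manager.startMonitoring(for: office)

    // The delegate's locationManager(_:didEnterRegion:) and
    // locationManager(_:didExitRegion:) callbacks then drive the check-in.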
(Disclosure: I work for Proximi.io, a technology-agnostic positioning platform)

Indoor Positioning System based on inertial sensors

I am a student, and I am developing an iOS app to track indoor position.
My idea is that, starting from a given reference point (a known position), I can use the inertial sensors in my iPhone (accelerometer, gyroscope, etc.) to track the phone as it moves, and display where the user is going on an indoor map (a simple floor plan).
But the problem is that I have no idea how to combine these sensors to give me an actual position.
Does someone have some experience with indoor positioning systems using inertial sensors that they can share with me?
Thank you so much.
One solution is to use Bluetooth beacons.
They connect to your iPhone via Bluetooth, and based on their signal strength you can estimate the distance to each one of them, and from those distances your indoor position.
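For illustration, the usual log-distance path-loss model behind such estimates; the txPower (calibrated RSSI at 1 m) and environment factor n are assumptions you must calibrate per beacon and per building:

    import Foundation

    /// n is roughly 2 in free space and higher indoors (walls, people).
    func estimatedDistance(rssi: Double, txPower: Double = -59, n: Double = 2.0) -> Double {
        return pow(10, (txPower - rssi) / (10 * n))
    }

    let d = estimatedDistance(rssi: -75)   // ~6.3 m with these defaults

On iOS you normally don't compute this yourself: CoreLocation's CLBeacon already exposes the result as its accuracy property.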
Read more: Indoor Positioning

Difference Between Marker-Based and Markerless Augmented Reality

I am totally new to AR, and I searched the internet about marker-based and markerless AR, but I am confused about the difference between them.
Let's assume an AR app triggers an AR action when it scans a specific image. Is this marker-based or markerless AR?
Isn't the image a marker?
Also, to position the AR content, does marker-based AR use the device's accelerometer and compass, as markerless AR does?
In a marker-based AR application, the images (or the corresponding image descriptors) to be recognized are provided beforehand. In this case you know exactly what the application will search for while acquiring camera data (camera frames). Most of today's AR apps dealing with image recognition are marker-based. Why? Because it's much simpler to detect things that are hard-coded in your app.
On the other hand, a markerless AR application recognizes things that were not directly provided to it beforehand. This scenario is much more difficult to implement, because the recognition algorithm running in your AR application has to identify patterns, colors, or other features that may exist in camera frames. For example, if your algorithm can identify dogs, the AR application will be able to trigger AR actions whenever a dog is detected in a camera frame, without you having to provide images of every dog in the world (an exaggeration, of course; in practice you would train a model on a database) when developing the application.
Long story short: in a marker-based AR application involving image recognition, the marker can be an image or the corresponding descriptors (features + key points). Usually an AR marker is a black-and-white (square) image, a QR code for example. These markers are easily recognized and tracked, so not a lot of processing power is needed on the end-user device to perform the recognition (and, optionally, tracking).
There is no need for an accelerometer or a compass in a marker-based app. The recognition library may be able to compute the pose matrix (rotation and translation) of the detected image relative to your device's camera. If you know that, you know how far away the recognized image is and how it is rotated relative to the camera. And from there, AR begins... :)
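As one concrete example (ARKit here, which is just one such recognition library; the "AR Resources" asset-group name is a placeholder), a detected marker image hands you exactly that pose:

    import ARKit

    final class MarkerViewController: UIViewController, ARSCNViewDelegate {
        @IBOutlet var sceneView: ARSCNView!

        override func viewWillAppear(_ animated: Bool) {
            super.viewWillAppear(animated)
            let config = ARWorldTrackingConfiguration()
            // Marker images provided beforehand, as described above.
            config.detectionImages =
                ARReferenceImage.referenceImages(inGroupNamed: "AR Resources", bundle: nil)
            sceneView.delegate = self
            sceneView.session.run(config)
        }

        func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
            guard let imageAnchor = anchor as? ARImageAnchor else { return }
            // imageAnchor.transform is the 4x4 pose matrix (rotation and
            // translation) of the detected image in world space.
            print(imageAnchor.transform)
        }
    }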
Well, since I got downvoted without explanation, here is a little more detail on markerless tracking:
Actually, there are several possibilities for augmented reality without "visual" markers, but none of them is called markerless tracking.
Showing the virtual information can be triggered by GPS, speech, or simply turning on your phone.
Also, people tend to confuse NFT (natural feature tracking) with markerless tracking. With NFT you can take a real-life picture as a marker, but it is still a "marker".
This site has a nice overview and some examples of each marker type:
Marker-Types
It's mostly in German, so be warned.
What is called markerless tracking today is a technique best observed with the HoloLens or the AR framework Kudan. Markerless tracking doesn't find anything on its own; instead, you place an object at runtime somewhere in your field of view.
Markerless tracking is then used to keep this object in place. It most likely uses a combination of sensor input and solving the SLAM (simultaneous localization and mapping) problem at runtime.
EDIT: A little update. It seems the HoloLens creates its own internal geometric representation of the room. 3D objects are then placed into that virtual room, and the room is kept in sync with the real world. The exact technique behind this seems to be unknown; some speculate that it is based on the Xbox Kinect technology.
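As a concrete sketch of that place-at-runtime idea (using ARKit world tracking as an accessible stand-in for the HoloLens/Kudan approach described above):

    import ARKit

    // Inside a view controller that owns an ARSCNView running an
    // ARWorldTrackingConfiguration:
    @objc func handleTap(_ gesture: UITapGestureRecognizer) {
        let point = gesture.location(in: sceneView)
        // Ask the tracker where the tap lands in the mapped world...
        guard let query = sceneView.raycastQuery(from: point,
                                                 allowing: .estimatedPlane,
                                                 alignment: .any),
              let result = sceneView.session.raycast(query).first else { return }
        // ...and pin an anchor there; SLAM-style tracking keeps it in place.
        sceneView.session.add(anchor: ARAnchor(transform: result.worldTransform))
    }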
Let's make it simple:
Marker-based augmented reality is when the tracked object is a black-and-white square marker. A great example that is really easy to follow is shown here: https://www.youtube.com/watch?v=PbEDkDGB-9w (you can try it out yourself).
Markerless augmented reality is when the tracked object can be anything else: a picture, a human body, a head, eyes, a hand or fingers, etc., on top of which you add virtual objects.
To sum up, position and orientation information is the essential ingredient of augmented reality, and it can be provided by various sensors and methods. If that information is accurate, you can create some really good AR applications.
It looks like there may be some confusion between marker tracking and natural feature tracking (NFT). A lot of AR SDKs tout their tracking as markerless (NFT). This is still marker tracking, in that a pre-defined image or set of features is used; it's just not necessarily a black-and-white ARToolKit-type marker. Vuforia, for example, uses NFT, which still requires a marker in the literal sense. Also, in the most literal sense, hand/face/body tracking is marker tracking too, in that the marker is a shape. Markerless tracking, as the name implies, requires no prior knowledge of the world and no particular shape or object to be present to track.
You can read more about how markerless tracking is achieved here, and see multiple examples of both marker-based and markerless tracking here.
Marker-based AR uses a camera and a visual marker to determine the center, orientation, and range of its spherical coordinate system. ARToolKit was the first full-featured toolkit for marker-based tracking.
Markerless tracking is one of the best tracking methods currently available. It performs active tracking and recognition of the real environment on any type of surface, without specially placed markers, and allows more complex applications of the augmented reality concept.
