Indoor record object location - geolocation

Many AR applications let the user select a 3D object and display it on the phone. If the user walks into a different area, the 3D object still remains at the same location. How can I achieve this? Can GPS solve this problem, given that GPS is not accurate indoors?

You can do this with the ARKit, ARCore, or Vuforia SDKs. In ARKit and ARCore you can anchor objects to physical locations, and they will stay there even if you walk into a different room. However, you might notice some drift in ARCore because the device's environmental understanding changes over time, or you might lose tracking at some point. With Vuforia you can use Extended Tracking, but it works a bit differently than ARCore and ARKit: you have to use Ground Plane or Smart Terrain to utilize Extended Tracking fully in your situation.
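For illustration, here is a minimal ARKit sketch of that idea, assuming a SceneKit-based view: a raycast from a screen tap places an ARAnchor, and world tracking keeps the anchored content in place as the user walks around. The anchor name and the placeholder sphere are mine, not anything the SDK requires:

    import ARKit

    class ViewController: UIViewController, ARSCNViewDelegate {
        @IBOutlet var sceneView: ARSCNView!

        override func viewDidLoad() {
            super.viewDidLoad()
            sceneView.delegate = self
            sceneView.addGestureRecognizer(
                UITapGestureRecognizer(target: self, action: #selector(handleTap(_:))))
        }

        override func viewWillAppear(_ animated: Bool) {
            super.viewWillAppear(animated)
            // World tracking maps the environment, so anchors keep their
            // physical position even when the user walks into another room.
            let configuration = ARWorldTrackingConfiguration()
            configuration.planeDetection = [.horizontal]
            sceneView.session.run(configuration)
        }

        @objc func handleTap(_ gesture: UITapGestureRecognizer) {
            // Raycast against detected plane geometry to find a physical position.
            let point = gesture.location(in: sceneView)
            guard let query = sceneView.raycastQuery(from: point,
                                                     allowing: .existingPlaneGeometry,
                                                     alignment: .horizontal),
                  let result = sceneView.session.raycast(query).first else { return }
            // Anchoring pins the 3D content to that real-world location.
            sceneView.session.add(anchor: ARAnchor(name: "placedObject",
                                                   transform: result.worldTransform))
        }

        // Attach your 3D model to the newly added anchor (placeholder sphere here).
        func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
            guard anchor.name == "placedObject" else { return }
            node.addChildNode(SCNNode(geometry: SCNSphere(radius: 0.05)))
        }
    }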

Related

What are the limitations for scanning and detecting 3D objects in ARKit 2.0 on iOS?

I am done with 3D object scanning and detection using ARKit 2.0. I scanned the 3D object from all sides; once scanning reached 100%, I gave the object a name and saved the ARReferenceObject and an image to the documents directory. On a button tap I then detect the scanned object and display its name and the image from the documents directory.
The object gets detected, but it takes too much time to detect. I have gone through Apple's documentation on best practices and limitations, but I still have some questions about ARKit:
Is anything wrong with how I scan or detect the object? What are the best practices for scanning a 3D object?
What are the limitations for scanning and detecting objects?
Is it possible to zoom while detecting an object?
What are the best practices for detecting an object quickly, i.e. without the detection taking too much time?
ARKit engineers give the following recommendations for scanning 3D objects:
Light the object with an illuminance of 250 to 400 lux, and ensure that it’s well-lit from all sides.
Provide a light temperature of around 6500 Kelvin (D65), similar to daylight. Avoid warm or otherwise coloured light sources.
Set the object in front of a matte, middle-grey background.
For easy object scanning, use a recent, high-performance iOS device (iPhone X/Xs/Xr, iPad Pro). Scanned objects can be detected on any ARKit-supported device, but the process of creating a high-quality scan is faster and smoother on a high-performance device.
Position the object you want to scan on a surface free of other objects (like an empty tabletop).
Also, I should add four things:
Objects with non-repetitive (unlike polka dots) and non-flat textures are preferable. Scanning objects with a "not-rich" texture takes a little longer.
Try not to scan transparent objects like a glass statuette or a jar of water. For ARKit these kinds of objects are undesirable, no matter whether their index of refraction (IOR) is 1.0 or 3.0.
Try not to scan highly reflective objects like a mirror or a chrome sphere. These types of objects are undesirable for ARKit too, because their "texture" depends on the angle of view.
Try not to scan objects with a chromatic dispersion effect, like the surface of a DVD or the precious stones in jewelry.
Using zoom when scanning is a controversial issue.
For me, the most robust scenario with ARObjectScanningConfiguration is to scan a middle-sized object from 0.5 to 1.5 meters away. In ARKit, autofocus is enabled by default.
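For reference, a minimal detection setup might look like this, assuming the scanned object was archived to a .arobject file in the documents directory as described in the question (the file name is hypothetical):

    import ARKit

    // Call from your view controller once the scan has been saved.
    func runObjectDetection(in sceneView: ARSCNView) throws {
        // Load the ARReferenceObject saved after scanning (hypothetical file name).
        let documents = FileManager.default.urls(for: .documentDirectory,
                                                 in: .userDomainMask)[0]
        let referenceObject = try ARReferenceObject(
            archiveURL: documents.appendingPathComponent("scannedObject.arobject"))

        // Detection of scanned objects runs inside a world-tracking session.
        let configuration = ARWorldTrackingConfiguration()
        configuration.detectionObjects = [referenceObject]
        sceneView.session.run(configuration,
                              options: [.resetTracking, .removeExistingAnchors])
    }

    // ARSCNViewDelegate callback: fires when the scanned object is recognized.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        if let objectAnchor = anchor as? ARObjectAnchor {
            print("Detected:", objectAnchor.referenceObject.name ?? "unnamed")
        }
    }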
All the aforementioned recommendations are general. Every object is unique, and each one takes a different amount of time to scan.
Hope this helps.

Can ARCore track moving surfaces?

ARCore can track static surfaces according to its documentation, but doesn't mention anything about moving surfaces, so I'm wondering if ARCore can track flat surfaces (of course, with enough feature points) that can move around.
Yes, you definitely can track moving surfaces and moving objects in ARCore.
If you track a static surface using ARCore, the resulting features are mainly suitable for so-called camera tracking. If you track a moving object/surface, the resulting features are mostly suitable for object tracking.
You can also mask the moving/static parts of the image and, of course, invert the six-degrees-of-freedom (translate XYZ and rotate XYZ) camera transform.
Watch this video to find out how they succeeded.
Yes, ARCore tracks feature points, estimates surfaces, and also allows access to the image data from the camera, so custom computer vision algorithms can be written as well.
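For what it's worth, the same "bring your own computer vision" idea exists in ARKit as well, where each frame exposes the raw camera buffer (shown in Swift, since the rest of this page uses it; ARCore's counterpart is Frame.acquireCameraImage()):

    import ARKit

    func processCurrentFrame(of session: ARSession) {
        // capturedImage is the raw camera pixel buffer behind the AR view;
        // it can be fed to Vision, OpenCV, or custom tracking code.
        guard let frame = session.currentFrame else { return }
        let pixelBuffer = frame.capturedImage
        let width = CVPixelBufferGetWidth(pixelBuffer)
        let height = CVPixelBufferGetHeight(pixelBuffer)
        print("Got a \(width)x\(height) camera frame for custom CV")
    }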
I guess it should be possible in theory.
However, I've tested it with some objects in my house (running an S8 and an app built with Unity and ARCore), and the problem is more or less that it refuses to even start tracking movable things like books and plates: because of the feature points of the surrounding floor, it always picks up on those first.
Edit: I did some more testing and managed to get it to track a bed sheet. It does, however, not adjust to any movement: as of now the plane stays fixed, although I saw some wobbling, but I guess that was because it tried to adjust the position of the plane once its original feature points were moved.

How to display the data as augmented only if the user is facing towards a feature in the real world?

I am currently developing an augmented reality Android application in which I would like to display the discharge data of a river, along with the river's name, as augmented features. However, I would like to show the data only if the user is facing their device camera towards the river, not in the opposite direction.
How shall I get to implement this?
I thought that there could be two ways:
feature detection: but I do not know if it would work, as the feature here (the river) is quite dynamic.
something to do with the orientation of the phone with respect to the real world; however, I do not really have an idea of how to implement this.
I think the best way to implement this is to use GPS-based augmented reality. Basically, you attach the feature to a GPS location, and when the user holds the camera in the direction of that GPS location while close to it, the detection happens. You should definitely not go for image-based feature detection.
You may follow the links below:
https://www.youtube.com/watch?v=X6djed8e4n0
http://wirebeings.com/markerless-gps-ar.html
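To make the idea concrete, the core check is just comparing the device's compass heading with the great-circle bearing from the user's GPS position to the river's coordinate. A platform-neutral sketch (written in Swift, since the rest of this page uses it; the 30-degree tolerance is an arbitrary choice):

    import Foundation

    // Initial great-circle bearing in degrees from one lat/lon to another.
    func bearing(fromLat lat1: Double, lon lon1: Double,
                 toLat lat2: Double, lon lon2: Double) -> Double {
        let phi1 = lat1 * .pi / 180, phi2 = lat2 * .pi / 180
        let deltaLambda = (lon2 - lon1) * .pi / 180
        let y = sin(deltaLambda) * cos(phi2)
        let x = cos(phi1) * sin(phi2) - sin(phi1) * cos(phi2) * cos(deltaLambda)
        return (atan2(y, x) * 180 / .pi + 360).truncatingRemainder(dividingBy: 360)
    }

    // Show the augmented data only when the camera points roughly at the target.
    func shouldShowOverlay(deviceHeading: Double, bearingToTarget: Double,
                           toleranceDegrees: Double = 30) -> Bool {
        // Smallest angular difference, handling the 360/0 degree wrap-around.
        let diff = abs(deviceHeading - bearingToTarget)
            .truncatingRemainder(dividingBy: 360)
        return min(diff, 360 - diff) <= toleranceDegrees
    }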
I hope this helps.

Scanning a 3d object in ARKit via video camera?

This is probably an insanely hard question. So far, ARKit works with 3D models that are built in 3D modelling software. I was wondering if there is a way to use the iPhone camera to scan a 3D object (let's say a car), then use it in ARKit.
Are there any open source projects available which do this on other platforms or iOS?
You are looking for software in the "photogrammetry" category. There are various software tools that will stitch your photos into 3D models; one option is Autodesk ReMake, which has a free version.
ARKit/RealityKit on an iPad/iPhone with a LiDAR scanner lets you reconstruct the current scene and obtain its 3D geometry with an occlusion material applied. This geometry allows you to occlude any object, including a human being, and physically "interact" with the generated mesh. The LiDAR's working distance is up to 5 meters.
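As a rough sketch, the RealityKit setup for that scene reconstruction looks like this (the device support check matters, since it only runs on LiDAR-equipped hardware):

    import ARKit
    import RealityKit

    func startSceneReconstruction(on arView: ARView) {
        // Scene reconstruction requires a LiDAR-equipped iPhone or iPad.
        guard ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) else { return }

        let configuration = ARWorldTrackingConfiguration()
        configuration.sceneReconstruction = .mesh

        // .occlusion hides virtual content behind the reconstructed mesh;
        // .physics lets virtual objects collide with it.
        arView.environment.sceneUnderstanding.options.insert([.occlusion, .physics])
        arView.session.run(configuration)
    }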
However, scanning a car isn't a good idea due to the paint's high reflectivity.

Difference Between Marker based and Markerless Augmented Reality

I am totally new to AR. I searched the internet about marker-based and markerless AR, but I am confused about the difference between the two.
Let's assume an AR app triggers an AR action when it scans a specific image. Is this marker-based AR or markerless AR?
Isn't the image a marker?
Also, to position the AR content, does marker-based AR use the device's accelerometer and compass, as markerless AR does?
In a marker-based AR application, the images (or the corresponding image descriptors) to be recognized are provided beforehand. In this case you know exactly what the application will search for while acquiring camera data (camera frames). Most of today's AR apps dealing with image recognition are marker-based. Why? Because it's much simpler to detect things that are hard-coded in your app.
On the other hand, a markerless AR application recognizes things that were not directly provided to it beforehand. This scenario is much more difficult to implement, because the recognition algorithm running in your AR application has to identify patterns, colors, or other features that may exist in the camera frames. For example, if your algorithm is able to identify dogs, the AR application will be able to trigger AR actions whenever a dog is detected in a camera frame, without you having to provide images of all the dogs in the world (that's an exaggeration, of course; think of it as training a model) when developing the application.
Long story short: in a marker-based AR application where image recognition is involved, the marker can be an image or the corresponding descriptors (features + key points). Usually an AR marker is a black-and-white (square) image, a QR code for example. Such markers are easily recognized and tracked, so not much processing power is needed on the end-user device for the recognition (and, optionally, tracking).
There is no need for an accelerometer or a compass in a marker-based app. The recognition library may be able to compute the pose matrix (rotation & translation) of the detected image relative to the camera of your device. If you know that, you know how far away the recognized image is and how it is rotated relative to your device's camera. And from there on, AR begins... :)
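ARKit's image detection is a concrete example of this flow: the marker images are registered up front, and each detected marker arrives with exactly that pose matrix (the "Gallery" asset catalog group name below is made up for the example):

    import ARKit

    func runImageDetection(in sceneView: ARSCNView) {
        // The markers are provided beforehand in an asset catalog resource
        // group (the group name "Gallery" is hypothetical).
        let configuration = ARWorldTrackingConfiguration()
        configuration.detectionImages =
            ARReferenceImage.referenceImages(inGroupNamed: "Gallery", bundle: nil)
        sceneView.session.run(configuration)
    }

    // Each detected marker arrives as an ARImageAnchor; its transform encodes
    // the marker's rotation & translation in world space, from which the pose
    // relative to the camera follows.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        if let imageAnchor = anchor as? ARImageAnchor {
            print("Marker pose:", imageAnchor.transform)
        }
    }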
Well, since I got downvoted without explanation, here is a little more detail on markerless tracking:
Actually, there are several possibilities for augmented reality without "visual" markers, but none of them is called markerless tracking.
The display of virtual information can be triggered by GPS, speech, or simply turning on your phone.
Also, people tend to confuse NFT (natural feature tracking) with markerless tracking. With NFT you can use a real-life picture as a marker. But it is still a "marker".
This site has a nice overview and some examples of each marker type:
Marker-Types
It's mostly in German, so beware.
What is called markerless tracking today is a technique best observed with the HoloLens or the AR framework Kudan. Markerless tracking doesn't find anything on its own; instead, you place an object at runtime somewhere in your field of view.
Markerless tracking is then used to keep this object in place. It most likely uses a combination of sensor input and solving the SLAM (simultaneous localization and mapping) problem at runtime.
EDIT: A little update. It seems the HoloLens creates its own internal geometric representation of the room; 3D objects are then placed into that virtual room. After that, the room is kept in sync with the real world. The exact technique behind this seems to be unknown, but some speculate that it is based on the Xbox Kinect technology.
Let's make it simple:
Marker-based augmented reality is when the tracked object is a black-and-white square marker. A great example that is really easy to follow is shown here: https://www.youtube.com/watch?v=PbEDkDGB-9w (you can try it out yourself).
Markerless augmented reality is when the tracked object can be anything else: a picture, a human body, a head, eyes, a hand, fingers, etc., on top of which you add virtual objects.
To sum it up, position and orientation information is the essential thing for augmented reality, and it can be provided by various sensors and methods. If that information is accurate, you can create some really good AR applications.
It looks like there may be some confusion between marker tracking and natural feature tracking (NFT). A lot of AR SDKs tout their tracking as markerless (NFT). This is still marker tracking, in that a pre-defined image or set of features is used; it's just not necessarily a black-and-white ARToolKit-type marker. Vuforia, for example, uses NFT, which still requires a marker in the literal sense. Also, in the most literal sense, hand/face/body tracking is marker tracking too, in that the marker is a shape. Markerless tracking, as the name implies, requires no prior knowledge of the world and no particular shape or object to be present to track.
You can read more about how markerless tracking is achieved here, and see multiple examples of both marker-based and markerless tracking here.
Marker-based AR uses a camera and a visual marker to determine the center, orientation, and range of its spherical coordinate system. ARToolKit was the first full-featured toolkit for marker-based tracking.
Markerless tracking is one of the best tracking methods available today. It performs active tracking and recognition of the real environment on any type of surface, without using specially placed markers, which allows for more complex applications of the augmented reality concept.
