How to display the data as augmented only if the user is facing towards a feature in the real world? - orientation

I am currently developing an augmented reality Android application in which I would like to display the discharge data of a river, along with the river name, as augmented features. However, I would like to show the data only if the user is facing their device camera towards the river, and not in the opposite direction.
How shall I get to implement this?
I thought of two possible ways:
feature detection: but I do not know if it would work, as the feature here (a river) is quite dynamic.
using the orientation of the phone with respect to the real world: however, I do not really have an idea of how to implement this.

I think the best way to implement this is to use GPS-based augmented reality. Basically, you attach the feature to a GPS location, and when the user points the camera in the direction of that GPS location and is close enough to it, the detection happens. You should definitely not go for image-based feature detection here.
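The core of the approach above is comparing the device's compass azimuth with the great-circle bearing from the user's GPS position to the feature's GPS position. A minimal sketch in Python (the function names and the 60-degree field-of-view value are my own assumptions; on Android the azimuth would come from the orientation sensors):

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees [0, 360)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360.0

def is_facing(device_azimuth_deg, user_pos, feature_pos, fov_deg=60.0):
    """True if the feature lies within the camera's assumed horizontal field of view."""
    target = bearing_deg(user_pos[0], user_pos[1], feature_pos[0], feature_pos[1])
    # Smallest signed angular difference, handling wrap-around at 0/360:
    diff = abs((device_azimuth_deg - target + 180.0) % 360.0 - 180.0)
    return diff <= fov_deg / 2.0
```

With a feature due east of the user, `is_facing(90.0, ...)` would report true and `is_facing(270.0, ...)` false, which is exactly the "facing the river vs. facing away" distinction asked about.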
You may follow the links below:
https://www.youtube.com/watch?v=X6djed8e4n0
http://wirebeings.com/markerless-gps-ar.html
I hope this helps.

Related

Indoor record object location

Many AR applications allow the user to select a 3D object and display it on the phone. If the user walks into a different area, the 3D object should still remain at the same location. How can I achieve this? Can GPS solve this problem? GPS is not accurate indoors.
You can do it by using the ARKit, ARCore or Vuforia SDKs. In ARKit and ARCore you can anchor objects to physical locations, and they will stay there even if you walk into a different room. However, you might notice some drift in the location in ARCore, because the device's environmental understanding changes over time, or you might lose tracking at some point. With Vuforia you can use extended tracking to track objects, but it is a bit different from ARCore and ARKit: you have to use a ground plane or smart terrain to utilize extended tracking fully in your situation.

track user translation movement on ios using sensors for vr game?

I'm starting to experiment with VR game development on iOS. I learned a lot from the Google Cardboard SDK. It can track the user's head orientation, but it cannot track the user's translation. This shortcoming means the user can only look at the virtual environment from a fixed location (I know I can add auto-walk to the game, but it's just not the same).
I've been searching around the internet; some say translation tracking just can't be done using sensors alone, but it seems that by also using the magnetometer you can track the user's movement path, like in this example.
I also found a different method called SLAM, which uses the camera and OpenCV to do feature tracking, then uses the feature-point information to calculate translation. There are some examples from 13th Lab, and Google has the Tango project, which is more advanced but requires hardware support.
I'm quite new to this topic, so I'm wondering: if I want to track not only head orientation but also head (or body) translation in my game, which method should I choose? SLAM seems pretty good, but it's also pretty difficult, and I think it will have a big impact on the CPU.
If you are familiar with this topic, please give some advice, thanks in advance!
If high accuracy is not important, you can try using the accelerometer to detect walking movement (basically a pedometer) and multiply the step count by an average human stride length. Direction can be determined by the compass/magnetometer.
High-accuracy tracking would likely require complex algorithms such as SLAM, though many such algorithms have already been implemented in libraries such as Vuforia or Kudan.
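The low-accuracy pedometer idea above amounts to dead reckoning: accumulate (heading, step count) pairs into a position. A rough sketch, assuming a made-up average stride length (errors accumulate quickly in practice, which is why this is only suitable when accuracy doesn't matter):

```python
import math

STRIDE_M = 0.75  # assumed average stride length in metres

def dead_reckon(start_xy, segments):
    """Integrate (heading_deg, step_count) segments into a 2D position.

    heading_deg is a compass heading (0 = north, 90 = east); the returned
    coordinates are metres east (x) and north (y) of the start point.
    """
    x, y = start_xy
    for heading_deg, step_count in segments:
        h = math.radians(heading_deg)
        d = step_count * STRIDE_M
        x += d * math.sin(h)  # east component
        y += d * math.cos(h)  # north component
    return x, y
```

For example, four steps due east from the origin lands at roughly (3.0, 0.0) metres.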
Hi, I disagree with you, Zhiquiang Li.
Look at this video made with Kudan: the tracking is quite stable, and moreover my smartphone is a fairly old phone.
https://youtu.be/_7zctFw-O0Y

Difference Between Marker based and Markerless Augmented Reality

I am totally new to AR, and I searched the internet about marker-based and markerless AR, but I am still confused about the difference between them.
Let's assume an AR app triggers an AR action when it scans a specific image. Is this marker-based AR or markerless AR?
Isn't the image a marker?
Also, to position the AR content, does marker-based AR use the device's accelerometer and compass, as markerless AR does?
In a marker-based AR application, the images (or the corresponding image descriptors) to be recognized are provided beforehand. In this case you know exactly what the application will search for while acquiring camera data (camera frames). Most AR apps today that deal with image recognition are marker-based. Why? Because it's much simpler to detect things that are hard-coded in your app.
On the other hand, a markerless AR application recognizes things that were not directly provided to the application beforehand. This scenario is much more difficult to implement, because the recognition algorithm running in your AR application has to identify patterns, colors or other features that may exist in camera frames. For example, if your algorithm is able to identify dogs, the AR application will be able to trigger AR actions whenever a dog is detected in a camera frame, without you having to provide images of every dog in the world when developing the application (this is an exaggeration, of course; in practice you would train on a database of examples).
Long story short: in a marker-based AR application where image recognition is involved, the marker can be an image or the corresponding descriptors (features + key points). Usually an AR marker is a black-and-white (square) image, a QR code for example. Such markers are easily recognized and tracked, so not a lot of processing power is needed on the end-user device to perform the recognition (and, optionally, tracking).
There is no need for an accelerometer or a compass in a marker-based app. The recognition library may be able to compute the pose matrix (rotation and translation) of the detected image relative to your device's camera. If you know that, you know how far away the recognized image is and how it is rotated relative to your device's camera. And from there on, AR begins... :)
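To illustrate the translation part of that pose: under the pinhole camera model, the distance to a marker of known physical size follows directly from its apparent size in pixels. A sketch with made-up numbers (real libraries recover the full 6-DoF pose, e.g. via OpenCV's solvePnP, rather than just distance):

```python
def marker_distance(focal_px, marker_width_m, marker_width_px):
    """Pinhole-camera distance estimate: Z = f * W / w.

    focal_px:        camera focal length in pixels (from calibration)
    marker_width_m:  real-world marker width in metres
    marker_width_px: width of the detected marker in the image, in pixels
    """
    return focal_px * marker_width_m / marker_width_px

# A 10 cm marker that appears 100 px wide to an 800 px focal-length camera
# is about 0.8 m away:
z = marker_distance(800.0, 0.1, 100.0)
```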
Well, since I got downvoted without explanation, here is a little more detail on markerless tracking:
Actually, there are several possibilities for augmented reality without "visual" markers, but none of them is called markerless tracking.
Display of the virtual information can be triggered by GPS, speech, or simply turning on your phone.
Also, people tend to confuse NFT (natural feature tracking) with markerless tracking. With NFT you can take a real-life picture as a marker, but it is still a "marker".
This site has a nice overview and some examples of each marker type:
Marker-Types
It's mostly in German, so beware.
What is called markerless tracking today is a technique best observed with the HoloLens or with the AR framework Kudan. Markerless tracking doesn't find anything on its own. Instead, you can place an object at runtime somewhere in your field of view.
Markerless tracking is then used to keep this object in place. It most likely uses a combination of sensor input and solving the SLAM (simultaneous localization and mapping) problem at runtime.
EDIT: A little update. It seems the HoloLens creates its own internal geometric representation of the room. 3D objects are then placed into that virtual room, and after that the room is kept in sync with the real world. The exact technique behind this seems to be unknown, but some speculate that it is based on the Xbox Kinect technology.
Let's make it simple:
Marker-based augmented reality is when the tracked object is a black-and-white square marker. A great example that is really easy to follow is shown here: https://www.youtube.com/watch?v=PbEDkDGB-9w (you can try it out yourself).
Markerless augmented reality is when the tracked object can be anything else: a picture, a human body, a head, eyes, a hand or fingers, etc., on top of which you add virtual objects.
To sum it up, position and orientation information is the essential thing for augmented reality, and it can be provided by various sensors and methods. If that information is accurate, you can create some really good AR applications.
It looks like there may be some confusion between marker tracking and natural feature tracking (NFT). A lot of AR SDKs tout their tracking as markerless (NFT). This is still marker tracking, in that a pre-defined image or set of features is used; it's just not necessarily a black-and-white ARToolKit-type marker. Vuforia, for example, uses NFT, which still requires a marker in the literal sense. Also, in the most literal sense, hand/face/body tracking is marker tracking too, in that the marker is a shape. Markerless tracking, as the name implies, requires no pre-knowledge of the world and no particular shape or object to be present to track.
You can read more about how Markerless tracking is achieved here, and see multiple examples of both marker-based and Markerless tracking here.
Marker-based AR uses a camera and a visual marker to determine the center, orientation and range of its spherical coordinate system. ARToolKit was the first full-featured toolkit for marker-based tracking.
Markerless tracking is one of the best tracking methods currently available. It performs active tracking and recognition of the real environment on any kind of surface, without specially placed markers, which allows more complex applications of the augmented reality concept.

Detect custom image marker in real time using OpenCV on iOS

I would like some hints, maybe more, on detecting a custom image marker in a real-time video feed. I'm using OpenCV, iPhone and the camera feed.
By custom image marker I'm referring to a predefined image, but it can be any kind of image (not a specific designed marker). For example, it can be a picture of some skyscrapers.
I've already worked with ARTags and understand how they are detected, but how would I detect this custom image and especially find out its position & orientation?
What makes a good custom image to be detected successfully?
Thanks
The most popular markers used in AR are
AR markers (a simple form of QR code), such as those detected by ARToolKit and others.
QR codes. There are plenty of examples on how to create/detect/read QR.
Dot grids. Similar to the chessboard grids used in calibration, but their detection can be more robust than that of the classical chessboard grid. OpenCV has code for dot-grid detection in its calibration module, and the OpenCV codebase offers a good starting point for extracting 3D position and orientation.
Chessboard grids. Similar to dot grids. They were the standard calibration pattern, and some people used them for marker detection for a long time, but they have lost ground to dot grids since it was discovered that dots can be detected with better accuracy.
Note:
Grids are symmetrical. I bet you already know that. But that means you will not be able to recover full orientation data from them: you will get the plane where the grid lies, but nothing more.
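That ambiguity is easy to see: a symmetric grid maps onto itself under a 180-degree rotation, so the detector cannot tell the two orientations apart. A tiny sketch:

```python
def rotate_180(points, cx, cy):
    """Rotate a set of 2D points 180 degrees about the centre (cx, cy)."""
    return {(2 * cx - x, 2 * cy - y) for (x, y) in points}

# A symmetric 4x3 dot grid with its centre at (1.5, 1.0):
grid = {(float(x), float(y)) for x in range(4) for y in range(3)}
flipped = rotate_180(grid, 1.5, 1.0)  # identical point set: orientation is ambiguous
```

Since `flipped` equals `grid`, any detector that only sees the dot positions cannot distinguish the marker from its upside-down copy, which is exactly why only the plane is recoverable.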
Final note:
Code and examples for the first two are easily found on the internet, and they are considered the best by many people. If you decide to use the grid patterns, you will have to enjoy some math and image-processing work :) and it will take more time.
This answer is no longer valid, since Vuforia is now a paid engine.
I think you should give Vuforia a try. It's an AR engine that can use any image you want as a marker. What makes a good marker for Vuforia is an image with lots of high-frequency detail.
http://www.qualcomm.com/solutions/augmented-reality
Vuforia is a free to use engine.

ios mapkit working out what you're looking at

I have an iOS application with a MapView containing several MKPolygons positioned to represent buildings on the map. Using the compass and GPS, I want to work out which of the polygons the handset is being aimed at.
I am already getting the GPS location and using the magnetometer to get the heading, so I just need to work out how to project a ray from this point and determine which polygon it hits first.
Any suggestions?
You probably want to look at something like collision detection in 3D games to solve this problem; it sounds like an identical problem to me. There is a good overview of the different algorithms in this Stack Overflow question: When to use Binary Space Partitioning, Quadtree, Octree?
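For a handful of building polygons, a brute-force 2D ray cast may be enough before reaching for spatial-partitioning structures. A sketch in Python, working in a local metre-based x/y frame (the conversion from lat/lon to local metres is omitted, and the function names are my own):

```python
import math

def first_hit(origin, heading_deg, polygons):
    """Cast a ray from origin along a compass heading (0 = north, 90 = east)
    and return the index of the nearest polygon it hits, or None."""
    ox, oy = origin
    dx, dy = math.sin(math.radians(heading_deg)), math.cos(math.radians(heading_deg))
    best_t, best_i = None, None
    for i, poly in enumerate(polygons):
        # Walk each polygon edge (closing the loop back to the first vertex):
        for (ax, ay), (bx, by) in zip(poly, poly[1:] + poly[:1]):
            ex, ey = bx - ax, by - ay
            fx, fy = ax - ox, ay - oy
            det = ex * dy - dx * ey
            if abs(det) < 1e-12:
                continue  # ray parallel to this edge
            t = (ex * fy - ey * fx) / det  # distance along the ray
            u = (dx * fy - dy * fx) / det  # parameter along the edge, hit if in [0, 1]
            if t > 0.0 and 0.0 <= u <= 1.0 and (best_t is None or t < best_t):
                best_t, best_i = t, i
    return best_i
```

With two buildings due east at 5 m and 10 m, aiming east returns the nearer one, and aiming west returns nothing; a BSP tree or quadtree only becomes worthwhile with many polygons.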
