Marker Detection Methods in Augmented Reality

I have placed markers in a room and have a moving camera. I want to detect which part of the room the camera is looking at using a marker detection technique. Can anyone tell me about methods for marker detection?

I am searching for the same thing, so somebody else may be able to help more. I have just seen these two links:
http://studierstube.icg.tugraz.at/thesis/marker_detection.pdf
http://infi.nl/blog/view/id/56/

Related

ARKit plane with real world object above it

Thanks in advance for reading my question. I am really new to ARKit and have followed several tutorials that showed me how to use plane detection and apply different textures to the planes. The feature is really amazing, but here is my question. Would it be possible for the player to place the plane all over the desired area first and then interact with the new ground? For example, could I use plane detection to put a grass texture over an area and then drive a real RC car over it, just like driving it on real grass?
I have tried out plane detection on my iPhone 6s, and what I found is that when I put anything from the real world on top of the plane surface, it simply gets covered by the plane. Could you please give me some clue as to whether it is possible to make the plane stay on the ground without covering the real-world object?
I think this is what you are searching for:
ARKit hide objects behind walls
Another way, I think, is to track the position of the real-world object, for example with Apple's Turi Create or Core ML (or both), and then avoid drawing your content at the affected position.
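For reference, the occlusion trick from that linked question is to render invisible geometry that still writes to the depth buffer, so any virtual content behind it gets clipped. A minimal SceneKit sketch (the box size and placement are illustrative assumptions, not values from the question):

```swift
import SceneKit

// A material that draws no color but still fills the depth buffer,
// so anything rendered behind it is occluded.
func makeOcclusionMaterial() -> SCNMaterial {
    let material = SCNMaterial()
    material.colorBufferWriteMask = []   // invisible
    material.writesToDepthBuffer = true  // but occluding
    return material
}

// Hypothetical occluder roughly matching a real-world object.
let occluder = SCNNode(geometry: SCNBox(width: 0.2, height: 0.2,
                                        length: 0.2, chamferRadius: 0))
occluder.geometry?.materials = [makeOcclusionMaterial()]
occluder.renderingOrder = -1  // render before the regular content
```

You would still need to position the occluder where the real object actually is, which is the tracking problem mentioned above.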
Tracking moving objects is not supported, and that is actually what would be needed to make a real object interact with a virtual one.
That said, I would recommend using 2D image recognition: "read" every camera frame to detect the object while it moves in the camera's view space. Look for the AVCaptureVideoDataOutputSampleBufferDelegate protocol on Apple's developer site.
Share your code and I could help with some ideas
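In the meantime, here is a minimal sketch of receiving camera frames through that delegate (session configuration is simplified and error handling is omitted; the queue label is arbitrary):

```swift
import AVFoundation

final class FrameReader: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()

    func start() {
        guard let camera = AVCaptureDevice.default(for: .video),
              let input = try? AVCaptureDeviceInput(device: camera) else { return }
        session.addInput(input)

        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "frames"))
        session.addOutput(output)
        session.startRunning()
    }

    // Called once per camera frame; run your 2D detection here,
    // e.g. by handing the pixel buffer to a Vision / Core ML request.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        _ = pixelBuffer
    }
}
```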

How to display the data as augmented only if the user is facing towards a feature in the real world?

I am currently developing an augmented reality Android application in which I would like to display the discharge data of a river, along with the river name, as augmented features. However, I would like to show the augmented data only if the user is facing their device camera towards the river, and not in the opposite direction.
How should I implement this?
I thought that there could be two ways:
Feature detection: but I do not know whether it would work, as the feature here, the river, is quite dynamic.
Something involving the orientation of the phone with respect to the real world; however, I do not really have an idea of how to implement this.
I think the best way to implement this is to use GPS-based augmented reality. Basically, you attach the feature to a GPS location, and when the user holds the camera in the direction of that GPS location while close to it, the detection happens. You should definitely not go for image-based feature detection.
You may follow the links below:
https://www.youtube.com/watch?v=X6djed8e4n0
http://wirebeings.com/markerless-gps-ar.html
I hope this helps.
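To make the check concrete: the core of a GPS-based approach is comparing the device's compass heading with the bearing from the user to the feature. The question targets Android, but the geometry is platform-independent; here is a sketch in Swift (the 30° tolerance is an arbitrary assumption):

```swift
import Foundation

// Great-circle initial bearing from point 1 to point 2, in degrees.
func bearing(fromLat lat1: Double, lon lon1: Double,
             toLat lat2: Double, lon lon2: Double) -> Double {
    let p1 = lat1 * .pi / 180, p2 = lat2 * .pi / 180
    let dl = (lon2 - lon1) * .pi / 180
    let y = sin(dl) * cos(p2)
    let x = cos(p1) * sin(p2) - sin(p1) * cos(p2) * cos(dl)
    let deg = atan2(y, x) * 180 / .pi
    return (deg + 360).truncatingRemainder(dividingBy: 360)
}

// Show the overlay only when the camera heading points at the river.
func isFacing(heading: Double, userLat: Double, userLon: Double,
              featureLat: Double, featureLon: Double,
              toleranceDegrees: Double = 30) -> Bool {
    let target = bearing(fromLat: userLat, lon: userLon,
                         toLat: featureLat, lon: featureLon)
    // Smallest absolute angle between the two directions.
    let diff = abs((heading - target + 540).truncatingRemainder(dividingBy: 360) - 180)
    return diff <= toleranceDegrees
}
```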

Difference Between Marker based and Markerless Augmented Reality

I am totally new to AR. I have searched the internet for information about marker-based and markerless AR, but I am still confused about the difference.
Let's assume an AR app triggers an AR action when it scans specific images. Is this marker-based AR or markerless AR?
Isn't the image a marker?
Also, to position the AR content, does marker-based AR use the device's accelerometer and compass, as markerless AR does?
In a marker-based AR application the images (or the corresponding image descriptors) to be recognized are provided beforehand. In this case you know exactly what the application will search for while acquiring camera data (camera frames). Most of today's AR apps dealing with image recognition are marker-based. Why? Because it's much simpler to detect things that are hard-coded in your app.
On the other hand, a marker-less AR application recognizes things that were not directly provided to the application beforehand. This scenario is much more difficult to implement, because the recognition algorithm running in your AR application has to identify patterns, colors or other features that may exist in camera frames. For example, if your algorithm is able to identify dogs, the AR application will be able to trigger AR actions whenever a dog is detected in a camera frame, without you having to provide images of all the dogs in the world when developing the application (an exaggeration, of course; in practice you would train a model on a dataset).
Long story short: in a marker-based AR application where image recognition is involved, the marker can be an image or the corresponding descriptors (features + key points). Usually an AR marker is a black-and-white (square) image, a QR code for example. These markers are easily recognized and tracked => not a lot of processing power is needed on the end-user device to perform the recognition (and optionally tracking).
There is no need for an accelerometer or a compass in a marker-based app. The recognition library may be able to compute the pose matrix (rotation & translation) of the detected image relative to the camera of your device. If you know that, you know how far away the recognized image is and how it is rotated relative to your device's camera. And from then on, AR begins... :)
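As an illustration of that pose idea, ARKit's image detection hands you exactly such a transform for each recognized reference image. A stripped-down sketch (the reference-image setup and session configuration are omitted):

```swift
import ARKit

final class ImagePoseDelegate: NSObject, ARSessionDelegate {
    // Called when ARKit recognizes one of the provided reference images.
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for case let imageAnchor as ARImageAnchor in anchors {
            // 4x4 pose matrix: rotation + translation of the detected
            // image in world space.
            let pose = imageAnchor.transform
            let position = simd_make_float3(pose.columns.3)
            print("Detected \(imageAnchor.referenceImage.name ?? "image") at \(position)")
        }
    }
}
```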
Well, since I got downvoted without explanation, here is a little more detail on markerless tracking:
Actually, there are several possibilities for augmented reality without "visual" markers, but none of them is called markerless tracking.
Showing the virtual information can be triggered by GPS, speech, or simply turning on your phone.
Also, people tend to confuse NFT (natural feature tracking) with markerless tracking. With NFT you can use a real-life picture as a marker. But it is still a "marker".
This site has a nice overview and some examples of each marker type:
Marker-Types
It's mostly in German, so be aware.
What we call markerless tracking today is a technique best observed with the HoloLens (and its own development platform) or the AR framework Kudan. Markerless tracking doesn't find anything on its own. Instead, you can place an object at runtime somewhere in your field of view.
Markerless tracking is then used to keep this object in place. It most likely uses a combination of sensor input and solving the SLAM (simultaneous localization and mapping) problem at runtime.
EDIT: A little update. It seems the HoloLens creates its own internal geometric representation of the room; 3D objects are then placed into that virtual room. After that, the room is kept in sync with the real world. The exact technique behind this seems to be unknown, but some speculate that it is based on the Xbox Kinect technology.
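To connect this to practice: in a SLAM-based framework like ARKit, placing an object at runtime and letting the tracker keep it in place looks roughly like this (a sketch; it assumes an existing ARSCNView running a world-tracking session):

```swift
import ARKit

// Place an anchor on the real-world surface under a screen tap.
// The tracker then keeps the anchor's pose fixed in the world
// as the SLAM map is refined over time.
func placeObject(at screenPoint: CGPoint, in sceneView: ARSCNView) {
    guard let query = sceneView.raycastQuery(from: screenPoint,
                                             allowing: .estimatedPlane,
                                             alignment: .any),
          let result = sceneView.session.raycast(query).first else { return }
    let anchor = ARAnchor(name: "placedObject", transform: result.worldTransform)
    sceneView.session.add(anchor: anchor)
}
```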
Let's make it simple:
Marker-based augmented reality is when the tracked object is a black-and-white square marker. A great example that is really easy to follow is shown here: https://www.youtube.com/watch?v=PbEDkDGB-9w (you can try it out yourself).
Markerless augmented reality is when the tracked object can be anything else: a picture, a human body, a head, eyes, hands or fingers, etc., on top of which you add virtual objects.
To sum up, position and orientation information is the essential thing for augmented reality, and it can be provided by various sensors and methods. If you have that information with good accuracy, you can create some really good AR applications.
It looks like there may be some confusion between marker tracking and natural feature tracking (NFT). A lot of AR SDKs tout their tracking as markerless (NFT). This is still marker tracking, in that a pre-defined image or set of features is used; it's just not necessarily a black-and-white ARToolKit-type marker. Vuforia, for example, uses NFT, which still requires a marker in the literal sense. Also, in the most literal sense, hand/face/body tracking is also marker tracking, in that the marker is a shape. Markerless tracking, inherent to the name, requires no prior knowledge of the world and no particular shape or object to be present to track.
You can read more about how markerless tracking is achieved here, and see multiple examples of both marker-based and markerless tracking here.
Marker-based AR uses a camera and a visual marker to determine the center, orientation, and range of its spherical coordinate system. ARToolKit was the first full-featured toolkit for marker-based tracking.
Markerless tracking is currently one of the best tracking methods. It performs active tracking and recognition of the real environment on any kind of surface, without specially placed markers, which allows more complex applications of the augmented reality concept.

What method to use (in OpenCV) to associate markers with the proper bone structure?

I'm trying to write a C++ program using the OpenCV library that will reconstruct 3D points from the corresponding 2D markers placed on a human model.
But I have a question: how does the commercial mocap (motion capture) industry figure out which markers belong to which bone?
What I mean is: let's suppose there are three markers placed on the left upper arm. What method do they use to associate these three markers with the left upper arm from frame to frame?
After all, a marker could belong to the right upper arm, or to any other bone area such as the chest, the femur, etc.
So what process do they implement to differentiate between markers and assign each marker to the proper bone?
Do they use optical flow or SIFT to track markers, with the markers labelled with the proper bones in frame 1? But even if the mocap industry uses these methods, aren't they very time-consuming? I saw a video on YouTube where they associate and reconstruct markers in real time.
Could you kindly tell me what procedure the commercial mocap industry follows to match points to the individual parts of the skeleton?
After all, you need to do this because you have to write the xRot, yRot and zRot (rotations about the x, y and z axes) of the bones into a .bvh file so that you can view the 2D motion in 3D.
So what's the secret?
For motion capture, or tracking objects with markers in general, the way to go is to track the markers themselves between two frames and to keep track of the distances between the markers. A combination of this information is used to decide whether a marker is the same as one found nearby in the previous frame.
Also, these systems often use multiple cameras and calibration objects for which the marker positions are known, so the correlation between the cameras can be determined. The algorithms doing this detection in commercial mocap solutions are highly advanced.
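As a toy illustration of the frame-to-frame part, here is a greedy nearest-neighbour association sketch; real systems add motion prediction, inter-marker distance constraints, and multi-camera triangulation on top. The maxJump threshold is an arbitrary assumption:

```swift
import Foundation

struct Marker {
    var label: String       // e.g. "leftUpperArm_1", assigned in frame 1
    var x: Double, y: Double
}

// Carry each label forward to the closest detection in the new frame,
// as long as the marker has not jumped farther than `maxJump`.
func associate(previous: [Marker],
               current: [(x: Double, y: Double)],
               maxJump: Double = 15.0) -> [Marker] {
    var unclaimed = current
    var result: [Marker] = []
    for old in previous {
        guard let (index, nearest) = unclaimed.enumerated().min(by: {
            hypot($0.element.x - old.x, $0.element.y - old.y) <
            hypot($1.element.x - old.x, $1.element.y - old.y)
        }) else { break }
        if hypot(nearest.x - old.x, nearest.y - old.y) <= maxJump {
            result.append(Marker(label: old.label, x: nearest.x, y: nearest.y))
            unclaimed.remove(at: index)
        }
    }
    return result
}
```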

iOS MapKit: working out what you're looking at

I have an iOS application with a map view containing several MKPolygons positioned to represent buildings on the map. Using the compass and GPS, I want to be able to work out which of the polygons the handset is being aimed at.
I am already getting the GPS location and using the magnetometer to get the heading, so I just need to work out how to project a ray from this point and determine which polygon it hits first.
Any suggestions?
You probably want to look at something like collision detection in 3D games to solve this problem; it sounds like an identical problem to me. There is a good overview of the different algorithms in this Stack Overflow question: When to use Binary Space Partitioning, Quadtree, Octree?
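For a handful of buildings, though, a brute-force 2D version may be enough before reaching for spatial partitioning: march along the compass heading and return the first polygon containing the sample point. A sketch (it treats latitude/longitude as locally planar, which is reasonable over a few hundred metres; the step and range values are arbitrary assumptions):

```swift
import CoreLocation

// Standard even-odd crossing test: is the point inside the polygon?
func contains(point p: CLLocationCoordinate2D,
              polygon: [CLLocationCoordinate2D]) -> Bool {
    var inside = false
    var j = polygon.count - 1
    for i in 0..<polygon.count {
        let a = polygon[i], b = polygon[j]
        if (a.latitude > p.latitude) != (b.latitude > p.latitude),
           p.longitude < (b.longitude - a.longitude)
               * (p.latitude - a.latitude)
               / (b.latitude - a.latitude) + a.longitude {
            inside.toggle()
        }
        j = i
    }
    return inside
}

// March along the heading (0° = north) and return the index of the
// first polygon hit, or nil if nothing is hit within maxMeters.
func firstPolygonHit(from origin: CLLocationCoordinate2D,
                     headingDegrees: Double,
                     polygons: [[CLLocationCoordinate2D]],
                     stepMeters: Double = 5,
                     maxMeters: Double = 500) -> Int? {
    let theta = headingDegrees * .pi / 180
    let metersPerDegLat = 111_320.0
    let metersPerDegLon = metersPerDegLat * cos(origin.latitude * .pi / 180)
    var d = stepMeters
    while d <= maxMeters {
        let p = CLLocationCoordinate2D(
            latitude: origin.latitude + d * cos(theta) / metersPerDegLat,
            longitude: origin.longitude + d * sin(theta) / metersPerDegLon)
        if let hit = polygons.firstIndex(where: { contains(point: p, polygon: $0) }) {
            return hit
        }
        d += stepMeters
    }
    return nil
}
```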
