Finger and face tracking using ARKit 3 - iOS

Is finger tracking supported by ARKit 3? And if so, can ARKit 3 be used in conjunction with the face detection API for the TrueDepth camera module to spot the position of a certain finger with respect to the eyes, nose and mouth?
If not, is there an easy way to do finger tracking without going as deep as the Metal APIs?
Note: by finger tracking, I mean tracking the number of fingers and/or which finger(s) are visible.

It's possible that you can get pretty close positions for the fingers of a tracked body using ARKit 3's human body tracking feature (see Apple's Capturing Body Motion in 3D sample code), but human body tracking requires ARBodyTrackingConfiguration, and face tracking is not supported under that configuration. Also, finger joints are not tracked, so while you can get their approximate location from a joint that is tracked (i.e., the wrist), ARKit won't tell you which fingers are extended or retracted.
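As a rough illustration of what is and isn't exposed, here is a minimal sketch of reading the hand joint from ARKit 3's body tracking (view setup and rendering are omitted; the delegate class name is mine, and .rightHand is one of the skeleton's built-in joints, since no per-finger joints exist):

```swift
import ARKit

// Sketch: ARKit 3 body tracking reports one skeleton per tracked person,
// but individual finger joints are not part of that skeleton, so the closest
// thing to a "finger position" is the hand/wrist joint.
final class BodyTrackingDelegate: NSObject, ARSessionDelegate {

    func startBodyTracking(in session: ARSession) {
        guard ARBodyTrackingConfiguration.isSupported else { return } // A12+ devices only
        session.run(ARBodyTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let bodyAnchor as ARBodyAnchor in anchors {
            // Hand joint transform in body-anchor space; nil if not tracked yet.
            if let handTransform = bodyAnchor.skeleton.modelTransform(for: .rightHand) {
                // Convert to world space via the body anchor's own transform.
                let worldTransform = bodyAnchor.transform * handTransform
                let position = simd_make_float3(worldTransform.columns.3)
                print("Right hand is roughly at \(position)")
            }
        }
    }
}
```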

Related

Why are AR objects trembling on Vuforia's 5-star markers?

We built an app for a jewelry store where an AR object (the ring) is recognized when the iOS app's camera is pointed at the paper (and metal) marker we created in Vuforia. This is a 5-star marker (which is considered good quality according to Vuforia) that we place on the finger.
So what we have is pretty fast recognition, BUT there is unpleasant shaking of the ring once it is recognized. The closer we point the camera to the marker, the more shaking is visible.
The paper marker has a normal cylindrical shape, the lighting is always sufficient while testing, etc.
Any ideas on why this shaking appears?
Thanks in advance!
We tried different markers, tested in different conditions with different lighting, played with camera settings, used different Vuforia versions, etc. - no luck.
If the target is cylindrical in shape, then you should create a Vuforia cylinder target: https://library.vuforia.com/objects/cylinder-targets. Standard ImageTargets are expected to be planar in 3D shape. The detector still works, as the plane can locally approximate the cylindrical geometry, but tracking will be extremely inaccurate and brittle.

ARKit detect house exterior planes

I know that ARKit is able to detect and classify planes on A12+ processors. It does the job reasonably well inside the house, but what about outside? Is it able to detect windows and doors if I move around a house a little? I tried it myself and the result did not satisfy me: I moved around the building a lot and ARKit still did not distinguish the wall from the window.
I used app from here for tests: https://developer.apple.com/documentation/arkit/tracking_and_visualizing_planes
Am I doing everything correctly? Maybe there is some third-party library to detect house parts better?
Thanks in advance!
When you test the sample app outside and try to use ARKit to detect surfaces on the exterior of a house, it will not work. ARKit is built to map flat surfaces and their orientations (horizontal/vertical). This means ARKit can understand that a surface is flat and that it is either a wall or a floor. When you attempt to "map" the exterior of a house, ARKit will only detect the vertical surfaces as walls; it cannot distinguish between walls and windows.
You will need to develop/source an AI model and run it against the camera data using CoreML to enable your app to distinguish between windows and walls on the exterior of a house.
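A rough sketch of that approach, assuming a hypothetical classifier model (WallWindowClassifier below is a placeholder name, and the labels are whatever your own model outputs):

```swift
import ARKit
import Vision
import CoreML

// Sketch only: WallWindowClassifier is a hypothetical CoreML model you would
// train/source yourself; ARKit just supplies the camera frames via ARFrame.
final class ExteriorClassifier: NSObject, ARSessionDelegate {

    private lazy var request: VNCoreMLRequest? = {
        guard let coreMLModel = try? WallWindowClassifier(configuration: MLModelConfiguration()).model,
              let visionModel = try? VNCoreMLModel(for: coreMLModel) else { return nil }
        return VNCoreMLRequest(model: visionModel) { request, _ in
            // Highest-confidence label, e.g. "wall" or "window" (labels depend on your model).
            if let best = (request.results as? [VNClassificationObservation])?.first {
                print("\(best.identifier): \(best.confidence)")
            }
        }
    }()

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let request = request else { return }
        // In a real app, throttle this and run it off the session delegate queue.
        let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                            orientation: .right, // adjust for device orientation
                                            options: [:])
        try? handler.perform([request])
    }
}
```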
ARKit Plane tracking documentation for reference: https://developer.apple.com/documentation/arkit/tracking_and_visualizing_planes
A couple of articles about ARKit with CoreML:
https://www.rightpoint.com/rplabs/dev/arkit-and-coreml
https://medium.com/s23nyc-tech/using-machine-learning-and-coreml-to-control-arkit-24241c894e3b
[Update]
Yes, you are correct: for A12+ devices Apple does allow plane classification. I would assume the issue with exterior vs. interior windows is either the distance to the window (too far for the CV to classify properly) or that Apple has tuned it more for interior windows than exterior ones. The difference may seem trivial, but to a CV algorithm it's quite different.
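For completeness, here is a minimal sketch of reading ARKit's built-in classification for each detected plane (the delegate class name is mine; rendering is omitted):

```swift
import ARKit

// Sketch: read ARKit's built-in plane classification (A12+ devices only).
final class PlaneClassificationDelegate: NSObject, ARSessionDelegate {

    func startPlaneDetection(in session: ARSession) {
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal, .vertical]
        session.run(configuration)
    }

    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        guard ARPlaneAnchor.isClassificationSupported else { return } // false on pre-A12 devices
        for case let plane as ARPlaneAnchor in anchors {
            switch plane.classification {
            case .wall:   print("wall detected")
            case .window: print("window detected")
            case .door:   print("door detected")
            default:      print("other/unclassified plane")
            }
        }
    }
}
```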

Can ARCore track moving surfaces?

ARCore can track static surfaces according to its documentation, but the documentation doesn't mention anything about moving surfaces, so I'm wondering whether ARCore can track flat surfaces (with enough feature points, of course) that move around.
Yes, you definitely can track moving surfaces and moving objects in ARCore.
If you track a static surface using ARCore, the resulting features are mainly suitable for so-called camera tracking. If you track a moving object/surface, the resulting features are mostly suitable for object tracking.
You can also mask the moving/non-moving parts of the image and, of course, invert the six-degrees-of-freedom (translate XYZ and rotate XYZ) camera transform.
Watch this video to find out how they succeeded.
Yes, ARCore tracks feature points, estimates surfaces, and also allows access to the image data from the camera, so custom computer vision algorithms can be written as well.
I guess it should be possible theoretically.
However, I've tested it with some objects in my house (running an S8 and an app built with Unity and ARCore),
and the problem is more or less that it refuses to even start tracking movable things like books and plates:
due to the feature points of the surrounding floor etc., it always picks up on those first.
Edit: I did some more testing and managed to get it to track a bed sheet; however, it does not adjust to any movement. Meaning, as of now the plane stays fixed, although I saw some wobbling, but I guess that was because it tried to adjust the positioning of the plane once its original feature points were moved.

How to detect and track the foot using ARKIt and vision framework?

I want to virtually add a football, and to detect and track the foot so that we can simulate a kick to the ball.
Can anyone please suggest a way to achieve this in iOS?
I think you'll have to have your own CoreML model that identifies where a foot is in an image, pass it the frames captured by the camera, and find where the foot is in each frame. ARKit doesn't do that, and there's no foot recognition built into iOS 11.
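As a very rough sketch of that pipeline (FootDetector is a hypothetical object-detection model with a "foot" label; both the model and the label are placeholders, and the 2D-to-3D coordinate handling is simplified to a basic ARKit hit test):

```swift
import ARKit
import Vision
import CoreML

// Sketch only: FootDetector is a hypothetical CoreML object-detection model
// returning bounding boxes labelled "foot"; ARKit supplies the camera frames
// and the hit test turns a 2D detection into a rough 3D position.
final class FootTracker: NSObject, ARSessionDelegate {

    private lazy var request: VNCoreMLRequest? = {
        guard let coreMLModel = try? FootDetector(configuration: MLModelConfiguration()).model,
              let visionModel = try? VNCoreMLModel(for: coreMLModel) else { return nil }
        return VNCoreMLRequest(model: visionModel)
    }()

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let request = request else { return }
        let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage, options: [:])
        try? handler.perform([request])

        guard let foot = (request.results as? [VNRecognizedObjectObservation])?
            .first(where: { $0.labels.first?.identifier == "foot" }) else { return }

        // Vision bounding boxes are normalized with a lower-left origin; ARKit's
        // image-space hit test expects an upper-left origin, so flip the y axis.
        let center = CGPoint(x: foot.boundingBox.midX, y: 1 - foot.boundingBox.midY)
        if let result = frame.hitTest(center, types: .estimatedHorizontalPlane).first {
            let position = simd_make_float3(result.worldTransform.columns.3)
            print("Foot is roughly at \(position); compare with the ball's position here")
        }
    }
}
```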

Difference Between Marker based and Markerless Augmented Reality

I am totally new to AR and I searched the internet for marker-based and markerless AR, but I am still confused about the difference between them.
Let's assume an AR app triggers an AR action when it scans a specific image. Is this marker-based AR or markerless AR?
Isn't the image a marker?
Also, to position the AR content, does marker-based AR use the device's accelerometer and compass as in markerless AR?
In a marker-based AR application the images (or the corresponding image descriptors) to be recognized are provided beforehand. In this case you know exactly what the application will search for while acquiring camera data (camera frames). Most of today's AR apps dealing with image recognition are marker-based. Why? Because it's much simpler to detect things that are hard-coded in your app.
On the other hand, a markerless AR application recognizes things that were not directly provided to the application beforehand. This scenario is much more difficult to implement because the recognition algorithm running in your AR application has to identify patterns, colors or other features that may exist in camera frames. For example, if your algorithm is able to identify dogs, the AR application will be able to trigger AR actions whenever a dog is detected in a camera frame, without you having to provide images of all the dogs in the world (this is exaggerated of course; think of training a database) when developing the application.
Long story short: in a marker-based AR application where image recognition is involved, the marker can be an image or the corresponding descriptors (features + key points). Usually an AR marker is a black-and-white (square) image, a QR code for example. These markers are easily recognized and tracked, so not a lot of processing power is needed on the end-user device to perform the recognition (and optionally tracking).
There is no need for an accelerometer or a compass in a marker-based app. The recognition library may be able to compute the pose matrix (rotation & translation) of the detected image relative to the camera of your device. If you know that, you know how far away the recognized image is and how it is rotated relative to your device's camera. And from now on, AR begins... :)
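In ARKit terms, this is roughly what image anchors give you. A minimal sketch, assuming a reference image has been added to an asset catalog group named "AR Resources" (the group name and class name are assumptions):

```swift
import ARKit

// Sketch: marker-based tracking in ARKit. The "AR Resources" group name is
// an assumption; the reference images act as the markers described above.
final class MarkerTrackingDelegate: NSObject, ARSessionDelegate {

    func startImageTracking(in session: ARSession) {
        let configuration = ARImageTrackingConfiguration()
        configuration.trackingImages = ARReferenceImage.referenceImages(
            inGroupNamed: "AR Resources", bundle: nil) ?? []
        session.run(configuration)
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let imageAnchor as ARImageAnchor in anchors where imageAnchor.isTracked {
            // The anchor's transform is the pose matrix mentioned above:
            // rotation and translation of the recognized image, from which its
            // distance and orientation relative to the camera can be derived.
            print("Marker '\(imageAnchor.referenceImage.name ?? "?")': \(imageAnchor.transform)")
        }
    }
}
```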
Well, since I got downvoted without explanation, here is a little more detail on markerless tracking:
Actually, there are several possibilities for augmented reality without "visual" markers, but none of them is called markerless tracking.
Showing the virtual information can be triggered by GPS, speech or simply turning on your phone.
Also, people tend to confuse NFT (natural feature tracking) with markerless tracking. With NFT you can use a real-life picture as a marker, but it is still a "marker".
This site has a nice overview and some examples of each marker type:
Marker-Types
It's mostly in German, so beware.
What you call markerless tracking today is a technique best observed with the HoloLens (and its own development environment) or the AR framework Kudan. Markerless tracking doesn't find anything on its own. Instead, you can place an object at runtime somewhere in your field of view.
Markerless tracking is then used to keep this object in place. It most likely uses a combination of sensor input and solving the SLAM (simultaneous localization and mapping) problem at runtime.
EDIT: A little update. It seems the HoloLens creates its own inner geometric representation of the room. 3D objects are then placed into that virtual room. After that, the room is kept in sync with the real world. The exact technique behind this seems to be unknown, but some speculate that it is based on the Xbox Kinect technology.
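The same placement idea expressed in ARKit terms (not HoloLens APIs), as a rough illustration of markerless tracking: nothing predefined is recognized, an anchor is placed at runtime, and world tracking keeps it in place:

```swift
import ARKit

// Rough illustration of markerless tracking in ARKit: no predefined marker;
// the user places an anchor at runtime and SLAM-style world tracking keeps it
// registered to the real environment.
final class MarkerlessPlacement {

    func start(in view: ARSCNView) {
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal, .vertical]
        view.session.run(configuration)
    }

    /// Call with a screen point (e.g. from a tap gesture) to pin content there.
    func placeAnchor(at screenPoint: CGPoint, in view: ARSCNView) {
        guard let query = view.raycastQuery(from: screenPoint,
                                            allowing: .estimatedPlane,
                                            alignment: .any),
              let result = view.session.raycast(query).first else { return }
        // The anchor stays fixed in the world as the camera moves around it.
        view.session.add(anchor: ARAnchor(name: "placedObject", transform: result.worldTransform))
    }
}
```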
Let's make it simple:
Marker-based augmented reality is when the tracked object is a black-and-white square marker. A great example that is really easy to follow is shown here: https://www.youtube.com/watch?v=PbEDkDGB-9w (you can try it out yourself).
Markerless augmented reality is when the tracked object can be anything else: a picture, a human body, a head, eyes, a hand or fingers, etc., and on top of that you add virtual objects.
To sum it up, position and orientation information is the essential thing for augmented reality, and it can be provided by various sensors and methods. If you have that information accurately, you can create some really good AR applications.
It looks like there may be some confusion between marker tracking and natural feature tracking (NFT). A lot of AR SDKs tout their tracking as markerless (NFT). This is still marker tracking, in that a pre-defined image or set of features is used; it's just not necessarily a black-and-white ARToolKit-type marker. Vuforia, for example, uses NFT, which still requires a marker in the literal sense. Also, in the most literal sense, hand/face/body tracking is also marker tracking in that the marker is a shape. Markerless tracking, inherent to the name, requires no prior knowledge of the world and no shape or object to be present to track.
You can read more about how Markerless tracking is achieved here, and see multiple examples of both marker-based and Markerless tracking here.
Marker-based AR uses a camera and a visual marker to determine the center, orientation and range of its spherical coordinate system. ARToolKit was the first full-featured toolkit for marker-based tracking.
Markerless tracking is one of the best tracking methods currently available. It performs active tracking and recognition of the real environment on any type of support without using specially placed markers, and it allows more complex applications of the augmented reality concept.
