3D Object Detection with OpenCV

I have tried face recognition using OpenCV, following the documentation provided on their wiki. It's working fine and it can detect multiple faces. However, there is no data provided on the site regarding 3D object detection or head tracking. The links to the code and the wiki are provided below:
Face recognition
Cascade Classifier
While the wiki does provide sufficient information about face detection, as you might have found, 3D face recognition methods are not provided.
I wanted to know about projects related to 3D face recognition and tracking so that I can see the source code and try to make a project doing the same.

This might come late, but Willow Garage has another project running called the Point Cloud Library (PCL) that is entirely focused on 3D data processing tasks. Face recognition is one of the use cases they use to advertise the project. Of course, all of this is free...
http://pointclouds.org

There are many methods; I can just point you in the right direction. Face recognition examples usually provide sub-detection of the eyes, so you actually know the face and eye locations. By similar or other means you can also detect the lips.
Once you have at least three points of the object (the face, in this case), you can calculate its 3D position in the room using triangulation. Part of this is demonstrated in find_obj.cpp, which ships as an example with OpenCV; that example uses keypoints from SURF and draws a rectangle based on this information. Also check out anything else that uses cvFindHomography.
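As a rough illustration of what find_obj.cpp does, here is a minimal sketch using ORB instead of SURF (SURF sits in the non-free module) together with cv::findHomography; the image paths are placeholders:

// Sketch: match keypoints between a reference object image and the scene,
// then estimate a homography and project the reference corners into the scene.
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat objImg   = cv::imread("object.png", cv::IMREAD_GRAYSCALE);   // reference image (placeholder path)
    cv::Mat sceneImg = cv::imread("scene.png",  cv::IMREAD_GRAYSCALE);   // current frame (placeholder path)

    cv::Ptr<cv::ORB> orb = cv::ORB::create();
    std::vector<cv::KeyPoint> kpObj, kpScene;
    cv::Mat descObj, descScene;
    orb->detectAndCompute(objImg,   cv::noArray(), kpObj,   descObj);
    orb->detectAndCompute(sceneImg, cv::noArray(), kpScene, descScene);

    cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);
    std::vector<cv::DMatch> matches;
    matcher.match(descObj, descScene, matches);

    // Collect matched point pairs.
    std::vector<cv::Point2f> objPts, scenePts;
    for (const cv::DMatch& m : matches) {
        objPts.push_back(kpObj[m.queryIdx].pt);
        scenePts.push_back(kpScene[m.trainIdx].pt);
    }

    // The homography maps the reference image plane into the scene (needs >= 4 pairs).
    cv::Mat H = cv::findHomography(objPts, scenePts, cv::RANSAC);

    // Project the reference image corners to draw the detected quadrilateral.
    std::vector<cv::Point2f> corners = {
        {0, 0}, {(float)objImg.cols, 0},
        {(float)objImg.cols, (float)objImg.rows}, {0, (float)objImg.rows}};
    std::vector<cv::Point2f> projected;
    cv::perspectiveTransform(corners, projected, H);
    return 0;
}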

Since OpenCV 2.4.2, there has been a header file for face detection and tracking: opencv2/contrib/detection_based_tracker.hpp
The header file defines a class called DetectionBasedTracker. The tracking mechanism it defines uses Haar cascades in the background to detect objects. The tracking is much faster than the plain OpenCV Haar implementation (however, some have found it to be less accurate).
I have personally found it to work very well on an Android device. Some sample code implementing the face detection and tracking is found here:
http://bytesandlogics.wordpress.com/2012/08/23/detectionbasedtracker-opencv-implementation/
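Below is a minimal usage sketch of DetectionBasedTracker, assuming the OpenCV 2.4.2-era API (the parameter struct and namespace details changed slightly across 2.4.x releases, so treat this as a guide rather than a drop-in sample). The cascade file is assumed to be reachable from the working directory:

// Sketch: track faces with DetectionBasedTracker; detection runs asynchronously in the background.
#include <opencv2/opencv.hpp>
#include <opencv2/contrib/detection_based_tracker.hpp>
#include <vector>
using namespace cv;

int main()
{
    DetectionBasedTracker::Parameters params;
    params.maxTrackLifetime = 20;                       // keep a lost face alive for a few frames

    // LBP cascades are typically used here because they are faster than Haar.
    DetectionBasedTracker tracker("lbpcascade_frontalface.xml", params);
    tracker.run();                                      // start the background detection thread

    VideoCapture cap(0);
    Mat frame, gray;
    std::vector<Rect> faces;
    while (cap.read(frame)) {
        cvtColor(frame, gray, CV_BGR2GRAY);
        tracker.process(gray);                          // cheap per-frame update
        tracker.getObjects(faces);                      // currently tracked face rectangles
        for (size_t i = 0; i < faces.size(); ++i)
            rectangle(frame, faces[i], Scalar(0, 255, 0), 2);
        imshow("faces", frame);
        if (waitKey(1) == 27) break;
    }
    tracker.stop();
    return 0;
}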

You should have a look at Active Shape Models and Active Appearance Models, which are designed for the task you are describing.
OpenCV provides only 2D detection methods, while the referenced methods (now very popular in the field) track a set of 3D points distributed on a face, plus a texture to describe its appearance.
The Wikipedia pages will give you some links to implementations of the said methods.
If you want to know the 3D parameters of the head in the world coordinates (for example for gaze detection), then you should google for the keywords "3D head tracking" and "head pose estimation".
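For the head-pose side of this, a common recipe (independent of ASM/AAM) is to take a handful of 2D facial landmarks and solve a PnP problem against a generic 3D face model with cv::solvePnP. The sketch below uses approximate 3D model points and placeholder 2D coordinates and camera intrinsics; in practice the 2D points come from whatever landmark fitter you use:

// Sketch: recover head pose (rotation + translation) from a few 2D facial landmarks.
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    // Generic 3D face model (millimetres, nose tip at the origin) -- approximate values.
    std::vector<cv::Point3f> modelPts = {
        {   0.0f,   0.0f,   0.0f},   // nose tip
        {   0.0f, -63.6f, -12.5f},   // chin
        { -43.3f,  32.7f, -26.0f},   // left eye outer corner
        {  43.3f,  32.7f, -26.0f},   // right eye outer corner
        { -28.9f, -28.9f, -24.1f},   // left mouth corner
        {  28.9f, -28.9f, -24.1f}    // right mouth corner
    };

    // Corresponding 2D landmarks in the image (placeholder values from a landmark detector).
    std::vector<cv::Point2f> imagePts = {
        {359, 391}, {399, 561}, {337, 297}, {513, 301}, {345, 465}, {453, 469}};

    // Rough pinhole camera: focal length of the order of the image width, principal point at the centre.
    double f = 640.0, cx = 320.0, cy = 240.0;
    cv::Mat K = (cv::Mat_<double>(3, 3) << f, 0, cx, 0, f, cy, 0, 0, 1);
    cv::Mat dist = cv::Mat::zeros(4, 1, CV_64F);   // assume no lens distortion

    cv::Mat rvec, tvec;
    cv::solvePnP(modelPts, imagePts, K, dist, rvec, tvec);

    cv::Mat R;
    cv::Rodrigues(rvec, R);   // 3x3 head rotation in camera coordinates; tvec is the head position
    return 0;
}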

Related

Apple Vision Framework - Identify face

Is it possible in the Apple Vision Framework to compare faces and recognise if that person is in a picture compared to a reference image of that person?
Something like Facebook Face recognition.
From the Vision Framework Documentation:
The Vision framework performs face and face landmark detection, text detection, barcode recognition, image registration, and general feature tracking. Vision also allows the use of custom Core ML models for tasks like classification or object detection.
So, no, the Vision Framework does not provide face recognition, only face detection.
There are approaches out there to recognize faces. Here is an example of face recognition in an AR app:
https://github.com/NovatecConsulting/FaceRecognition-in-ARKit
They trained a model that can recognize around 100 people, but you have to retrain it for every new person you want to recognize. Unfortunately, you cannot simply provide two images and have the faces compared.
According to the Face Detection vs. Face Recognition article:
Face detection just means that a system is able to identify that there is a human face present in an image or video. For example, face detection can be used for the autofocus functionality of cameras.
Face recognition describes a biometric technology that goes far beyond merely detecting that a human face is present. It actually attempts to establish whose face it is.
But...
If what you need is an Augmented Reality app, like FaceApp, the answer is:
Yes, you can create an app similar to FaceApp using ARKit.
That is because you only need a simple form of face recognition, which is accessible via the ARKit or RealityKit framework. You do not even need to create an .mlmodel as you would with the Vision and CoreML frameworks.
All you need is a device with a front camera, which allows you to detect up to three faces at a time using ARKit 3.0 or RealityKit 1.0. In Swift, you receive an ARFaceAnchor from the session delegate once a face has been detected.
Additionally, if you want to use reference images for simple detection, put several reference images into an AR resource group (.arresourcegroup) in Xcode and check for an ARImageAnchor, which is placed at the center of a detected image.

Algorithms for Tracking moving objects with a moving camera

I'm trying to develop an algorithm for real-time tracking of moving objects with a single moving camera setup, as a project in OpenCV (C++).
My basic objectives are
Detect motion in an (initially) static frame
Track that moving object (camera to follow that object)
Here is what I have tried already
Salient motion detection using temporal differencing and Optical Flow. (does not compensate for a moving camera)
KLT based feature tracking, but I was not able to segment the moving object features (moving object features got mixed with other trackable features in the image)
Mean-shift-based tracking (requires initialization and is a bit computationally expensive)
I'm now trying to look into the following methods
Histograms of Oriented Gradients (HOG).
Algorithms that estimate and compensate for camera motion parameters.
Any advice on which direction I should proceed in to achieve my objective would be appreciated.
Search for 'Zdenek Kalal Predator' on Google and watch the videos and read the papers that come up. I think it will give you a lot of insight.
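On the camera-motion-parameters idea from the question: a common baseline is to estimate the dominant (background) motion between consecutive frames with sparse optical flow plus a RANSAC homography, warp the previous frame onto the current one, and difference the aligned frames so that only independently moving objects remain. A minimal sketch, assuming the background provides most of the tracked corners:

// Sketch: compensate camera motion with a background homography, then difference frames.
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::VideoCapture cap(0);
    cv::Mat prevGray, gray, frame;
    cap.read(frame);
    cv::cvtColor(frame, prevGray, cv::COLOR_BGR2GRAY);

    while (cap.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);

        // Track corners from the previous frame into the current one.
        std::vector<cv::Point2f> prevPts, currPts;
        cv::goodFeaturesToTrack(prevGray, prevPts, 500, 0.01, 8);
        std::vector<uchar> status;
        std::vector<float> err;
        cv::calcOpticalFlowPyrLK(prevGray, gray, prevPts, currPts, status, err);

        std::vector<cv::Point2f> p0, p1;
        for (size_t i = 0; i < status.size(); ++i)
            if (status[i]) { p0.push_back(prevPts[i]); p1.push_back(currPts[i]); }
        if (p0.size() < 4) { prevGray = gray.clone(); continue; }

        // Dominant (background) motion; RANSAC treats the moving object as outliers.
        cv::Mat H = cv::findHomography(p0, p1, cv::RANSAC, 3.0);

        // Align the previous frame to the current one and difference them.
        cv::Mat warped, diff, mask;
        cv::warpPerspective(prevGray, warped, H, gray.size());
        cv::absdiff(gray, warped, diff);
        cv::threshold(diff, mask, 30, 255, cv::THRESH_BINARY);   // residual motion = moving object(s)

        cv::imshow("motion", mask);
        if (cv::waitKey(1) == 27) break;
        prevGray = gray.clone();
    }
    return 0;
}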

Finger/Hand Gesture Recognition using Kinect

Let me explain my need before I explain the problem.
I am looking to build a hand-controlled application:
Navigation using the palm and clicks using a grab/fist gesture.
Currently, I am working with OpenNI, which looks promising and has a few examples that turned out to be useful in my case, as it has an inbuilt hand tracker in its samples. This serves my purpose for the time being.
What I want to ask is,
1) What would be the best approach to build a fist/grab detector?
I trained and used AdaBoost fist classifiers on extracted RGB data, which worked reasonably well, but it produces too many false detections to move forward with.
So here I frame two more questions:
2) Is there any other good library that is capable of meeting my needs using depth data?
3) Can we train our own hand gestures, especially using fingers, as some papers refer to HMMs? If yes, how do we proceed with a library like OpenNI?
Yes, I have tried the middleware libraries in OpenNI, like the grab detector, but they won't serve my purpose, as they are neither open source nor a match for my needs.
Apart from what I asked, anything that you think could help me will be accepted as a good suggestion.
You don't need to train a classifier for your fist detector, since training will complicate things.
Don't use color either, since it's unreliable (it mixes with the background and changes unpredictably depending on lighting and viewpoint).
Assuming that your hand is the closest object, you can simply segment it out with a depth threshold. You can set the threshold manually, use the closest region of the depth histogram, or first run connected components on the depth map to break it into meaningful parts (and then select your object based not only on its depth but also on its dimensions, motion, user input, etc.).
Apply convexity defects from the OpenCV library to find fingers.
Track fingers rather than rediscovering them in 3D; this will increase stability. I successfully implemented such finger detection about three years ago.
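A minimal sketch of the pipeline above (depth threshold, largest contour, convexity defects), assuming an 8-bit depth image where nearer pixels have smaller values; the thresholds are placeholders you would tune for your sensor:

// Sketch: segment the nearest object by depth, then count fingers via convexity defects.
#include <opencv2/opencv.hpp>
#include <vector>

int countFingers(const cv::Mat& depth8u)
{
    // 1) Keep only pixels closer than a chosen depth (the hand is assumed to be the nearest object).
    cv::Mat handMask;
    cv::threshold(depth8u, handMask, 100, 255, cv::THRESH_BINARY_INV);

    // 2) Take the largest contour as the hand.
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(handMask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    if (contours.empty()) return 0;
    size_t best = 0;
    for (size_t i = 1; i < contours.size(); ++i)
        if (cv::contourArea(contours[i]) > cv::contourArea(contours[best])) best = i;
    const std::vector<cv::Point>& hand = contours[best];

    // 3) Convex hull (as indices) and convexity defects; deep defects lie between fingers.
    std::vector<int> hullIdx;
    cv::convexHull(hand, hullIdx, false, false);
    std::vector<cv::Vec4i> defects;
    if (hullIdx.size() > 3)
        cv::convexityDefects(hand, hullIdx, defects);

    int fingerGaps = 0;
    for (const cv::Vec4i& d : defects) {
        float defectDepth = d[3] / 256.0f;   // defect depth is stored as fixed-point
        if (defectDepth > 20.0f)             // placeholder threshold in pixels
            ++fingerGaps;
    }
    // N gaps between fingers roughly correspond to N+1 extended fingers (0 gaps => fist).
    return fingerGaps == 0 ? 0 : fingerGaps + 1;
}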
Read my paper :) http://robau.files.wordpress.com/2010/06/final_report_00012.pdf
I have done research on gesture recognition for hands and evaluated several approaches that are robust to scale, rotation, etc. You have depth information, which is very valuable, as the hardest problem for me was actually segmenting the hand out of the image.
My most successful approach is to trace the contour of the hand and, for each point on the contour, take the distance to the centroid of the hand. This gives a set of points that can be used as input for many training algorithms.
I use the image moments of the segmented hand to determine its rotation, so there is a consistent starting point on the hand's contour. It is very easy to determine a fist, a stretched-out hand and the number of extended fingers.
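A small sketch of that descriptor, assuming the hand contour has already been segmented (e.g. with cv::findContours): compute the centroid and orientation from image moments, then take the normalized distance from each contour point to the centroid as the feature vector:

// Sketch: centroid-distance profile of a hand contour, plus orientation from image moments.
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cmath>
#include <vector>

std::vector<float> contourDistanceProfile(const std::vector<cv::Point>& hand, double* angleOut)
{
    cv::Moments m = cv::moments(hand);
    cv::Point2f centroid(float(m.m10 / m.m00), float(m.m01 / m.m00));

    // Orientation of the hand from second-order central moments (gives a stable start point).
    double angle = 0.5 * std::atan2(2.0 * m.mu11, m.mu20 - m.mu02);
    if (angleOut) *angleOut = angle;

    // Distance from each contour point to the centroid, normalized by the maximum so the
    // profile is scale-invariant before it is fed to a classifier.
    std::vector<float> profile;
    profile.reserve(hand.size());
    float maxDist = 1e-6f;
    for (const cv::Point& p : hand) {
        float d = std::hypot(p.x - centroid.x, p.y - centroid.y);
        profile.push_back(d);
        maxDist = std::max(maxDist, d);
    }
    for (float& d : profile) d /= maxDist;
    return profile;
}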
Note that while it works fine, your arm tends to get tired from pointing into the air.
It seems that you are unaware of the Point Cloud Library (PCL). It is an open-source library dedicated to the processing of point clouds and RGB-D data, which is based on OpenNI for the low-level operations and which provides a lot of high-level algorithms, for instance for registration, segmentation and also recognition.
A very interesting algorithm for shape/object recognition in general is called the implicit shape model. In order to detect a global object (such as a car, or an open hand), the idea is first to detect possible parts of it (e.g. the wheels and trunk, or the fingers, palm and wrist) using a local feature detector, and then to infer the position of the global object by considering the density and the relative position of its parts. For instance, if I can detect five fingers, a palm and a wrist in a given neighborhood, there's a good chance that I am in fact looking at a hand; however, if I only detect one finger and a wrist somewhere, it could be a pair of false detections. The academic research article on this implicit shape model algorithm can be found here.
In PCL, there are a couple of tutorials dedicated to the topic of shape recognition, and luckily, one of them covers the implicit shape model, which has been implemented in PCL. I never tested this implementation, but from what I could read in the tutorial, you can specify your own point clouds for the training of the classifier.
That being said, you did not mention it explicitly in your question, but since your goal is to program a hand-controlled application, you might in fact be interested in a real-time shape detection algorithm. You would have to test the speed of the implicit shape model provided in PCL, but I think this approach is better suited to offline shape recognition.
If you do need real-time shape recognition, I think you should first use a hand/arm tracking algorithm (which is usually faster than full detection) in order to know where to look in the images, instead of trying to perform a full shape detection at each frame of your RGB-D stream. You could, for instance, track the hand location by segmenting the depth map (e.g. using an appropriate threshold on the depth) and then detecting the extremities.
Then, once you approximately know where the hand is, it should be easier to decide whether the hand is making a gesture relevant to your application. I am not sure what exactly you mean by fist/grab gestures, but I suggest that you define and use some app-controlling gestures which are easy and quick to distinguish from one another.
Hope this helps.
The fast answer is: Yes, you can train your own gesture detector using depth data. It is really easy, but it depends on the type of the gesture.
Suppose you want to detect a hand movement:
Detect the hand position (x, y, z). Using OpenNI this is straightforward, as you have one node for the hand.
Execute the gesture and collect ALL the positions of the hand during the gesture.
With the list of positions, train an HMM. For example, you can use Matlab, C, or Python.
For your own gestures, you can test the model and detect the gestures.
Here you can find a nice tutorial and code (in Matlab). The code (test.m) is pretty easy to follow. Here is a snippet:
%Load collected data
training = get_xyz_data('data/train',train_gesture);
testing = get_xyz_data('data/test',test_gesture);
%Get clusters
[centroids N] = get_point_centroids(training,N,D);
ATrainBinned = get_point_clusters(training,centroids,D);
ATestBinned = get_point_clusters(testing,centroids,D);
% Set priors:
pP = prior_transition_matrix(M,LR);
% Train the model:
cyc = 50;
[E,P,Pi,LL] = dhmm_numeric(ATrainBinned,pP,[1:N]',M,cyc,.00001);
Dealing with fingers is pretty much the same, but instead of detecting the hand you need to detect the fingers. As the Kinect doesn't provide finger points, you need specific code to detect them (using segmentation or contour tracking). Some examples using OpenCV can be found here and here, but the most promising one is the ROS library that has a finger node (see the example here).
If you only need detection of a fist/grab state, you should give Microsoft a chance. Microsoft.Kinect.Toolkit.Interaction contains methods and events that detect the grip / grip-release state of a hand. Take a look at the HandEventType of InteractionHandPointer. That works quite well for fist/grab detection, but it does not detect or report the position of individual fingers.
The next Kinect (Kinect One) detects three joints per hand (wrist, hand, thumb) and has three hand-based gestures: open, closed (grip/fist) and lasso (pointer). If that is enough for you, you should consider the Microsoft libraries.
1) If there are a lot of false detections, you could try to extend the negative sample set of the classifier and train it again. The extended negative image set should contain the images on which the fist was falsely detected. Maybe this will help to create a better classifier.
I've had quite a bit of success with the middleware library provided by http://www.threegear.com/. It provides several gestures (including grabbing, pinching and pointing) and 6-DOF hand tracking.
You might be interested in this paper & open-source code:
Robust Articulated-ICP for Real-Time Hand Tracking
Code: https://github.com/OpenGP/htrack
Screenshot: http://lgg.epfl.ch/img/codedata/htrack_icp.png
YouTube Video: https://youtu.be/rm3YnClSmIQ
Paper PDF: http://infoscience.epfl.ch/record/206951/files/htrack.pdf

Visual Odometry (aka. Egomotion estimation) with OpenCV

I'm planning to implement an application with augmented reality features. For one of the features I need an egomotion estimation. Only the camera is moving, in a space with fixed objects (nothing or only small parts will be moving, so that they might be ignored).
So I searched and read a lot and stumbled upon OpenCV. Wikipedia explicitly states that it could be used for egomotion. But I cannot find any documentation about it.
Do I need to implement the egomotion algorithm myself with OpenCV's object detection methods? (I think it would be very complex, because objects will move at different speeds depending on their distance to the camera. I also need to handle rotations.)
If so, where should I start? Is there a good code example for a Kanade–Lucas–Tomasi feature tracker with support for scaling and rotation?
P.S.: I also know about marker-based frameworks like Vuforia, but using a marker is something I would like to avoid, as it restricts the possible viewpoints.
Update 2013-01-08: I learned that Egomotion Estimation is better known as Visual Odometry. So I updated the title.
You can find a good implementation of monocular visual odometry based on optical flow here.
It's coded using Emgu CV (a C# OpenCV wrapper), but you should have no issues converting it back to pure OpenCV.
Egomotion (or visual odometry) is usually based on optical flow, and OpenCV has some motion analysis and object tracking functions for computing optical flow (in conjunction with a feature detector like cvGoodFeaturesToTrack()).
This example might be of use.
Not a complete solution, but might at least get you going in the right direction.
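For reference, here is a minimal sketch of the usual sparse-optical-flow monocular pipeline using cv::findEssentialMat and cv::recoverPose (OpenCV 3.x+). The intrinsics are placeholders and the recovered translation is only known up to an unknown scale factor:

// Sketch: frame-to-frame monocular visual odometry with sparse optical flow.
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::VideoCapture cap("video.mp4");          // placeholder input
    double focal = 700.0;                       // placeholder focal length in pixels
    cv::Point2d pp(320.0, 240.0);               // placeholder principal point

    cv::Mat prevGray, gray, frame;
    cap.read(frame);
    cv::cvtColor(frame, prevGray, cv::COLOR_BGR2GRAY);

    // Accumulated camera pose (rotation R_total, translation t_total, up to scale).
    cv::Mat R_total = cv::Mat::eye(3, 3, CV_64F);
    cv::Mat t_total = cv::Mat::zeros(3, 1, CV_64F);

    while (cap.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);

        std::vector<cv::Point2f> prevPts, currPts;
        cv::goodFeaturesToTrack(prevGray, prevPts, 2000, 0.01, 8);
        std::vector<uchar> status;
        std::vector<float> err;
        cv::calcOpticalFlowPyrLK(prevGray, gray, prevPts, currPts, status, err);

        std::vector<cv::Point2f> p0, p1;
        for (size_t i = 0; i < status.size(); ++i)
            if (status[i]) { p0.push_back(prevPts[i]); p1.push_back(currPts[i]); }

        if (p0.size() >= 8) {
            // Essential matrix between the two views, then decompose into R, t.
            cv::Mat mask, R, t;
            cv::Mat E = cv::findEssentialMat(p1, p0, focal, pp, cv::RANSAC, 0.999, 1.0, mask);
            cv::recoverPose(E, p1, p0, R, t, focal, pp, mask);

            // Chain the relative motion into a global pose (scale is arbitrary).
            t_total = t_total + R_total * t;
            R_total = R * R_total;
        }
        prevGray = gray.clone();
    }
    return 0;
}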

Detect custom image marker in real time using OpenCV on iOS

I would like some hints, maybe more, on detecting a custom image marker in a real-time video feed. I'm using OpenCV, iPhone and the camera feed.
By custom image marker I'm referring to a predefined image, but it can be any kind of image (not a specific designed marker). For example, it can be a picture of some skyscrapers.
I've already worked with ARTags and understand how they are detected, but how would I detect this custom image and especially find out its position & orientation?
What makes a good custom image to be detected successfully?
Thanks
The most popular markers used in AR are:
AR markers (a simple form of QR codes) - those detected by ARToolKit and others.
QR codes. There are plenty of examples on how to create/detect/read QR codes.
Dot grids. Similar to the chessboard grids used in calibration. It seems their detection can be more robust than the classical chessboard grid. OpenCV has code related to dot-grid detection in its calibration module. The OpenCV codebase also offers a good starting point for extracting 3D position and orientation.
Chessboard grids. Similar to dot grids. They were the standard calibration pattern, and some people used them for marker detection for a long time, but they have lost ground to dot grids recently, as some people discovered that dots can be detected with better accuracy. (A minimal pose-estimation sketch based on the chessboard pattern follows after this answer.)
Note:
Grids are symmetrical. I bet you already know that. But that means you will not be able to recover full orientation data from them. You will get the plane where the grid lies, but nothing more.
Final note:
Code and examples for the first two are easily found on the Internet; they are considered the best by many people. If you decide to use the grid patterns, you will have to enjoy some math and image-processing work :) and it will take more effort.
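As referenced in the list above, here is a minimal sketch of extracting 3D position and orientation from a chessboard marker with cv::findChessboardCorners and cv::solvePnP; the board size, square size and camera intrinsics are placeholders (the intrinsics should come from a prior calibration with cv::calibrateCamera):

// Sketch: estimate the 3D pose of a chessboard marker relative to the camera.
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Size patternSize(9, 6);         // inner corners of the printed chessboard (placeholder)
    float squareSize = 0.025f;          // square edge length in metres (placeholder)

    // Placeholder intrinsics; use calibration results in practice.
    cv::Mat K = (cv::Mat_<double>(3, 3) << 700, 0, 320, 0, 700, 240, 0, 0, 1);
    cv::Mat dist = cv::Mat::zeros(5, 1, CV_64F);

    // 3D coordinates of the corners in the board's own coordinate frame (z = 0 plane).
    std::vector<cv::Point3f> objectPts;
    for (int y = 0; y < patternSize.height; ++y)
        for (int x = 0; x < patternSize.width; ++x)
            objectPts.emplace_back(x * squareSize, y * squareSize, 0.0f);

    cv::VideoCapture cap(0);
    cv::Mat frame, gray;
    while (cap.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        std::vector<cv::Point2f> corners;
        bool found = cv::findChessboardCorners(gray, patternSize, corners);
        if (found) {
            cv::cornerSubPix(gray, corners, cv::Size(11, 11), cv::Size(-1, -1),
                             cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.01));
            cv::Mat rvec, tvec;                      // marker pose relative to the camera
            cv::solvePnP(objectPts, corners, K, dist, rvec, tvec);
            cv::drawChessboardCorners(frame, patternSize, corners, found);
        }
        cv::imshow("marker pose", frame);
        if (cv::waitKey(1) == 27) break;
    }
    return 0;
}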
This answer is no longer valid, since Vuforia is now a paid engine.
I think you should give Vuforia a try. It's an AR engine that can use any image you want as a marker. What makes a good marker for Vuforia is an image with lots of high-frequency detail.
http://www.qualcomm.com/solutions/augmented-reality
Vuforia is a free to use engine.
