I'm working on an iOS application that involves object detection (in this case, the eye) using OpenCV. How can I create a Haar cascade that I can implement in my iOS application? I've already seen this question, but it doesn't answer mine.
The source of the opencv_contrib face module, ~\face\data\cascades, comes with pre-trained detection data for the following features:
left and right ears (haarcascade_mcs_leftear.xml, haarcascade_mcs_rightear.xml)
left and right eyes (haarcascade_mcs_lefteye.xml, haarcascade_mcs_righteye.xml)
eye pair (haarcascade_mcs_eyepair_big.xml, haarcascade_mcs_eyepair_small.xml)
mouth (haarcascade_mcs_mouth.xml)
nose (haarcascade_mcs_nose.xml)
upper body (haarcascade_mcs_upperbody.xml)
You can use them in your code to build your iOS application. There are samples and tutorials in ~\face.
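For illustration, here is a minimal sketch of loading one of those cascades from the app bundle, assuming the Swift bindings that ship with OpenCV 4.x (the opencv2 module; exact signatures vary by version, and older projects call the C++ API through an Objective-C++ wrapper instead):

```swift
import opencv2

// A minimal sketch assuming the OpenCV 4.x Swift bindings (opencv2 module).
// Older OpenCV versions require calling the C++ API from Objective-C++ instead.
func detectEyes(in image: Mat) -> [Rect2i] {
    guard let path = Bundle.main.path(forResource: "haarcascade_mcs_lefteye",
                                      ofType: "xml") else { return [] }
    let cascade = CascadeClassifier(filename: path)

    // Haar cascades operate on grayscale images.
    let gray = Mat()
    Imgproc.cvtColor(src: image, dst: gray, code: .COLOR_BGR2GRAY)

    let eyes = NSMutableArray()
    cascade.detectMultiScale(image: gray, objects: eyes)
    return eyes.compactMap { $0 as? Rect2i }
}
```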
Related
Is it possible with the Apple Vision Framework to compare faces and recognise whether the person in a picture matches a reference image of that person?
Something like Facebook's face recognition.
From the Vision Framework Documentation:
The Vision framework performs face and face landmark detection, text detection, barcode recognition, image registration, and general feature tracking. Vision also allows the use of custom Core ML models for tasks like classification or object detection.
So, no, the Vision Framework does not provide face recognition, only face detection.
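To illustrate the detection side, here is a minimal Vision sketch (assuming you already have a CGImage at hand):

```swift
import Vision

// A minimal sketch: detect face bounding boxes (detection, not recognition).
func detectFaces(in cgImage: CGImage) {
    let request = VNDetectFaceRectanglesRequest { request, error in
        guard let faces = request.results as? [VNFaceObservation] else { return }
        for face in faces {
            // boundingBox is in normalized coordinates (origin at bottom-left).
            print("Face at \(face.boundingBox)")
        }
    }
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```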
There are approaches out there to recognize faces. Here is an example of face recognition in an AR app:
https://github.com/NovatecConsulting/FaceRecognition-in-ARKit
They trained a model that can recognize around 100 people, but you have to retrain it for every new person you want to recognize. Unfortunately, you cannot just feed in two images and have the faces compared.
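As a rough sketch of that retrain-a-classifier approach, using Vision with a hypothetical Core ML model (FaceClassifier is an assumed name; you would have to train and bundle such a model yourself, e.g. as in the linked repo):

```swift
import Vision
import CoreML

// Hypothetical: "FaceClassifier" is an assumed, self-trained Core ML image
// classifier whose labels are the known people; it is not a built-in model.
func recognizePerson(in cgImage: CGImage) {
    guard let mlModel = try? FaceClassifier(configuration: MLModelConfiguration()).model,
          let visionModel = try? VNCoreMLModel(for: mlModel) else { return }

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let best = (request.results as? [VNClassificationObservation])?.first
        else { return }
        print("Best match: \(best.identifier) (confidence \(best.confidence))")
    }
    try? VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])
}
```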
According to the article Face Detection vs Face Recognition:
Face detection just means that a system is able to identify that there is a human face present in an image or video. For example, face detection can be used for the autofocus functionality of cameras.
Face recognition describes a biometric technology that goes far beyond merely detecting that a human face is present. It actually attempts to establish whose face it is.
But...
If you need an Augmented Reality app, like FaceApp, the answer is:
Yes, you can create an app similar to FaceApp using ARKit.
Because you need just a simple form of face recognition, which is accessible via the ARKit or RealityKit framework. You do not even need to create an .mlmodel as you would with the Vision and CoreML frameworks.
All you need is a device with a front camera, which allows you to detect up to three faces at a time using ARKit 3.0 or RealityKit 1.0. The following Swift code shows how to get an ARFaceAnchor when a face has been detected.
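A minimal sketch, assuming an ARSCNView-based view controller (the sceneView outlet is an assumption):

```swift
import UIKit
import ARKit

class ViewController: UIViewController, ARSCNViewDelegate {
    @IBOutlet var sceneView: ARSCNView!

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        guard ARFaceTrackingConfiguration.isSupported else { return }
        let configuration = ARFaceTrackingConfiguration()
        configuration.maximumNumberOfTrackedFaces = 3   // ARKit 3.0+
        sceneView.delegate = self
        sceneView.session.run(configuration)
    }

    // Called when ARKit adds an anchor; an ARFaceAnchor means a face was detected.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let faceAnchor = anchor as? ARFaceAnchor else { return }
        print("Face detected, transform: \(faceAnchor.transform)")
    }
}
```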
And additionally, if you want to use reference images for simple detection, you need to put several reference images in an .arresourcegroup folder in Xcode's asset catalog and use the following Swift code as an additional condition to get an ARImageAnchor (placed at the center of a detected image).
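A sketch under the assumption that the reference photos sit in a resource group named "AR Resources" (reusing the sceneView from the class above):

```swift
import ARKit

extension ViewController {
    func runImageDetection() {
        // Assumes the reference photos live in an AR Resource Group named
        // "AR Resources" inside Assets.xcassets (the group name is an assumption).
        guard let referenceImages = ARReferenceImage.referenceImages(
                inGroupNamed: "AR Resources", bundle: nil) else { return }
        let configuration = ARImageTrackingConfiguration()
        configuration.trackingImages = referenceImages
        sceneView.session.run(configuration)
    }

    // The additional condition, checked in the same delegate callback:
    func handleImageAnchor(_ anchor: ARAnchor) {
        guard let imageAnchor = anchor as? ARImageAnchor else { return }
        // ARKit places this anchor at the center of the detected image.
        print("Detected reference image: \(imageAnchor.referenceImage.name ?? "unnamed")")
    }
}
```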
Maybe this sounds like a stupid question, but I am really curious to know: what is the difference between "Face Detection" and "Face Tracking" from an iOS perspective? And in which situations should I use each of them?
First of all, you should know about the Vision framework!
See this link: https://developer.apple.com/documentation/vision
Vision — a framework to apply high-performance image analysis and computer vision techniques to identify faces, detect features, and classify scenes in images and video.
Face detection. Detects all faces in a given photo.
Face landmarks. An image analysis that finds facial features (such as the eyes and mouth) in an image.
Object tracking. Tracks an object across camera frames.
Among a lot of new APIs there is the Vision framework, which helps with detection of faces, face features, object tracking, and more.
Hope this will be helpful to you!
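For example, a minimal face-landmarks sketch (assuming an existing CGImage):

```swift
import Vision

// A minimal sketch: find faces and print their eye landmarks.
func detectLandmarks(in cgImage: CGImage) {
    let request = VNDetectFaceLandmarksRequest { request, _ in
        guard let faces = request.results as? [VNFaceObservation] else { return }
        for face in faces {
            print("Face at \(face.boundingBox)")   // normalized coordinates
            if let leftEye = face.landmarks?.leftEye {
                print("Left eye points: \(leftEye.normalizedPoints)")
            }
        }
    }
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```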
Face detection can also be done by Apple's Core Image framework, available since iOS 5 (besides the Vision framework).
You can use CIDetector (https://developer.apple.com/documentation/coreimage/cidetector) to detect faces.
Face detection will return face landmarks such as the eyes, lips, and nose. These frames can be used to modify the faces with image processing.
You can also take the detected faces and pass them to a third-party face recognition API to recognize people (e.g. the Microsoft Face API).
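A minimal CIDetector sketch (the smile and eye-blink flags are included just to show the extra options):

```swift
import UIKit
import CoreImage

// A minimal sketch: detect faces and their landmarks with Core Image's CIDetector.
func detectFaces(in uiImage: UIImage) {
    guard let ciImage = CIImage(image: uiImage) else { return }
    let detector = CIDetector(ofType: CIDetectorTypeFace,
                              context: nil,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    let features = detector?.features(in: ciImage,
                                      options: [CIDetectorSmile: true,
                                                CIDetectorEyeBlink: true]) ?? []
    for case let face as CIFaceFeature in features {
        print("Face bounds: \(face.bounds)")
        if face.hasLeftEyePosition { print("Left eye at \(face.leftEyePosition)") }
        if face.hasSmile { print("This face is smiling") }
    }
}
```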
Face Tracking
Face tracking can be used to track the real-time location of face features and, for example, apply filters to them (Snapchat, etc.).
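A rough sketch of the difference in code, using Vision's object tracker to follow a face rectangle across camera frames (the initial observation would come from a prior detection pass):

```swift
import Vision

// A sketch: once a face is detected, Vision can track it frame to frame,
// which is cheaper than re-detecting in every frame.
final class FaceTracker {
    private let sequenceHandler = VNSequenceRequestHandler()
    private var lastObservation: VNDetectedObjectObservation

    // Seed the tracker with an observation from a face-detection pass.
    init(initialFaceObservation: VNDetectedObjectObservation) {
        self.lastObservation = initialFaceObservation
    }

    // Call this for every camera frame, e.g. from a capture output callback.
    func track(on pixelBuffer: CVPixelBuffer) {
        let request = VNTrackObjectRequest(detectedObjectObservation: lastObservation)
        request.trackingLevel = .accurate
        try? sequenceHandler.perform([request], on: pixelBuffer)
        if let result = request.results?.first as? VNDetectedObjectObservation {
            lastObservation = result
            print("Face now at \(result.boundingBox)")   // e.g. anchor a filter here
        }
    }
}
```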
I have been trying an OpenCV iOS sample to achieve facial emotion recognition.
I got the OpenCV sample iOS project 'openCViOSFaceTrackingTutorial' from the link below.
https://github.com/egeorgiou/openCViOSFaceTrackingTutorial/tree/master/openCViOSFaceTrackingTutorial
This sample project uses face detection and works fine. It uses the 'haarcascade_frontalface_alt2.xml' trained model.
I want to use the same project but with Haar cascades for other facial emotions like sad and surprise. I have been searching for how to train a Haar cascade for emotions like sad, surprise, etc. but couldn't find any clue.
Could someone advise me how to train a Haar cascade for emotions like sad, surprise, etc. to use in this sample OpenCV iOS project? Or is there a ready-made Haar cascade for such emotions that I could use with this iOS sample?
Thanks
You can read a tutorial on how to generate a Haar cascade file, but generating a Haar cascade for emotions is not an easy task.
I would suggest extracting the mouth and eyes from the face using Haar cascades and processing those rectangles to detect emotions. You can get Haar cascades for the mouth and eyes from here. The mouth and eyes are complicated features, so detection will not work well if you try to find them in the whole image; first find the frontal face and then detect the mouth and eyes within the face rectangle only (see the sketch below).
There are open-source libraries available on GitHub for emotion detection. Though these are not for iOS, you can implement a similar algorithm on iOS.
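To make the face-first, mouth-within-the-face idea concrete, here is a rough sketch, again assuming the OpenCV 4.x Swift bindings (signatures vary by version; the cascades are passed in rather than named, since the download location above is the source for them):

```swift
import opencv2

// Sketch: detect the frontal face first, then search for the mouth
// only inside the lower half of each face rectangle.
func detectMouths(in gray: Mat,
                  faceCascade: CascadeClassifier,
                  mouthCascade: CascadeClassifier) -> [Rect2i] {
    let faces = NSMutableArray()
    faceCascade.detectMultiScale(image: gray, objects: faces)

    var mouths: [Rect2i] = []
    for case let face as Rect2i in faces {
        // Restrict the search to the lower half of the face, where the mouth is.
        let lowerHalf = Rect2i(x: face.x, y: face.y + face.height / 2,
                               width: face.width, height: face.height / 2)
        let roi = Mat(mat: gray, rect: lowerHalf)

        let found = NSMutableArray()
        mouthCascade.detectMultiScale(image: roi, objects: found)
        for case let m as Rect2i in found {
            // Convert ROI-relative coordinates back to full-image coordinates.
            mouths.append(Rect2i(x: lowerHalf.x + m.x, y: lowerHalf.y + m.y,
                                 width: m.width, height: m.height))
        }
    }
    return mouths
}
```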
You might have seen the option on some Samsung phones where the camera takes the photo when a person smiles, so it somehow detects the smile and then clicks the photo automatically. I'm trying to create something similar on iOS: let's say if the camera detects a chair, it clicks the photo. I've searched around and found a library called OpenCV, but I'm not sure whether it will work with iOS. There is also Core Image in iOS, which has something to do with deeper understanding of images. Any ideas about this?
OpenCV for iOS
For detection you can use the OpenCV framework on iOS as well as the native detection methods. In my application I am using OpenCV rectangle detection; the scenario is: after a picture is taken, OpenCV detects a rectangle in the image and then draws lines on the detected shape. It can also crop the image and apply perspective correction.
Options: face detection, shape detection
Native way:
iOS provides real-time detection, and there are many tutorials on how to use it; I will link some at the end. The native way also provides face detection, shape detection, and perspective correction (see the sketch below).
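As a sketch of the native route, Vision can find the rectangle and Core Image can deskew it (assuming an existing CGImage):

```swift
import Vision
import CoreImage

// A sketch: detect a rectangle with Vision, then crop and deskew it
// with Core Image's perspective-correction filter.
func correctPerspective(of cgImage: CGImage) {
    let request = VNDetectRectanglesRequest { request, _ in
        guard let rect = (request.results as? [VNRectangleObservation])?.first
        else { return }
        let ciImage = CIImage(cgImage: cgImage)
        let size = ciImage.extent.size
        // Vision returns normalized corners; scale them to pixel coordinates.
        func scaled(_ p: CGPoint) -> CIVector {
            CIVector(cgPoint: CGPoint(x: p.x * size.width, y: p.y * size.height))
        }
        let corrected = ciImage.applyingFilter("CIPerspectiveCorrection", parameters: [
            "inputTopLeft": scaled(rect.topLeft),
            "inputTopRight": scaled(rect.topRight),
            "inputBottomLeft": scaled(rect.bottomLeft),
            "inputBottomRight": scaled(rect.bottomRight)
        ])
        _ = corrected   // the deskewed image, ready to display or save
    }
    try? VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])
}
```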
Conclusion:
The choice is up to you, but I prefer the native way. Remember that OpenCV is written in C++; if you are using Swift, you can import OpenCV into your project and then connect Swift to Objective-C to call OpenCV, using bridging headers.
Tutorials:
Medium Link 1
Medium Link 2
Toptal Tutorial
How to use OPENCV in iOS
I have tried face recognition with OpenCV using the documentation provided on their wiki. It's working fine and can detect multiple faces. However, there is no information on the site about 3D object detection or head tracking. The links to the code and the wiki are provided below:
Face recognition
Cascade Classifier
While the wiki does provide sufficient information about face detection, as you might have found, 3D face recognition methods are not provided.
I wanted to know about projects related to 3D face recognition and tracking so that I can see the source code and try to make a project doing the same.
This might come late, but Willow Garage has another project called the Point Cloud Library (PCL) that is entirely focused on 3D data processing tasks. Face recognition is one of the use cases they use to advertise the project. Of course, all of this is free...
http://pointclouds.org
There are many methods; I can just point you in the right direction. Face detection examples usually include sub-detection of the eyes, so you actually know the face and eye locations. By similar or other means you can also detect the lips.
Once you have at least three points on the object (the face, in this case), you can calculate its 3D position in the room using triangulation. Part of this exists in find_obj.cpp, which ships as an example with OpenCV; that example uses keypoints from SURF and draws a rectangle based on this information. Also check out anything else that uses cvFindHomography.
Since OpenCV 2.4.2, there has been a header file for face detection and tracking: opencv2/contrib/detection_based_tracker.hpp
The header file defines a class called DetectionBasedTracker. The tracking mechanism it defines uses Haar cascades in the background to detect objects. The tracking is much faster than the plain OpenCV Haar implementation (however, some have found it to be less accurate).
I have personally found it to work very well on an android device. Some sample code implementing the face detection and tracker is found here:
http://bytesandlogics.wordpress.com/2012/08/23/detectionbasedtracker-opencv-implementation/
You should have a look at Active Shape Models and Active Appearance Models, which are designed for the task you are describing.
OpenCV provides only 2D detection methods, while the methods referenced above (now very popular in the field) track a set of 3D points distributed on the face, plus a texture to describe its appearance.
The Wikipedia pages will give you some links to implementations of the said methods.
If you want to know the 3D parameters of the head in world coordinates (for example, for gaze detection), then you should google the keywords "3D head tracking" and "head pose estimation".