iOS Face Recognition

I want to develop an application that recognizes a person in an image, like the iPhone's Photos application.
Which Apple framework should be used to achieve such a feature?
Thanks.

Vision
Apply high-performance image analysis and computer vision techniques to identify faces, detect features, and classify scenes in images and video.
See the Vision framework in the Apple docs.
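As a starting point, here is a minimal sketch of face detection with Vision (note that Vision detects faces but does not identify who they are; recognition requires a custom Core ML model on top):

```swift
import Vision
import UIKit

// Minimal sketch: detect face bounding boxes in a UIImage using Vision.
func detectFaces(in image: UIImage, completion: @escaping ([CGRect]) -> Void) {
    guard let cgImage = image.cgImage else { completion([]); return }
    let request = VNDetectFaceRectanglesRequest { request, error in
        let observations = request.results as? [VNFaceObservation] ?? []
        // boundingBox is in normalized coordinates (origin at bottom-left).
        completion(observations.map { $0.boundingBox })
    }
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```

The completion handler receives one normalized rectangle per detected face, which you would convert to view coordinates before drawing.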

Related

Apple Vision Framework Identify face

Is it possible in the Apple Vision Framework to compare faces and recognise if that person is in a picture compared to a reference image of that person?
Something like Facebook Face recognition.
From the Vision Framework Documentation:
The Vision framework performs face and face landmark detection, text
detection, barcode recognition, image registration, and general
feature tracking. Vision also allows the use of custom Core ML models
for tasks like classification or object detection.
So, no, the Vision Framework does not provide face recognition, only face detection.
There are approaches out there to recognize faces. Here is an example of face recognition in an AR app:
https://github.com/NovatecConsulting/FaceRecognition-in-ARKit
They trained a model that can detect around 100 people, but you have to retrain it for every new person you want to recognize. Unfortunately, you cannot simply pass in two images and have the faces compared.
According to the Face Detection vs Face Recognition article:
Face detection just means that a system is able to identify that there is a human face present in an image or video. For example, face detection can be used for the auto-focus functionality of cameras.
Face recognition describes a biometric technology that goes far beyond merely detecting a human face. It actually attempts to establish whose face it is.
But...
In case you need an Augmented Reality app, like FaceApp, the answer is:
Yes, you can create an app similar to FaceApp using ARKit.
That's because you only need a simple form of face detection, which is accessible via the ARKit or RealityKit frameworks. You do not even need to create an .mlmodel as you would with the Vision and Core ML frameworks.
All you need is a device with a front camera; ARKit 3.0 or RealityKit 1.0 lets you detect up to three faces at a time. You receive an ARFaceAnchor whenever a face is detected.
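The original answer's code snippet is missing here; a minimal sketch of that setup might look like this (class name and session wiring are illustrative):

```swift
import ARKit

// Sketch: run an ARFaceTrackingConfiguration and receive an ARFaceAnchor
// for each face the front (TrueDepth) camera detects.
class FaceViewController: UIViewController, ARSessionDelegate {
    let sceneView = ARSCNView()

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        guard ARFaceTrackingConfiguration.isSupported else { return }
        let config = ARFaceTrackingConfiguration()
        config.maximumNumberOfTrackedFaces = 3   // up to 3 faces in ARKit 3+
        sceneView.session.delegate = self
        sceneView.session.run(config)
    }

    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for anchor in anchors where anchor is ARFaceAnchor {
            print("Face detected: \(anchor.identifier)")
        }
    }
}
```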
Additionally, if you want to use reference images for simple image detection, put several reference images into an AR Resource Group (.arresourcegroup) in Xcode's asset catalog; you then get an ARImageAnchor at the center of each detected image.
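Again the referenced snippet is missing from the answer; a sketch of image detection with a resource group might look like this (the group name "Faces" is an assumption):

```swift
import ARKit

// Sketch: detect reference images bundled in an AR Resource Group
// named "Faces" (illustrative name) in the asset catalog.
func runImageDetection(on sceneView: ARSCNView) {
    let config = ARWorldTrackingConfiguration()
    config.detectionImages = ARReferenceImage.referenceImages(
        inGroupNamed: "Faces", bundle: nil) ?? []
    config.maximumNumberOfTrackedImages = 3
    sceneView.session.run(config)
}

// In your ARSCNViewDelegate, each detected image arrives as an ARImageAnchor:
// func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
//     if let imageAnchor = anchor as? ARImageAnchor {
//         // imageAnchor.transform is centered on the detected image
//     }
// }
```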

What is the difference between Face Detection and Face Tracking in iOS perspective

Maybe this sounds like a stupid question, but I am really curious: what is the difference between "face detection" and "face tracking" from an iOS perspective? And in which situations should each of them be used?
First of all, you should know about the Vision framework:
https://developer.apple.com/documentation/vision
Vision — a framework to apply high-performance image analysis and computer vision techniques to identify faces, detect features, and classify scenes in images and video.
Face detection. Detects all faces in a selected photo.
Face landmarks. An image analysis that finds facial features (such as the eyes and mouth) in an image.
Object tracking. Tracks any object using the camera.
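For the face-landmarks item above, a short sketch of how Vision exposes landmark points (the left-eye access is just one example of the available landmark regions):

```swift
import Vision

// Sketch: detect face landmarks (eyes, mouth, nose, etc.) in a CGImage.
func detectLandmarks(in cgImage: CGImage) {
    let request = VNDetectFaceLandmarksRequest { request, _ in
        for face in request.results as? [VNFaceObservation] ?? [] {
            guard let landmarks = face.landmarks else { continue }
            // Each region is a set of points normalized to the face bounding box.
            let leftEyePoints = landmarks.leftEye?.normalizedPoints ?? []
            print("left eye has \(leftEyePoints.count) points")
        }
    }
    try? VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])
}
```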
Among the many new APIs is the Vision framework, which helps with detecting faces, face features, object tracking, and more.
Hope this is helpful to you!
Face detection is also provided by Apple's Core Image framework, available since iOS 5 (in addition to the Vision framework).
You can use CIDetector to detect faces:
https://developer.apple.com/documentation/coreimage/cidetector
Face detection returns face landmarks such as the eyes, lips, and nose. These frames can be used to modify the faces with image processing.
You can also take the detected faces and pass them to a third-party face recognition API (such as the Microsoft Face API) to recognize people.
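A minimal sketch of the CIDetector approach described above (the accuracy option is one of the documented CIDetector options):

```swift
import CoreImage

// Sketch: Core Image face detection with CIDetector (available since iOS 5).
func detectFacesWithCoreImage(in ciImage: CIImage) -> [CIFaceFeature] {
    let detector = CIDetector(ofType: CIDetectorTypeFace,
                              context: nil,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    let features = detector?.features(in: ciImage) as? [CIFaceFeature] ?? []
    for face in features {
        // CIFaceFeature exposes landmark positions usable for image processing.
        if face.hasLeftEyePosition { print("left eye at", face.leftEyePosition) }
        if face.hasMouthPosition   { print("mouth at", face.mouthPosition) }
    }
    return features
}
```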
Face Tracking
Can be used to track the real-time location of face features, and possibly to apply filters to them (Snapchat, etc.).

Reproduce the new scanning feature in iOS 11 Notes

Does anyone know how to reproduce the new scanning feature in Notes in iOS 11?
Is AVFoundation used for the camera?
How is the camera detecting the shape of the paper/document/card?
How do they place the overlay over in real time?
How does the camera know when to take the photo?
What's that animated overlay and how can we achieve this?
Does anyone know how to reproduce this?
Not exactly :P
Is AVFoundation used for the camera? Yes
How is the camera detecting the shape of the paper/document/card?
They are using the Vision Framework to do rectangle detection.
It's stated in this WWDC session by one of the demonstrators
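The session doesn't include code here, but a sketch of what Vision rectangle detection for a document might look like (the threshold values are illustrative, not Apple's):

```swift
import Vision

// Sketch: detect the most prominent rectangle (e.g. a sheet of paper)
// in a captured frame using Vision.
func detectDocumentRectangle(in cgImage: CGImage,
                             completion: @escaping (VNRectangleObservation?) -> Void) {
    let request = VNDetectRectanglesRequest { request, _ in
        // Pick the rectangle Vision is most confident about.
        let best = (request.results as? [VNRectangleObservation])?
            .max(by: { $0.confidence < $1.confidence })
        completion(best)
    }
    request.minimumConfidence = 0.8
    request.minimumAspectRatio = 0.3   // tolerate tall receipts
    try? VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])
}
```

The returned observation's four corner points can then drive the live overlay drawn on top of the camera preview.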
How do they place the overlay over in real time?
You should check out the above video, as he talks about doing something similar in one of the demos.
How does the camera know when to take the photo?
I'm not familiar with this app but it's surely triggered in the capture session, no?
What's that animated overlay and how can we achieve this?
Not sure about this but I'd imagine it's some kind of CALayer with animation
Is Tesseract framework used for the image afterwards?
Isn't Tesseract OCR for text?
If you're looking for handwriting recognition, you might want to look for an MNIST model.
Use Apple’s rectangle detection SDK, which provides an easy-to-use API that can identify rectangles in still images or video sequences in near-realtime. The algorithm works very well in simple scenes with a single prominent rectangle in a clean background, but is less accurate in more complicated scenes, such as capturing small receipts or business cards in cluttered backgrounds, which are essential use-cases for our scanning feature.
An image processor that identifies notable features (such as faces and barcodes) in a still image or video.
https://developer.apple.com/documentation/coreimage/cidetector

How can image processing detect a photograph vs. a real person in front of the camera?

I plan to develop software that takes attendance (work, school) by face recognition as my final year project (FYP). (Just an idea.)
I have searched the net for image processing libraries and found that OpenCV is well known; there are a lot of YouTube videos on face recognition using OpenCV, which will definitely help me a lot. (I'm totally new to image processing.) I will also be using Visual Studio.
Here comes the first problem: is it possible to detect whether a photo or a real person is standing in front of the camera while taking attendance?
If yes, can you provide me some links or tutorials on how image processing can distinguish a 'photograph' from a 'real person'?
As I said, I'm totally new to image processing, and this is just an idea for my FYP.
Or is there any open-source library that you recommend?
Eulerian Video Magnification can detect whether a photo or a real person is in front of the camera, but it may not be able to distinguish a video of a face from a real person. Thus, a face recognition authentication system based on Eulerian Video Magnification alone can be defeated when a malicious user presents a face video rather than a real face.
Here are my ideas for developing a robust face recognition authentication system:
You can use multi-view face recognition to build a robust face authentication system. Here is a demo video of this technique, and here are papers to get the theoretical background. Also, you can benefit from this, this, this and this when you start coding.
You can use RANDOM challenges to detect whether it is a photo/video or a real person: for example, ask the user to blink their eyes 3 times, move an eyebrow, or look to the left or right (multi-view face recognition will be used to recognize the user's face when they look right or left).
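If the project ends up on iOS, one way to sketch the blink challenge is with ARKit's face blend shapes; a printed photo cannot blink. (The thresholds and the three-blink count here are illustrative; the idea itself is framework-agnostic and could equally be built with OpenCV landmarks.)

```swift
import ARKit

// Sketch: count eye blinks from ARFaceAnchor blend shapes as a liveness check.
final class BlinkLivenessChecker {
    private var eyesWereClosed = false
    private(set) var blinkCount = 0

    // Call this from the ARSessionDelegate for each updated face anchor.
    func process(_ faceAnchor: ARFaceAnchor) {
        let left  = faceAnchor.blendShapes[.eyeBlinkLeft]?.floatValue ?? 0
        let right = faceAnchor.blendShapes[.eyeBlinkRight]?.floatValue ?? 0
        let closed = left > 0.8 && right > 0.8          // both eyes shut
        if closed && !eyesWereClosed { blinkCount += 1 } // count the closing edge
        eyesWereClosed = closed
    }

    var passedChallenge: Bool { blinkCount >= 3 }        // "blink 3 times"
}
```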
You should use both of these ideas in your project to develop a robust face recognition authentication system.

Active Appearance Model face for ios

I am using OpenCV to develop an iPhone application, and I am using it to detect faces. Rather than detect the face as a whole, I would like to detect each of the smaller facial features (eyes, nose, ears, lips, etc.). Actually, I want to do something like this link.
How to use aam-opencv for iOS?
I searched on the internet and found this link, but I don't know how to apply it on iOS.
Please, can you help me with facial expression recognition?
