Is it possible to track a face on iOS using the camera?

I know I can detect a face bounding box on iOS, but I am wondering whether, once a face has been found, it is possible to know for each incoming frame if the face being tracked is the same one I tracked in the previous frame.
This would allow me to handle multiple faces found across frames and know whether each face is the same one as in the previous frame...
As of now I only know how to detect a face per frame, without knowing whether the face I am tracking is the same one from the previous frame.
Any input?

See iOS 11's VNDetectFaceLandmarksRequest and VNFaceObservation.
For older iOS versions, there are third-party SDKs such as OpenCV and Microsoft's Cognitive-Face-iOS, and others if you search.
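One way to keep the "same face" association across frames with Vision (iOS 11+) is to detect faces once with VNDetectFaceRectanglesRequest and then seed one VNTrackObjectRequest per face; each tracked observation keeps a stable uuid while tracking succeeds. A minimal Swift sketch, assuming pixel buffers arrive from an AVCaptureVideoDataOutput delegate (the FaceTracker class name is just for illustration):

    import CoreGraphics
    import CoreVideo
    import Foundation
    import Vision

    final class FaceTracker {
        private let sequenceHandler = VNSequenceRequestHandler()
        private var trackingRequests: [VNTrackObjectRequest] = []

        /// Run once (or whenever you want to re-detect) to seed the trackers.
        func seed(from pixelBuffer: CVPixelBuffer) throws {
            let detectRequest = VNDetectFaceRectanglesRequest()
            try VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
                .perform([detectRequest])
            let faces = detectRequest.results as? [VNFaceObservation] ?? []
            // One tracking request per detected face; the observation's uuid
            // identifies that face across subsequent frames.
            trackingRequests = faces.map { VNTrackObjectRequest(detectedObjectObservation: $0) }
        }

        /// Run for every subsequent frame; returns (uuid, boundingBox) pairs.
        func track(on pixelBuffer: CVPixelBuffer) throws -> [(UUID, CGRect)] {
            guard !trackingRequests.isEmpty else { return [] }
            try sequenceHandler.perform(trackingRequests, on: pixelBuffer)
            var results: [(UUID, CGRect)] = []
            for request in trackingRequests {
                guard let observation = request.results?.first as? VNDetectedObjectObservation else { continue }
                // Feed the latest observation back in so tracking continues next frame.
                request.inputObservation = observation
                results.append((observation.uuid, observation.boundingBox))
            }
            return results
        }
    }

If a face is lost (no result or very low confidence), you would re-run the detection step to re-seed the trackers.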

Related

Placing objects automatically when ground plane detected with vuforia

I'm working on an application where the concept is that you can 'select' objects before actually placing them. So what I wanted to do was have some low-quality objects on a shelf or something similar. When the user selects an object, they can then tap to place the high-quality version of it in their area for further viewing.
I was wondering whether this is possible with Vuforia. I wanted to use this platform since it works well from what I can tell and it's cross-platform (the application needs to run on Android and the HoloLens).
I have set up the basic application where you can place a capsule in the area. Now I want to automatically place the object (in this case the capsule) once Vuforia has detected a ground plane. From what I can see, the plane finder has events that fire when an input is detected, but I couldn't find an event that fires when the ground plane itself is detected. Is this possible with Vuforia? I know it's doable with the HoloLens, but I would like to know if it's possible on Android or other mobile devices. I really don't know where to start looking, so I hope someone can point me in the right direction.
Let me know if I need to include more information!
The Vuforia PlaneFinderBehaviour (see doc here) has the event OnAutomaticHitTest, which fires every frame in which a ground plane is detected.
So you can use it to automatically spawn an object.
You have to add your method to the On Automatic Hit Test list of the "Plane Finder", instead of the On Interactive Hit Test list.
I've heard that Vuforia Fusion does not yet support ARCore (it does support ARKit), so it uses an internal implementation to simulate ARCore functionality, and they are waiting for a final release of ARCore to support it. Many users have reported that their objects drift even on an ARCore-supported device.

iOS face tracking while detection with CIDetector

Here is the actual problem: during face detection in streamed video I need to track which faces were detected in previous iterations and which are new ones. This could possibly be done with the CIFaceFeature trackingID property, but here comes the hard part.
First of all, CIDetector returns an array of CIFaceFeatureInternal objects instead of CIFaceFeature. They are almost like CIFaceFeature, but don't contain any tracking ID or eye data.
Currently I am trying this on iOS 5, and since the CIDetectorTracking option for CIDetector is only available from iOS 6, maybe that's to be expected. Anyway, I need to target iOS 5 in my application. I could possibly try to determine whether a face is still present on screen by comparing the detected face rectangles, but without additional information like eye and mouth positions that will be very unreliable.
So here comes the question:
How can I detect faces from video output on iOS 5 and also get some tracking ID for the found faces?
Even a pointer in the right direction, perhaps a third-party library like OpenCV, or some explanation, would be very helpful.
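For what it's worth, the rectangle-matching fallback mentioned above can be sketched as matching each frame's detected rectangles against the previous frame's by largest overlap and carrying an ID forward. Shown in Swift for readability (an iOS 5 project would use Objective-C); the TrackedFace type, the IDs, and the overlap threshold are illustrative assumptions, not anything CIDetector provides:

    import CoreGraphics

    // Hypothetical container for a face carried across frames.
    struct TrackedFace {
        let id: Int
        var rect: CGRect
    }

    /// Matches the rectangles detected in the current frame against the faces
    /// tracked in the previous frame, by largest area of intersection.
    /// Unmatched rectangles get fresh IDs; unmatched previous faces are dropped.
    func updateTrackedFaces(previous: [TrackedFace],
                            currentRects: [CGRect],
                            nextID: inout Int,
                            minOverlapRatio: CGFloat = 0.3) -> [TrackedFace] {
        var remaining = previous
        var updated: [TrackedFace] = []

        for rect in currentRects {
            // Find the previously tracked face whose rectangle overlaps this one the most.
            var bestIndex: Int? = nil
            var bestOverlap: CGFloat = 0
            for (index, face) in remaining.enumerated() {
                let intersection = face.rect.intersection(rect)
                let overlap = intersection.isNull ? 0 : intersection.width * intersection.height
                if overlap > bestOverlap {
                    bestOverlap = overlap
                    bestIndex = index
                }
            }
            let rectArea = rect.width * rect.height
            if let index = bestIndex, rectArea > 0, bestOverlap / rectArea >= minOverlapRatio {
                // Good overlap: treat it as the same face and keep its ID.
                var face = remaining.remove(at: index)
                face.rect = rect
                updated.append(face)
            } else {
                // No good match: treat it as a new face.
                updated.append(TrackedFace(id: nextID, rect: rect))
                nextID += 1
            }
        }
        return updated
    }

As noted, without eye and mouth positions this stays a heuristic: two faces that cross paths or a fast-moving face can easily pick up the wrong ID.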

iOS: UIImagePickerController Overlay property Detects CameraSource Change

Background: I am implementing a face mask to help people focus their camera and to produce a uniform result across every picture. Unfortunately, the face mask needs to adjust its size when switching between the front- and back-facing cameras to provide a good guideline for people.
Problem: I have been trying to detect this switch between cameras so I can adjust my face mask accordingly, but I have not yet found a way to detect it.
Additional info: I have tried looking into the delegate and subclassing the picker controller; there are no methods exposed for this detection. My last resort would be a thread that keeps checking the camera source and adjusts if needed. I welcome anything better :)
I would take a look at the UIImagePickerController documentation around the cameraDevice property:
https://developer.apple.com/library/ios/#documentation/UIKit/Reference/UIImagePickerController_Class/UIImagePickerController_Class.pdf
You can create an observer to run a selector when it changes:
http://farwestab.wordpress.com/2010/09/09/using-observers-on-ios/
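A rough Swift sketch of that observer approach, assuming cameraDevice actually emits KVO change notifications (Apple does not document it as KVO-compliant, so verify on a device); the mask-adjustment method is a placeholder:

    import UIKit

    final class CameraMaskController {
        let picker = UIImagePickerController()
        private var cameraObservation: NSKeyValueObservation?

        func startObservingCameraDevice() {
            // Assumption: cameraDevice posts KVO notifications when the user
            // flips between front and rear cameras. Test this before relying on it.
            cameraObservation = picker.observe(\.cameraDevice, options: [.new]) { [weak self] picker, _ in
                self?.adjustMask(for: picker.cameraDevice)
            }
        }

        // Placeholder: the real mask-resizing logic is app-specific.
        private func adjustMask(for device: UIImagePickerController.CameraDevice) {
            print(device == .front ? "Front camera: shrink mask" : "Rear camera: enlarge mask")
        }
    }

If the notifications never fire, the polling fallback mentioned in the question may still be the way to go.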

iPad QR scanner - how to adjust Zxing scanner's focus area

Currently the rectangular frame takes up almost the whole screen; may I know if there is any way to reduce the focus area?
I found that if I use an iPhone app with ZXing built in on the iPad, it scans more efficiently than the iPad app.
So I'm trying to reduce the focus area, hoping this will yield a better result on the iPad.
Are you talking about an iPad 3? The iPad 2 has a fixed-focus camera. There has recently been an issue found with iPad 3 support which led to poor decode rates, particularly for dense codes. It's been partially fixed by adjusting the resolution ZXing asks iOS for, but the fix isn't complete at this point.
Or are you just thinking about cropping down the region ZXing looks at to detect a code? This is unlikely to produce better decode rates. Given the resolution ZXing asks iOS for, it can scan an entire capture image very quickly. In theory, extraneous clutter could confuse it and cropping could reduce that, but I wouldn't spend any effort trying to improve the cropping until I was sure that confusion was really happening. I haven't seen any evidence of it.

Motion detection of iOS device in 3d Space

I've been working with the iOS sensors a bit of late and I wanted to write an app that would accurately track the motion of the phone in space. I want to know if it's possible to track the motion of the device and detect gestures, such as drawing a circle with your phone or even moving in a straight line.
I've been searching online about this, and I wanted to know two things:
1. Is it possible to do this with the CoreMotion framework?
2. If yes, what is the alternative for older devices that do not support CoreMotion, without the double-integration method using the accelerometer?
This would really help!
Any other alternative ideas are most welcome!
Thanks in advance!
As you write, you cannot do the double integral.
For gesture recognition, I would try dynamic time warping. See my earlier answer here.
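To make that concrete, here is a minimal sketch of classic dynamic time warping over accelerometer magnitudes (Swift, playground-style; the 50 Hz sample rate and the idea of thresholding the distance against a recorded template are illustrative assumptions, not from the linked answer):

    import CoreMotion
    import Foundation

    /// Classic O(n*m) dynamic time warping distance between two 1-D sequences,
    /// e.g. acceleration magnitudes of a template gesture and a new attempt.
    func dtwDistance(_ a: [Double], _ b: [Double]) -> Double {
        guard !a.isEmpty, !b.isEmpty else { return .infinity }
        var cost = [[Double]](repeating: [Double](repeating: .infinity, count: b.count + 1),
                              count: a.count + 1)
        cost[0][0] = 0
        for i in 1...a.count {
            for j in 1...b.count {
                let d = abs(a[i - 1] - b[j - 1])
                cost[i][j] = d + min(cost[i - 1][j],      // insertion
                                     cost[i][j - 1],      // deletion
                                     cost[i - 1][j - 1])  // match
            }
        }
        return cost[a.count][b.count]
    }

    // Collecting a sequence to compare: user-acceleration magnitudes from CoreMotion.
    let motionManager = CMMotionManager()
    var samples: [Double] = []
    motionManager.deviceMotionUpdateInterval = 1.0 / 50.0  // 50 Hz, an arbitrary choice
    motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
        guard let acc = motion?.userAcceleration else { return }
        samples.append(sqrt(acc.x * acc.x + acc.y * acc.y + acc.z * acc.z))
    }
    // Later: compare `samples` against a pre-recorded template gesture and accept
    // the gesture if the DTW distance falls below a threshold you tune by experiment.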
