iOS face tracking during detection with CIDetector

Here is the actual problem: during face detection in streamed video I need to track which faces were detected in previous iterations and which are new ones. This could possibly be done with the
CIFaceFeature trackingID
property, but here comes the hard part.
First of all, CIDetector returns an array of
CIFaceFeatureInternal
objects instead of CIFaceFeature. They are almost like CIFaceFeature, but don't contain any tracking ID or eye data.
I'm currently testing on iOS 5, and since the
CIDetectorTracking
option for CIDetector is only available from iOS 6, maybe that's expected. Anyway, I need to target iOS 5 in my application. I could possibly determine whether a face is still present on screen by comparing the detected face rectangles, but without additional information like eye and mouth positions that will be very unreliable.
So here comes the question:
How can I detect faces from video output on iOS 5 and also get a tracking ID for the faces found?
Even a pointer in the right direction would be very helpful, whether that's a third-party library like OpenCV or an explanation of an approach.
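For what it's worth, the rectangle-matching fallback mentioned in the question can be made workable. Here is a minimal sketch of the idea in Python (for illustration only; the same logic ports directly to Objective-C on iOS 5): assign a tracking ID by matching each detected rectangle to the previous frame's rectangle with the highest intersection-over-union (IoU), and mint a new ID when nothing overlaps enough. The 0.3 threshold is an arbitrary assumption you would tune.

```python
def iou(a, b):
    # rectangles as (x, y, w, h)
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    ix = max(0, min(ax2, bx2) - max(a[0], b[0]))
    iy = max(0, min(ay2, by2) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

class RectTracker:
    """Assigns stable IDs to face rectangles across frames by greedy IoU matching."""
    def __init__(self, threshold=0.3):
        self.threshold = threshold
        self.tracks = {}          # id -> rectangle from the last frame
        self.next_id = 0

    def update(self, rects):
        assigned = {}
        free = dict(self.tracks)  # previous-frame rectangles not yet matched
        for r in rects:
            best = max(free, key=lambda tid: iou(free[tid], r), default=None)
            if best is not None and iou(free[best], r) >= self.threshold:
                assigned[best] = r     # same face, keep its ID
                del free[best]
            else:                      # no good overlap: treat as a new face
                assigned[self.next_id] = r
                self.next_id += 1
        self.tracks = assigned
        return assigned
```

This breaks down under fast motion or crossing faces, which is exactly the uncertainty the question anticipates, but it needs nothing beyond the bounding boxes iOS 5 already gives you.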

Related

Methods to track marked points in a stationary video?

Not sure where to ask this. Please redirect me if SO is not the place.
I want to make a web app that accurately tracks pose in a stationary video of someone pedaling a stationary bike. The joints can be marked with stickers to make the process easier and more accurate. Basically, I want to do what this app does.
First I tried markerless tracking using pose estimation models such as MediaPipe's BlazePose and Google's MoveNet. However, these are not accurate enough. I would also like to track some additional landmarks (ball of the foot, ...).
Then I tried OpenCV.js's Lucas-Kanade optical flow method, but the algorithm lost the tracked point quickly, even when I placed colored tape on the part of the body I wanted to track.
I also tried template matching on a single marked point in OpenCV, but it was not very robust, and it would probably not work well with more markers.
What other methods can I try? Since the app I send the video to requires stickers to be placed, I thought it was using something like Lucas-Kanade. But as I said, when I tried that, it wasn't able to track the marked point. Because the app is iOS-only I thought it might be using this API, but that is only my speculation.
Edit: added example video: https://www.youtube.com/watch?v=eCNyyABfWSE
I tried shooting in slow-mo to have more fps, but the quality suffered because of it. Also, I didn't have blue or green tape, so I had to use yellow, which is not very visible on the sweater or on my wrist. But the markers on the pants should be trackable, right?
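One direction that may be more robust than frame-to-frame optical flow for stickers is per-frame color segmentation: re-detect the marker's centroid in every frame from its color, so a momentary loss does not break the track. A toy sketch in plain Python follows (the RGB thresholds for "yellow" are made-up assumptions; on real frames you would use OpenCV's `cv2.inRange` plus `cv2.moments` instead of pixel loops):

```python
def find_marker(frame, is_marker):
    """Centroid of pixels matching the marker colour.
    frame: 2-D list of (r, g, b) tuples; is_marker: predicate on one pixel."""
    xs, ys = [], []
    for y, row in enumerate(frame):
        for x, px in enumerate(row):
            if is_marker(px):
                xs.append(x)
                ys.append(y)
    if not xs:
        return None                      # marker lost this frame
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def is_yellow(px):
    # crude "yellow" test: strong red and green, weak blue (thresholds are assumptions)
    r, g, b = px
    return r > 180 and g > 180 and b < 120
```

With several markers you would run one color predicate per sticker color, or match centroids to the previous frame by nearest neighbour. This also suggests why the yellow tape failed: thresholds like these need the marker color to be clearly separated from skin and clothing.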

Is it possible to track a face on iOS using the camera?

I know I can detect a face bounding box on iOS, but I am wondering whether, once a face has been found, it is possible to know for each incoming frame whether the tracked face is the same one I tracked in the previous frame.
This would let me handle multiple faces found in the frames and know whether each face is the same one as in the previous frame...
As of now I only know how to track a face per frame without knowing if the face I am tracking is the same one from the previous frame.
Any input?
See iOS 11's VNDetectFaceLandmarksRequest and VNFaceObservation.
For older iOS versions, there are SDKs including OpenCV and Microsoft's Cognitive-Face-iOS, among others you can find by searching.

Automatically take a picture to proceed on back thread when rectangle detected? iOS, Swift

I am working on a project at the moment and wanted to find out what I need to be looking at to automatically take a picture when a rectangle is detected. I have seen this in action on an app called car spotter but wanted to know how it could be done. On car spotter it detects the rectangle and blurs the number plate automatically.
You can use AVCaptureSession to capture pictures from the camera and CIDetector to detect the rectangle. These are all system APIs, so you don't need OpenCV, which costs extra storage space.
And there is an implementation example on Github:
https://github.com/charlymr/IRLDocumentScanner
And the key procedures are in this class:
https://github.com/charlymr/IRLDocumentScanner/blob/master/Source/Private/IRLCameraView.m

Is it possible to emulate a polarization filter during image processing, using C++ or OpenCV?

I've looked all over Stack and other sources, but I haven't seen any code that seems to successfully emulate what a polarization filter does, reducing glare. The application I want for this code won't allow for a physical filter, so I was wondering if anyone had tried this.
I'm using OpenCV image processing (Mat) in C++ on an Android platform, and glare is interfering with the results I'm trying to get. Imagine a lost object you're trying to find based on a finite set of red/green/blue values; if the object is smooth, glare will give bad results. And that's my current problem.
OK, no: there is no virtual polarization that can be accomplished with code alone. It's possible to find glare spots on shiny objects (via image color saturation), and those can be overwritten with nearby glare-free pixels, but that's not the same thing as real polarization. That requires a physical metal mesh in front of the lens or sensor to eliminate the stray light waves that create glare.
Tell you what. The person who invents the virtual polarization filter, using just code, will be an instant billionaire since every cell phone and digital camera company will want to license the patent.

How to track an opened hand in any environment with RGB camera?

I want to make a movable camera that tracks an open hand (facing the floor). It just needs to track the open hand, but it also has to know its rotation (2D rotation).
This is what I searched for so far:
Contour: as the camera is movable, the background is unknown and even the lighting is not fixed, so it's hard for me to get a clean hand segment in real time.
Haar: it seems this just returns a rect and can't deal with rotation.
Feature detection: a hand doesn't have enough detail for this.
I am using the OpenCV Unity plugin to do this.
EDIT
https://www.codeproject.com/Articles/826377/Rapid-Object-Detection-in-Csharp
I see another library can do something like this. Can OpenCV also do it?
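On the rotation point: whichever way the hand is segmented, the 2D rotation of a binary mask can be recovered from its second-order central moments, which is the math behind OpenCV's `cv2.moments` and `cv2.minAreaRect`, so yes, OpenCV can do that part. A plain-Python sketch of the moment computation (for illustration; on real images you would use the OpenCV calls):

```python
import math

def orientation(mask):
    """In-plane angle (radians) of a binary mask's major axis via central moments."""
    pts = [(x, y) for y, row in enumerate(mask)
                  for x, v in enumerate(row) if v]
    n = len(pts)
    cx = sum(x for x, _ in pts) / n          # centroid
    cy = sum(y for _, y in pts) / n
    mu20 = sum((x - cx) ** 2 for x, _ in pts) / n   # second-order central moments
    mu02 = sum((y - cy) ** 2 for _, y in pts) / n
    mu11 = sum((x - cx) * (y - cy) for x, y in pts) / n
    return 0.5 * math.atan2(2 * mu11, mu20 - mu02)
```

Note this only gives the axis, not which way the fingers point; disambiguating that needs an extra cue such as the centroid of the fingertips versus the palm.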
