I know how to use image detection to find whether there is a helmet in a picture,
but I don't know how to find whether someone is not wearing a helmet, especially when another person in the picture is wearing one.
Would you please tell me how to solve this problem?
Thanks in advance.
Ryu
First, it depends on what kind of helmets you want to include: some helmets cover the face and some don't.
For helmets that cover the face, you can use a face detector, or train faces as a class. If a face is detected, it means the person is not wearing a helmet.
For helmets that do not cover the face, you can detect the face and crop the appropriate area above the detected face in the image, then run your helmet detector on that crop.
I've been using Vision to identify Facial Landmarks, using VNDetectFaceLandmarksRequest.
It seems that whenever a face is detected, the resulting VNFaceObservation will always contain all the possible landmarks, and have positions for all of them. It also seems that positions returned for the occluded landmarks are 'guessed' by the framework.
I have tested this using a photo where the subject's face is turned to the left, and the left eye thus isn't visible. Vision returns a left eye landmark, along with a position.
Same thing with the mouth and nose of a subject wearing an N95 face mask, or the eyes of someone wearing opaque sunglasses.
While this can be a useful feature for other use cases, is there a way, using Vision or CIDetector, to figure out which face landmarks are actually visible in a photo?
I also tried using CIDetector, but it appears to be able to detect mouths and smiles through N95 masks, so it doesn't seem to be a reliable alternative.
After confirmation from Apple, it appears it simply cannot be done.
If Vision detects a face, it will guess some occluded landmarks' positions, and there is no way to differentiate actually detected landmarks from guesses.
For those facing the same issue, a partial workaround is to compare the positions of the landmark points with those of the median-line and nose-crest points.
While this can help determine if a facial landmark is occluded by the face itself, it won't help with facial landmarks occluded by opaque sunglasses or face masks.
Maybe this sounds like a stupid question, but I am really curious: what is the difference between face detection and face recognition from an iOS perspective? And in which situations should I use each of them? I am new to iOS and have no previous hands-on experience with anything related to iOS face detection/recognition. I am going to make an application where I have to detect the user's face live from the camera (not from an already-taken photo) and match it against a database picture collection. Please respond if you can, and please don't misunderstand my question. O:)
-Thanks a lot in advance.
In General:
Face Detection:
Detect the face in the image. It searches for general human-face-like segments in the whole image. There may be one or more outputs; each output is a rectangle around a face in the image. [Viola-Jones method]
Face Recognition:
Recognize an input face against an already-trained database, returning the highest-scoring match. A single face should be given as input, and the output will be a name, a class label, or "unknown face". [PCA, LDA]
iOS has face detection, but no face recognition. It can tell you where the faces are in an image but can't tell you who they are.
If you want to use the face detection, start with AVMetadataFaceObject or a tutorial like this one.
Project: Red eye detection
Description: I want to remove red-eye from images. I am not able to use a face detector because the faces in the images are not always frontal, and the images are of players wearing helmets. The images may also contain many red eyes, and the lighting is poor. I want to know how to detect the red eyes. I am searching for proven studies. Any help would be appreciated.
Update:
My images will be like the below one with red eye.
Those algorithms belong to the edge- and feature-detection algorithms studied in computer vision.
Since you are looking for studies, I can point you to ones by Microsoft and HP, another one by HP, and another good discussion of the algorithm.
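As a rough illustration of the common first step in those papers (flag pixels whose red channel dominates, then desaturate them), here is a minimal NumPy sketch. The redness measure r / (g + b + 1) and the 2.0 threshold are assumptions chosen for illustration, not values taken from any of the cited studies; in practice you would also restrict the mask to eye-sized, roughly circular blobs.

```python
import numpy as np

def red_eye_mask(rgb, thresh=2.0):
    """Boolean mask of pixels whose red channel strongly dominates G and B.

    The threshold of 2.0 is an illustrative guess, not a published value."""
    r = rgb[..., 0].astype(np.float64)
    g = rgb[..., 1].astype(np.float64)
    b = rgb[..., 2].astype(np.float64)
    return r / (g + b + 1.0) > thresh  # +1 avoids division by zero

def desaturate_red(rgb):
    """Replace the red channel of flagged pixels with the mean of G and B."""
    out = rgb.copy()
    mask = red_eye_mask(rgb)
    mean_gb = ((rgb[..., 1].astype(np.uint16) + rgb[..., 2]) // 2).astype(rgb.dtype)
    out[..., 0][mask] = mean_gb[mask]
    return out
```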
I am working on a project which needs to detect left and right face turns for face recognition. I don't even know whether APIs are available to detect this.
I am able to detect faces using OpenCV, but when I turn my face it no longer detects the face.
Any help is greatly appreciated.
It's called a profile face in OpenCV, and a classifier for it already ships with OpenCV. You must use that separate classifier (rather than the frontal-face one) for this type of detection.
Take a look at these links:
http://alereimondo.no-ip.org/OpenCV/34
http://tech.groups.yahoo.com/group/OpenCV/message/78936
I have a problem detecting objects in images or video frames.
My task is to detect when a person or something else enters the sight of a web camera, and then have my system raise an alarm.
The next step is to recognize what kind of object it is. I know how to use the Hough transform to detect lines, circles, even rectangles, but when a person comes into the camera's sight, a person's silhouette is more complex than a line, circle, or rectangle. How can I recognize that the object is a person and not a car?
I need help with this.
thanks in advance
I suggest you look at the paper "Histograms of Oriented Gradients for Human Detection" by Dalal and Triggs. They used histograms of oriented gradients to detect humans in images.
I think one method is to use Bayesian analysis on your image and see how it matches against a database of known images. I believe some people first run a wavelet transform to emphasize the more noticeable features.