I'm writing a project in which we need to recognize faces using OpenCV. I train a database on photos of known people, then give the program test photos of those same people. Recognition works well (80-90%). But! If I give the program a photo of a person who was not used to train the database, it still matches someone in the database with a surprisingly low distance. Meanwhile, Apple iPhoto handles all of these photos well. Does anyone know what algorithm it uses to recognize faces, or has anyone had this problem? Help please.
P.S. Tested algorithms: LBPHFaceRecognizer, FisherFaceRecognizer, EigenFaceRecognizer.
You mention iPhoto so I'm going to assume you're using OS X or iOS. If so, you may want to try Apple's built-in face detection.
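On the OpenCV side, the usual fix for strangers matching with a low distance is open-set rejection: the recognizer's predict call gives you both a label and a distance, and you treat any match whose distance exceeds a tuned threshold as "unknown". A minimal sketch of just that decision logic (the threshold of 70.0 is an assumption you would tune on held-out photos of known and unknown people, and the distance here is whatever your recognizer, e.g. LBPHFaceRecognizer, returns):

```python
UNKNOWN = -1

def classify(label, distance, threshold=70.0):
    """Reject matches whose distance exceeds the threshold.

    `label` and `distance` are what an OpenCV recognizer's
    predict() returns; 70.0 is a placeholder threshold to tune
    on a validation set of known and unknown faces.
    """
    if distance > threshold:
        return UNKNOWN          # too far from everyone in the database
    return label                # confident match

# A known person at distance 45 is accepted; a stranger whose
# nearest neighbour is at distance 120 is rejected.
print(classify(3, 45.0))   # -> 3
print(classify(3, 120.0))  # -> -1
```

Each of the three recognizers produces distances on a different scale, so the threshold must be tuned per algorithm.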
Related
My partner and I decided to implement a traffic light recognition program as a student project.
But we are absolute beginners in computer vision and have no idea where to start. (All we know is that we'll use OpenCV.)
Should we first learn image recognition, or just start with object tracking?
Our goal is to recognize traffic lights in a video, not just in a single image.
In my opinion, you should take a serious computer vision course before going deeper.
A video is just a sequence of images, so you can use OpenCV to read each frame and process them one at a time.
For your current project, simple object detection using HOG features should be more than enough.
There's a tutorial at http://www.hackevolve.com/create-your-own-object-detector/ . It's easy to follow and the source code is available, so you can move quickly.
Good luck.
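To make the detector idea above concrete: at its core, this kind of detection is a sliding window that scores every patch of the image with a classifier and keeps the patches above a threshold. A toy NumPy sketch of that loop (here `score` is a stand-in for a real trained HOG + SVM classifier, which the tutorial covers):

```python
import numpy as np

def sliding_window_detect(image, win, step, score, threshold):
    """Slide a win_h x win_w window over a 2-D image and return
    the (row, col) of every window whose score exceeds the
    threshold. `score` stands in for a trained classifier."""
    win_h, win_w = win
    hits = []
    for r in range(0, image.shape[0] - win_h + 1, step):
        for c in range(0, image.shape[1] - win_w + 1, step):
            if score(image[r:r + win_h, c:c + win_w]) > threshold:
                hits.append((r, c))
    return hits

# Toy example: "detect" the bright patch in a dark image.
img = np.zeros((8, 8))
img[4:6, 4:6] = 1.0
bright = lambda w: w.mean()          # stand-in classifier
print(sliding_window_detect(img, (2, 2), 2, bright, 0.5))  # -> [(4, 4)]
```

For video, you would run the same detector on each frame read from the capture stream.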
I'm working with the Vision framework to detect faces and objects in multiple images, and it works fantastically.
But I have a question I can't find answered in the documentation. The Photos app on iOS groups faces, and you can tap a face to see all the images containing that person.
How can I classify faces like the Photos app does? Is there a unique identifier or something similar for this?
Thanks!
In order to uniquely recognise faces, you first need to detect a face, then run it through a CoreML model (or another image classification model, such as a TensorFlow model) to classify the image and tell you the likelihood that the face you captured matches one of the faces trained into your model.
Apple Photos uses machine learning (as mentioned in their iPhone reveal keynote this year) to train the device to recognise faces in your photos. The training is performed locally on the device; however, Apple does not offer any public APIs (yet) to allow us to do this.
You could send photo data (crops of faces, using the tool mentioned above by Paras) to your server and have it train a model (using CoreML tooling or something like Nvidia DIGITS on AWS or your own server), convert it to CoreML, compile the model, then download it to your device and sideload it. This is as close as you're going to get to the "magic" face recognition used by Photos, for now, as the device can only read compiled models.
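Without access to Apple's private grouping, the common do-it-yourself approach is to run each detected face crop through an embedding model and cluster the resulting vectors: two crops of the same person land close together. A hedged sketch of just the grouping step using cosine similarity (the embeddings themselves would come from your model; the 0.8 similarity cutoff is an assumption to tune):

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def group_faces(embeddings, threshold=0.8):
    """Greedily assign each face embedding to the first existing
    group whose representative is similar enough, otherwise start
    a new group. Returns a group id for each embedding."""
    reps, groups = [], []
    for e in embeddings:
        for gid, rep in enumerate(reps):
            if cosine(e, rep) >= threshold:
                groups.append(gid)
                break
        else:
            reps.append(e)
            groups.append(len(reps) - 1)
    return groups

# Two near-identical vectors group together; an orthogonal one does not.
faces = [np.array([1.0, 0.0]), np.array([0.99, 0.05]), np.array([0.0, 1.0])]
print(group_faces(faces))  # -> [0, 0, 1]
```

Real pipelines use proper clustering (e.g. agglomerative) rather than this greedy pass, but the principle is the same: identity comes from embedding distance, not from any id the detector returns.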
I don't think there is a way to uniquely identify faces returned to you by the Vision framework. I checked the uuid property of a VNFaceObservation and it is a different identifier every time.
You might have to make your own CoreML model, or just wait for / find a good third-party one.
I hope someone proves me wrong, because I want to know too.
You might want to check out this repo:
https://github.com/KimDarren/FaceCropper
I tested it and it works very well; you can even customise it as per your needs.
I went through the Kinect SDK and Toolkit provided by Microsoft and tested the Face Detection sample, which worked successfully. But how do I recognize the faces? I know the basics of OpenCV (VS2010). Are there any Kinect libraries for face recognition? If not, what are the possible solutions? Are there any tutorials available for face recognition using the Kinect?
I've been working on this myself. At first I just used the Kinect as a webcam and passed the data into a recognizer modeled after this code (which uses Emgu CV to do PCA):
http://www.codeproject.com/Articles/239849/Multiple-face-detection-and-recognition-in-real-ti
While that worked OK, I thought I could do better since the Kinect has such awesome face tracking. I ended up using the Kinect to find the face boundaries, crop it, and pass it into that library for recognition. I've cleaned up the code and put it out on github, hopefully it'll help someone else:
https://github.com/mrosack/Sacknet.KinectFacialRecognition
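The PCA-based recognition in that CodeProject article is essentially the classic eigenfaces technique. A compact NumPy sketch of the idea, under the simplifying assumptions that training images arrive as flattened rows and that a plain nearest-neighbour decision is enough (a real system also needs a rejection threshold and far more data):

```python
import numpy as np

def train_eigenfaces(faces, k):
    """faces: (n, d) matrix of flattened training images.
    Returns the mean face and the top-k principal components."""
    mean = faces.mean(axis=0)
    # SVD of the centred data gives the principal axes in vt.
    _, _, vt = np.linalg.svd(faces - mean, full_matrices=False)
    return mean, vt[:k]

def recognize(face, faces, labels, mean, components):
    """Project everything onto the components and return the
    label of the nearest training image."""
    probe = components @ (face - mean)
    train = (components @ (faces - mean).T).T
    dists = np.linalg.norm(train - probe, axis=1)
    return labels[int(np.argmin(dists))]

# Tiny synthetic example: two "people", two 5-pixel images each.
faces = np.array([[1, 1, 0, 0, 0], [1, 0.9, 0, 0, 0],
                  [0, 0, 0, 1, 1], [0, 0, 0, 0.9, 1]], dtype=float)
labels = ["alice", "alice", "bob", "bob"]
mean, comps = train_eigenfaces(faces, 2)
probe = np.array([1, 0.95, 0, 0, 0.05])
print(recognize(probe, faces, labels, mean, comps))  # -> alice
```

The Kinect's face tracking then just improves the input: a well-cropped, well-aligned face makes this projection step far more reliable.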
I've found a project which could be a good source for you - http://code.google.com/p/i-recognize-you/ - but unfortunately (for you) its homepage is not in English. The most important parts:
- the project (with source code) is at http://code.google.com/p/i-recognize-you/downloads/list
- in the bibliography the author mentioned this site - http://www.shervinemami.info/faceRecognition.html. It seems to be a good starting point for you.
There is no built-in functionality for the Kinect that provides face recognition. I'm not aware of any tutorials out there that do it, but I'm sure someone has tried. It is on my short list; hopefully time will allow soon.
I would try saving the face tracking information and doing a comparison with that for recognition. You would have a "setup" function that would ask the user to stare at the Kinect, and would save the points the face tracker returns. When you wish to recognize a face, the user would look at the screen and you would compare the face tracker points against a database of faces. This is roughly how the Xbox does it.
The big trick is confidence levels. The numbers will not come back exactly as they did previously, so you will need to allow a buffer of values for each feature -- the code would then come back with "I'm 93% sure this is Bob".
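The setup/compare idea above can be sketched as follows. Assume the tracker gives you a fixed-length array of landmark points, and assume a per-point pixel tolerance you tune yourself (both are illustrative assumptions, not Kinect API specifics):

```python
import numpy as np

def match_confidence(saved, probe, tolerance=5.0):
    """Percentage of landmark points lying within `tolerance`
    pixels of their saved position. `saved` and `probe` are
    (n, 2) arrays of tracked face points."""
    dists = np.linalg.norm(saved - probe, axis=1)
    return 100.0 * float((dists <= tolerance).mean())

def recognize(probe, database, min_confidence=90.0):
    """database: name -> saved points from the setup step.
    Returns (name, confidence), or (None, confidence) for an
    unknown face below the cutoff."""
    best, best_conf = None, 0.0
    for name, saved in database.items():
        conf = match_confidence(saved, probe)
        if conf > best_conf:
            best, best_conf = name, conf
    return (best, best_conf) if best_conf >= min_confidence else (None, best_conf)

bob = np.array([[10.0, 10.0], [30.0, 12.0], [20.0, 40.0]])
probe = bob + np.array([[1.0, -1.0], [2.0, 0.0], [0.0, 1.0]])  # small jitter
print(recognize(probe, {"bob": bob}))  # -> ('bob', 100.0)
```

Raw landmark positions depend on pose and distance, so in practice you would normalise the points (e.g. by inter-eye distance) before comparing; the confidence cutoff plays the same open-set role as a distance threshold in other recognizers.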
Hello,
I have to develop software for a college course that will perform a retinal scan; i.e., given a picture, the program should detect the location of the retina.
But I have no clue how to implement this project. Can anyone please provide any relevant information?
I would perhaps start by researching how a face detection algorithm is implemented, and then implement the same algorithm with an iris as the target.
Here's an open source Java implementation of a face detection algorithm: Here
What is the college course? Hopefully you're given more guidance (or should already have knowledge in the area) beyond "develop an algorithm to find the retina/iris". It could probably be done with shape recognition, or various other techniques depending on what the image is like. Are we talking about "Here's a closeup of a face, find the eyes", or "Here's a picture of 10 people, find the eyes"? The algorithms will be very different in those two cases.
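As a toy illustration of where you might start on the "closeup of a face" case before reaching for full detection frameworks: a pupil is usually the darkest roughly circular region of an eye crop, so thresholding dark pixels and taking their centroid already gives a crude locator. A sketch under that simplifying assumption (real images need glare removal, contrast normalisation, and a circular-shape check on top of this):

```python
import numpy as np

def locate_pupil(gray, dark_threshold=50):
    """Return the (row, col) centroid of the dark pixels in a
    greyscale eye crop, or None if nothing is dark enough.
    Assumes the pupil is the dominant dark region."""
    rows, cols = np.nonzero(gray < dark_threshold)
    if rows.size == 0:
        return None
    return float(rows.mean()), float(cols.mean())

# Synthetic eye crop: bright background with a dark disk at (12, 20).
img = np.full((24, 40), 200, dtype=np.uint8)
rr, cc = np.ogrid[:24, :40]
img[(rr - 12) ** 2 + (cc - 20) ** 2 <= 16] = 10
print(locate_pupil(img))  # -> (12.0, 20.0)
```

For the multi-person case you would first run face and eye detection to get the crops, then apply something like this inside each crop.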
I want to build a program that can recognize typical playing cards.
Is there an algorithm that can process an image from a webcam and determine the card type?
If there isn't, are there simpler algorithms that can be combined for this purpose?
Thanks
Here's some basic information to point you in the right direction. I suggest tackling the problem in two parts: a) webcam, and b) card recognition.
Part a) is not as hard as part b) and so I suggest that you ignore the webcam initially - get the algorithm working with several test images you've taken. Once card recognition works, you can then get your webcam working as your input.
Here's a wikipedia article about object recognition. The names of the algorithms are listed, so you'll be able to do some research into which algorithm(s) you might investigate.
Be warned: image processing and feature/object detection is not trivial. I suspect that this would make a very good masters or PhD project. I have very little experience in this area and your question is very general. I hope this helps you to get started.
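To give you a concrete first experiment for part b): once you have isolated and straightened a card in a test image, recognising its rank/suit corner can be as simple as comparing it against one stored reference image per card and picking the closest. A toy sketch (the 4x4 "templates" here are made-up patterns; real cards need perspective correction and lighting normalisation before this comparison is meaningful):

```python
import numpy as np

def match_card(corner, templates):
    """Return the name of the template with the smallest mean
    absolute pixel difference to the extracted corner image.
    `templates` maps card names to same-shape reference images."""
    def diff(a, b):
        return float(np.abs(a.astype(float) - b.astype(float)).mean())
    return min(templates, key=lambda name: diff(corner, templates[name]))

# Made-up 4x4 corner patterns standing in for rank/suit crops.
templates = {
    "ace_of_spades": np.eye(4),
    "two_of_hearts": np.ones((4, 4)),
}
noisy = np.eye(4) + 0.1      # a slightly noisy capture of the ace
print(match_card(noisy, templates))  # -> ace_of_spades
```

Template matching like this is brittle compared with the feature-based methods in that article, but it works surprisingly well for playing cards because the rank/suit corners are high-contrast and standardised.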
postscript:
If you get this working well, the casinos will probably be interested. You could make some money, if you play your cards right.