I am very new to OpenCV and have managed to install it so far. I want to compare a face against the other faces available in a library and find the closest match. I have tried different features but couldn't get a close match.
Any suggestions on choosing a detector?
The dimensions of the input image and of the images in the library are the same.
Thanks in advance.
I think you want face recognition (who is it?), not detection (is it a face?).
Look here for what OpenCV has to offer there.
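If recognition is what you need, a minimal sketch using the LBPH recognizer from OpenCV's contrib face module might look like this (the file names and labels are placeholders for your own library):

    import cv2
    import numpy as np

    # Requires the contrib build (opencv-contrib-python); the recognizers live in cv2.face.
    # File names and labels are placeholders for your own library of faces.
    library = [("person1_a.png", 0), ("person1_b.png", 0), ("person2_a.png", 1)]
    images = [cv2.imread(path, cv2.IMREAD_GRAYSCALE) for path, _ in library]
    labels = np.array([label for _, label in library], dtype=np.int32)

    recognizer = cv2.face.LBPHFaceRecognizer_create()
    recognizer.train(images, labels)

    # Query face, same dimensions as the library images.
    query = cv2.imread("unknown.png", cv2.IMREAD_GRAYSCALE)
    label, distance = recognizer.predict(query)  # lower distance = closer match
    print("closest match:", label, "distance:", distance)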
So I want to count how many people appear in a Facebook profile picture.
Typically there are 0-2 people (sometimes there are 4-5+ but that's more rare).
A sample dataset (and a few tries using python) can be found here:
https://github.com/yoniker/FaceDetect
I've tried different methods, but none of them give reasonable results (they are wrong most of the time). Specifically, I've tried the following:
- Face detection: http://docs.opencv.org/trunk/d7/d8b/tutorial_py_face_detection.html
It usually doesn't find anyone (that happens in around 75% of the pictures), and I have tried different Haar filters and parameters.
- Pedestrian detection: http://www.pyimagesearch.com/2015/11/09/pedestrian-detection-opencv/
Again it doesn't find people most of the time.
- OpenFace: this face recognition algorithm probably doesn't truly help with face detection (see https://groups.google.com/forum/#!topic/cmu-openface/X6erXKckk0Q).
And finally, I've looked at different Stack Overflow questions, such as
"Count the number of people in the video", but none of them are relevant!
I've been trying for half a day now, so any help will be super appreciated!
For me, dlib has given better results than OpenCV's Haar face detector. It has Python bindings too. You can find quick-start code to do face detection here.
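As a rough illustration, a minimal detection sketch with dlib's Python bindings could look like this (the image path is a placeholder):

    import cv2
    import dlib

    # dlib's default HOG-based frontal face detector.
    detector = dlib.get_frontal_face_detector()

    image = cv2.imread("profile_picture.jpg")      # placeholder path
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # The second argument upsamples the image once, which helps with small faces.
    faces = detector(gray, 1)
    print("faces found:", len(faces))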
It would be possible to help better if you post an image in which faces are not detected properly.
Having said that, to improve face detection apart from using dlib, you can experiment with these ideas:
Use histogram equalisation (equalizeHist in OpenCV) on the grayscale image before passing it to the face detector, i.e. preprocess your images (see the sketch after this list).
If faces are tilted to the left or right, face detection often fails. To work around this, rotate the image in steps of 5 degrees, up to 30 degrees, and apply face detection at each rotation; you might detect new faces at each step.
Most face detectors that don't use deep learning mostly detect frontal faces. Not much can be done about this apart from using deep learning or training your own side-profile face detector using HOG or Haar features.
Hope this helps you to improve your face detection.
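A minimal sketch of the first two ideas (the bundled cascade file and the image path are placeholders; cv2.data.haarcascades is available in recent opencv-python packages):

    import cv2

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    image = cv2.imread("profile_picture.jpg")                         # placeholder path
    gray = cv2.equalizeHist(cv2.cvtColor(image, cv2.COLOR_BGR2GRAY))  # idea 1: preprocess

    h, w = gray.shape
    best = 0
    for angle in range(-30, 31, 5):                                   # idea 2: try rotations
        M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        rotated = cv2.warpAffine(gray, M, (w, h))
        faces = face_cascade.detectMultiScale(rotated, scaleFactor=1.1, minNeighbors=5)
        best = max(best, len(faces))

    print("most faces found over all rotations:", best)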
There's always the cascade classifier in OpenCV for your face detection needs. If you feed it a well-trained cascade of Haar or LBP features, it will give you usable detections.
I'm attempting to implement an easter egg in a mobile app I'm working on. This easter egg will be triggered when a logo is detected in the camera view. The logo I'm trying to detect is this one: .
I'm not quite sure what the best way to approach this is as I'm pretty new to computer vision. I'm currently finding horizontal edges using the Canny algorithm. I then find line segments using the probabilistic Hough transform. The output of this looks as follows (blue lines represent the line segments detected by the probabilistic Hough transform):
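Roughly, the current pipeline looks like this (a minimal sketch; the thresholds and the input frame are placeholders I'm still tuning):

    import cv2
    import numpy as np

    frame = cv2.imread("camera_frame.jpg")            # placeholder for a camera frame
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    edges = cv2.Canny(gray, 50, 150)                  # edge map (thresholds are guesses)

    # Probabilistic Hough transform: returns line segments as (x1, y1, x2, y2).
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=20, maxLineGap=5)

    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            cv2.line(frame, (x1, y1), (x2, y2), (255, 0, 0), 2)  # draw segments in blue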
The next step I was going to take would be to look for a group of around 24 lines (fitting within a nearly square rectangle), each of approximately the same length. I'd use these two signals to indicate the potential presence of the logo. I realise that this is probably a very naive approach, so I'd welcome suggestions on how to detect this logo more reliably.
Thanks
You may want to go with SIFT using Rob Hess' SIFT library. It uses OpenCV and is also pretty fast. I guess that's easier than your current way of approaching the logo detection :)
Also try looking at SURF, which claims to be faster and more robust than SIFT. This feature detection tutorial will help you.
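A minimal sketch of the feature-matching idea with OpenCV's own SIFT implementation (exposed as cv2.SIFT_create in recent releases; file names are placeholders):

    import cv2

    logo = cv2.imread("logo.png", cv2.IMREAD_GRAYSCALE)          # reference logo (placeholder)
    scene = cv2.imread("camera_frame.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder frame

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(logo, None)
    kp2, des2 = sift.detectAndCompute(scene, None)

    # Brute-force matching with Lowe's ratio test to keep only distinctive matches.
    bf = cv2.BFMatcher()
    matches = bf.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    # A reasonably large number of good matches suggests the logo is present.
    print("good matches:", len(good))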
You may just want to use LogoGrab's technology. It's the best out there and offers all sorts of APIs (both mobile and HTTP). http://www.logograb.com/technologyteam/
I'm not quite sure you would find enough such features in the logo to go with a SIFT/SURF approach. As an alternative, you can try training a Haar-like feature classifier and use it for detecting the logo, just like OpenCV does for face detection.
You could also try TensorFlow's object detection API here:
https://github.com/tensorflow/models/tree/master/research/object_detection
The good thing about this API is that it contains state-of-the-art models for object detection and classification. The models TensorFlow provides are free to train, and some of them promise quite astonishing results. I have already trained a model for the company I am working for that does quite an amazing job at logo detection from images and video streams. You can check out more about my work here:
https://github.com/kochlisGit/LogoLens
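Once a model is trained and exported, inference is roughly like this (a minimal sketch assuming a TF2 SavedModel export from the Object Detection API; the paths are placeholders):

    import numpy as np
    import tensorflow as tf
    import cv2

    # Load an exported detection model (SavedModel format from the Object Detection API).
    detect_fn = tf.saved_model.load("exported_model/saved_model")      # placeholder path

    image = cv2.cvtColor(cv2.imread("frame.jpg"), cv2.COLOR_BGR2RGB)   # placeholder image
    input_tensor = tf.convert_to_tensor(image[np.newaxis, ...], dtype=tf.uint8)

    detections = detect_fn(input_tensor)
    scores = detections["detection_scores"][0].numpy()
    boxes = detections["detection_boxes"][0].numpy()  # normalized [ymin, xmin, ymax, xmax]

    # Keep only confident detections.
    keep = scores > 0.5
    print("detections above 0.5:", int(keep.sum()))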
I am trying to identify static hand signs, but I'm confused about the libraries and algorithms I can use for the project.
What I need is to identify hand signs and convert them into text. I have managed to get the hand contour.
Can you please tell me the best method to classify hand signs?
Is it a Haar classifier, an AdaBoost classifier, convex hull, orientation histograms, an SVM, SIFT, or anything else?
Please also give me some examples.
I have tried both OpenCV and EmguCV for image processing. Which is better for a real-time system, C++ or C#?
Any help is highly appreciated.
Thanks
I implemented hand tracking for web applications in my master's degree. Basically, you should follow these steps (a rough sketch follows the list):
1 - Detect features on skin-colored pixels in a region of interest. Basically, draw a frame on the screen and ask the user to place their hand inside it.
2 - You should have an implementation of the Lucas-Kanade tracking method. Basically, this algorithm will ensure that your features are not lost across frames.
3 - Try to acquire more features every 3 frames.
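A minimal sketch of steps 1 and 2 with OpenCV (the skin-color bounds, the region of interest and the camera index are placeholder assumptions):

    import cv2
    import numpy as np

    cap = cv2.VideoCapture(0)                        # placeholder camera index
    lower_skin = np.array([0, 30, 60], np.uint8)     # rough HSV skin bounds (assumption)
    upper_skin = np.array([20, 150, 255], np.uint8)

    # Step 1: pick good features to track, restricted to skin-colored pixels in a fixed ROI.
    ok, frame = cap.read()
    roi_prev = cv2.cvtColor(frame[100:300, 100:300], cv2.COLOR_BGR2GRAY)
    hsv = cv2.cvtColor(frame[100:300, 100:300], cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower_skin, upper_skin)
    points = cv2.goodFeaturesToTrack(roi_prev, maxCorners=50, qualityLevel=0.01,
                                     minDistance=5, mask=mask)

    # Step 2: Lucas-Kanade optical flow tracks those features into the next frame.
    ok, frame = cap.read()
    roi_next = cv2.cvtColor(frame[100:300, 100:300], cv2.COLOR_BGR2GRAY)
    if points is not None:
        new_points, status, err = cv2.calcOpticalFlowPyrLK(roi_prev, roi_next, points, None)

    cap.release()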
People use many approaches, so I cannot give a single one. You could do some research on Google Scholar using the keywords "hand sign", "recognition" and "detection".
Maybe you will find some code with the help of Google. One example is HandVu: http://www.movesinstitute.org/~kolsch/HandVu/HandVu.html
The Haar classifier (the Viola-Jones method) helps to detect hands, not to recognize them.
Good luck in your research!
I have made the following with OpenCV. Algorithm:
Skin detection made in HSV
Thinning (if a pixel has a zero neighbor, set it to zero)
Thickening (if a pixel has a nonzero neighbor, set it to nonzero)
See this Wikipedia page for the details of these.
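Roughly, those steps could look like this in OpenCV (the HSV bounds are placeholder assumptions, and simple erosion/dilation stand in for the thinning/thickening described above):

    import cv2
    import numpy as np

    image = cv2.imread("hand.jpg")                     # placeholder path
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

    # Skin detection in HSV (bounds are rough assumptions; tune for your lighting).
    lower = np.array([0, 30, 60], np.uint8)
    upper = np.array([20, 150, 255], np.uint8)
    mask = cv2.inRange(hsv, lower, upper)

    # "Thinning" then "thickening" as simple morphological erosion and dilation.
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.erode(mask, kernel, iterations=1)
    mask = cv2.dilate(mask, kernel, iterations=1)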
You can find a well-trained cascade for detecting hands with OpenCV on GitHub:
https://github.com/Aravindlivewire/Opencv/blob/master/haarcascade/aGest.xml
Good luck...
So, I'm trying to stitch images of a microchip taken with a microscope, but it's very hard to get all the features aligned. I already have a 50% overlap between adjacent images, but even with that, it's not always a good fit.
I'm using SURF with OpenCV to extract the keypoints and find the homography matrix, but the result is still far from acceptable.
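For reference, my pipeline looks roughly like this (a minimal sketch; SURF needs the contrib/non-free build of OpenCV, and the Hessian threshold and file names are placeholders):

    import cv2
    import numpy as np

    img1 = cv2.imread("tile_0.png", cv2.IMREAD_GRAYSCALE)     # placeholder tiles
    img2 = cv2.imread("tile_1.png", cv2.IMREAD_GRAYSCALE)

    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)  # lower threshold = more keypoints
    kp1, des1 = surf.detectAndCompute(img1, None)
    kp2, des2 = surf.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des2, des1, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # Homography maps the second tile onto the first; RANSAC rejects bad matches.
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    warped = cv2.warpPerspective(img2, H, (img1.shape[1] * 2, img1.shape[0]))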
My objective is to stitch 2x2 images perfectly, so that I can repeat the process recursively until I have the final image.
Do you have any suggestions? A nice algorithm to approach this problem, or maybe a way to transform the images so that better keypoints can be extracted from them? Should I play with the threshold (a smaller one to get more keypoints, or a larger one)?
Right now, my approach is to first stitch two 2x1 images and then stitch those two together. It's close to what we want, but still not acceptable. Also, the problem might be that the image used as the "source" (while the second image is transformed with the matrix to overlap it) is itself slightly misaligned, or has a small angle that affects the whole result.
Any help or suggestion is appreciated, especially any solution that lets me keep using OpenCV and SURF (not that I'm totally against other libraries... it's just that most of the project has been developed with them).
Thanks!
I found using TurboReg during image registration development to be a helpful comparison tool. It is a free ImageJ plugin, and has many different fitting types.
Have you taken a look at the new OpenCV stitching samples: stitching.cpp and stitching_detailed.cpp?
EDIT: I forgot this is cutting-edge OpenCV because I'm using the trunk at home :) To get access to these new samples, you'll need to check out the OpenCV trunk from SVN like this:
svn co https://code.ros.org/svn/opencv/trunk/opencv opencv-trunk
Unfortunately, you'll need to recompile it, but you should be able to use the new stitching code :) If you haven't built OpenCV from source before, here is a good little tutorial to get you started. I will mention that OpenCV has a lot more options that can be enabled/disabled than are mentioned in the tutorial, so you might want to use the cmake-gui to get a look at all of the options. You can apt-get it with this command:
> sudo apt-get install cmake-qt-gui
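For what it's worth, the pipeline those samples demonstrate is also exposed through a high-level Stitcher class; here is a minimal sketch in Python (the factory name varies between OpenCV versions, so treat this as an assumption for recent releases; file names are placeholders):

    import cv2

    # Tiles to stitch; paths are placeholders.
    images = [cv2.imread(p) for p in ("tile_0.png", "tile_1.png", "tile_2.png", "tile_3.png")]

    # cv2.Stitcher.create() in OpenCV 4.x (older releases used cv2.createStitcher()).
    stitcher = cv2.Stitcher.create(cv2.Stitcher_SCANS)  # SCANS mode suits flat microscope mosaics
    status, mosaic = stitcher.stitch(images)

    if status == cv2.Stitcher_OK:
        cv2.imwrite("mosaic.png", mosaic)
    else:
        print("stitching failed with status", status)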
Also, if you're more concerned with quality and you don't mind slower performance, you might consider using the Lucas-Kanade method for image registration. Here is a lecture, and here is a paper on the topic that might be helpful to you.
Fiji's stitching plugin handles this situation of alignment-error propagation in 2D mosaicing. We use it daily for microscope stitching, and I must say it works extremely well.
I am very new to OpenCV. I saw that it can find a face and return a rectangle indicating it. I am wondering whether there is any way for OpenCV to take two images, each containing one face, and return the likelihood that the two people are the same.
Thanks.
OpenCV does not provide a full face recognition engine.
You might want to check out this work: The One-Shot Similarity Kernel, which proposes something similar to what you need. It also provides Matlab code.