Many days ago I saw a user using a scipy function to get a region of interest (RoI) in an image just by using different colors. I believe it was some sort of filter. Now I'm trying to do the same, but I'm not able to find that content anymore.
I would be grateful if you could help me with this problem.
Example of Semantic Segmentation:
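For anyone looking for a starting point, here is a minimal sketch of what such a color-based filter might look like with NumPy and scipy.ndimage. The color thresholds and the stand-in image are assumptions, not the content from the original post:

```python
import numpy as np
from scipy import ndimage

# Stand-in for an H x W x 3 RGB image; in practice load one, e.g. with imageio.imread
img = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)

# Keep only pixels whose color falls in an (assumed) reddish range
mask = (img[..., 0] > 150) & (img[..., 1] < 80) & (img[..., 2] < 80)

# Label the connected regions of that color and get their bounding boxes
labels, num_regions = ndimage.label(mask)
rois = ndimage.find_objects(labels)

for i, sl in enumerate(rois, start=1):
    print(f"RoI {i}: rows {sl[0].start}-{sl[0].stop}, cols {sl[1].start}-{sl[1].stop}")
```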
Here's some research I have done so far:
- I have used Google Vision API to detect various face landmarks.
Here's the reference: https://developers.google.com/vision/introduction
Here's the link to sample code that gets the facial landmarks using the same Google Vision API: https://github.com/googlesamples/ios-vision
I have gone through various blog posts on the internet which say MSQRD is based on Google's Cloud Vision. Here's a link to one: https://medium.com/#AlexioCassani/how-to-create-a-msqrd-like-app-with-google-cloud-vision-802b578b30a0
For Android here's the reference:
https://www.raywenderlich.com/158580/augmented-reality-android-googles-face-api
There are multiple paid SDKs which fulfill the purpose, but they are highly priced, so I can't afford them.
For instance:
1) https://deepar.ai/contact/
2) https://www.luxand.com/
Some might see this question as a duplicate of this one:
Face filter implementation like MSQRD/SnapChat
But that thread is almost 1.6 years old and has no correct answers.
I have gone through this article:
https://dzone.com/articles/mimic-snapchat-filters-programmatically-1
It describes all the essential steps to achieve the desired results, but they advise using their own SDK.
As per my research, there is no good material available that helps achieve results like MSQRD's face filters.
There is one more GitHub repository with a similar implementation, but it doesn't give much information about it:
https://github.com/rootkit/LiveFaceMask
Now my question is:
If we have the facial landmarks from the Google Vision API (or even from DLib), how can I add 2D or 3D models over them? In which format does this need to be done? Does it require some X,Y coordinates with vertex calculations?
NOTE: I have gone through Google's "GooglyEyesDemo", which adds a preview layer over the eyes. It basically adds a view over the face, so I don't want to add one-dimensional UIView preview layers over it. Image attached for reference:
https://developers.google.com/vision/ios/face-tracker-tutorial
Creating Models: I also want to know how to create models for live filters like MSQRD. I welcome any software or format recommendations.
I hope the research I have done will help others, and that someone else's experience helps me achieve the desired results. Let me know if any more details are required.
Image attached for more reference:
Thanks
Harry
The Canvas class can be used on Android for drawing such 2D/3D models, or Core Graphics can be used on iOS.
What you can do is detect the face components, take their location points, and draw images on top of them. Consider going through this
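As a rough illustration of that idea, here is a hedged Python/OpenCV sketch that pastes a 2D sticker (assumed to be an RGBA PNG) over the eye landmarks; the landmark coordinates, file names, and sizes are placeholders:

```python
import cv2
import numpy as np

# Placeholder landmark positions: in practice these come from your landmark
# detector (Google Vision / dlib) as pixel coordinates of the two eye centers.
left_eye, right_eye = (220, 180), (300, 178)

frame = cv2.imread("face.jpg")                              # photo or camera frame (placeholder)
sticker = cv2.imread("glasses.png", cv2.IMREAD_UNCHANGED)   # assumed to be an RGBA PNG

# Scale the sticker to roughly twice the inter-eye distance
eye_dist = int(np.hypot(right_eye[0] - left_eye[0], right_eye[1] - left_eye[1]))
w = eye_dist * 2
h = int(sticker.shape[0] * w / sticker.shape[1])
sticker = cv2.resize(sticker, (w, h))

# Top-left corner so the sticker is centered between the eyes
cx, cy = (left_eye[0] + right_eye[0]) // 2, (left_eye[1] + right_eye[1]) // 2
x0, y0 = cx - w // 2, cy - h // 2

# Alpha-blend the sticker onto the frame (assumes it stays inside the frame)
alpha = sticker[..., 3:4] / 255.0
roi = frame[y0:y0 + h, x0:x0 + w]
frame[y0:y0 + h, x0:x0 + w] = (alpha * sticker[..., :3] + (1 - alpha) * roi).astype(np.uint8)

cv2.imwrite("face_with_filter.jpg", frame)
```

This pins a flat 2D image to the landmarks; for filters that follow head rotation you need a pose-based approach like the one described in the next answer.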
You need to either predict x, y, z coordinates (check out this demo), or use x, y predictions and then find the parameters of a universal 3D model and camera that give the closest projection of the current x, y points.
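For the second option, a common way to recover those model and camera parameters is OpenCV's solvePnP. Below is a hedged sketch: the 3D model points are generic approximate values, and the 2D landmark positions and camera intrinsics are placeholders:

```python
import cv2
import numpy as np

# Approximate 3D positions (arbitrary units) of a few points on a generic head model.
model_points = np.array([
    (0.0, 0.0, 0.0),           # nose tip
    (0.0, -330.0, -65.0),      # chin
    (-225.0, 170.0, -135.0),   # left eye outer corner
    (225.0, 170.0, -135.0),    # right eye outer corner
    (-150.0, -150.0, -125.0),  # left mouth corner
    (150.0, -150.0, -125.0),   # right mouth corner
], dtype=np.float64)

# Matching 2D landmark positions detected in the image (placeholder values).
image_points = np.array([
    (359, 391), (399, 561), (337, 297),
    (513, 301), (345, 465), (453, 469),
], dtype=np.float64)

# Rough pinhole camera: focal length ~ image width, principal point at the center.
w, h = 800, 600
camera_matrix = np.array([[w, 0, w / 2],
                          [0, w, h / 2],
                          [0, 0, 1]], dtype=np.float64)
dist_coeffs = np.zeros((4, 1))  # assume no lens distortion

ok, rvec, tvec = cv2.solvePnP(model_points, image_points,
                              camera_matrix, dist_coeffs,
                              flags=cv2.SOLVEPNP_ITERATIVE)

# rvec/tvec describe the head pose; any 3D point of a filter model can now be
# projected into the image with cv2.projectPoints and drawn at that position.
projected, _ = cv2.projectPoints(np.array([(0.0, 0.0, 500.0)]),
                                 rvec, tvec, camera_matrix, dist_coeffs)
print("projected 3D point:", projected.ravel())
```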
The problem statement:
Given two images, such as the two images of Brad Pitt below, figure out whether they contain the same person or not. The difficulty is that we have only one reference image for each person and want to figure out whether any other incoming image contains the same person.
Some research:
There are a few different methods of solving this task; these are:
Using color histograms
Keypoint oriented methods
Using deep convolutional neural networks or other ML techniques
The histogram methods involve calculating histograms based on color, defining some sort of metric between them, and then deciding upon a threshold. One that I have tried is the Earth Mover's Distance; however, this method lacks accuracy.
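For reference, a minimal sketch of that histogram baseline, using SciPy's 1-D Wasserstein distance (closely related to the Earth Mover's Distance) per color channel; the bin count and threshold are illustrative:

```python
import cv2
import numpy as np
from scipy.stats import wasserstein_distance

def channel_histograms(img, bins=64):
    """Normalized per-channel color histograms of a BGR image."""
    hists = []
    for c in range(3):
        h = cv2.calcHist([img], [c], None, [bins], [0, 256]).ravel()
        hists.append(h / h.sum())
    return hists

def histogram_distance(img_a, img_b, bins=64):
    """Mean 1-D Wasserstein distance over the three color channels."""
    centers = np.arange(bins)
    return float(np.mean([
        wasserstein_distance(centers, centers, ha, hb)
        for ha, hb in zip(channel_histograms(img_a, bins), channel_histograms(img_b, bins))
    ]))

# Usage (file names and the 2.0 threshold are placeholders):
# same = histogram_distance(cv2.imread("a.jpg"), cv2.imread("b.jpg")) < 2.0
```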
The best approach therefore should be some sort of mix between 2nd and 3rd methods, and some preprocessing.
For preprocessing, the obvious steps to perform are:
Run a face detector such as Viola-Jones and separate the regions containing faces
Convert the said faces to grayscale
Run eye, mouth, and nose detection algorithms, perhaps using OpenCV's Haar cascades
Align the face images according to the found landmarks
All of this can be done using OpenCV; a rough sketch follows.
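Here is a hedged sketch of those preprocessing steps using OpenCV's bundled Haar cascades; the output size, cascade choices, and detection parameters are assumptions:

```python
import cv2
import numpy as np

# Haar cascades shipped with opencv-python
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def preprocess(path, size=(128, 128)):
    """Detect the face, convert to grayscale, and roughly align it by the eyes."""
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)

    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    face = gray[y:y + h, x:x + w]

    eyes = eye_cascade.detectMultiScale(face)
    if len(eyes) >= 2:
        # Keep the two largest detections, ordered left to right, and take their centers
        eyes = sorted(sorted(eyes, key=lambda e: -e[2] * e[3])[:2], key=lambda e: e[0])
        (lx, ly, lw, lh), (rx, ry, rw, rh) = eyes
        left = (lx + lw / 2.0, ly + lh / 2.0)
        right = (rx + rw / 2.0, ry + rh / 2.0)
        # Rotate so the line between the eyes becomes horizontal
        angle = np.degrees(np.arctan2(right[1] - left[1], right[0] - left[0]))
        M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle, 1.0)
        face = cv2.warpAffine(face, M, (w, h))

    return cv2.resize(face, size)
```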
Extracting features such as SIFT and MSER gives an accuracy of between 73% and 76%. After some additional research I've come across this paper using Fisherfaces. The fact that OpenCV now has the ability to create Fisherface recognizers and train them is great; it works fantastically, achieving the accuracy promised by the paper on the Yale datasets.
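For completeness, a minimal sketch of training and querying OpenCV's Fisherface recognizer. It lives in the opencv-contrib "face" module; the random stand-in images only show the API shape, and real input would be aligned grayscale face crops with several images per person:

```python
import cv2
import numpy as np

# Stand-in data: in practice these would be aligned grayscale face crops from the
# preprocessing step above, with several images per person and at least two people.
rng = np.random.default_rng(0)
gallery = [rng.integers(0, 256, (128, 128), dtype=np.uint8) for _ in range(8)]
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=np.int32)

# FisherFaceRecognizer is provided by the opencv-contrib "face" module
recognizer = cv2.face.FisherFaceRecognizer_create()
recognizer.train(gallery, labels)

# Predict the identity of a new (here: random) face crop of the same size
probe = rng.integers(0, 256, (128, 128), dtype=np.uint8)
label, distance = recognizer.predict(probe)
print("predicted label:", label, "distance:", distance)
```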
The complication of the problem is that in my case I don't have a database with several images of the same person to train the detector on. All I have is a single image corresponding to a single person, and given another image I want to understand whether it shows the same person or not.
So what I am interested in knowing is:
Has anyone tried anything of the sort? What are some papers/methods/libraries that I should look into?
Do you have any suggestions on how to tackle the problem?
Since you have only one image, you can give this method using DLib a try. I have used 3-4 images per person and it is giving good results.
1) Detect the face (sample_face).
2) Get the face descriptor (a 128-D vector) using dlib's compute_face_descriptor (check the link).
3) Get the new picture in which you want to recognise the face.
4) Detect the face and compute its descriptor (let's call it test_face).
5) Compute the Euclidean distance between the test_face descriptor and all sample_face descriptors.
6) Assign test_face to the class (person name) with the least Euclidean distance.
Give this a whirl; you can play with face alignment if you start getting good results.
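A hedged sketch of those steps with dlib's Python API; the model file names are the standard dlib downloads, the image paths are placeholders, and the 0.6 threshold is the value dlib's documentation suggests for this model:

```python
import dlib
import numpy as np

# Standard dlib model downloads (file names as published by dlib)
detector = dlib.get_frontal_face_detector()
shape_predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
face_encoder = dlib.face_recognition_model_v1("dlib_face_recognition_resnet_model_v1.dat")

def face_descriptor(path):
    """Return the 128-D descriptor of the first face found in the image, or None."""
    img = dlib.load_rgb_image(path)
    dets = detector(img, 1)
    if not dets:
        return None
    shape = shape_predictor(img, dets[0])
    return np.array(face_encoder.compute_face_descriptor(img, shape))

# One reference (sample) image per person; paths are placeholders.
samples = {"brad_pitt": face_descriptor("brad_pitt_reference.jpg")}

test = face_descriptor("unknown.jpg")
name, dist = min(((n, np.linalg.norm(test - d)) for n, d in samples.items()),
                 key=lambda t: t[1])
# dlib's documentation suggests ~0.6 as a same-person threshold for this model
print(name if dist < 0.6 else "unknown", dist)
```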
This is one of the hot topics in the computer vision area. As you have written, there are many kinds of solutions available to handle it.
But I suggest looking at OpenFace, which has very high accuracy. There is an implementation of that project on GitHub.
Thanks
You need to understand that machine learning doesn't work that way; intensive training is carried out before a model can give good results.
With a single image of a person you just cannot predict that it's the same person, because you need to train your model on different images of the person under different light intensities, angles, and many other varying scenarios.
Still, I would suggest trying this link:
http://hanzratech.in/2015/02/03/face-recognition-using-opencv.html
You may at least find some match for the image.
So what I am interested in knowing is: Has anyone tried anything of the sort?
Yes. This is 2017 and facial recognition has been researched for decades.
What are some papers/methods/libraries that I should look into?
Anything Google throws at you when searching for "single image/sample face recognition".
Do you have any suggestions on how to tackle problem?
See above
Extracting features such as SIFT and MSER generate accuracy of between 73-76%.
I doubt humans, whose facial recognition is unmatched, perform much better with only one image as reference. I mean, I couldn't tell for sure whether that's Brad Pitt or just a look-alike, and I have seen him in hundreds of pictures and hours of movies...
I'm trying to compare an image against a known set of images and find the closest match(es) using Emgu CV and SURF. I've found a lot of people trying to do the same thing, but not a complete solution that uses the GPU for speed.
The closest I've gotten is the tutorial here:
http://romovs.github.io/blog/2013/07/05/matching-image-to-a-set-of-images-with-emgu-cv/
However, that doesn't take advantage of the GPU, and it's really slow for my application. I need something fast like the SurfFeature sample.
So I tried to refactor that tutorial code to match the SurfFeature logic that uses the GPU. Everything was going well, with GpuMats replacing Matrix here and there. But I ran into a major problem when I got to the core of the tutorial above, that is to say, the logic that concatenates all of the descriptors into one large matrix. I couldn't find a way to append GpuMats to each other, and even if I could, there's no guarantee that the FlannIndex search routine would work with the GPU-based code.
So now I'm stuck on something I thought would be relatively straightforward. People have certainly been trying to do this for years, so I'm really surprised that there isn't a published solution.
If you could help me, I'd be most appreciative. To summarize, I need to do the following:
1) Build a large in-memory (on the GPU) list of descriptors and keypoints for a known set of images using SURF (as per the SurfFeature sample).
2) Given an unknown image, search against that in-memory index to find the closest match (if any).
Thanks in advance if you can help!
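I can't offer the Emgu CV/GPU code you're after, but as an illustration of the bookkeeping in that tutorial (one index over all gallery descriptors, with each match remembering which image it came from), here is a hedged CPU-side sketch in Python/OpenCV using FlannBasedMatcher's add/train mechanism. SURF here assumes an opencv-contrib build with nonfree support (cv2.SIFT_create is a drop-in alternative), and the file names are placeholders:

```python
import cv2
import numpy as np

# SURF needs an opencv-contrib build with nonfree support; cv2.SIFT_create() is a
# drop-in alternative in recent OpenCV versions.
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)

gallery_paths = ["known1.jpg", "known2.jpg", "known3.jpg"]  # placeholder file names

# One FLANN index over all gallery descriptors; add() keeps track of which
# image each descriptor set came from.
matcher = cv2.FlannBasedMatcher(dict(algorithm=1, trees=4),  # KD-tree for float descriptors
                                dict(checks=50))
for path in gallery_paths:
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, desc = surf.detectAndCompute(img, None)
    matcher.add([desc])
matcher.train()

# Unknown image: every match records the gallery image it came from (imgIdx),
# so a simple vote picks the closest known image.
query = cv2.imread("unknown.jpg", cv2.IMREAD_GRAYSCALE)
_, qdesc = surf.detectAndCompute(query, None)
matches = matcher.match(qdesc)
votes = np.bincount([m.imgIdx for m in matches], minlength=len(gallery_paths))
print("closest gallery image:", gallery_paths[int(np.argmax(votes))])
```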
I am currently using OpenCV 3.0 with the hope that I will be able to create a program that does three things: first, finds faces within a live video feed; secondly, extracts the locations of facial landmarks using ASM or AAM; finally, uses an SVM to classify the facial expression on the person's face in the video.
I have done a fair amount of research into this but can't find the most suitable open-source AAM or ASM library for this purpose anywhere. Also, if possible, I would like to be able to train the AAM or ASM to extract the specific face landmarks I require, for example, all the numbered points in the picture linked below:
www.imgur.com/XnbCZXf
If there are any alternatives to what I have suggested that would provide the required functionality, feel free to suggest them.
Thanks in advance for any answers; all advice to help me along with this project is welcome.
In the comments, I see that you are opting to train your own face landmark detector using the dlib library. You had a few questions regarding what training set dlib used to generate their provided "shape_predictor_68_face_landmarks.dat" model.
Some pointers:
The author (Davis King) stated that he used the annotated images from the iBUG 300-W dataset. This dataset has a total of 11,167 images annotated with the 68-point convention. As a standard trick, he also mirrors each image to effectively double the training set size, ie 11,167*2=22334 images. Here's a link to the dataset: http://ibug.doc.ic.ac.uk/resources/facial-point-annotations/
Note: the iBUG 300-W dataset includes two datasets that are not freely/publicly available: XM2VTS, and FRGCv2. Unfortunately, these images make up a majority of the ibug 300-W (7310 images, or 65.5%).
The original paper only trained on the HELEN, AFW, and LFPW datasets. So, you ought to be able to generate a reasonably good model on only the publicly-available images (HELEN, LFPW, AFW, IBUG), i.e. 3,857 images.
If you Google "one millisecond face alignment kazemi", the paper (and project page) will be the top hits.
You can read more about the details of the training procedure by reading the comments section of this dlib blog post. In particular, he briefly discusses the parameters he chose for training: http://blog.dlib.net/2014/08/real-time-face-pose-estimation.html
With the size of the training set in mind (thousands of images), I don't think you will get acceptable results with just a handful of images. Fortunately, there are many publicly available face datasets out there, including the dataset linked above :)
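If you do end up training your own predictor, here is a hedged sketch of what the dlib Python training call might look like; the XML file names are placeholders and the parameter values are illustrative, not necessarily the ones Davis King used:

```python
import dlib

# Illustrative training options; these are not necessarily the values used for
# the published shape_predictor_68_face_landmarks.dat model.
options = dlib.shape_predictor_training_options()
options.oversampling_amount = 20   # random deformations per training image
options.nu = 0.1                   # regularization strength
options.tree_depth = 4
options.cascade_depth = 10
options.num_threads = 4
options.be_verbose = True

# The XML files list your images and per-image 68-point annotations
# (dlib ships a tool, imglab, for creating them). File names are placeholders.
dlib.train_shape_predictor("training_with_face_landmarks.xml", "predictor.dat", options)

# Mean landmark error on a held-out set, in the same units as the annotations
error = dlib.test_shape_predictor("testing_with_face_landmarks.xml", "predictor.dat")
print("mean testing error:", error)
```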
Hope that helps!
AAM and ASM are pretty old school and the results are a little bit disappointing.
Most facial landmark trackers use a cascade of patches or deep learning. You have DLib, which performs pretty well (and has a BSD licence) with this demo, some others on GitHub, or a bunch of APIs, such as this one, which is free to use.
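For reference, a minimal sketch of running dlib's 68-point predictor on a live video feed with OpenCV; the model file name is the standard dlib download, and the drawing and key handling are just for illustration:

```python
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # standard dlib model file

cap = cv2.VideoCapture(0)  # live video feed
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for rect in detector(gray, 0):
        shape = predictor(gray, rect)
        # Draw all 68 landmark points; their (x, y) values are what you would
        # feed into an SVM (or any other classifier) for expression recognition.
        for i in range(shape.num_parts):
            p = shape.part(i)
            cv2.circle(frame, (p.x, p.y), 2, (0, 255, 0), -1)
    cv2.imshow("landmarks", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```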
You can also take a look at my project using C++/OpenCV/DLib, which has all the functionality you mentioned and is fully operational.
Try Stasm 4.0.0. It gives approximately 77 points on the face.
I advise you to use the FaceTracker library. It is written in C++ using OpenCV 2.x. You won't be disappointed with it.
Is this possible in OpenCV or another image processing library?
It should find three instances of Shape1 and two of Shape2 (meaning it has to handle scaling and rotation) and give their positions.
I am kind of new to OpenCV and do not know which algorithm or functions to use for this. Any help would be much appreciated, especially any code. Would the two unconnected parts in Shape1 cause problems in detection?
You could try the Chamfer algorithm.
OpenCV has a CPP example in opencv/samples/cpp/chamfer.cpp.
Below is an older version I found via Google:
https://code.ros.org/trac/opencv/browser/trunk/opencv/samples/cpp/chamfer.cpp?rev=4194
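If chamfer matching isn't available in your OpenCV build, an alternative (not the chamfer algorithm itself) is contour matching with Hu moments via cv2.matchShapes, which is invariant to scale and rotation. The thresholds and file names in this hedged sketch are illustrative, and since it compares one external contour at a time, the two unconnected parts of Shape1 would need to be handled separately (e.g. matched as two contours):

```python
import cv2

def largest_contour(path):
    """Binarize the image and return its largest external contour."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, thresh = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # [-2] works for both OpenCV 3.x (3 return values) and 4.x (2 return values)
    contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    return max(contours, key=cv2.contourArea)

# Template shape and the scene to search (file names are placeholders)
template = largest_contour("shape1.png")

scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
_, thresh = cv2.threshold(scene, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]

for c in contours:
    if cv2.contourArea(c) < 100:  # skip tiny noise contours
        continue
    # Hu-moment comparison: lower score means more similar, regardless of scale/rotation
    score = cv2.matchShapes(template, c, cv2.CONTOURS_MATCH_I1, 0.0)
    if score < 0.1:  # illustrative threshold
        x, y, w, h = cv2.boundingRect(c)
        print(f"Shape1 candidate at ({x}, {y}), score {score:.3f}")
```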