I'm doing research in the field of emotion recognition. For this purpose I need to detect and classify particular face details like the eyes, nose, mouth, etc. The standard OpenCV function for this is detectMultiScale(), but its disadvantage is that it returns a list of rectangles (video), while I'm mostly interested in particular key points: the corners of the mouth, its upper and lower points, edges, etc. (video).
So, how is it done? OpenCV would be ideal, but other solutions are OK too.
To analyse such precise points, you can use Active Appearance Models (AAM). Your second video seems to be done with AAM. Check out the Wikipedia link above, where you can find a lot of AAM tools and APIs.
On the other hand, if you can detect the mouth using a Haar cascade, apply colour filtering. The lips and the surrounding skin obviously differ in colour, so you can get a precise model of the lips and find their edges.
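As a rough illustration of that colour-filtering idea (not the exact method from the linked paper), here is a sketch assuming the mouth ROI has already been cropped out by the Haar cascade; the HSV thresholds are guesses that need tuning for your footage:

```python
import cv2
import numpy as np

# Sketch: isolate the lips inside a mouth ROI by colour and extract the
# left/right corners and the top/bottom points. Thresholds are illustrative.
def lip_keypoints(mouth_roi_bgr):
    hsv = cv2.cvtColor(mouth_roi_bgr, cv2.COLOR_BGR2HSV)
    # Reddish hues wrap around 0 in OpenCV's 0-179 hue range
    mask = cv2.inRange(hsv, (0, 60, 40), (12, 255, 255)) | \
           cv2.inRange(hsv, (168, 60, 40), (179, 255, 255))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
    # OpenCV 4.x API: findContours returns (contours, hierarchy)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    lips = max(contours, key=cv2.contourArea)          # largest blob = lips
    pts = lips.reshape(-1, 2)
    return {
        "left_corner":  tuple(pts[pts[:, 0].argmin()]),
        "right_corner": tuple(pts[pts[:, 0].argmax()]),
        "top":          tuple(pts[pts[:, 1].argmin()]),
        "bottom":       tuple(pts[pts[:, 1].argmax()]),
    }
```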
Check out this paper: Lip Contour Extraction
I'm trying to implement a license plate detection algorithm; so far I have narrowed it down to a few interest regions:
My next step would be to classify each interest region and ignore the false regions. I was thinking maybe I can check each region for characters. If the region contains some characters, then it is a plate; otherwise, it's a false region. How would I go about checking for characters?
Another approach I can think of is to use PCA to determine if a region contains a plate, but I have no idea how to do it in OpenCV.
Text detection is not an easy task at all. It may be harder than the whole plate detection you are building. I can suggest a simple, somewhat hacky approach (a rough sketch in code follows the list):
Find contours inside each region.
Find bounding rectangle around each contour.
Delete very small or very large rectangles.
Check whether these contours are arranged along a straight line. The region whose contours are arranged in a linear way is the plate.
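A rough sketch of these steps in OpenCV/Python; the Otsu thresholding, the size bounds, the minimum count of boxes and the alignment tolerance are all illustrative assumptions:

```python
import cv2
import numpy as np

# contours -> bounding boxes -> size filter -> check that the boxes line up
def looks_like_plate(region_gray, min_h=8, max_h=60):
    _, binary = cv2.threshold(region_gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours]
    # keep only character-sized boxes
    boxes = [(x, y, w, h) for x, y, w, h in boxes if min_h <= h <= max_h]
    if len(boxes) < 4:                       # a plate should yield several characters
        return False
    centers_y = np.array([y + h / 2.0 for _, y, _, h in boxes])
    heights = np.array([h for _, _, _, h in boxes], dtype=float)
    # characters on a plate share roughly the same baseline and height
    return centers_y.std() < heights.mean() * 0.25
```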
There is a technique called MSER (Maximally Stable Extremal Regions). It detects connected blobs of similar colour (which individual letters usually are). You then classify these blobs to tell whether they are letters or not; with that classification step the method is called CSER.
See OpenCV doc, Wikipedia
Also there is a Quasi-Linear version of the algorithm, if you feel like implementing it yourself.
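A minimal MSER sketch using OpenCV's Python bindings; the input file name is hypothetical, and the letter/non-letter classification (the CSER part) is left out:

```python
import cv2

# Detect stable blobs inside a candidate region and draw their bounding boxes.
img = cv2.imread("candidate_region.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
mser = cv2.MSER_create()
regions, _ = mser.detectRegions(img)            # each region is an array of points
vis = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
for pts in regions:
    x, y, w, h = cv2.boundingRect(pts.reshape(-1, 1, 2))
    cv2.rectangle(vis, (x, y), (x + w, y + h), (0, 255, 0), 1)
cv2.imwrite("mser_blobs.png", vis)
```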
I am able to detect the eyes, nose and mouth in a given face using Matlab. Now I want four more points, i.e. the corners of the eyes and the nose. How do I get these points?
This is the image for the corner points of the nose.
The red point shows what I'm looking for (it's just to illustrate; there is no point in the original image).
Active Appearance Model (AAM) could be useful in your case.
AAM is normally used for matching a statistical model of object shape and appearance to a new image, and is widely used for extracting face features and for head pose estimation.
I believe this could be helpful for you to start with.
You can try using the corner detectors included in the Computer Vision System Toolbox, such as detectHarrisFeatures, detectMinEigenFeatures, or detectFASTFeatures. However, they may give you more points than you want, so you will have to do some parameter tweaking.
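If you end up working in OpenCV rather than Matlab, a rough analogue (not part of the toolbox answer above) is cv2.goodFeaturesToTrack, which uses the same minimum-eigenvalue idea; the ROI cropping and the left-most/right-most heuristic below are assumptions for illustration only:

```python
import cv2

# Find corner candidates inside a grayscale crop around an already-detected eye
# and keep the outermost two as rough eye-corner guesses. Parameters are illustrative.
def eye_corner_candidates(eye_roi_gray, max_corners=10):
    corners = cv2.goodFeaturesToTrack(eye_roi_gray,
                                      maxCorners=max_corners,
                                      qualityLevel=0.01,
                                      minDistance=5)
    if corners is None:
        return []
    pts = corners.reshape(-1, 2)
    left = pts[pts[:, 0].argmin()]      # left-most candidate
    right = pts[pts[:, 0].argmax()]     # right-most candidate
    return [tuple(left), tuple(right)]
```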
Disclaimer: I am not looking for a robust method that works under all conditions and/or requires complex analysis of the image (e.g., http://vis-www.cs.umass.edu/lfw/part_labels/). Therefore please do not link me to one of the many papers on hair segmentation that exist. I am looking for something fast and simple (not perfectly robust).
That being said, my goal is to extract the area containing the hair in an image of a human face. If I am able to get at least some tiny set of pixels that I am sure are hair pixels, then I can use one of several algorithms to find the rest (e.g., a "photoshop magic wand" type algorithm).
An example (left side is the original face, right side is the gradient magnitude):
http://imgur.com/YX85MKB
Here's the information I have access to regarding any image of a human face: all locations of important features (e.g., nose, eyes, mouth and chin). One dumb/simple way of finding hair pixels could be to perform edge detection, and work up from the nose until I find two "horizontal edges" which we assume are the lower and upper boundaries of the hair, then take a sample from in-between.
Any ideas on other simple methods to try?
Instead of using image processing techniques (edge detection) you could use simple maths. You say you know where the nose, eyes, mouth and chin are. From the distances between these features you can certainly determine how far up from the eyes to look for hair. I'm not sure which distance ratio you should use, but the hair is certainly not 10x farther away than the distance between the eyes and the chin.
Obviously this technique is not bald-proof.
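A tiny sketch of that ratio idea, assuming the eye and chin coordinates are already known; the ratio value is a placeholder to be tuned, not something given in the answer:

```python
import numpy as np

# Sample a seed point above the eyes at a fraction of the eye-to-chin distance.
# The seed can then feed a magic-wand / region-growing step.
def hair_seed_point(left_eye, right_eye, chin, ratio=1.0):
    left_eye, right_eye, chin = map(np.asarray, (left_eye, right_eye, chin))
    eyes_mid = (left_eye + right_eye) / 2.0
    eye_chin_dist = np.linalg.norm(chin - eyes_mid)
    # move straight up (negative y in image coordinates) from the eye line
    seed = eyes_mid - np.array([0.0, ratio * eye_chin_dist])
    return tuple(seed.astype(int))
```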
I have to make a bot which has to overcome obstacles autonomously in an arena that will be filled with rocks. The bot has to find its way through this area and reach the end point. I am thinking of using edge detector operators like canny and sobel for this problem.
I want to know whether those will be suitable options for this problem. If so, then after detecting the edges, how can I make the bot find the path, overcoming the rock obstacles?
I am using QT IDE and opencv library.
Since you will be analyzing frames of video, and the robot will be moving most of the time, image differencing and optical flow will also be helpful. Edge detection alone might not help a lot, unless the surroundings and obstacles are simple and have known properties. Posting a photo of the scene would help those who want to answer the question.
Yes, Canny is a very good edge detector. In fact, the OpenCV implementation uses Sobel to estimate the gradient. You may need to apply a Gaussian filter to the image before edge detection. Edges are good features for finding rocks, but depending on the background, other features such as colour may also be useful. It would probably be easier if you gathered 3D scene information via stereo, a laser scanner, or a Kinect-like sensor. Also consider detecting when you bump into rocks and building up a map of where they are.
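A minimal sketch of that pre-smoothing step, assuming a hypothetical grayscale frame from the robot's camera; the kernel size and Canny thresholds are illustrative:

```python
import cv2

# Smooth with a Gaussian first to suppress noise, then run Canny.
frame = cv2.imread("arena_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frame
blurred = cv2.GaussianBlur(frame, (5, 5), 1.5)
edges = cv2.Canny(blurred, 50, 150)
cv2.imwrite("edges.png", edges)
```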
You can use contours to detect any object. You can estimate its size by finding the area of the contours. Then you can use moments to find the center of the object.
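A sketch of that contour/area/moments idea, assuming a hypothetical binary image (e.g. a thresholded or edge-filled frame) as input; the size threshold is illustrative:

```python
import cv2

# Find blobs, keep the larger ones, and compute each blob's centroid from its moments.
binary = cv2.imread("binary_frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    area = cv2.contourArea(c)
    if area < 200:                       # illustrative size threshold
        continue
    m = cv2.moments(c)
    cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
    print("obstacle candidate at", (cx, cy), "area", area)
```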
For a project I have to detect a pattern and track it in space despite rotation, noise, etc.
It's highlighted with IR light and recorded with an IR camera:
Picture: https://i.stack.imgur.com/RJuVS.png
As in this picture, it will only be a very simple shape, and we can choose which one we're going to use.
I need direction on how to approach the recognition of these shapes, please.
What I do currently is thresholding and erosion to get a cleaner shape, and then contour detection and polygon approximation.
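For reference, a minimal sketch of that pipeline in OpenCV/Python; the file name, threshold value and kernel size are assumptions, not part of the original question:

```python
import cv2
import numpy as np

# threshold -> erosion -> contours -> polygon approximation
frame = cv2.imread("ir_frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical IR frame
_, binary = cv2.threshold(frame, 200, 255, cv2.THRESH_BINARY)
clean = cv2.erode(binary, np.ones((3, 3), np.uint8), iterations=1)
contours, _ = cv2.findContours(clean, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
    print("polygon with", len(approx), "vertices")
```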
What should I do then? I tried Hu moments but they weren't good at all.
Could you please give me a global approach to recognize and track such pattern in space?
Can you choose which shape to project?
If so, I would recommend using a few concentric circles. Then, using the Hough transform for circles, you can easily find the center of the shape even when tracking is extremely hard (large movement / low frame rate).
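A minimal sketch of that idea with OpenCV's cv2.HoughCircles; the file name and all parameters are illustrative guesses:

```python
import cv2

# Detect circles with the Hough transform and use their common center
# as the position of the projected concentric-circle pattern.
frame = cv2.imread("ir_frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical frame
blurred = cv2.medianBlur(frame, 5)
circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=10,
                           param1=100, param2=30, minRadius=5, maxRadius=100)
if circles is not None:
    centers = circles[0, :, :2]
    cx, cy = centers.mean(axis=0)       # concentric circles share one center
    print("pattern center:", (cx, cy))
```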
If you must use a rectangular shape, then there is a good open-source project which does that. It is part of a project to read street signs and auto-translate them.
Here is a link: http://code.google.com/p/signfinder/
This source is not large and it would be easy to cut out the relevant part.
It uses OpenCV's "good features to track" in the CornerFinder module.
Hope it helped
It is possible; you need the following steps: thresholding the image, some morphological enhancement, blob extraction and normalization of blob size, blob shape analysis, and comparison of the analysis results with the pattern you want to track.
There are many methods for blob shape analysis. Simple methods: geometric dimensions, area, perimeter, circularity measurements, bit quads and others (see, for example, William K. Pratt, "Digital Image Processing", chapter 18). Complex methods: spatial moments, template matching, neural networks and others.
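As an illustration of the simple measurements, a small sketch computing area, perimeter and circularity for one blob contour (the aspect-ratio field is an extra assumption, not from the list above):

```python
import cv2
import numpy as np

# Basic shape descriptors for a single blob contour, intended only as illustration.
def blob_descriptors(contour):
    area = cv2.contourArea(contour)
    perimeter = cv2.arcLength(contour, True)
    # circularity is 1.0 for a perfect circle, smaller for elongated shapes
    circularity = 4.0 * np.pi * area / (perimeter ** 2) if perimeter > 0 else 0.0
    x, y, w, h = cv2.boundingRect(contour)
    return {"area": area, "perimeter": perimeter,
            "circularity": circularity, "aspect_ratio": w / float(h)}
```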
In any event, it is very hard to answer exactly without knowing the pattern shapes you want to track.
hope it helped