How can I classify a person by identifying their shape? - OpenCV

I have a program which detects moving objects, and I want to classify these objects by identifying the shape of each one against a dataset of shapes. Does anyone have an idea how to compare the shape of each object with the dataset, using some points of the current shape and comparing them with the samples?
[images: input frame and detected object]

From the theory point of view, you should start off by reading two papers:
1) HOG detectors by Dalal and Triggs
2) Chamfer detectors by Gavrila
If you just want to use edge information, Chamfer matching is the solution. From my experience, it fails miserably on cluttered scenes; HOG produces far superior results.
OpenCV already has a human body detector implemented.
If you are looking for a machine learning adventure, why not train your own HOG detector using OpenCV's opencv_traincascade? The resulting detector is very fast and runs in real time.
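For reference, OpenCV exposes its built-in person detector through the HOGDescriptor class. A minimal sketch of using it (the image path and the detectMultiScale parameters are assumptions, not tuned values):

    #include <opencv2/opencv.hpp>
    #include <vector>

    int main(int argc, char** argv)
    {
        cv::Mat img = cv::imread(argv[1]);  // input image path from the command line
        cv::HOGDescriptor hog;
        // Load the default Dalal-Triggs pedestrian model that ships with OpenCV.
        hog.setSVMDetector(cv::HOGDescriptor::getDefaultPeopleDetector());

        std::vector<cv::Rect> found;
        // Scan the image at multiple scales; these parameters are common defaults.
        hog.detectMultiScale(img, found, 0, cv::Size(8, 8), cv::Size(32, 32), 1.05, 2);

        for (size_t i = 0; i < found.size(); ++i)
            cv::rectangle(img, found[i], cv::Scalar(0, 255, 0), 2);

        cv::imshow("people", img);
        cv::waitKey(0);
        return 0;
    }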

Related

How do I integrate SURF algorithm with particle filter tracking?

I'm new to the image processing field. I'm currently trying to detect multiple persons in a video using the SURF algorithm with SVM training for detection, and the particle filter for tracking. So far, I used the bag-of-features to build a vocabulary and cluster keypoints from a set of images (for both persons and nonpersons) and used two SVMs to classify if the test data are persons or not. Now that the classification is done, I'm planning to use the SURF keypoints extracted from the test data for the particle filter.
Basically, I want to adapt the SURF keypoints to fit the particle filter input, feed these points into the particle filter algorithm, and then start tracking the targets while drawing circles representing the particles.
How do I do this? :( I'm trying to study the code provided here https://github.com/NewProggie/Particle-Filter but I got a bit lost because it uses AdaBoost, and of course the particles are generated randomly. Thank you very much!
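One way to connect the two pieces is to treat the SURF keypoint locations as the measurement for the particle filter: each particle is scored by its distance to the nearest keypoint. A hedged sketch of that weighting step (the Particle struct and the sigma value are assumptions; OpenCV 2.4.x with the nonfree module assumed):

    #include <opencv2/core/core.hpp>
    #include <opencv2/nonfree/features2d.hpp>  // SURF lives in nonfree in 2.4.x
    #include <cmath>
    #include <vector>

    struct Particle { cv::Point2f pos; float weight; };

    // Score each particle by its squared distance to the nearest SURF keypoint:
    // particles near observed keypoints get higher weights (Gaussian kernel).
    void updateWeights(std::vector<Particle>& particles,
                       const std::vector<cv::KeyPoint>& keypoints,
                       float sigma = 20.f)  // measurement noise in pixels (assumed)
    {
        float total = 0.f;
        for (size_t i = 0; i < particles.size(); ++i) {
            float best = 1e9f;
            for (size_t k = 0; k < keypoints.size(); ++k) {
                float dx = particles[i].pos.x - keypoints[k].pt.x;
                float dy = particles[i].pos.y - keypoints[k].pt.y;
                float d2 = dx * dx + dy * dy;
                if (d2 < best) best = d2;
            }
            particles[i].weight = std::exp(-best / (2.f * sigma * sigma));
            total += particles[i].weight;
        }
        for (size_t i = 0; i < particles.size(); ++i)
            particles[i].weight /= total;  // normalise so weights sum to 1
    }

Per frame you would run something like SurfFeatureDetector detector(400); detector.detect(frameGray, keypoints); and then call updateWeights before resampling.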

Confusion regarding Object recognition and features using SURF

I have some conceptual issues in understanding the SURF and SIFT algorithms (All about SURF). As far as my understanding goes, SURF approximates the Laplacian of Gaussian while SIFT operates on differences of Gaussians. It then constructs a 64-dimensional vector around the keypoint to extract the features. I have applied this CODE.
(Q1) So, what forms the features?
(Q2) We initialize the algorithm using SurfFeatureDetector detector(500). Does this mean that the size of the feature space is 500?
(Q3) The output of SURF Good_Matches gives matches between Keypoint1 and Keypoint2, and by tuning the number of matches we can decide whether the object has been found or not. What is meant by KeyPoints? Do these store the features?
(Q4) I need to build an object recognition application. In the code, the algorithm apparently recognizes the book, so it can be applied to object recognition. I was under the impression that SURF can be used to differentiate objects based on color and shape. But SURF and SIFT are essentially corner/blob detectors, so there is no point in using color images as training samples since they will be converted to grayscale. There is no option of using color or HSV in these algorithms, unless I compute the keypoints for each channel separately, which is a different area of research (Evaluating Color Descriptors for Object and Scene Recognition).
So, how can I detect and recognize objects based on their color and shape? I think I can use SURF to differentiate objects based on their shape. Say, for instance, I have two books and a bottle and need to recognize only a single book among all the objects. But as soon as there are other similarly shaped objects in the scene, SURF gives lots of false positives. I would appreciate suggestions on what methods to apply for my application.
(Q1) The local maxima of the detector response (a DoG response that is greater (or smaller) than the responses of the neighbouring pixels around the point in the current, upper and lower images of the pyramid -- a 3x3x3 neighbourhood) form the coordinates of the feature (circle) centers. The radius of the circle is given by the pyramid level.
(Q2) It is the Hessian threshold. It means that you only take maxima (see Q1) with values bigger than the threshold. A bigger threshold leads to fewer features but better feature stability, and vice versa.
(Q3) Keypoint == feature. In OpenCV, KeyPoint is the structure used to store features.
(Q4) No, SURF is good for comparing textured objects, but not for shape or color. For shape I recommend using MSER (but not the OpenCV one) or the Canny edge detector rather than local features. This presentation might be useful.
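To make the Hessian-threshold point (Q2) concrete, here is a small sketch showing how the threshold trades feature count against stability (OpenCV 2.4.x nonfree module assumed; the threshold values are arbitrary):

    #include <opencv2/highgui/highgui.hpp>
    #include <opencv2/nonfree/features2d.hpp>
    #include <cstdio>
    #include <vector>

    int main(int argc, char** argv)
    {
        cv::Mat gray = cv::imread(argv[1], 0);  // load as grayscale

        // Larger Hessian thresholds keep only the strongest, most stable maxima.
        const int thresholds[] = { 100, 500, 2000 };
        for (int i = 0; i < 3; ++i) {
            cv::SurfFeatureDetector detector(thresholds[i]);
            std::vector<cv::KeyPoint> keypoints;
            detector.detect(gray, keypoints);
            std::printf("hessianThreshold=%4d -> %d keypoints\n",
                        thresholds[i], (int)keypoints.size());
        }
        return 0;
    }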

Detecting object class using shape descriptors in computer vision

I want to differentiate between two classes of objects through differences in the shape of a blob (the blob is a binary image) using shape descriptors and machine learning. Is there any good shape feature I can use to compute descriptors for the irregular contour or blob obtained?
There is a large body of work on shape descriptors; these methods work on either the outer edge-detected pixels (the boundary) or the full filled-in binary shape. Both approaches rely on making the shape descriptors invariant to translation, rotation and scaling, and some to skew. The classical boundary method is Fourier descriptors and the classic filled-in method is moment invariants; both are covered in most good image processing textbooks and are easy to implement with OpenCV.
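For the moment-invariants route, OpenCV already wraps Hu moments in cv::matchShapes; a minimal sketch (the image paths are placeholders):

    #include <opencv2/imgproc/imgproc.hpp>
    #include <opencv2/highgui/highgui.hpp>
    #include <cstdio>

    int main(int argc, char** argv)
    {
        // Two binary blob images (white shape on black background).
        cv::Mat a = cv::imread(argv[1], 0);
        cv::Mat b = cv::imread(argv[2], 0);

        // matchShapes compares the Hu moment invariants of the two blobs;
        // lower scores mean more similar shapes, regardless of position,
        // rotation or scale.
        double score = cv::matchShapes(a, b, CV_CONTOURS_MATCH_I1, 0);
        std::printf("shape dissimilarity: %f\n", score);
        return 0;
    }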
The answer depends heavily on the kinds of shapes you are looking for. If the contours of the shapes are discriminative enough, you can try shape context. To classify shapes, feed these features into any classifier -- an SVM or random forests, for instance.
If the shapes have consistently occurring corners, then you can extract the corners using FAST or SURF and describe the regions around them using SIFT or SURF. In this case, shapes are best recognised by feature matching or bags of words.

How to proceed with object recognition using edge detection and histogram processing techniques?

Hello everyone. I am pursuing an M.Tech, and my project is object recognition: recognizing specific objects, such as weapons, that are not allowed at an airport. The input will be scanned images of baggage/luggage, processed in MATLAB (static images only, for now). I am currently using edge detection and histogram processing techniques. I have searched the internet and found ANNs, genetic algorithms and many more, but I can't summarize the whole picture since each paper explains things in its own way. Please help me figure out how to proceed with object recognition using edge detection and histogram processing techniques.
If you'd like to do object recognition with only the contours, use Shape Context.
Essentially, you will have a database of shapes a priori, where you know the label of each shape (gun, something_harmless_1, knife, something_harmless_2). At query time, you take the contour of your object and compute the Shape Context Distance between your query shape and all shapes in your database. The shape with the shortest Shape Context Distance is then deemed the true class of your object.
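With OpenCV 3's shape module, that nearest-neighbour loop could be sketched as follows (an assumption on my part; the contour-sampling helper and its parameters are illustrative, not from the thread):

    #include <opencv2/imgproc.hpp>
    #include <opencv2/shape.hpp>
    #include <algorithm>
    #include <vector>

    // Sample up to n points from the largest external contour of a binary image.
    static std::vector<cv::Point> sampleContour(const cv::Mat& binary, int n = 300)
    {
        std::vector<std::vector<cv::Point> > contours;
        cv::findContours(binary.clone(), contours,
                         cv::RETR_EXTERNAL, cv::CHAIN_APPROX_NONE);
        const std::vector<cv::Point>& c = contours[0];  // assume one dominant blob
        std::vector<cv::Point> pts;
        int step = std::max(1, (int)c.size() / n);
        for (size_t i = 0; i < c.size(); i += step)
            pts.push_back(c[i]);
        return pts;
    }

    // Smaller distance -> more similar outlines. Compare the query against every
    // database shape and take the label of the nearest one.
    float shapeContextDistance(const cv::Mat& queryBin, const cv::Mat& dbBin)
    {
        cv::Ptr<cv::ShapeContextDistanceExtractor> scd =
            cv::createShapeContextDistanceExtractor();
        return scd->computeDistance(sampleContour(queryBin), sampleContour(dbBin));
    }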
Alternatively, if you wanted to use the histogram of the object, you could do a similar matching but with a different distance metric. Instead of using the Shape Context Distance, you could store a histogram for all objects in your database and compute the Earth Mover's Distance between your query object and all other objects in your database.
It is possible to encode both of these distances in your final result. You can come up with some weighting scheme between them that makes sense for you.
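A hedged sketch of the histogram route using cv::EMD (the grayscale histogram, bin count and image paths are assumptions):

    #include <opencv2/imgproc/imgproc.hpp>
    #include <opencv2/highgui/highgui.hpp>
    #include <cstdio>

    // Build an EMD "signature": one row per bin, [weight, bin coordinate].
    static cv::Mat histSignature(const cv::Mat& gray, int bins = 32)
    {
        int histSize[] = { bins };
        float range[] = { 0, 256 };
        const float* ranges[] = { range };
        int channels[] = { 0 };
        cv::Mat hist;
        cv::calcHist(&gray, 1, channels, cv::Mat(), hist, 1, histSize, ranges);
        cv::normalize(hist, hist, 1, 0, cv::NORM_L1);  // weights sum to 1

        cv::Mat sig(bins, 2, CV_32F);
        for (int i = 0; i < bins; ++i) {
            sig.at<float>(i, 0) = hist.at<float>(i);  // mass in this bin
            sig.at<float>(i, 1) = (float)i;           // bin position
        }
        return sig;
    }

    int main(int argc, char** argv)
    {
        cv::Mat a = cv::imread(argv[1], 0), b = cv::imread(argv[2], 0);
        float d = cv::EMD(histSignature(a), histSignature(b), CV_DIST_L2);
        std::printf("EMD between histograms: %f\n", d);
        return 0;
    }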

OpenCV - Haar classifier for long objects with different angles

I have used the Haar classifier with OpenCV successfully before. Unfortunately, it seems to work only on square objects at fixed angles (i.e. faces). However, I need to find "long" (rectangular) objects at different angles (see the sample input image).
Is there a way to train the Haar classifier to find such objects? All I can find are tutorials for face recognition. Are there any alternative approaches?
Haar classifiers are known to work with rigid objects only, so you need a classifier for each view. For example, the side-face classifier in OpenCV doesn't work as well as the front-face classifier, the reason being that a side face has more yaw-pitch-roll variation than a front face.
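One common workaround is to bin the object orientation and train a separate cascade per bin. A sketch of generating the rotated training crops for those bins (the 30-degree step and file names are assumptions):

    #include <opencv2/imgproc/imgproc.hpp>
    #include <opencv2/highgui/highgui.hpp>
    #include <cstdio>

    int main(int argc, char** argv)
    {
        cv::Mat sample = cv::imread(argv[1]);  // one upright positive sample
        cv::Point2f center(sample.cols / 2.f, sample.rows / 2.f);

        // One rotated copy every 30 degrees; each angle bin later gets its own
        // opencv_traincascade run on the samples that fall into it.
        for (int angle = 0; angle < 180; angle += 30) {
            cv::Mat R = cv::getRotationMatrix2D(center, angle, 1.0);
            cv::Mat rotated;
            cv::warpAffine(sample, rotated, R, sample.size());
            char name[64];
            std::sprintf(name, "sample_rot%03d.png", angle);
            cv::imwrite(name, rotated);
        }
        return 0;
    }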
There is no perfect way of answering your question.
However, in your case, whatever you are trying to classify (microbes, I suppose) overlap each other. It's a complex issue, but you can isolate the region where the microbes occur (rather than isolating each microbe the way you would a face).
You can refer to fingerprint segmentation techniques, which are known to enhance the ridges of a fingerprint (in your case, the microbe edges) against the background and isolate the region of interest.
Check "ridgesegmentation.m" in the following page:
http://www.csse.uwa.edu.au/~pk/Research/MatlabFns/index.html
