How can I perform image classification with Mahout? How do I convert an image into a form accepted by Mahout's classification algorithms? Is there any starter code to begin with? Please share some starter tutorials. Is Mahout a good library for image classification?
There are two answers to your question:
The simple answer is that, from a Mahout point of view, classifying images is no different from classifying any other type of data. You find a suitable set of features to describe your data, and then: train, validate, test, and deploy.
The second answer is a bit more involved, and I'm going to summarize. In the case of images, the step in which you compute a suitable set of features spans a whole research area (called computer vision). There are many methods: HOG (histograms of oriented gradients), SURF, SIFT, etc. Depending on the images and what your expectations are, you may obtain reasonable results just using an existing method, or maybe not. It would be impossible to say without looking at your images and knowing your objectives.
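The answer above leaves the feature step abstract, so here is a hedged sketch (mine, in Python/OpenCV rather than Mahout's JVM stack; the file names are placeholders) of what "converting an image to a form a classifier accepts" typically looks like: compute one fixed-length numeric vector per image, which you can then export for whatever training tool you use.

```python
import cv2
import numpy as np

# Hypothetical input image; any grayscale photo works.
img = cv2.imread("example.jpg", cv2.IMREAD_GRAYSCALE)
img = cv2.resize(img, (64, 128))  # match the default HOG window (width 64, height 128)

hog = cv2.HOGDescriptor()            # default geometry: 64x128 window
features = hog.compute(img).ravel()  # one fixed-length vector per image
np.savetxt("features.csv", features[None, :], delimiter=",")  # export for training
```

Any classifier, Mahout's included, then sees a plain row of numbers plus a label rather than an image.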
Related
I need to compare two images in a project.
The images would be two fruits of the same kind (let's say two different images of two different apples).
To be more clear, the database will have images of the stages an apple goes through from the day it is picked from the tree until it rots.
The user would upload an image of the apple they have, and the software should compare it to all those images in the database, retrieve the data of the matching image, and tell the user which stage it is at.
I have compared images before using OpenCV (Emgu CV), but I really don't know whether that is the best way...
I need expert advice: is what I described even possible, or will all the database images match the user's image?
And is this "image processing" or something else?
And are there any suggested tutorials for learning how to do this?
I know it doesn't seem totally clear yet, but it's just a crazy idea that I hope to find a way to bring to life!
N.B. the project will be an Android application.
This is an example of a supervised image classification problem, which is a pretty broad field. You can read up on image classification here.
The way you would approach this problem is to define a few stages of decay (fresh, starting to rot, half rotten, completely rotten), put together a dataset of many images of the fruit in each stage, and train an image classifier to distinguish those stages (a rough sketch follows the tool list below). The sample dataset should contain images of many different pieces of fruit in many different settings. If you want to support different types of fruit, you would need to train a separate classifier for each fruit.
There are many image classification tools out there. To name a few:
OpenCV's Haar classifier
dlib's HOG classifier
MATLAB's Computer Vision System Toolbox
VLFeat
It would be up to you to look into which approach would work best for your situation.
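To make the workflow concrete, here is a minimal, hedged sketch (my own; the folder layout, image size, and stage names are all assumptions) using scikit-image HOG features and a scikit-learn linear SVM, one of many reasonable tool choices:

```python
from pathlib import Path
import numpy as np
from skimage.io import imread
from skimage.transform import resize
from skimage.feature import hog
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

STAGES = ["fresh", "starting_to_rot", "half_rotten", "rotten"]  # example stages

def load_dataset(root="apple_dataset"):  # hypothetical layout: root/<stage>/*.jpg
    X, y = [], []
    for label, stage in enumerate(STAGES):
        for path in Path(root, stage).glob("*.jpg"):
            # Resize so every HOG vector has the same length.
            img = resize(imread(path, as_gray=True), (128, 128))
            X.append(hog(img, pixels_per_cell=(16, 16)))
            y.append(label)
    return np.array(X), np.array(y)

X, y = load_dataset()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LinearSVC().fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```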
Given that this is a fairly broad problem, I wouldn't expect to come up with a solid solution quickly unless you've had experience with image classification. If you are trying to develop a product, I would recommend getting in touch with a computer vision expert that you could contract to solve it.
If you are just looking to learn more about image classification, however, this could be a fun way to play around with different tools and get a feel for what's out there. You may want to start by learning about Machine Learning in general. Caltech offers a free online course that gives a pretty good intro to the subject.
I am doing a project on writer identification. I want to extract HOG features from line images of Arabic handwriting, and then use a Gaussian Mixture Model for classification.
The link to the database containing the line Images is : http://khatt.ideas2serve.net/
So my questions are as follows;
There are three folders, namely Test, Train, and Validate. From which folder do I need to extract the features, and for what purpose should each folder be used?
Do we need to extract features from individual images and merge them, or is there a method to extract features from all the images together?
Test, Train and Validate
Read this stats SE question: What is the difference between test set and validation set?
This is basic machine learning, so you should probably go back and review your course literature, since it seems like you're missing some pretty important machine learning concepts.
Do we need to extract features from individual images and merge them, or is there a method to extract features from all the images together?
It seems, again, like you're missing basic concepts here. Histogram of oriented gradients subdivides the image and finds the oriented gradient in each region. See this SO question for examples of how this looks.
The traditional way of using HOG is: for each image in your training set, extract the HOG descriptor; use these to train an SVM; validate the training with the validation set; then actually use the trained SVM on the test set.
You need to extract the HOG features from each image separately. Furthermore, you have to resize all images to the same size, otherwise your HOG vectors will have different lengths.
You can use the extractHOGFeatures function in MATLAB. See this example.
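Since the question asks for HOG plus a Gaussian Mixture Model, here is a rough sketch of that pipeline (my own, in Python with scikit-image and scikit-learn; the image size and component count are assumptions, and with very few lines per writer you may need a single component):

```python
import numpy as np
from skimage.io import imread
from skimage.transform import resize
from skimage.feature import hog
from sklearn.mixture import GaussianMixture

def hog_vector(path, size=(64, 256)):
    # Resize every line image to the same shape so HOG vectors have equal length.
    img = resize(imread(path, as_gray=True), size)
    return hog(img, pixels_per_cell=(16, 16))

# train_files: dict mapping writer id -> list of image paths (the Train folder)
def fit_writer_models(train_files, n_components=2):
    models = {}
    for writer, paths in train_files.items():
        X = np.stack([hog_vector(p) for p in paths])
        # Diagonal covariances keep the model fittable on few high-dimensional samples.
        models[writer] = GaussianMixture(n_components=n_components,
                                         covariance_type="diag").fit(X)
    return models

def identify(models, path):
    x = hog_vector(path).reshape(1, -1)
    # Pick the writer whose model assigns the probe the highest log-likelihood.
    return max(models, key=lambda w: models[w].score(x))
```

You would tune the hyperparameters (HOG cell size, number of components) on the Validate folder and report final numbers on Test only.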
I am a very new student of machine learning. I just wanted to ask: what are possible ways to improve a method (Naive Bayes, for example) to get better results when classifying images into text and non-text images, instead of just inputting a number of images and telling the system which have text and which do not?
Thanks in advance
The state of the art in such problems are deep neural networks with several convolutional layers. See this article for an example of image classification using deep convolutional nets. Your problem (just determining if an image has text or not) is much easier than the general image classification problem the authors consider, so you'd probably get away with using a much simpler network architecture.
Nowadays you don't need to implement these things yourself; there are efficient, GPU-accelerated implementations freely available, for instance Caffe, Torch7, Keras...
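As a hedged illustration (mine, not the article's network; the folder layout and image size are assumptions), a very small convolutional network for the binary text / non-text task could look like this in Keras:

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(64, 64, 1)),
    layers.Rescaling(1.0 / 255),                 # map pixel values to [0, 1]
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),       # probability the image contains text
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Hypothetical layout: data/text/*.png and data/non_text/*.png
train_ds = keras.utils.image_dataset_from_directory(
    "data", label_mode="binary", color_mode="grayscale", image_size=(64, 64))
model.fit(train_ds, epochs=5)
```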
Can anyone advise me on a way to build an effective face classifier that can distinguish many different faces (~1000)?
And I have only 1-5 examples of each face.
I know about the OpenCV face classifier, but it works poorly for my task (many classes, few samples).
It works all right for classifying one face with a small number of samples, but I don't think 1,000 separate classifiers is a good idea.
I have read a few articles about face recognition, but the methods in those articles require a lot of samples of each class to work.
P.S. Sorry for my writing mistakes. English is not my native language.
Actually, to give you a proper answer, I'd need to know some details of your task and your data. Face recognition is a non-trivial problem, and there is no general solution for all sorts of image acquisition.
First of all, you should define how many sources of variation (pose, emotion, illumination, occlusion, or time lapse) you have in your sample and testing sets. Then you should choose an appropriate algorithm and, very importantly, preprocessing steps suited to those types.
If you don't have any significant variations, then for a small training set it is a good idea to consider one of the discrete orthogonal moments as a feature-extraction method. They have a very strong ability to extract features without redundancy, and some of them (Hahn and Racah moments) can work in two modes: local and global feature extraction. The topic is relatively new and there are still few articles about it, although these moments are thought to become a very powerful tool in image recognition. They can be computed in near real time using recurrence relations. For more information, have a look here and here.
If the pose of the individuals varies significantly, you may first try pose correction with an Active Appearance Model.
If there are lots of occlusions (glasses, hats) then using one of the local feature extractors may help.
If there is a significant time lapse between the training and probe images, the local features of the faces can change with age, so it's a good option to try one of the algorithms that use graphs for face representation in order to preserve the face topology.
I believe that none of the above are implemented in OpenCV, but for some of them you can find MATLAB implementations.
I'm not a native speaker either, so sorry for the grammar.
Coming to your problem: it is unique in its way. As you said, there are only a few images per class, so the model we train should either have an architecture that can create strong features from an image itself, or there should be a different approach that can achieve this task.
I have four things I can share as of now:
Do data pre-processing to create a bigger dataset, and then ideally train a neural network on it. Here, we can do pre-processing like:
- image rotation
- image shearing
- image scaling
- image blurring
- image stretching
- image translation
and create at least 200 images per class. Please check the OpenCV documentation, which provides many more methods for increasing the size of your dataset; a rough sketch of a few such augmentations is given below. Once you do this, we can apply transfer learning, which is a better approach than training a neural network from scratch.
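A rough sketch (a hypothetical helper of my own, not from any linked resource) of a few of these augmentations with OpenCV:

```python
import cv2
import numpy as np

def augment(img):
    h, w = img.shape[:2]
    out = []
    for angle in (-10, 10):                       # rotation
        M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        out.append(cv2.warpAffine(img, M, (w, h)))
    for scale in (0.9, 1.1):                      # scaling (crop/pad back as needed)
        out.append(cv2.resize(img, None, fx=scale, fy=scale))
    M = np.float32([[1, 0, 5], [0, 1, 5]])        # translation by 5 px
    out.append(cv2.warpAffine(img, M, (w, h)))
    out.append(cv2.GaussianBlur(img, (5, 5), 0))  # blurring
    return out
```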
Transfer learning is a method where we train a network on our own custom classes, starting from a network that has already been pre-trained on thousands of classes. Since our data here is very limited, I would prefer transfer learning. I have written a blog on how you can approach this using transfer learning once you have the required amount of data; it is linked here. Face recognition is itself a classification task, where each person is a separate class. So follow the instructions given in the blog; maybe it will help you create your own powerful classifier.
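A minimal transfer-learning sketch (my own, not the blog's code; the base model, input size, and class count are assumptions):

```python
from tensorflow import keras
from tensorflow.keras import layers

# Reuse an ImageNet-pretrained backbone and train only a new classification head.
base = keras.applications.MobileNetV2(include_top=False, weights="imagenet",
                                      input_shape=(160, 160, 3), pooling="avg")
base.trainable = False  # freeze the pretrained features

model = keras.Sequential([
    base,
    layers.Dense(1000, activation="softmax"),  # one class per person (~1000 faces)
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```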
Another suggestion: after creating a dataset, encode the images properly. This encoding preserves the features in an image and can help you train better networks. VLAD, Fisher vectors, and Bag of Words are a few such encoding techniques. You can find repositories online that have already implemented these on the ORL database. Once you encode the images and train the network on the encodings, you should see better performance.
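As a rough sketch of one such encoding (Bag of Visual Words; my own illustration with OpenCV ORB descriptors and scikit-learn KMeans, not code from those repositories):

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

orb = cv2.ORB_create()

def descriptors(img):
    _, des = orb.detectAndCompute(img, None)
    return des  # None if no keypoints were found

def build_vocab(train_images, k=64):
    # Cluster all training descriptors into k "visual words".
    all_des = np.vstack([d for img in train_images
                         if (d := descriptors(img)) is not None])
    return KMeans(n_clusters=k, n_init=10).fit(all_des.astype(np.float32))

def encode(img, vocab):
    # Encode an image as a normalized histogram of its nearest visual words.
    des = descriptors(img)
    hist = np.bincount(vocab.predict(des.astype(np.float32)),
                       minlength=vocab.n_clusters)
    return hist / max(hist.sum(), 1)
```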
Also check out the Siamese network here, which I feel is meant exactly for this purpose: two images are passed through twin networks with shared weights and their outputs are compared, thereby achieving better classification accuracy with few samples per class. The Git repository is here.
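To sketch the Siamese idea (a minimal Keras version of my own, not the linked repository's code; sizes and the margin are assumptions): one embedding network with shared weights is applied to both images, and training pulls same-identity pairs together while pushing different-identity pairs apart.

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Shared embedding network, applied to both inputs.
embed = keras.Sequential([
    layers.Input(shape=(96, 96, 1)),
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"), layers.GlobalAveragePooling2D(),
    layers.Dense(64),  # embedding vector
])

a = keras.Input(shape=(96, 96, 1))
b = keras.Input(shape=(96, 96, 1))
dist = layers.Lambda(lambda t: tf.norm(t[0] - t[1], axis=1))([embed(a), embed(b)])
model = keras.Model([a, b], dist)  # outputs a distance: small for the same person

def contrastive_loss(y_true, d, margin=1.0):
    # y_true = 1 for same person, 0 for different people.
    y_true = tf.cast(y_true, d.dtype)
    return tf.reduce_mean(y_true * d**2 + (1 - y_true) * tf.maximum(margin - d, 0)**2)

model.compile(optimizer="adam", loss=contrastive_loss)
```

Because the network learns a distance rather than one output per class, it copes with many classes and few samples per class.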
Another standard approach would be to use an SVM or random forests, since the data is limited. If you still prefer neural networks, the methods above will serve your purpose. If you intend to go with encodings, then I would suggest random forests, as they learn well on small datasets and are flexible too.
Hopefully this answer helps you proceed in the right direction.
You might want to take a look at OpenFace, a Python and Torch implementation of face recognition with deep neural networks: https://cmusatyalab.github.io/openface/
I have a set of reference images (200) and a set of photos of those images (tens of thousands). I have to classify each photo in a semi-automated way. Which algorithm and open source library would you advise me to use for this task? The best thing for me would be to have a similarity measure between the photo and the reference images, so that I would show to a human operator the images ordered from the most similar to the least one, to make her work easier.
To give a little more context, the reference images are branded packages, and the photos are of the same packages, but with all kinds of noises: reflections from the flash, low light, imperfect perspective, etc. The photos are already (manually) segmented: only the package is visible.
Back in my days with image recognition (like 15 years ago) I would have probably tried to train a neural network with the reference images, but I wonder if now there are better ways to do this.
I recommend that you use Python, with the NumPy/SciPy libraries for your numerical work. Some helpful libraries for handling images are the Mahotas library and the scikits.image library.
In addition, you will want to use scikits.learn, whose SVM implementation wraps LIBSVM, a very standard SVM implementation.
The hard part is choosing your descriptor. The descriptor is the feature vector you compute from each image, used to compute a similarity distance to the set of reference images. Good things to try would be histograms of oriented gradients, SIFT features, and color histograms; also play around with various ways of binning different parts of the image and concatenating such descriptors together.
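For illustration, here's a hedged sketch (mine; this particular descriptor and the sizes are just one option from the menu above) of a concatenated HOG-plus-color-histogram descriptor and a cosine-similarity ranking of the reference images:

```python
import numpy as np
from skimage.io import imread
from skimage.transform import resize
from skimage.color import rgb2gray
from skimage.feature import hog

def descriptor(path):
    img = resize(imread(path), (128, 128))  # float RGB in [0, 1] after resize
    h = hog(rgb2gray(img), pixels_per_cell=(16, 16))
    # 8-bin histogram per RGB channel, appended to the HOG vector.
    c = np.concatenate([np.histogram(img[..., i], bins=8, range=(0, 1))[0]
                        for i in range(3)])
    return np.concatenate([h, c / c.sum()])

def rank_references(photo_path, reference_paths):
    q = descriptor(photo_path)
    refs = [descriptor(p) for p in reference_paths]
    sims = [np.dot(q, r) / (np.linalg.norm(q) * np.linalg.norm(r)) for r in refs]
    # Most similar reference first, for the human operator to review.
    return sorted(zip(reference_paths, sims), key=lambda t: -t[1])
```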
Next, set aside some of your data for training. For these data, you have to manually label them according to the true reference image they belong to. You can feed these labels into built-in functions in scikits.learn and it can train a multiclass SVM to recognize your images.
After that, you may want to look at MPI4Py, an implementation of MPI in Python, to take advantage of multiple processors when doing the large descriptor computation and classifying the tens of thousands of remaining images.
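A small sketch of that parallelization (mine; the directory and the stand-in descriptor are assumptions), run with e.g. `mpiexec -n 4 python compute_descriptors.py`:

```python
from pathlib import Path
from mpi4py import MPI
from skimage.io import imread
from skimage.transform import resize

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:
    photos = sorted(str(p) for p in Path("photos").glob("*.jpg"))
    chunks = [photos[i::size] for i in range(size)]  # round-robin split across ranks
else:
    chunks = None

my_photos = comm.scatter(chunks, root=0)
# Stand-in descriptor: a flattened 32x32 grayscale thumbnail.
my_descs = [(p, resize(imread(p, as_gray=True), (32, 32)).ravel())
            for p in my_photos]
all_descs = comm.gather(my_descs, root=0)  # rank 0 collects every rank's results
```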
The task you describe is very difficult and solving it with high accuracy could easily lead to a research-level publication in the field of computer vision. I hope I've given you some starting points: searching any of the above concepts on Google will hit on useful research papers and more details about how to use the various libraries.
The best thing for me would be to have a similarity measure between the photo and the reference images, so that I would show to a human operator the images ordered from the most similar to the least one, to make her work easier.
One way people do this is with the so-called "Earth mover's distance". Briefly, one imagines each pixel in an image as a stack of rocks with height corresponding to the pixel value and defines the distance between two images as the minimal amount of work needed to transfer one arrangement of rocks into the other.
Algorithms for this are a current research topic. Here's some MATLAB code for one: http://www.cs.huji.ac.il/~ofirpele/FastEMD/code/ . It looks like they have a Java version as well. Here's a link to the original paper and C code: http://ai.stanford.edu/~rubner/emd/default.htm
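As a tiny toy example of the rock-moving intuition (mine, using OpenCV's cv2.EMD rather than the codebases linked above): each row of a "signature" is (weight, x, y), and the distance between two one-rock images is just how far the rock moved.

```python
import cv2
import numpy as np

def signature(img):
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    sig = np.column_stack([img.ravel(), xs.ravel(), ys.ravel()]).astype(np.float32)
    return sig[sig[:, 0] > 0]  # drop zero-weight pixels

a = np.zeros((8, 8), np.float32); a[1, 1] = 1.0  # one "rock" at (1, 1)
b = np.zeros((8, 8), np.float32); b[6, 6] = 1.0  # the same rock moved to (6, 6)

dist, _, _ = cv2.EMD(signature(a), signature(b), cv2.DIST_L2)
print(dist)  # ~7.07, the Euclidean distance between the two positions
```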
Try RapidMiner (one of the most widely used data-mining platforms, http://rapid-i.com) with IMMI (the Image Mining Extension, http://www.burgsys.com/mumi-image-mining-community.php), AGPL licence.
It currently implements several similarity-measurement methods (not only trivial pixel-by-pixel comparison). The similarity measures can serve as input to a learning algorithm (e.g. a neural network, KNN, SVM, ...), which can be trained to give better performance. Some information about the methods is given in this paper:
http://splab.cz/wp-content/uploads/2012/07/artery_detection.pdf
Nowadays, deep-learning frameworks like Torch, TensorFlow, Theano, and Keras are the best open-source tools/libraries for object classification/recognition tasks.