I am working on a project to segment aerial images and classify each segment. The images are very large and contain huge homogeneous areas, so I decided to use a split-and-merge algorithm for the segmentation.
(On the left is the original image and on the right the segmented one, where each segment is represented by its mean RGB value, thanks to this answer.)
For the classification I want to use an SVM classifier (I have used it a lot in two previous projects) with a feature vector.
To begin with, I just want to use five classes: Water, Vegetation, Built-up area, Dune and Anomaly.
Now I am thinking about what I can put in this feature vector (a rough sketch of how I would compute some of these per segment follows the list):
The mean RGB value of the segment
A texture feature (but can I represent the texture of the segment with just one value?)
The position in the source image (maybe with a value that represents left, right or middle?)
The size of the segment (water segments should be much larger than built-up areas)
The mean RGB values of the segment's four neighbors (its 4-neighborhood)
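For reference, here is a rough Python sketch of how I would compute the simpler of these per segment; `image` and `labels` are placeholders for my own RGB image and the label map produced by the split-and-merge step:

```python
import numpy as np

def segment_features(image, labels):
    """Per-segment mean RGB, size and normalized centroid position.

    image:  H x W x 3 RGB array
    labels: H x W integer map, one id per segment (from split-and-merge)
    """
    feats = {}
    ys, xs = np.indices(labels.shape)
    for seg_id in np.unique(labels):
        mask = labels == seg_id
        mean_rgb = image[mask].mean(axis=0)            # mean RGB value
        size = float(mask.sum())                       # segment size in pixels
        cx = xs[mask].mean() / labels.shape[1]         # 0 = left, 1 = right
        cy = ys[mask].mean() / labels.shape[0]         # 0 = top, 1 = bottom
        feats[seg_id] = np.concatenate([mean_rgb, [size, cx, cy]])
    return feats
```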
So, has anyone done something like this and can give me some advice on what useful features I could put in the feature vector? And can someone advise me on how to correctly represent the texture of a segment?
Thank you for your help.
Instead of the Split and Merge algorithm, you could also use superpixels. There are several fast and easy-to-use superpixel algorithms available (some are even implemented in recent OpenCV versions; a minimal OpenCV sketch follows the list). To name just a few:
Compact Watershed (see here: https://www.tu-chemnitz.de/etit/proaut/forschung/rsrc/cws_pSLIC_ICPR.pdf)
preSLIC and SLIC (see here: https://www.tu-chemnitz.de/etit/proaut/forschung/rsrc/cws_pSLIC_ICPR.pdf and here: http://www.kev-smith.com/papers/SLIC_Superpixels.pdf)
SEEDS (see here: https://arxiv.org/abs/1309.3848)
ERGC (see here: https://sites.google.com/site/pierrebuyssens/code/ergc)
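If you want to try the OpenCV route, here is a minimal sketch; it assumes the opencv-contrib-python package (which ships the ximgproc module), and the file name and parameter values are just placeholders:

```python
import cv2

bgr = cv2.imread("aerial.png")                       # hypothetical input image
lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)           # SLIC usually works in Lab

slic = cv2.ximgproc.createSuperpixelSLIC(lab, algorithm=cv2.ximgproc.SLICO,
                                         region_size=30, ruler=10.0)
slic.iterate(10)                                     # refinement iterations

labels = slic.getLabels()                            # H x W map of superpixel ids
n_superpixels = slic.getNumberOfSuperpixels()
```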
Given the superpixel segmentation, there is a vast set of features you can compute in order to classify them:
In Automatic Photo Pop-Up, Table 1, Hoiem et al. consider, among others, the following features: mean RGB color, mean HSV color, color histograms, saturation histograms, textons, differently oriented Gaussian derivative filters, mean x and y location, area, ...
In Recovering Occlusion Boundaries from a Single Image, Hoiem et al. consider some features additional to the above list (see their Table 1).
In SuperParsing: Scalable Nonparametric Image Parsing with Superpixels, Tighe et al. additionally consider SIFT histograms, the segment mask downsampled to an 8 x 8 image, bounding box shape, and color thumbnails.
In Class Segmentation and Object Localization with Superpixel Neighborhoods, Fulkerson et al. also consider features from neighboring superpixels.
Based on the superpixels, you can still apply a simple merging scheme in order to reduce the number of superpixels. Simple merging by color histograms might already be useful for your task. Otherwise, you can additionally use edge information between superpixels for merging.
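As a sketch of such a merging step: scikit-image ships a region adjacency graph that merges neighbouring superpixels whose mean colors are close (mean color rather than full histograms, but the idea is the same). Depending on your scikit-image version the graph module lives in skimage.graph or skimage.future.graph, and the threshold below is an arbitrary assumption:

```python
from skimage.segmentation import slic
from skimage import graph   # older versions: from skimage.future import graph

# rgb_image is a hypothetical H x W x 3 array
labels = slic(rgb_image, n_segments=2000, compactness=10)      # initial superpixels
rag = graph.rag_mean_color(rgb_image, labels, mode='distance')  # adjacency graph
merged = graph.cut_threshold(labels, rag, thresh=29)            # merge similar neighbours
```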
You don't need to restrict yourself to just one feature vector. You could try multiple feature vectors (from the list you already have) and feed them to classifiers based on multiple kernel learning (MKL). MKL has been shown to improve performance over single-feature approaches, and one of my favourite MKL techniques is VBpMKL.
If you have time, I would suggest you try one or more of the following features, which can capture the properties of interest (a small sketch for two of them follows the list):
Haralick texture features
Histogram of oriented gradients (HOG) features
Gabor filters
SIFT
patch-wise RGB means
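As a rough sketch for two of these (Haralick and HOG), assuming the mahotas and scikit-image packages; any GLCM or HOG implementation will do, and the parameter values are just common defaults:

```python
import numpy as np
import mahotas
from skimage.feature import hog

def texture_features(gray_patch):
    """Haralick (GLCM) statistics plus a HOG descriptor for one grayscale patch.

    The patch should be cropped or resized to a fixed size so the HOG part of
    the descriptor has a constant length across patches.
    """
    # 13 Haralick statistics, averaged over the 4 co-occurrence directions
    haralick = mahotas.features.haralick(gray_patch.astype(np.uint8)).mean(axis=0)
    # HOG descriptor of the same patch
    hog_desc = hog(gray_patch, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2), feature_vector=True)
    return np.concatenate([haralick, hog_desc])
```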
Related
I have read an article about brain tumor segmentation. That article describes some methods to segment brain tumor cells from normal brain cells: pre-processing, segmentation and feature extraction. But I couldn't understand the difference between segmentation and feature extraction. I googled it too, but I still don't understand. Can anyone please explain the basic concept of these methods?
Segmentation is usually understood as the decomposition of a whole into parts. In particular, decomposing or partitioning an image into homogeneous regions.
Feature extraction is a broader concept, which can be described as finding areas with specific properties, such as corners, but it can also be any set of measurements, whether scalar, vector or other. These features are commonly used for pattern recognition and classification.
A typical processing scheme could be to segment out cells from the image, then characterize their shape by means of, say, edge-smoothness features, and finally tell normal cells from diseased ones.
Image Segmentation vs. Feature Localization
• Image Segmentation: if the Ri are the segmented regions,
1. each Ri is usually connected; all pixels in Ri are connected (8-connected or 4-connected);
2. Ri ∩ Rj = ∅ for i ≠ j; the regions are disjoint;
3. R1 ∪ R2 ∪ ... ∪ Rn = I, where I is the entire image; the segmentation is complete.
• Feature Localization: a coarse localization of image features based on proximity and compactness; more effective than image segmentation.
Feature extraction is a prerequisite for image segmentation.
When you face a project that requires segmenting a particular shape or structure in an image, one of the procedures to apply is to extract the relevant features of that region so that you can differentiate it from the other regions.
A simple, basic feature commonly used in image segmentation is intensity: you can form different groups of structures based on the intensity they show in the image (a toy illustration follows).
Feature extraction is used for classification, and relevant, significant features are used for labelling the different classes inside an image.
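As a toy illustration of intensity as a feature (the threshold values 85 and 170 and the file name are arbitrary assumptions):

```python
import cv2

gray = cv2.imread("scan.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input

# Group pixels into three coarse "structures" purely by their intensity.
dark   = gray < 85
medium = (gray >= 85) & (gray < 170)
bright = gray >= 170
```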
My problem is as follows:
I have 6 types of images, or 6 classes. For example, cat, dog, bird, etc.
For every type of image, I have many variations of that image. For example, brown cat, black dog, etc.
I'm currently using a Support Vector Machine (SVM) to classify the images using one-versus-rest classification. I'm unfolding each image into a single pixel vector and using that as the feature vector for a given image. I'm getting decent classification accuracy, but I want to try something different.
I want to use image descriptors, particularly SURF features, as the feature vector for each image. The issue is, I can only have a single feature vector per image, yet the feature extraction process gives me a variable number of SURF features. For example, one picture of a cat may give me 40 SURF features, while one picture of a dog gives me 68. I could pick the n strongest features, but I have no way of guaranteeing that the chosen SURF features are ones that describe my image (they could, for example, focus on the background). There is also no guarantee that ANY SURF features are found.
So, my problem is: how can I take many observations (each being a SURF feature vector) and "fold" them into a single feature vector that describes the raw image and can be fed to an SVM for training?
Thanks for your help!
Typically the SURF descriptors are quantized using a k-means dictionary and aggregated into one L1-normalized histogram, so your inputs to the SVM algorithm are now fixed in size.
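A minimal sketch of that bag-of-visual-words pipeline, assuming opencv-contrib built with the non-free SURF module and scikit-learn for the k-means step (swap in ORB or SIFT if SURF is unavailable in your build; the dictionary size k is an arbitrary choice):

```python
import numpy as np
import cv2
from sklearn.cluster import KMeans

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)

def surf_descriptors(image_paths):
    """One (n_i x 64) descriptor array per image (empty if nothing was found)."""
    per_image = []
    for path in image_paths:
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, desc = surf.detectAndCompute(img, None)
        per_image.append(desc if desc is not None else np.empty((0, 64), np.float32))
    return per_image

def bow_histograms(image_paths, k=200):
    """Quantize all descriptors with k-means, then build one L1-normalized
    k-bin histogram ("bag of visual words") per image."""
    per_image = surf_descriptors(image_paths)
    kmeans = KMeans(n_clusters=k, n_init=5).fit(np.vstack(per_image))
    hists = []
    for desc in per_image:
        hist = np.zeros(k, np.float32)
        if len(desc):
            words, counts = np.unique(kmeans.predict(desc), return_counts=True)
            hist[words] = counts
            hist /= hist.sum()
        hists.append(hist)
    return np.array(hists), kmeans   # the histograms are the fixed-size SVM inputs
```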
I have a lot of images of paper cards showing different shades of colors, like all blues or all reds. In the images, they are held up to different objects that are of that color.
I want to write a program to compare the color to the shades on the card and choose the closest shade to the object.
However, I realize that future images will be taken under many different lighting conditions, so I think I should convert to HSV space.
I'm also unsure of what type of distance measure I should use. Given some sort of blobs from the cards, I could average over the HSV and simply see which blob's average is the closest.
But I welcome any and all suggestions; I want to learn more about what I can do with OpenCV.
EDIT: A sample
Here I want to compare the filled-in red of the 6th dot to see that it is actually the shade of the 3rd paper rectangle.
I think one possibility is to do the following:
Color histograms from Hue and Saturation channels
Compute the color histogram of the filled circle.
Compute the color histogram of the paper bar.
Compute a distance using histogram distance measures.
Possibilities here include:
Chi-square,
Earth mover's distance,
Bhattacharyya distance,
Histogram intersection, etc.
Check this opencv link for details on computing histograms
Check this opencv link for details on the histogram comparisons
Note that when computing the color histograms, convert your images to HSV colorspace as you yourself suggested. Then, there are two things to note here.
[EDITED to make this a suggestion rather than a must-do, because I believe the V channel might be necessary to differentiate the shades. Anyhow, try both and go with the one giving the better result. Apologies if this sent you off track.] One possibility is to use only the Hue and Saturation channels, i.e. you build a 2D histogram rather than a 3D one, consisting of values from the hue and saturation channels. The reason for doing so is that the variation in lighting is felt most in the V channel. This, together with the use of histograms, should hopefully make your comparisons more robust to lighting changes. There is some discussion on ignoring the V channel when building color histograms in this post here. You might find the references therein useful.
Normalize the histograms using the OpenCV functions. This is to account for the different sizes of the patches of material (your small circle and the huge color bar have different numbers of pixels).
You might also wish to consider performing some form of preprocessing to "stretch" the color in the image e.g. using histogram equalization or an "S curve" mapping so that the different shades of color get better separated. Then compute the color histograms on this processed image. Keep the information for the mapping and perform it on new test samples before computing their color histograms.
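Putting the steps above together, a minimal sketch might look like this. circle_patch and card_patches are hypothetical, pre-cropped BGR images, and the bin counts and the choice of chi-square are assumptions:

```python
import cv2

def hs_histogram(bgr_patch, h_bins=30, s_bins=32):
    """2D Hue-Saturation histogram, L1-normalized (the V channel is dropped)."""
    hsv = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [h_bins, s_bins], [0, 180, 0, 256])
    cv2.normalize(hist, hist, alpha=1.0, norm_type=cv2.NORM_L1)
    return hist

# Compare the filled circle against each paper rectangle; for chi-square,
# lower means more similar (for correlation/intersection it is the opposite).
circle_hist = hs_histogram(circle_patch)
scores = [cv2.compareHist(circle_hist, hs_histogram(p), cv2.HISTCMP_CHISQR)
          for p in card_patches]
best_match = min(range(len(scores)), key=scores.__getitem__)
```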
Using ML for classification
Besides simply computing the distance and taking the closest one (i.e. a 1-nearest-neighbor classifier), you might want to consider training a classifier to do the classification for you. One reason is that the classifier can hopefully learn a way to differentiate between the different shades, since it has access to them during the training phase and is required to tell them apart. Notice that simply computing a distance, as in your suggested method, may not have this property. Hopefully this will give better classification.
The features used in training can still be the color histograms mentioned above. That is, you compute color histograms as described above for your training samples and pass them to the classifier along with their class (i.e. which shade they are). Then, when you wish to classify a test sample, you likewise compute a color histogram, pass it to the classifier, and it returns the class (the shade of color, in your case) that the test sample belongs to.
Potential problems with training a classifier, rather than using a simple distance-comparison-based approach as you have suggested, are the added complexity of the program as well as potentially bad results when the training data is not good. There is also going to be a lot of parameter tuning involved to get it to work well.
See the OpenCV machine learning tutorials here for more details. Note that in the examples in the link, the classifier only differentiates between 2 classes, whereas you have more than 2 shades of color. This is not a problem, as classifiers in general can work with more than 2 classes.
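For completeness, a minimal sketch of the classifier route, using scikit-learn's SVC instead of OpenCV's ML module and reusing hs_histogram() from the sketch above (train_patches, train_labels and test_patch are hypothetical):

```python
import numpy as np
from sklearn.svm import SVC

# One flattened Hue-Saturation histogram per labelled training patch
X_train = np.array([hs_histogram(p).ravel() for p in train_patches])
y_train = np.array(train_labels)               # one shade id per patch

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_train, y_train)

# Predicted shade of an unseen patch
shade = clf.predict(hs_histogram(test_patch).ravel()[np.newaxis, :])[0]
```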
Hope this helps.
Here is the problem we are trying to solve:
The goal is to classify the pixels of a colored image into 3 different classes.
We have a set of manually classified data for training purposes.
Pixels hardly correlate with each other (each behaves individually), so most likely classification is per pixel and based on each pixel's individual features.
The 3 classes can approximately be mapped to the RED, YELLOW and BLACK color families.
We need the system to be semi-automatic, i.e. 3 parameters to control the probability of the presence of the 3 outcomes (for final fine-tuning).
Having this in mind:
Which classification technique will you choose?
What pixel features will you use for classification (RGB, Ycc, HSV, etc.)?
What modification functions will you choose for fine-tuning between the three outcomes?
My first try was based on:
a Naive Bayes classifier
HSV (I also tried RGB and Ycc)
(I failed to find proper functions for the fine-tuning)
Any suggestion?
Thanks
For each pixel in the image, try using the color histogram of the n x n window around that pixel as its features. For general-purpose color matching under varied lighting conditions, I have had good luck with two-dimensional histograms of hue and saturation with a relatively small number of bins along each dimension. Depending upon your lighting consistency, it might make sense for you to use the RGB values directly.
As for the classifier, the manual-tuning requirement is most easily expressed using class weights: parameters that specify the relative costs of false negatives versus false positives. I have only used this functionality with SVMs, but I'm sure you can find implementations of other classifiers that support a similar concept.
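As a sketch of the class-weight idea with scikit-learn's SVC (an assumption; the point is only that such a knob exists, and the weights below are hypothetical tuning values):

```python
from sklearn.svm import SVC

# Raising a class's weight makes misclassifying that class more costly,
# which gives you one tuning parameter per class/outcome.
weights = {0: 1.0, 1: 2.5, 2: 0.8}     # hypothetical costs for the 3 classes
clf = SVC(kernel="rbf", class_weight=weights)
clf.fit(X_train, y_train)              # X_train: per-pixel window-histogram features
```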
What are the ways to quantify the texture of a portion of an image? I'm trying to detect areas in an image that are similar in texture, i.e. some measure of how closely similar they are.
So the question is what information about the image (edge, pixel value, gradient etc.) can be taken as containing its texture information.
Please note that this is not based on template matching.
Wikipedia didn't give much detail on actually implementing any of the texture analyses.
Do you want to find two distinct areas in the image that look the same (same texture), or match a texture in one image to a texture in another?
The second is harder due to different radiometry.
Here is a basic scheme of how to measure similarity of areas.
You write a function that takes an area of the image as input and calculates a scalar value, like average brightness. This scalar is called a feature.
You write more such functions to obtain about 8-30 features, which together form a vector that encodes information about the area of the image.
Calculate such a vector for both areas that you want to compare.
Define a similarity function which takes two vectors and outputs how alike they are.
You need to focus on steps 2 and 4.
Step 2: use features such as the standard deviation of brightness, some kind of corner detector, an entropy filter, a histogram of edge orientations, and a histogram of FFT frequencies (in the x and y directions). Use color information if available.
Step 4: you can use cosine similarity, min-max or weighted cosine.
After you implement about 4-6 such features and a similarity function, start to run tests. Look at the results and try to understand why or where it doesn't work. Then add a specific feature to cover that case.
For example, if you see that a texture with big blobs is regarded as similar to a texture with tiny blobs, then add a morphological filter that calculates the density of objects larger than 20 square pixels.
Iterate this process of identifying a problem and designing a specific feature about 5 times, and you will start to get very good results.
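A minimal sketch of steps 2 and 4, with a handful of illustrative features; the particular choices, and the use of OpenCV and SciPy, are assumptions:

```python
import numpy as np
import cv2
from scipy.stats import entropy

def area_features(gray):
    """Step 2: a small feature vector for one grayscale area (uint8)."""
    hist = cv2.calcHist([gray], [0], None, [32], [0, 256]).ravel()
    hist /= hist.sum()
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    return np.array([
        gray.mean(),           # average brightness
        gray.std(),            # spread of brightness
        entropy(hist),         # entropy of the intensity histogram
        np.abs(gx).mean(),     # horizontal edge energy
        np.abs(gy).mean(),     # vertical edge energy
    ])

def cosine_similarity(a, b):
    """Step 4: how alike two feature vectors are (1 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```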
I'd suggest using wavelet analysis. Wavelets are localized in both time and frequency, and multiresolution analysis gives a better signal representation than the Fourier transform does.
There is a paper explaining a wavelet approach to texture description. There is also a comparison method.
You might need to slightly modify an algorithm to process images of arbitrary shape.
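As a hedged sketch of the general idea (not the specific method from the linked paper), one common wavelet texture descriptor is the energy of each subband; PyWavelets is assumed here, and the wavelet family and number of levels are arbitrary choices:

```python
import numpy as np
import pywt

def wavelet_energy_features(gray, wavelet="db2", level=3):
    """Texture descriptor: mean energy of each wavelet subband."""
    coeffs = pywt.wavedec2(gray.astype(np.float32), wavelet, level=level)
    feats = [np.mean(np.square(coeffs[0]))]          # approximation subband energy
    for cH, cV, cD in coeffs[1:]:                    # detail subbands per level
        feats.extend(np.mean(np.square(c)) for c in (cH, cV, cD))
    return np.array(feats)
```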
An interesting approach for this is to use Local Binary Patterns (LBP).
Here is a basic example and some explanations: http://hanzratech.in/2015/05/30/local-binary-patterns.html
See that method as one of the many different ways to get features from your pictures. It corresponds to the 2nd step of DanielHsH's method.
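For completeness, a minimal LBP-histogram sketch with scikit-image, mirroring the linked tutorial; P and R below are the usual small values, not requirements:

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray, P=8, R=1):
    """L1-normalized histogram of uniform LBP codes for one grayscale patch."""
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    n_bins = P + 2                     # "uniform" LBP yields P + 2 distinct codes
    hist, _ = np.histogram(lbp.ravel(), bins=n_bins, range=(0, n_bins))
    return hist / hist.sum()
```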