Bag of Visual Words in OpenCV

I am using BoW (Bag of Words) in OpenCV to cluster features of variable size. However, one thing is not clear from the OpenCV documentation, and I have been unable to find the reason for it elsewhere:
assume: dictionary size = 100.
I use SURF to compute the features, and each image has descriptors of variable size, e.g. 128 x 34, 128 x 63, etc. Now in BoW each of them is clustered and I get a fixed descriptor size of 128 x 100 for an image. I know 100 is the number of cluster centers created using k-means clustering.
But I am confused: if an image has a 128 x 63 descriptor matrix, how can it be clustered into 100 clusters, which is impossible with k-means UNLESS I convert the descriptor matrix to 1D? Won't converting to 1D lose the valid 128-dimensional information of a single keypoint?
I need to know how the descriptor matrix is manipulated to get 100 cluster centers from only 63 features.

Think of it like this.
You have 10 cluster means in total and 6 features for the current image. The first 3 of those features are closest to the 5th mean, and the remaining 3 are closest to the 7th, 8th and 9th means respectively. Then your feature vector will be [0, 0, 0, 0, 3, 0, 1, 1, 1, 0], or a normalized version of it. That is 10-dimensional, which equals the number of cluster means. So you can create a 100000-dimensional vector from 63 features if you want.
But I still think there is something wrong, because after you apply BoW your feature should be 1x100, not 128x100. Your cluster means are 128x1 and you are assigning your 128x1 features (you have 34 128x1 features for the first image, 63 for the second, etc.) to those means. So basically you are assigning 34 or 63 features to 100 means, and your result should be 1x100.
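To make that assignment step concrete, here is a minimal NumPy sketch of the idea. The cluster centers are random stand-ins just to show the shapes; in practice they would come from k-means over all training descriptors (e.g. via OpenCV's BOWKMeansTrainer):
import numpy as np

K = 100                            # dictionary (cluster) size
centers = np.random.rand(K, 128)   # stand-in for the k-means vocabulary, K x 128

descriptors = np.random.rand(63, 128)   # 63 SURF descriptors of one image, each 128-D

# Assign every descriptor to its nearest cluster center
dists = np.linalg.norm(descriptors[:, None, :] - centers[None, :, :], axis=2)  # 63 x 100
nearest = dists.argmin(axis=1)                                                 # 63 labels in [0, 99]

# Count how many descriptors fall into each of the 100 bins -> one 1 x 100 histogram
bow = np.bincount(nearest, minlength=K).astype(np.float32)
bow /= bow.sum()    # optional normalization
print(bow.shape)    # (100,)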

Related

Implementing a neural network classifier for my data, but is it solvable this way?

I will try to explain what the problem is.
I have 5 materials, each composed of 3 minerals drawn from a set of 10 different minerals. For each material I have measured intensity vs wavelength, and each intensity-vs-wavelength vector can be mapped to a binary vector of ones and zeros corresponding to the minerals the material is composed of.
So material 1 has an intensity of [0.51 0.53 0.57 0.68...... ] measured at different wavelengths [470 480 490 500 510 ......] and a binary vector
[1 0 0 0 1 0 0 1 0 0]
and so on for each material.
For each material I have 5000 examples, so 25000 examples in total. Each example has a 'similar' intensity-vs-wavelength behaviour but gives the 'same' binary vector.
I want to design a NN classifier so that if I give it as an input the intensity vs wavelength, it gives me the corresponding binary vector.
The intensity-vs-wavelength vector has length 450, so I will have 450 units in the input layer;
the binary vector has length 10, so 10 output neurons;
the hidden layer(s) will start with 200 neurons.
Can I simply design a NN classifier this way, and would it solve the problem, or do I need something else?
You can do that; however, take care to use the right cost function and output-layer activation. In your case, you should use sigmoid units for your output layer and binary cross-entropy as the cost function.
Another way to go about this would be to use one-hot encoding so that you can use ordinary multi-class classification (which will probably not make sense here, since your output is a sparse multi-label vector rather than a single class).
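A minimal sketch of that setup in Keras; the layer sizes come from the question, while everything else (ReLU hidden units, the Adam optimizer, the batch size) is just a reasonable default, not a prescription:
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(200, activation="relu", input_shape=(450,)),  # hidden layer from the question
    keras.layers.Dense(10, activation="sigmoid"),                    # one independent probability per mineral
])

# Binary cross-entropy treats each of the 10 outputs as its own yes/no decision,
# which is what a multi-label target like [1 0 0 0 1 0 0 1 0 0] requires.
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["binary_accuracy"])

# X: (25000, 450) intensity-vs-wavelength vectors, Y: (25000, 10) binary mineral vectors
# model.fit(X, Y, epochs=20, batch_size=64, validation_split=0.2)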

How to perform output binarization of a Torch model

I have to binarize the output o of a Torch model (Lua script). The value range is [-1,+1], and I want to threshold those values such that:
0 if o[i] < 0
1 if o[i] >= 0
The output is composed of 32 layers with 1x1 float tensors, so 32 floats; I want to get 32 bits from those 32 floats, but I cannot find a layer that does that.
At the moment I have a for loop that checks the value of each level, but it is very slow.
Maybe I can use the Threshold layer or implement one on my own. Do you have any advice?
You can use the 'greater than or equal to' operator: https://github.com/torch/torch7/blob/master/doc/maths.md#torchgea-b
local threshold_tensor = o:ge(0)
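The line above gives a 0/1 tensor in Lua Torch. If the goal is literally to pack those 32 zeros and ones into one 32-bit value, here is a sketch of that packing step in Python/NumPy, purely to illustrate the idea (the bit order is an arbitrary choice):
import numpy as np

o = np.random.uniform(-1.0, 1.0, size=32).astype(np.float32)  # hypothetical model output in [-1, +1]

bits = (o >= 0).astype(np.uint8)   # same thresholding as o:ge(0)

# Pack the 32 bits into a single integer, least significant bit first
code = 0
for i, b in enumerate(bits):
    code |= int(b) << i
print(format(code, "032b"))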

VLFeat: computation of number of octaves for SIFT

I am trying to go through and understand some of VLFeat code to see how they generate the SIFT feature points. One thing that has me baffled early on is how they compute the number of octaves in their SIFT computation.
According to the documentation, if one provides a negative value for the initial number of octaves, it will compute the maximum, which is given by log2(min(width, height)). The corresponding code is:
if (noctaves < 0) {
  noctaves = VL_MAX (floor (log2 (VL_MIN(width, height))) - o_min - 3, 1) ;
}
This code is in the vl_sift_new function. Here o_min is supposed to be the index of the first octave (I guess one does not need to start with the full-resolution image). I am assuming this can be set to 0 in most use cases.
Still, I do not understand why they subtract 3 from this value. It seems very confusing. I am sure there is a good reason, but I have not been able to figure it out.
The reason they subtract 3 is to ensure a minimum size for the patch you're looking at, so that you get some appreciable output. When analyzing patches and extracting features, depending on the algorithm, there is a minimum patch size the feature detection needs to produce a good output, and subtracting 3 ensures that this minimum patch size is met once you get to the lowest octave.
Let's take a numerical example. Say we have a 64 x 64 patch. We know that at each octave the sizes of both dimensions are divided by 2. Therefore, taking the log2 of the smaller of the rows and columns theoretically gives you the number of possible halvings, as you noticed in the code above. In our case log2(64) = 6, so counting the original resolution as well there are 7 possible octaves. They are arranged like so:
Octave |  Size
-------+---------
   1   | 64 x 64
   2   | 32 x 32
   3   | 16 x 16
   4   |  8 x 8
   5   |  4 x 4
   6   |  2 x 2
   7   |  1 x 1
However, looking at octaves 5, 6 and 7 will probably not give you anything useful, so there's no point in analyzing them. Therefore, by subtracting 3 from the total number of octaves, we stop at octave 4, and the smallest patch analyzed is 8 x 8.
This subtraction is commonly performed when looking at scale spaces in images because it enforces that the last octave is still a reasonable size for analyzing features. The number 3 is somewhat arbitrary; I've seen people subtract 4 or even 5, but in most feature-detection code I have seen, 3 is the most widely used value. After all, it wouldn't make much sense to look at an octave whose size is 1 x 1, right?
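For what it's worth, here is the C line above transcribed into Python, just to see the numbers it produces (the function name is mine):
import math

def sift_noctaves(width, height, o_min=0):
    # noctaves = max(floor(log2(min(width, height))) - o_min - 3, 1)
    return max(int(math.floor(math.log2(min(width, height)))) - o_min - 3, 1)

# e.g. a 640x480 image with o_min = 0:
# floor(log2(480)) = 8, so 8 - 0 - 3 = 5 octaves (480 -> 240 -> 120 -> 60 -> 30)
print(sift_noctaves(640, 480))   # 5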

How to distinguish photo from picture?

I have the following problem:
I am given a set of images and I need to divide them into photos and pictures (graphics) by means of the OpenCV library.
I've already tried
to analyze the RGB histogram (on average a picture has many empty histogram bins),
to analyze the HSV histogram (on average a picture does not have many colors),
to search for contours (on average the number of contours in a picture is smaller than in a photo).
With this I get a 7% error rate (tested on 2000 images). I'm a little confused, because I don't have much experience with the many computer vision techniques available.
For example, take the photo below. Its histograms (RGB and HSV) are very poor and the number of contours is rather small. In addition, there is a lot of background color, so I need to find the object and compute the histogram over it alone (I use findContours() for this). But in any case my algorithm classifies this image as a picture.
And one more example:
The problem with pictures is noise. My images are small (200*150) and in some cases the noise is so noticeable that my algorithm classifies the image as a photo. I've tried blurring the images, but then the number of colors increases because of pixel mixing, and the number of contours also decreases (some dim boundaries become indistinguishable).
Example of pictures:
I've also tried color segmentation and MSER, but my best result is still a 7% error rate.
Could you advise me on other methods I could try?
I've used your dataset to create some really simple models. To do this, I used the Rattle library in R.
Input data
rgbh1 - number of bins in the RGB histogram whose value > #param#, in my case #param# = 30 (340 is the maximum value)
rgbh2 - number of bins in the RGB histogram whose value > 0 (not empty)
hsvh1 - number of bins in the HSV histogram whose value > #param#, in my case #param# = 30 (340 is the maximum value)
hsvh2 - number of bins in the HSV histogram whose value > 0 (not empty)
countours - number of contours in the image
PicFlag - flag indicating picture/photo (picture = 1, photo = 0)
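The feature-extraction code itself is not given, so the following OpenCV (Python) sketch is only a rough reconstruction of how such features could be computed; the bin counts, the Canny thresholds, and the contour retrieval mode are my assumptions, not what was actually used:
import cv2
import numpy as np

def extract_features(img_bgr, param=30):
    # RGB histogram: 32 bins per channel, concatenated (bin count is an assumption)
    rgb_hist = np.concatenate([
        cv2.calcHist([img_bgr], [c], None, [32], [0, 256]).ravel() for c in range(3)
    ])
    # HSV histogram: 32 bins per channel (H range is 0..180 in OpenCV)
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    hsv_hist = np.concatenate([
        cv2.calcHist([hsv], [0], None, [32], [0, 180]).ravel(),
        cv2.calcHist([hsv], [1], None, [32], [0, 256]).ravel(),
        cv2.calcHist([hsv], [2], None, [32], [0, 256]).ravel(),
    ])
    # Contour count on an edge map (OpenCV 4.x findContours signature)
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    return {
        "rgbh1": int((rgb_hist > param).sum()),
        "rgbh2": int((rgb_hist > 0).sum()),
        "hsvh1": int((hsv_hist > param).sum()),
        "hsvh2": int((hsv_hist > 0).sum()),
        "countours": len(contours),
    }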
Data exploration
To better understand your data, here is a plot of the distribution of the individual variables by picture/photo group (the y axis shows percentages):
It clearly shows that there are variables with predictive power. Most of them can be used in our model. Next I created a simple scatter plot matrix to see whether some combination of variables might be useful:
You can see that, for example, the combination of countours and rgbh1 looks promising.
On the following chart you can also notice quite a strong correlation among the variables. (Generally we like to have many variables with low correlation, while here there are only a few, mutually correlated ones.) The pie chart shows how strong each correlation is: a full circle means 1, an empty circle means 0. In my opinion, if a correlation exceeds 0.4 it might not be a good idea to keep both variables in the model.
Model
Then I created simple models (keeping Rattle's defaults) using a decision tree, random forest, logistic regression, and a neural network. As input I used your data with a 60/20/20 split (training, validation, testing). These are my results (please refer to Google if you don't understand the error matrix):
Error matrix for the Decision Tree model on pics.csv [validate] (counts):

       Predicted
Actual     0    1
     0   167   22
     1     6  204

Error matrix for the Decision Tree model on pics.csv [validate] (%):

       Predicted
Actual     0    1
     0    42    6
     1     2   51

Overall error: 0.07017544
Rattle timestamp: 2013-01-02 11:35:40
======================================================================
Error matrix for the Random Forest model on pics.csv [validate] (counts):

       Predicted
Actual     0    1
     0   170   19
     1     8  202

Error matrix for the Random Forest model on pics.csv [validate] (%):

       Predicted
Actual     0    1
     0    43    5
     1     2   51

Overall error: 0.06766917
Rattle timestamp: 2013-01-02 11:35:40
======================================================================
Error matrix for the Linear model on pics.csv [validate] (counts):

       Predicted
Actual     0    1
     0   171   18
     1    13  197

Error matrix for the Linear model on pics.csv [validate] (%):

       Predicted
Actual     0    1
     0    43    5
     1     3   49

Overall error: 0.07769424
Rattle timestamp: 2013-01-02 11:35:40
======================================================================
Error matrix for the Neural Net model on pics.csv [validate] (counts):

       Predicted
Actual     0    1
     0   169   20
     1    15  195

Error matrix for the Neural Net model on pics.csv [validate] (%):

       Predicted
Actual     0    1
     0    42    5
     1     4   49

Overall error: 0.0877193
Rattle timestamp: 2013-01-02 11:35:40
======================================================================
Results
As you can see, the overall error rate oscillates between 6.5% and 8%. I do not think this result can be significantly improved by tuning the parameters of the methods used. There are two ways to decrease the overall error rate:
add more uncorrelated variables (we usually have 100+ input variables in the modeling dataset, of which roughly 5-10 end up in the final model)
add more data (we can then tune the model without being afraid of overfitting)
Used software:
R http://www.r-project.org/
Rattle http://rattle.togaware.com/
Code used to create the corrgram and scatterplot (other outputs were generated using the Rattle GUI):
# install.packages("lattice", dependencies=TRUE)
# install.packages("car")
# install.packages("corrgram")   # corrgram() lives in this package
library(lattice)
library(car)
library(corrgram)

setwd("C:/")
indata <- read.csv2("pics.csv")
str(indata)

# Corrgram
corrgram(indata, order=TRUE, lower.panel=panel.shade,
         upper.panel=panel.pie, text.panel=panel.txt,
         main="Picture/Photo correlation matrix")

# Scatterplot matrices
attach(indata)
scatterplotMatrix(~rgbh1+rgbh2+hsvh1+hsvh2+countours|PicFlag,
                  main="Picture/Photo scatterplot matrix",
                  diagonal=c("histogram"), legend.plot=TRUE, pch=c(1,1))
Well, a generic suggestion would be to increase the number of features (or get better features) and to build a classifier on those features, trained with an appropriate machine learning algorithm. OpenCV already has a couple of good machine learning algorithms you can make use of.
I have never worked on this problem, but a quick Google search led me to this paper by Cutzu et al.: Distinguishing paintings from photographs.
One feature that should be useful is the gradient histogram. Natural images have a particular distribution of gradient strengths.
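A small OpenCV (Python) sketch of such a gradient-magnitude histogram feature, as one possible starting point; the bin count and the Sobel kernel size are arbitrary choices:
import cv2
import numpy as np

def gradient_histogram(img_bgr, bins=16):
    # Histogram of gradient magnitudes; photos and graphics tend to
    # distribute their gradient strengths differently.
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    mag = cv2.magnitude(gx, gy)
    hist, _ = np.histogram(mag, bins=bins, range=(0, float(mag.max()) + 1e-6))
    return hist / hist.sum()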

How are matrices stored in memory?

Note - may be more related to computer organization than software, not sure.
I'm trying to understand something related to data compression, say for JPEG photos. Essentially a very dense matrix is converted (via discrete cosine transforms) into a much sparser matrix. Supposedly it is this sparse matrix that is stored. Take a look at this link:
http://en.wikipedia.org/wiki/JPEG
Compare the original 8x8 sub-block image example to matrix "B", which has been transformed to have overall lower-magnitude values and many more zeros throughout. How is matrix B stored in such a way that it saves much more memory than the original matrix?
The original matrix clearly needs 8x8 (number of entries) x 8 bits/entry since values can range randomly from 0 to 255. OK, so I think it's pretty clear we need 64 bytes of memory for this. Matrix B on the other hand, hmmm. Best case scenario I can think of is that values range from -26 to +5, so at most an entry (like -26) needs 6 bits (5 bits to form 26, 1 bit for sign I guess). So then you could store 8x8x6 bits = 48 bytes.
The other possibility I see is that the matrix is stored in a "zig zag" order from the top left. Then we can specify a start and an end address and just keep storing along the diagonals until we're only left with zeros. Let's say it's a 32-bit machine; then 2 addresses (start + end) will constitute 8 bytes; for the other non-zero entries at 6 bits each, say, we have to go along almost all the top diagonals to store a sum of 28 elements. In total this scheme would take 29 bytes.
To summarize my question: if JPEG and other image encoders claim to save space by using algorithms to make the image matrix less dense, how is this extra space actually realized on my hard disk?
Cheers
The DCT needs to be accompanied by other compression schemes that take advantage of the zeros and the frequently occurring values. A simple example is run-length encoding.
JPEG uses a variant of Huffman coding.
As it says under "Entropy coding", a zig-zag pattern is used together with RLE, which already reduces the size in many cases. However, as far as I know the DCT does not give a sparse matrix per se; rather, it decorrelates the data so that the subsequent entropy coding works much better. This is also where the compression becomes lossy: the input matrix is transformed with the DCT, then the values are quantized (the lossy step), and then Huffman encoding is used.
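To illustrate the zig-zag ordering mentioned above, here is a small Python sketch that generates the traversal order for an 8x8 block (real codecs use a precomputed lookup table; computing it on the fly is just for clarity):
import numpy as np

def zigzag_indices(n=8):
    # Walk the n x n block along anti-diagonals, alternating direction, so the
    # low-frequency coefficients come first and the trailing zeros cluster at the end.
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

block = np.arange(64).reshape(8, 8)
print([int(block[r, c]) for r, c in zigzag_indices()][:10])
# -> [0, 1, 8, 16, 9, 2, 3, 10, 17, 24]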
The simplest compression takes advantage of repeated runs of symbols (zeros). A matrix in memory may look like this (assume decimal):
0000000000000100000000000210000000000004301000300000000004
After compression it may look like this
(0,13)1(0,11)21(0,12)43010003(0,11)4
(Symbol,Count)...
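A tiny Python sketch of the same idea, collapsing only runs of at least four zeros so that short runs stay literal (the threshold and the output notation are arbitrary choices for illustration):
def rle_zeros(s, min_run=4):
    # Replace runs of '0' of length >= min_run with a (0,count) marker;
    # copy every other symbol (and short zero runs) through unchanged.
    out, i = [], 0
    while i < len(s):
        if s[i] == '0':
            j = i
            while j < len(s) and s[j] == '0':
                j += 1
            run = j - i
            out.append("(0,%d)" % run if run >= min_run else s[i:j])
            i = j
        else:
            out.append(s[i])
            i += 1
    return "".join(out)

print(rle_zeros("0000000000000100000000000210000000000004301000300000000004"))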
As I understand it, JPEG does not only compress, it also drops data. After the 8x8 block is transformed to the frequency domain, the insignificant (high-frequency) data is dropped, which means only the significant 6x6 or even 4x4 portion has to be saved. That is why it can achieve a higher compression rate than lossless methods (like GIF).
