How to call back (display) the images that were extracted before? - machine-learning

I've done feature extraction using GLCM and k-NN for classification. What I need to do now is troubleshooting: analysing why some images were classified wrongly. I want to display the nearest neighbors of each test sample, but not just as points like below:
I want to display the images that are nearest to the test image, so that it is easy to see visually why those images are near each other. Here is my problem: I don't know how to call back the images that were extracted before, since they are now represented only as arrays of numbers.
What should I do?

The KNeighborsClassifier of scikit-learn has a kneighbors method which returns the distances to, and indices of, the k nearest neighbors. It may help you find the nearest images for each test image.
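A minimal sketch of that approach, using the scikit-learn digits dataset as a stand-in for your GLCM features: keep the original images in an array parallel to the feature matrix, and the indices returned by kneighbors map straight back to pictures.

import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Stand-in data: digits.data plays the role of your GLCM feature vectors;
# the raw images are split alongside so that a row index in the training
# matrix maps back to the original picture.
digits = load_digits()
X_train, X_test, img_train, img_test, y_train, y_test = train_test_split(
    digits.data, digits.images, digits.target, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)

# kneighbors returns, for each query, the distances to and the row
# indices of its k nearest training samples.
distances, indices = knn.kneighbors(X_test[:1])

# Plot the test image next to its neighbors, recovered by index.
fig, axes = plt.subplots(1, 6, figsize=(12, 2))
axes[0].imshow(img_test[0], cmap='gray')
axes[0].set_title('test')
for ax, idx, d in zip(axes[1:], indices[0], distances[0]):
    ax.imshow(img_train[idx], cmap='gray')
    ax.set_title(f'd={d:.1f}')
for ax in axes:
    ax.axis('off')
plt.show()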

Related

Processing 20x20 images before training for machine learning

I have 10,000 example 20x20 PNG images (binary images) of triangles. My task is to build a program which predicts whether a new image is a triangle. I think I should convert each image into a 400-feature example, but I don't know the fastest way to do that conversion.
Can you show me the way?
Here is an example image.
Your question is too broad, as you don't specify which technologies you are using, but in general you need to create a vector from an array. How to do that depends on your tools; for example, if you use Python (and the NumPy library) you could use flatten():
image_array.flatten()
If you want to do it manually, you just need to concatenate every row into a single row.
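For completeness, a small sketch of the flatten() route, assuming the images are loaded with Pillow ('triangle.png' is a hypothetical filename):

import numpy as np
from PIL import Image

# Load one 20x20 binary PNG as a grayscale array and flatten it into a
# 400-element feature vector.
img = np.array(Image.open('triangle.png').convert('L'))  # shape (20, 20)
features = img.flatten()                                 # shape (400,)

# Stacking one such vector per image yields the (n_samples, 400) matrix
# that most classifiers expect.
print(features.shape)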
The previous answer is correct, yet I want to add something to it:
The example image that you provided is noisy. This is rather problematic, as you are working with only binary images. I therefore suggest preprocessing, such as a Gaussian filter or edge detection; denoising will strongly improve your algorithm's accuracy (to my knowledge). A denoising sketch follows after this answer.
One important question:
What are the other pictures showing? Do you have to separate triangles from circles? You will get much better answers if you provide more information.
Anyhow, my key message is: preprocessing is vital for image processing.
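As a rough illustration of that suggestion (not the asker's actual pipeline), one could smooth and re-threshold each binary image with SciPy:

import numpy as np
from scipy import ndimage

# Stand-in noisy binary image; replace with a real 20x20 example.
noisy = (np.random.rand(20, 20) > 0.7).astype(float)

# Gaussian smoothing blurs away isolated noise pixels; re-thresholding
# brings the image back to binary form.
smoothed = ndimage.gaussian_filter(noisy, sigma=1.0)
denoised = (smoothed > 0.5).astype(np.uint8)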

Image classification issue in adjacent images

I classified a large number of images captured by the same sensor, so the images should be similar in spectral response.
However, the result I get is different for the same class in two neighbouring images (see the images).
(image: classification result)
I don't know whether this is a problem with the classifier or with the pixel values. I checked the pixel values, which look consistent, and the classifier is well tested, but the problem is still there. Any idea where the problem might be?
Thank you

How to use SIFT for image comparison

I recently stumbled upon a SIFT implementation for C#. I thought it would be great fun to play around with it, so that's what I did.
The implementation generates a set of "interest points" for any given image. How would I actually use this information to compare two images?
What I'm after is a single "value of similarity". Can that be generated out of the two sets of interest points of the two images?
You need to run SIFT on both images so you get interest points (let's call them keypoints) in both images.
After that you need to find matches between the keypoints of the two images. There are algorithms implemented for that purpose in OpenCV.
The value of similarity can be computed from the number of matches. You can consider that if you get more than 4 matching points the images show the same object, and you can also calculate the relative rotation between them.
In short, you can use the number of matches as a similarity metric.
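The question concerns a C# implementation, but since this answer points to OpenCV, here is a hedged sketch of the pipeline in Python with OpenCV (the filenames are hypothetical; 0.75 is Lowe's conventional ratio-test threshold):

import cv2

img1 = cv2.imread('img1.png', cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread('img2.png', cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute SIFT descriptors for both images.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Brute-force matching, with Lowe's ratio test to drop ambiguous matches.
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# One crude similarity value: the match count, normalized by the smaller
# keypoint set so it stays comparable across image pairs.
similarity = len(good) / max(1, min(len(kp1), len(kp2)))
print(len(good), similarity)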

How to recognize the same image at different sizes?

As humans, we can recognize these two images as the same image:
In a computer, it is easy to recognize the two images if they are the same size, so we need a preprocessing step before recognition, such as scaling. But if we look closely at the scaling process, we see that it is not an efficient approach.
Now, could you help me find a way to convert images into representations that do not depend on size or pixel location, to serve as input for a recognition method?
Thanks in advance.
I have several ideas:
- Let the image have several color thresholds. This way you get large areas of the same color. The shapes of those areas can be traced with curves, which are mathematical objects; do this for both the larger and the smaller image and see if the curves match (see the sketch after this list).
- Try to define key spots in the image. I don't know exactly how this works, but you can look up face-detection algorithms: such an algorithm contains a mathematical model of how a face should look. If you define enough objects this way, you can detect multiple objects in the images and check whether they match in the same spots.
- You could also check whether the Predator algorithm accepts images of multiple sizes. If so, your problem is solved.
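A minimal sketch of the first idea, assuming OpenCV: trace the largest shape outline in each image and compare the outlines with cv2.matchShapes, whose Hu-moment-based metric is scale invariant (the filenames are hypothetical):

import cv2

def main_contour(path):
    # Threshold the image and return its largest outer contour.
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea)

c1 = main_contour('large.png')
c2 = main_contour('small.png')

# matchShapes compares Hu moments of the two contours; 0.0 means
# identical shapes, and the score barely changes under scaling.
score = cv2.matchShapes(c1, c2, cv2.CONTOURS_MATCH_I1, 0.0)
print(score)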
It looks like you assume that the human brain recognizes images in a computationally efficient way, which is not really true: the algorithm is so complicated that we have not figured it out yet, and a large part of your brain is devoted to processing visual data.
When it comes to software, there are some scale- (or affine-) invariant algorithms. One such algorithm is the LeNet-5 neural network.

Features extraction and classification

I'm implementing an ancient-coin recognition system. I have used contour detection to extract features of the coins, and I thought of using an SVM to train on the images.
My question is how I can give those features to the SVM. I learned that I have to save the features to a file and then feed that file to the SVM, but I have no idea how to save the features to a file.
Does saving features to a file mean saving the number of contours in the image, plus the x, y, width and height of each contour?
Can someone please help me? I have been stuck here for two months and still haven't found a solution.
Once I save the features to a file, do I have to add the coin name to the same file, or to another file?
I appreciate your help a lot.
Nadeeshani
It depends on which computer vision / image processing library you are using. For example, OpenCV has built-in SVM functionality:
http://opencv.willowgarage.com/documentation/cpp/support_vector_machines.html
so you don't even have to export the features. But LIBSVM (http://www.csie.ntu.edu.tw/~cjlin/libsvm/) has many more bindings, for example for Matlab.
As for how to give the features to the SVM: the input of most classifiers (including SVMs) is a multi-dimensional vector, so you could build one, for example, by concatenating the first 10 x-y-width-height tuples. However, this naive solution is unlikely to work, because if you change the order of the tuples (or rotate the coin so that the x-y coordinates change), you get a totally different vector. So try to design a coin-image-to-feature-vector mapping that does not change when the coin is rotated or moved, or when noise is added. (A second idea: features ordered by size, the first 5-10 of them, perhaps with shape descriptors instead of simple width/height?) A rough sketch of the naive construction is shown below.
Coin names are mostly irrelevant at this phase; use 1-of-N encoding for the SVM output.
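A hedged sketch of that naive vector construction with OpenCV and scikit-learn (all data here is stand-in; a real system needs the rotation-robust mapping described above):

import numpy as np
import cv2
from sklearn.svm import SVC

def coin_features(gray, n_contours=5):
    # Bounding boxes of the largest contours, sorted by area, concatenated
    # into one fixed-length vector; zero-padded if there are too few.
    _, binary = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    contours = sorted(contours, key=cv2.contourArea, reverse=True)
    vec = []
    for c in contours[:n_contours]:
        x, y, w, h = cv2.boundingRect(c)
        vec.extend([x, y, w, h])
    vec.extend([0] * (4 * n_contours - len(vec)))
    return np.array(vec, dtype=float)

# Stand-in data: random images and integer labels (one integer per coin
# type, not coin names); real coin photos and labels go here instead.
coin_images = [np.random.randint(0, 256, (64, 64), dtype=np.uint8)
               for _ in range(10)]
labels = [i % 2 for i in range(10)]

X = np.vstack([coin_features(img) for img in coin_images])
y = np.array(labels)

clf = SVC(kernel='rbf').fit(X, y)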
