I would like to decide whether an image is present in a list stored in a DB (e.g. pictures of IDs, passports, student cards, etc.). I thought about using a KNN algorithm that returns the K closest images.
Options for the distance metric:
sum of the Euclidean distances between corresponding pixels (img1[pixel_i], img2[pixel_i])
sum of the Euclidean distances between every pixel and every other pixel, multiplied by some factor that decreases with the spatial distance (pixel to pixel)
same as above, but with the Manhattan distance...
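For the first and third options, a minimal sketch (assuming both images are equal-sized grayscale numpy arrays):

import numpy as np

def pixelwise_distances(img1, img2):
    # img1, img2: equal-sized grayscale images as 2D numpy arrays
    d = img1.astype(np.float64) - img2.astype(np.float64)
    euclidean = np.sqrt(np.sum(d ** 2))  # L2 norm over corresponding pixels
    manhattan = np.sum(np.abs(d))        # L1 norm over corresponding pixels
    return euclidean, manhattan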
Do you know of a better way to deal with the image similarity problem?
I think that using raw graylevel values in computing distances is a very bad idea. This is not invariant to illumination, to translation and to rotation (although I don't think that rotation is a big issue in face images).
Try using a robust, invariant descriptor extracted from each image (e.g. SIFT on keypoints) and then compute distances between those features. K-NN could work on top of that. Alternatively, look into the image retrieval literature for more advanced approaches.
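A rough sketch of that route (assuming opencv-python 4.4+ where SIFT is included; the file names are hypothetical):

import cv2

# Hypothetical file names, purely for illustration.
img1 = cv2.imread("query.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("db_image.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()                       # opencv-python >= 4.4
kp1, des1 = sift.detectAndCompute(img1, None)  # keypoints and 128-dim descriptors
kp2, des2 = sift.detectAndCompute(img2, None)

# Keep only matches that pass Lowe's ratio test.
matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# A crude similarity score: the fraction of query keypoints with a good match.
print(len(good) / max(len(kp1), 1))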
Hope this helps!
If you have a large number of images in your database, it will get rather unwieldy to calculate the similarity between a given image and every single image in your database every time. Instead, I would consider something like a Perceptual Hash (pHash), where you pre-compute a hash ONCE for each image in your database and store it; then, when you want to compare an image, you calculate just its single pHash and compare that with all the stored hashes in your database.
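As a sketch with the third-party imagehash package (the paths and the distance threshold below are assumptions):

import imagehash
from PIL import Image

# Pre-compute ONCE per database image and store the hashes (db_paths assumed).
db_hashes = {p: imagehash.phash(Image.open(p)) for p in db_paths}

# At query time, hash the incoming image once and compare with the store.
query_hash = imagehash.phash(Image.open("query.png"))
for p, h in db_hashes.items():
    if query_hash - h <= 10:  # Hamming distance; threshold needs tuning
        print("possible match:", p)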
Related
I want to compare the similarity between different images. I know there are several methods for making comparisons, but in my case all images are preprocessed by a ResNet, so my data set has shape (N, 1000), where N is the number of images and 1000 is the length of each image's feature vector.
How can I measure the similarity among different images? Can I use Euclidean distance to measure it?
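Both Euclidean distance and cosine similarity are standard on such embedding vectors; a minimal sketch, assuming features holds the (N, 1000) numpy array:

import numpy as np
from scipy.spatial.distance import cdist

# features: (N, 1000) array of ResNet feature vectors
euclidean = cdist(features, features, metric="euclidean")      # (N, N) distances
cosine_sim = 1.0 - cdist(features, features, metric="cosine")  # (N, N) similarities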
What kind of images are they? Grayscale? Binary masks? What characteristics do you want to take into account? Based on your initial post it is hard to grasp what you want to accomplish in detail.
If you want to measure exact (pixel-level) overlap, you can use the Dice coefficient.
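For two binary masks of the same shape, a minimal sketch:

import numpy as np

def dice_coefficient(mask1, mask2):
    # mask1, mask2: boolean numpy arrays of the same shape
    intersection = np.logical_and(mask1, mask2).sum()
    total = mask1.sum() + mask2.sum()
    return 2.0 * intersection / total if total > 0 else 1.0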
I'm trying to develop an algorithm which returns a similarity score for two given black-and-white images: the original one and its sketch, drawn by a human:
All the original images have the same style, but there is no given, limited set of them. Their content could be totally different.
I've tried a few approaches, but none of them has been successful yet:
OpenCV template matching
OpenCV's matchTemplate is not able to calculate a similarity score for the images. It only tells me the count of matched pixels, and this value is usually quite low, because the proportions of the human's sketch are not exact.
OpenCV feature matching
I failed with this method because I couldn't find good algorithms for extracting significant features from a human's sketch. The algorithms from OpenCV's tutorials are good at extracting corners and blobs as features. But here, in sketches, we have a lot of strokes, each of which produces a lot of insignificant, junk features and leads to fuzzy results.
Neural Network Classification
I also took a look at neural networks. They are good at image classification, but they need a training set for each class, and that part is impossible here because we have an unlimited set of possible images.
Which methods and algorithms would you use for this kind of task?
METHOD 1
Cosine similarity gives a similarity score between 0 and 1 here (for non-negative inputs like pixel intensities; in general it ranges from -1 to 1).
I first converted the images to grayscale and binarized them. I cropped the original image to half its size and excluded the text, as shown below:
I then converted the image arrays to 1D arrays using flatten(). I used the following to compute the cosine similarity (note that scipy's cosine() returns the cosine distance, so the similarity is 1 minus that value):
from scipy import spatial

# im1, im2: flattened 1D arrays of the two binarized images
result = 1 - spatial.distance.cosine(im1, im2)  # cosine() returns a distance
print(result)
The result I obtained was 0.999999988431, meaning the images are similar to each other by this score.
EDIT
METHOD 2
I had time to check out another solution. I figured out that OpenCV's cv2.matchTemplate() function performs a similar job.
If you check out THIS DOCUMENTATION PAGE you will come across the different parameters used.
I used the cv2.TM_SQDIFF_NORMED parameter (which gives the normalized square difference between the two images).
import cv2  # th1, th2 are the binarized images from Method 1

res = cv2.matchTemplate(th1, th2, cv2.TM_SQDIFF_NORMED)
print(1 - res[0][0])  # res is a 1x1 array, so extract the scalar
For the given images I obtained a similarity score of: 0.89689457
I'm trying to detect a pattern like this in some images
The actual image looks something like this
It could be scaled and/or rotated. Is there a way to do that efficiently without resorting to neural nets or some learning algorithm? Can some detection be done based on the value gradient for example (dark-bright-dark-bright-dark)?
Assuming the input image is MxN (in your example M < N):
take the mean over the RGB channels to get a single grayscale image
take the mean along Y to get a 1xN vector (a horizontal intensity profile)
differentiate it
take the absolute value
threshold
calculate the distances between the peaks
search for a location where the ratio between the distances is as expected (from what I see in your example, ~1:7:1)
if such a place is found, validate the colors in the middle of each interval (from your example they should be white-black-white)
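A rough sketch of this profile analysis (the threshold and the image layout are my own assumptions):

import numpy as np
from scipy.signal import find_peaks

# img: the input image as a numpy array (HxW or HxWx3)
gray = img.mean(axis=2) if img.ndim == 3 else img       # mean over RGB channels
profile = gray.mean(axis=0)                             # mean along Y -> length-N vector
edges = np.abs(np.diff(profile))                        # derivative, then absolute value
peaks, _ = find_peaks(edges, height=0.5 * edges.max())  # threshold is a guess; tune it
gaps = np.diff(peaks)                                   # distances between peaks
print(gaps)  # look for consecutive gaps with a ratio around 1:7:1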
You might be able to use Gabor filters at varying orientations, then apply standard thresholding to identify objects.
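A minimal sketch of a single Gabor filter pass with OpenCV (all parameter values here are assumptions to be tuned):

import cv2

kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=0.0,
                            lambd=10.0, gamma=0.5, psi=0)
response = cv2.filter2D(gray, cv2.CV_32F, kernel)  # gray: grayscale input image
_, mask = cv2.threshold(response, 0.5 * response.max(), 255, cv2.THRESH_BINARY)
# repeat with several theta values and combine the resulting masks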
If you know the frequency of the pattern you could try using a bandpass filter to isolate objects at that frequency. If it is a very strong frequency, you might be able to identify it in the image's Fourier transform.
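For the Fourier route, a minimal sketch that looks for a dominant spatial frequency (pure numpy; grayscale input assumed):

import numpy as np

spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray)))  # gray: 2D numpy array
spectrum[spectrum.shape[0] // 2, spectrum.shape[1] // 2] = 0  # drop the DC term
peak = np.unravel_index(np.argmax(spectrum), spectrum.shape)
print("strongest spatial frequency at", peak)  # a strong pattern shows as a bright peak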
Without much other knowledge about what you are looking for in your image, it will be very difficult to identify a specific repeating pattern.
What are the ways to quantify the texture of a portion of an image? I'm trying to detect areas of an image that are similar in texture, which calls for some measure of "how closely similar are they?"
So the question is what information about the image (edge, pixel value, gradient etc.) can be taken as containing its texture information.
Please note that this is not based on template matching.
Wikipedia didn't give much detail on actually implementing any of the texture analyses.
Do you want to find two distinct areas in the image that look the same (same texture), or match a texture in one image to a texture in another image?
The second is harder due to differences in radiometry.
Here is a basic scheme of how to measure similarity of areas.
1. You write a function that takes an area of the image as input and calculates a scalar value, like average brightness. This scalar is called a feature.
2. You write more such functions to obtain about 8-30 features, which together form a vector that encodes information about the area of the image.
3. Calculate such a vector for both areas that you want to compare.
4. Define a similarity function that takes two vectors and outputs how much they are alike.
You need to focus on steps 2 and 4.
Step 2: use the following features: std() of the brightness, some kind of corner detector, an entropy filter, a histogram of edge orientations, and a histogram of FFT frequencies (in the x and y directions). Use color information if available.
Step 4: you can use cosine similarity, min-max, or weighted cosine. A sketch of steps 2 and 4 follows below.
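A minimal sketch of steps 2 and 4, with a couple of simple stand-in features (the concrete feature choices here are my own assumptions):

import numpy as np

def feature_vector(patch):
    # patch: 2D grayscale numpy array of the area to describe
    gy, gx = np.gradient(patch.astype(np.float64))
    edge_strength = np.hypot(gx, gy).mean()             # crude edge feature
    fft_x = np.abs(np.fft.fft(patch.mean(axis=0)))[:8]  # coarse frequency content, x
    fft_y = np.abs(np.fft.fft(patch.mean(axis=1)))[:8]  # coarse frequency content, y
    return np.concatenate([[patch.std(), edge_strength], fft_x, fft_y])

def cosine_similarity(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# area1, area2: the two image areas (2D arrays) to compare
score = cosine_similarity(feature_vector(area1), feature_vector(area2))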
After you implement about 4-6 such features and a similarity function, start running tests. Look at the results and try to understand why or where it doesn't work, then add a specific feature to cover that case.
For example, if you see that a texture with big blobs is regarded as similar to a texture with tiny blobs, then add a morphological filter that calculates the density of objects larger than 20 sq. pixels.
Iterate this process of identifying a problem and designing a specific feature about 5 times, and you will start to get very good results.
I'd suggest using wavelet analysis. Wavelets are localized in both time and frequency, and multiresolution analysis gives a better signal representation than the FT does.
There is a paper explaining a wavelet approach to texture description. There is also a comparison method.
You might need to slightly modify an algorithm to process images of arbitrary shape.
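A minimal sketch with the PyWavelets package (the wavelet choice and the energy descriptor are assumptions):

import numpy as np
import pywt

# img: 2D grayscale array; one-level 2D discrete wavelet transform
cA, (cH, cV, cD) = pywt.dwt2(img, 'haar')

# simple texture descriptor: energy of each detail subband
descriptor = [float(np.sum(c ** 2)) for c in (cH, cV, cD)]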
An interesting approach for this is to use Local Binary Patterns.
Here is a basic example and some explanations: http://hanzratech.in/2015/05/30/local-binary-patterns.html
See that method as one of the many different ways to get features from your pictures. It corresponds to the 2nd step of DanielHsH's method.
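A minimal sketch with scikit-image, along the lines of the linked tutorial (the parameter values are assumptions):

import numpy as np
from skimage.feature import local_binary_pattern

# img: 2D grayscale array; P neighbors on a circle of radius R
P, R = 8, 1
lbp = local_binary_pattern(img, P, R, method="uniform")

# normalized histogram of LBP codes, usable as a texture feature vector
hist, _ = np.histogram(lbp, bins=np.arange(P + 3), density=True)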
As we know, the Fourier transform is sensitive to noise (like salt and pepper noise), so how can it still be used for image recognition?
Is there an FT expert here?
Update to actually answer the question you asked... :) Pre-process the image with a non-linear filter to suppress the salt & pepper noise. A median filter, maybe?
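For example, with OpenCV (the kernel size is an assumption):

import cv2

# Median filtering is a standard non-linear remedy for salt & pepper noise.
denoised = cv2.medianBlur(noisy, 3)  # noisy: 8-bit image; 3x3 median window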
A basic lesson on FFTs and matched filters follows...
The classic way of detecting a smaller image within a larger image is the matched filter. Essentially, this involves doing a cross correlation of the larger image with the smaller image (the thing you're trying to recognize).
For every position in the larger image
Overlay the smaller image on the larger image
Multiply all corresponding pixels
Sum the results
Put that sum at this position in the filtered image
The matched filter is optimal where the only noise in the larger image is white noise.
This IS computationally slow, but it can be decomposed into FFT (fast Fourier transform) operations, which are much more efficient. There are much more sophisticated approaches to image matching that tolerate other types of noise much better than the matched filter does. But few are as efficient as the matched filter implemented using FFTs.
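A minimal sketch of the FFT-based version with scipy (zero-meaning the template is my own tweak to reduce bias from uniformly bright regions):

import numpy as np
from scipy.signal import fftconvolve

def matched_filter(image, template):
    # correlation == convolution with the template flipped in both axes
    t = template - template.mean()  # zero-mean template (an assumption)
    return fftconvolve(image, t[::-1, ::-1], mode="same")

# the peak of the response marks the best match position
# peak = np.unravel_index(np.argmax(matched_filter(img, tmpl)), img.shape)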
Google "matched filter", "cross correlation" and "convolution filter" for more.
For example, here's one brief explanation that also points out the drawbacks of this very old-school image matching approach: http://www.dspguide.com/ch24/6.htm
Not sure exactly what you're asking. If you are asking about how FFT can be used for image recognition, here are some thoughts.
FFT can be used to perform image "classification". It can't be used to recognize different faces or objects, but it can be used to classify the type of image. The FFT measures the spatial frequency content of the image, so, for example, a natural scene, a face, and a city scene will have different FFTs. You can therefore classify an image, or even regions within one (e.g. classify terrain in an aerial photo).
Also, FFT is used in pre-processing for image recognition. It can be used in OCR (optical character recognition) to rotate a scanned image into the correct orientation: the FFT of typed text has a strong orientation. The same applies to parts inspection in industrial automation.
I don't think you'll find many methods in use that rely on Fourier Transforms for image recognition.
Salt and pepper noise can be considered high-frequency noise, so you could low-pass filter your FFT before making a comparison with the target image.
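A minimal numpy sketch of that low-pass step (the cutoff radius is an assumed value):

import numpy as np

# img: 2D grayscale array; zero out high spatial frequencies
F = np.fft.fftshift(np.fft.fft2(img))
rows, cols = img.shape
y, x = np.ogrid[:rows, :cols]
r = np.sqrt((y - rows // 2) ** 2 + (x - cols // 2) ** 2)
F[r > 30] = 0  # keep only low frequencies; tune the radius
lowpassed = np.fft.ifft2(np.fft.ifftshift(F)).real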
I would imagine that this would work, but also that different images which are merely somewhat similar (like two photographs both taken outdoors) might register as the same image.