I'm using scikit-image's SSIM to compare the similarity of two images. The problem is that I get negative values, which are not useful for my purpose. I understand that the range of SSIM values is supposed to be -1 to 1, but I need a positive value only, and I want this value to decrease as the similarity between the two images increases. I've been thinking of two ways to handle this. First, subtracting the SSIM value from 1:
Similarity Measure=(1-SSIM)
Now it gives zero for a perfect match (SSIM = 1) and 1 when there is no similarity (SSIM = 0). But since SSIM can also take negative values between -1 and 0, I also get values larger than 1, which I don't know how to interpret. In particular, I don't know what it means when SSIM returns negative values. Are images with SSIM values between -1 and 0 less similar than images with an SSIM of 0? If that is not the case, my similarity measure will cause problems: it produces values greater than 1 when SSIM is negative, which would imply less similarity than the SSIM = 0 case.
Another measure I was considering is structural dissimilarity (DSSIM), which is defined as follows:
DSSIM=(1-SSIM)/2
This returns 0 when the two images are exactly the same, which is what I'm looking for, 1 when SSIM = -1 (no similarity at all), and 1/2 when SSIM = 0. Again, this is only useful if negative SSIM values indicate less similarity than SSIM = 0, which, as I mentioned, is something I'm not sure about; I couldn't find anything explaining what each SSIM value corresponds to in terms of the level of similarity between the two images. I hope someone can help me with such an interpretation, or with some way to get SSIM values only between 0 and 1.
Edit: As I mentioned in the comments, SSIM can be negative, and this is caused by the covariance of the two images, which can be negative. In the skimage SSIM source code, the covariance of the two images is represented by vxy, and it can be negative in some cases. As for interpreting negative SSIM values in terms of similarity, I'm still not sure, but this paper states that this happens when the local image structure is inverted. Still, I have seen this for images that do not look like inverted versions of each other. But I guess "local" is important here, meaning that two images might not look like inverted versions of each other overall while their structure is inverted locally. Is this the right interpretation?
Yes, two images with SSIM = 0 are more similar than two images with SSIM = -1, so you can use:
1 - (1 + SSIM) / 2
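For reference, here is a minimal sketch of this mapping with scikit-image (assuming a version >= 0.16, where the function lives in skimage.metrics); the two sample images are arbitrary stand-ins for your own pair:

import numpy as np
from skimage import data, img_as_float
from skimage.metrics import structural_similarity

# Arbitrary example pair: an image and a deliberately dissimilar version of it
img_a = img_as_float(data.camera())
img_b = img_as_float(np.flipud(data.camera()))

ssim_value = structural_similarity(img_a, img_b, data_range=1.0)

# Map SSIM in [-1, 1] to a dissimilarity in [0, 1]:
# 0 -> identical images, 1 -> SSIM of -1 (locally inverted structure)
dissimilarity = 1 - (1 + ssim_value) / 2   # same as (1 - ssim_value) / 2
print(ssim_value, dissimilarity)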
I have a dataset with some outliers that are 10 or 100 times greater than the normal values. I cannot throw out these rows, and I want to normalize this data to the interval [0, 1].
First of all, here's what I thought to do:
Simply rank my dataset's rows and use the ranked positions as the variable to normalize. Since the ranks follow a uniform distribution, this is easy. The problem is that the differences between values are not preserved, so values with a large difference could end up with similar normalized values if there are no intermediate examples in the dataset.
Use the sklearn.preprocessing.RobustScaler method. But I got normalized values between -0.4 and 300, which is still not a useful scale to normalize to (this and the ranking strategy are sketched just after this list).
Distribute normalized values between 0 and 0.8 linearly for all values at or below the 0.8 quantile, and distribute the remaining values between 0.8 and 1.0 in a way similar to the ranking strategy mentioned above.
Run a 1D k-means algorithm to group nearby values and obtain a cluster of non-outlier values. For these values, I would distribute normalized values between 0 and the quantile they represent, simply by doing (value - mean) / (max - min); for the remaining outlier values, I would distribute the range between that quantile and 1 using the ranking strategy.
Create a filter function, like a sigmoid, and multiply the values by it. Smaller values remain essentially unchanged, while the outlier values are pulled toward the non-outlier values. Then I normalize the result. But how should I design this sigmoid's parameters?
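For concreteness, a rough sketch of the ranking strategy and the RobustScaler attempt on a made-up column with outliers (the numbers are hypothetical):

import numpy as np
from scipy.stats import rankdata
from sklearn.preprocessing import RobustScaler

# Hypothetical column: mostly values near 1, with outliers 10-100x larger
x = np.array([1.2, 0.8, 1.5, 0.9, 1.1, 95.0, 1.3, 130.0])

# Ranking strategy: rank-based normalization -> uniformly spaced values in [0, 1]
ranked = (rankdata(x) - 1) / (len(x) - 1)

# RobustScaler centers on the median and divides by the IQR;
# the outliers still land far outside [0, 1]
robust = RobustScaler().fit_transform(x.reshape(-1, 1)).ravel()

print(ranked)
print(robust)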
I would like to get some feedback on these strategies: what do you think of them?
Also, how is this problem normally solved? Are there any references you could recommend?
Thank you =)
I am not sure about the terminology I should use for my problem, so I will give an example.
I have 2 sets of measurements (6 empirical distributions per set = D1-6) that describe 2 different states of the same system (BLUE & RED). These distributions can be multimodal, skewed, undersampled, and strange in some other unpredictable ways.
BLUE is my reference and I want to make RED distributed as closely as possible to BLUE, for all pairwise distributions. For this, I will play with parameters of my RED system and monitor the RED set of measurements D1-6 trying to make it overlap BLUE perfectly.
I know that I can use the Jensen-Shannon or Bhattacharyya distances to evaluate the distance between 2 distributions (i.e. RED-D1 and BLUE-D1, for example). However, I do not know whether there exist other metrics that could be applied here to get a global distance between all distributions (i.e. to quantify the global mismatch between the 2 sets of pairwise distributions). Is that the case?
I am thinking about building an empirical scoring function that would use all the pairwise Jensen-Shannon distances, but I have no better ideas yet. I believe I can NOT just sum all the JS distances because I would get similar scores in these 2 hypothetical, different cases:
D1-6 are distributed as in my image
RED-D1-5 are a much better fit to BLUE-D1-5, BUT RED-D6 is shifted compared to BLUE-D6
And that would be wrong because I would have missed one important feature of my system. Given these 2 cases, it is better to have D1-6 distributed as in my image (solution 1).
The pairwise match between each distribution is equally important and should be equally weighted (i.e. the match between BLUE-D1 and RED-D1 is as important as the match between BLUE-D2 and RED-D2, etc).
D1-3 have a given range DOM1 of [0, 5] and D4-6 have another range DOM2 of [50, 800]. Diamonds represent the weighted means of the BLUE and RED distributions.
Thank you very much for your help!
I ended up using the sum of all pairwise Earth Mover's Distances (EMD, https://en.wikipedia.org/wiki/Earth_mover%27s_distance, also known as the Wasserstein metric) as a global measure of distance between the two sets of pairwise distributions. This describes the difference or similarity between the 2 states of my system in an appropriate way.
EMD is implemented in Python in the 'pyemd' package, or via scipy: https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.wasserstein_distance.html.
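A minimal sketch of that global score with scipy, using randomly generated stand-ins for the six BLUE and six RED measurement sets (how, or whether, you rescale each pair before summing is up to you):

import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the 6 BLUE and 6 RED empirical distributions (D1-6)
blue = [rng.normal(loc=i, scale=1.0, size=500) for i in range(6)]
red = [rng.normal(loc=i + 0.3, scale=1.2, size=500) for i in range(6)]

# Global mismatch: sum of the pairwise EMDs (BLUE-D1 vs RED-D1, ..., D6 vs D6).
# If D1-3 and D4-6 live on very different ranges (e.g. [0, 5] vs [50, 800]),
# each pair may need rescaling first so that every term contributes comparably.
global_distance = sum(wasserstein_distance(b, r) for b, r in zip(blue, red))
print(global_distance)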
Can anyone explain the similarities and differences between correlation and convolution? Please explain the intuition behind them, not the mathematical equations (i.e., flipping the kernel/impulse). Application examples in the image processing domain for each category would be appreciated too.
You will likely get a much better answer on the DSP Stack Exchange, but for starters: I have found a number of similar terms, and their definitions can be tricky to pin down.
1. Correlation
2. Cross correlation
3. Convolution
4. Correlation coefficient
5. Sliding dot product
6. Pearson correlation
Terms 1, 2, 3, and 5 are very similar to one another.
Terms 4 and 6 are similar to each other.
Note that all of these terms have dot products rearing their heads
You asked about correlation and convolution. These are conceptually the same, except that in convolution the kernel is flipped before the sliding dot products are taken. I suspect that you may actually have been asking about the difference between the correlation coefficient (such as Pearson's) and convolution/correlation.
Prerequisites
I am assuming that you know how to compute the dot product. Given two equal-sized vectors v and w, each with three elements, the algebraic dot product is v[0]*w[0] + v[1]*w[1] + v[2]*w[2].
There is a lot of theory behind the dot product in terms of what it represents, and so on.
Notice that the dot product is a single number (a scalar) representing the mapping between the two vectors/points v and w. In geometry, one frequently computes the cosine of the angle between two vectors, which uses the dot product. The cosine of the angle between two vectors lies between -1 and 1 and can be thought of as a measure of similarity.
Correlation coefficient (Pearson)
The correlation coefficient between equal-length vectors v and w is simply the dot product of their two zero-mean versions (subtract the mean of v from v to get zmv and the mean of w from w to get zmw; here zm is shorthand for zero mean), divided by the product of the magnitudes of zmv and zmw.
This produces a number between -1 and 1. Close to zero means little correlation; close to +/- 1 means high correlation. It measures the similarity between the two vectors.
See http://en.wikipedia.org/wiki/Pearson_product-moment_correlation_coefficient for a better definition.
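A small NumPy sketch of that definition (the vectors are arbitrary examples), with NumPy's built-in corrcoef as a cross-check:

import numpy as np

def pearson(v, w):
    # Zero-mean both signals, then divide their dot product by the
    # product of their magnitudes (Euclidean norms).
    zmv = v - v.mean()
    zmw = w - w.mean()
    return np.dot(zmv, zmw) / (np.linalg.norm(zmv) * np.linalg.norm(zmw))

v = np.array([1.0, 2.0, 3.0, 4.0])
w = np.array([2.0, 4.1, 5.9, 8.2])
print(pearson(v, w))            # close to +1: strongly correlated
print(np.corrcoef(v, w)[0, 1])  # NumPy's built-in, for comparison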
Convolution and Correlation
When we want to correlate/convolve v1 and v2, we are basically computing a series of dot products and putting them into an output vector. Let's say that v1 has three elements and v2 has 10 elements. The dot products we compute are as follows:
output[0] = v1[0]*v2[0]+v1[1]*v2[1]+v1[2]*v2[2]
output[1] = v1[0]*v2[1]+v1[1]*v2[2]+v1[2]*v2[3]
output[2] = v1[0]*v2[2]+v1[1]*v2[3]+v1[2]*v2[4]
output[3] = v1[0]*v2[3]+v1[1]*v2[4]+v1[2]*v2[5]
output[4] = v1[0]*v2[4]+v1[1]*v2[5]+v1[2]*v2[6]
output[5] = v1[0]*v2[5]+v1[1]*v2[6]+v1[2]*v2[7]
output[6] = v1[0]*v2[6]+v1[1]*v2[7]+v1[2]*v2[8]
output[7] = v1[0]*v2[7]+v1[1]*v2[8]+v1[2]*v2[9]
If a true convolution is needed, v1 is flipped before the dot products are taken:
output[0] = v1[2]*v2[0]+v1[1]*v2[1]+v1[0]*v2[2]
output[1] = v1[2]*v2[1]+v1[1]*v2[2]+v1[0]*v2[3]
output[2] = v1[2]*v2[2]+v1[1]*v2[3]+v1[0]*v2[4]
output[3] = v1[2]*v2[3]+v1[1]*v2[4]+v1[0]*v2[5]
output[4] = v1[2]*v2[4]+v1[1]*v2[5]+v1[0]*v2[6]
output[5] = v1[2]*v2[5]+v1[1]*v2[6]+v1[0]*v2[7]
output[6] = v1[2]*v2[6]+v1[1]*v2[7]+v1[0]*v2[8]
output[7] = v1[2]*v2[7]+v1[1]*v2[8]+v1[0]*v2[9]
Notice that we have fewer than 10 elements in the output (8, in this case), because for simplicity I am computing the correlation/convolution only where v1 and v2 fully overlap.
Notice also that the convolution is simply a series of dot products. There has been considerable work over the years on speeding up convolutions. The sweeping dot products are slow and can be sped up by first transforming the vectors into the Fourier domain, computing a single elementwise multiplication, and then inverse-transforming the result, though I won't go into that here.
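If it helps, here is a small sketch of the same 'valid' sweep using NumPy's built-ins, plus scipy's FFT-based convolution mentioned above (v1 and v2 are arbitrary examples):

import numpy as np
from scipy.signal import fftconvolve

v1 = np.array([1.0, 2.0, 3.0])      # the short kernel
v2 = np.arange(10, dtype=float)     # the longer signal

corr = np.correlate(v2, v1, mode='valid')   # 8 sliding dot products
conv = np.convolve(v2, v1, mode='valid')    # same sweep with v1 flipped

print(np.allclose(conv, np.correlate(v2, v1[::-1], mode='valid')))  # True
print(np.allclose(conv, fftconvolve(v2, v1, mode='valid')))         # True (FFT route)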
You might want to look at these resources as well as googling: Calculating Pearson correlation and significance in Python
The best answer I found was in this document: http://www.cs.umd.edu/~djacobs/CMSC426/Convolution.pdf
I'm just going to copy the excerpt from the doc:
"The key difference between the two is that convolution is associative. That is, if F and G are filters, then F*(GI) = (FG)*I. If you don’t believe this, try a simple example, using F=G=(-1 0 1), for example. It is very convenient to have convolution be associative. Suppose, for example, we want to smooth an image and then take its derivative. We could do this by convolving the image with a Gaussian filter, and then convolving it with a derivative filter. But we could alternatively convolve the derivative filter with the Gaussian to produce a filter called a Difference of Gaussian (DOG), and then convolve this with our image. The nice thing about this is that the DOG filter can be precomputed, and we only have to convolve one filter with our image.
In general, people use convolution for image processing operations such as smoothing, and they use correlation to match a template to an image. Then, we don't mind that correlation isn't associative, because it doesn't really make sense to combine two templates into one with correlation, whereas we might often want to combine two filters together for convolution."
Convolution is just like correlation, except that we flip over the filter before correlating
I was wondering if anybody knows anything deeper than I do about the Affinity Propagation clustering algorithm in the Python scikit-learn package.
All I know right now is that I input an "affinity matrix" (affmat), which is calculated by applying a heat kernel transformation to the "distance matrix" (distmat). The distmat is the output of a previous analysis step I do. After I obtain the affmat, I pass that into the algorithm by doing something analogous to:
af = AffinityPropagation(affinity='precomputed').fit(affmat)
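To make that concrete, here is a minimal, self-contained version of this setup; the blob data, the heat-kernel form, and the choice of sigma are placeholders rather than my actual pipeline:

import numpy as np
from sklearn.datasets import make_blobs
from sklearn.metrics import pairwise_distances, silhouette_score
from sklearn.cluster import AffinityPropagation

# Placeholder for the output of the earlier analysis step
X, _ = make_blobs(n_samples=150, centers=3, random_state=0)
distmat = pairwise_distances(X)                     # "distance matrix"

# Heat-kernel transformation; sigma is a free parameter to tune
sigma = np.median(distmat)
affmat = np.exp(-distmat ** 2 / (2 * sigma ** 2))   # "affinity matrix"

af = AffinityPropagation(affinity='precomputed', random_state=0).fit(affmat)
print(len(set(af.labels_)), silhouette_score(X, af.labels_))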
On one hand, I appreciate the simplicity of the inputs. On the other hand, I'm still not sure whether I've set up the input correctly.
I have tried evaluating the results by looking at silhouette scores, and frequently (though not a majority of the time) I get scores close to zero or negative, which indicates to me that I'm not getting clusters that are 'naturally grouped'. This, to me, is a problem.
Additionally, when I observe some of the plots, I'm getting weird patterns of clustering, such as in the image below, where I clearly see 3 clusters of points but AP gives me 10:
I also see other graphs like this, where the clusters overlap a lot:
How are the cluster centers' Euclidean coordinates identified? I notice that when I put in percent values (99.2, 99.5, etc.) rather than fractional values (0.992, 0.995, etc.), the x- and y-axes change in scale from [0, 1] to [0, 100] (though of course the axes are scaled to the data, so I sometimes get [88, 100] or [92, 100] and so on).
Sorry for the rambling here, but I'm having quite a hard time understanding the underpinnings of the Python implementation of the algorithm. I've gone back to the original paper and read it, but I don't know how to manipulate the algorithm in the scikit-learn package to get better clustering.
I am using libSVM.
Say my feature values are in the following format:
instance1 : f11, f12, f13, f14
instance2 : f21, f22, f23, f24
instance3 : f31, f32, f33, f34
instance4 : f41, f42, f43, f44
..............................
instanceN : fN1, fN2, fN3, fN4
I think there are two kinds of scaling that can be applied:
1. Scale each instance vector (row) so that it has zero mean and unit variance:
((f11, f12, f13, f14) - mean((f11, f12, f13, f14))) / std((f11, f12, f13, f14))
2. Scale each column of the above matrix to a range, for example [-1, 1].
According to my experiments with the RBF kernel (libSVM), I found that the second scaling (2) improves the results by about 10%. I did not understand why (2) gives improved results.
Could anybody explain the reason for applying scaling, and why the second option gives improved results?
The standard thing to do is to make each dimension (or attribute, or column in your example) have zero mean and unit variance.
This brings each dimension of the input into the same order of magnitude. From http://www.csie.ntu.edu.tw/~cjlin/papers/guide/guide.pdf:
The main advantage of scaling is to avoid attributes in greater numeric ranges dominating those in smaller numeric ranges. Another advantage is to avoid numerical difficulties during the calculation. Because kernel values usually depend on the inner products of feature vectors, e.g. the linear kernel and the polynomial kernel, large attribute values might cause numerical problems. We recommend linearly scaling each attribute to the range [-1, +1] or [0, 1].
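A small sketch contrasting the two scalings from the question with per-column standardization, using scikit-learn (the feature matrix is a made-up example):

import numpy as np
from sklearn.preprocessing import StandardScaler, MinMaxScaler

# Hypothetical feature matrix: rows are instances, columns are features
X = np.array([[1.0, 200.0, 0.002, 30.0],
              [2.5, 180.0, 0.004, 45.0],
              [0.7, 260.0, 0.001, 28.0]])

# Scaling (1) from the question: standardize each row (instance) separately
X_rows = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

# Scaling (2): scale each column to [-1, 1], as the libSVM guide recommends
X_cols = MinMaxScaler(feature_range=(-1, 1)).fit_transform(X)

# Per-column zero mean / unit variance (the "standard thing to do" above)
X_std = StandardScaler().fit_transform(X)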
I believe it largely comes down to your original data.
If your original data has some extreme values in some columns, then in my opinion you lose some resolution when scaling linearly, for example to the range [-1, 1].
Let's say you have a column where 90% of the values are between 100 and 500, and in the remaining 10% the values are as low as -2000 and as high as +2500.
If you scale this data linearly, then you'll have:
-2000 -> -1 ## <- The min in your scaled data
+2500 -> +1 ## <- The max in your scaled data
100 -> -0.06666666666666665
234 -> -0.007111111111111068
500 -> 0.11111111111111116
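For reference, the same linear mapping with scikit-learn's MinMaxScaler reproduces these numbers:

import numpy as np
from sklearn.preprocessing import MinMaxScaler

col = np.array([-2000.0, 100.0, 234.0, 500.0, 2500.0]).reshape(-1, 1)
scaled = MinMaxScaler(feature_range=(-1, 1)).fit_transform(col)
print(scaled.ravel())   # -> [-1, -0.0667, -0.0071, 0.1111, 1], matching the values above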
You could argue that the discernibility between what was originally 100 and 500 is smaller in the scaled data in comparison to what it was in the original data.
In the end, I believe it very much comes down to the specifics of your data, and the 10% performance improvement may well be coincidental; you will certainly not see a difference of this magnitude in every dataset on which you try both scaling methods.
At the same time, in the guide linked in the other answer, you can clearly see that the authors recommend scaling the data linearly.
I hope someone finds this useful!
The accepted answer speaks of "Standard Scaling", which is not efficient for high-dimensional data stored in sparse matrices (text data is a typical use case), since subtracting the per-column mean destroys sparsity. In such cases, you may resort to "Max Scaling" and its variants, which work with sparse matrices.
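For example, scikit-learn's MaxAbsScaler is one such variant: it scales each column by its maximum absolute value and accepts sparse input directly (the matrix below is just a random placeholder):

from scipy.sparse import random as sparse_random
from sklearn.preprocessing import MaxAbsScaler

# Random sparse matrix standing in for, e.g., a bag-of-words text representation
X = sparse_random(1000, 5000, density=0.01, format='csr', random_state=0)
X_scaled = MaxAbsScaler().fit_transform(X)   # stays sparse; each column ends up within [-1, 1]
print(X_scaled.shape, X_scaled.nnz)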