Image filtering - wrong results? - image-processing

I'm experimenting with convolving an image with a user-supplied mask, in this case
u = array([[-2,-2,-2],[-2,25,-2],[-2,-2,-2]])/9
using the commands
In[1]: import scipy.ndimage as ndi
In[2]: import skimage.io as io
In[3]: c = io.imread('cameraman.png')
In[4]: cu = ndi.convolve(c,u)
In[5]: io.imshow(cu)
I'm checking this against commands in GNU Octave:
Octave-3.8: 1> c = imread('cameraman.png');
Octave-3.8: 2> u = [-2 -2 -2;-2 25 -2;-2 -2 -2]/9
Octave-3.8: 3> cu = imfilter(c,u)
Octave-3.8: 4> imshow(cu)
But here's the thing: Octave seems to give the correct result, but Python doesn't, even though the commands convolve and imfilter are supposed to be implementing the same algorithm. (Well, in fact imfilter performs a correlation, which for this symmetric kernel is the same as a convolution.)
The Octave output is:
[Octave output image]
and the Python output is:
[Python output image]
which as you can see is very different to the Octave result. Does anybody know what's going on here? Or is there a better way of convolving with a user-supplied linear filter than using convolve?

The problem may be that your convolution takes the image luminance values out of bounds. I ran the example below in Matlab (~= Octave), and for an image that initially has grey values 0-255, so in the normalised range [0, 0.99], the result ends up with pixels in the range [-0.88, 2.03].
>> img=double(imread('cameraman.tif'))./255;
>> K=[-2 -2 -2 ; -2 25 -2; -2 -2 -2]/9;
>> out=conv2(img,K,'same');
>> max(max(out))
ans =
2.0288
>> min(min(out))
ans =
-0.8776
It could be that Python has a problem visualising images with out-of-range grey values (<0 or >255), and that this causes clamping of values, resulting in the black/white halos in those areas. Perhaps Octave normalises the image prior to displaying it, resulting in fewer artifacts. If you normalise your image in Python prior to displaying it, do you still have this problem?
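For reference, here is a minimal sketch of that normalisation step in Python (an illustration of the suggestion above, not your exact setup): convert the image to float before convolving, then clip the result back into the displayable range.
import numpy as np
import scipy.ndimage as ndi
import skimage.io as io

c = io.imread('cameraman.png').astype(np.float64) / 255.0   # work in floats in [0, 1]
u = np.array([[-2, -2, -2], [-2, 25, -2], [-2, -2, -2]]) / 9.0

cu = ndi.convolve(c, u)             # values may now fall outside [0, 1]
cu_display = np.clip(cu, 0.0, 1.0)  # clamp (or rescale) before display
io.imshow(cu_display)
io.show()
If the halos disappear after this, the original problem was indeed the handling of out-of-range values at display time.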

Related

Implementing convolution from scratch in Julia

I am trying to implement convolution by hand in Julia. I'm not too familiar with image processing or Julia, so maybe I'm biting off more than I can chew.
Anyway, when I apply this method with a 3×3 edge filter edge = [0 -1 0; -1 4 -1; 0 -1 0] as convolve(img, edge), I get an error saying that my values exceed the allowed values for the RGBA type.
Code
function convolve(img::Matrix{<:Any}, kernel)
    (half_kernel_w, half_kernel_h) = size(kernel) .÷ 2
    (width, height) = size(img)
    cpy_im = copy(img)
    for row ∈ 1+half_kernel_h:height-half_kernel_h
        for col ∈ 1+half_kernel_w:width-half_kernel_w
            from_row, to_row = row .+ (-half_kernel_h, half_kernel_h)
            from_col, to_col = col .+ (-half_kernel_h, half_kernel_h)
            cpy_im[row, col] = sum((kernel .* RGB.(img[from_row:to_row, from_col:to_col])))
        end
    end
    cpy_im
end
Error (original)
ArgumentError: element type FixedPointNumbers.N0f8 is an 8-bit type representing 256 values from 0.0 to 1.0, but the values (-0.0039215684f0, -0.007843137f0, -0.007843137f0, 1.0f0) do not lie within this range.
See the READMEs for FixedPointNumbers and ColorTypes for more information.
I am able to identify a simple case where such an error may occur (a white pixel surrounded by all black pixels, or vice versa). I tried "fixing" this by attempting to follow the advice here from another Stack Overflow question, but I get more errors to the effect of Math on colors is deliberately undefined in ColorTypes, but see the ColorVectorSpace package..
Code attempting to apply solution from the other SO question
function convolve(img::Matrix{<:Any}, kernel)
    (half_kernel_w, half_kernel_h) = size(kernel) .÷ 2
    (width, height) = size(img)
    cpy_im = copy(img)
    for row ∈ 1+half_kernel_h:height-half_kernel_h
        for col ∈ 1+half_kernel_w:width-half_kernel_w
            from_row, to_row = row .+ [-half_kernel_h, half_kernel_h]
            from_col, to_col = col .+ [-half_kernel_h, half_kernel_h]
            cpy_im[row, col] = sum((kernel .* RGB.(img[from_row:to_row, from_col:to_col] ./ 2 .+ 128)))
        end
    end
    cpy_im
end
Corresponding error
MethodError: no method matching +(::ColorTypes.RGBA{Float32}, ::Int64)
Math on colors is deliberately undefined in ColorTypes, but see the ColorVectorSpace package.
Closest candidates are:
+(::Any, ::Any, !Matched::Any, !Matched::Any...) at operators.jl:591
+(!Matched::T, ::T) where T<:Union{Int128, Int16, Int32, Int64, Int8, UInt128, UInt16, UInt32, UInt64, UInt8} at int.jl:87
+(!Matched::ChainRulesCore.AbstractThunk, ::Any) at ~/.julia/packages/ChainRulesCore/a4mIA/src/tangent_arithmetic.jl:122
Now, I can try using convert etc., but when I look at the big picture, I start to wonder what the idiomatic way of solving this problem in Julia is. And that is my question. If you had to implement convolution by hand from scratch, what would be a good way to do so?
EDIT:
Here is an implementation that works, though it may not be idiomatic
function convolve(img::Matrix{<:Any}, kernel)
    (half_kernel_h, half_kernel_w) = size(kernel) .÷ 2
    (height, width) = size(img)
    cpy_im = copy(img)
    # println(Dict("width" => width, "height" => height, "half_kernel_w" => half_kernel_w, "half_kernel_h" => half_kernel_h, "row range" => 1+half_kernel_h:(height-half_kernel_h), "col range" => 1+half_kernel_w:(width-half_kernel_w)))
    for row ∈ 1+half_kernel_h:(height-half_kernel_h)
        for col ∈ 1+half_kernel_w:(width-half_kernel_w)
            from_row, to_row = row .+ (-half_kernel_h, half_kernel_h)
            from_col, to_col = col .+ (-half_kernel_w, half_kernel_w)
            vals = Dict()
            for method ∈ [red, green, blue, alpha]
                x = sum((kernel .* method.(img[from_row:to_row, from_col:to_col])))
                if x > 1
                    x = 1
                elseif x < 0
                    x = 0
                end
                vals[method] = x
            end
            cpy_im[row, col] = RGBA(vals[red], vals[green], vals[blue], vals[alpha])
        end
    end
    cpy_im
end
First of all, the error
Math on colors is deliberately undefined in ColorTypes, but see the ColorVectorSpace package.
should direct you to read the docs of the ColorVectorSpace package, where you will learn that adding using ColorVectorSpace enables math on RGB types. (The absence of default support is deliberate, because the way the image-processing community treats RGB is colorimetrically wrong. But everyone has agreed not to care, hence the ColorVectorSpace package.)
Second,
ArgumentError: element type FixedPointNumbers.N0f8 is an 8-bit type representing 256 values from 0.0 to 1.0, but the values (-0.0039215684f0, -0.007843137f0, -0.007843137f0, 1.0f0) do not lie within this range.
indicates that you're trying to write negative entries with an element type, N0f8, that can't support such values. Instead of cpy_im = copy(img), consider something like cpy_im = [float(c) for c in img] which will guarantee a floating-point representation that can support negative values.
Third, I would recommend avoiding steps like RGB.(img...) when nothing about your function otherwise addresses whether images are numeric, grayscale, or color. Fundamentally the only operations you need are scalar multiplication and addition, and it's better to write your algorithm generically leveraging only those two properties.
Tim Holy's answer above is correct - keep things simple and avoid relying on third-party packages when you don't need to.
I might point out that another option you may not have considered is to use a different algorithm. What you are implementing is the naive method, whereas many convolution routines use different algorithms for different sizes, such as im2col and Winograd (you can look these two up; I have a website that covers the idea behind both here).
The im2col routine might be worth doing, as essentially you can break the routine into several pieces:
Unroll all 'regions' of the image to do a dot-product with the filter/kernel on, and stack them together into a single matrix.
Do a matrix-multiply with the unrolled input and filter/kernel.
Roll the output back into the correct shape.
It might be more complicated overall, but each part is simpler, so you may find this easier to do. A matrix multiply routine is definitely quite easy to implement. For 1x1 (single-pixel) convolutions where the image and filter have the same ordering (i.e. NCHW images and FCHW filter) the first and last steps are trivial as essentially no rolling/unrolling is necessary.
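If it helps to see the shape of the idea, here is a rough NumPy sketch of a 'valid' im2col convolution (Python rather than Julia, with hypothetical names; the structure translates directly):
import numpy as np

def conv2d_im2col(img, kernel):
    # Naive 'valid' 2-D convolution via im2col: unroll patches, one matmul, reshape.
    kh, kw = kernel.shape
    h, w = img.shape
    out_h, out_w = h - kh + 1, w - kw + 1
    # 1. Unroll every kh-by-kw patch of the image into a row of a big matrix.
    cols = np.empty((out_h * out_w, kh * kw))
    for i in range(out_h):
        for j in range(out_w):
            cols[i * out_w + j] = img[i:i + kh, j:j + kw].ravel()
    # 2. One matrix-vector product against the flattened, flipped kernel
    #    (the flip turns correlation into true convolution).
    flat_kernel = np.flipud(np.fliplr(kernel)).ravel()
    out = cols @ flat_kernel
    # 3. Roll the output back into image shape.
    return out.reshape(out_h, out_w)
Each of the three numbered steps above maps to one of the pieces listed earlier.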
A final word of advice: start simpler and add the code to handle edge cases afterwards; convolutions are definitely fiddly to work with.
Hope this helps!

Difference between absdiff and normal subtraction in OpenCV

I am currently planning on training a binary image classification model. The images I want to train on are the difference between two original pictures. In other words, for each data entry, I start out with 2 pictures, take their difference, and the label that difference as a 0 or 1. My question is what is the best way to find this difference. I know about cv2.absdiff and then normal subtraction of images - what is the most effective way to go about this?
About the data: The images I'm training on are screenshots that usually are the same but may have small differences. I found that normal subtraction seems to show the differences less than absdiff.
This is the code I use for absdiff:
diff = cv2.absdiff(img1, img2)
mask = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
th = 1
imask = mask>1
canvas = np.zeros_like(img2, np.uint8)
canvas[imask] = img2[imask]
And then this for normal subtraction:
def extract_diff(self, imageA, imageB, image_name, path):
    subtract = imageB.astype(np.float32) - imageA.astype(np.float32)
    mask = cv2.inRange(np.abs(subtract), (30, 30, 30), (255, 255, 255))
    th = 1
    imask = mask > 1
    canvas = np.zeros_like(imageA, np.uint8)
    canvas[imask] = imageA[imask]
Thanks!
A difference can be negative or positive.
For some number types, such as uint8 (unsigned 8-bit int), which can't be negative (have no sign), a negative value wraps around and the value would make no sense anymore. Other types can be signed (e.g. floats, signed ints), so a negative value can be represented correctly.
That's why cv.absdiff exists. It always gives you absolute differences, and those are okay to represent in an unsigned type.
Example with numbers: a = 4, b = 6. a-b should be -2, right?
That value, as an uint8, will wrap around to become 0xFE, or 254 in decimal. The 254 value has some relation to the true -2 difference, but it also incorporates the range of values of the data type (8 bits: 256 values), so it's really just "code".
cv.absdiff would give you the absolute value of the difference (-2), which is 2.
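A tiny NumPy/OpenCV illustration of the same numbers (just a sketch to make the wrap-around visible):
import numpy as np
import cv2

a = np.array([[4]], dtype=np.uint8)
b = np.array([[6]], dtype=np.uint8)

print(a - b)               # [[254]] : plain uint8 subtraction wraps around
print(cv2.subtract(a, b))  # [[0]]   : cv2.subtract saturates at 0 instead of wrapping
print(cv2.absdiff(a, b))   # [[2]]   : absolute difference, safe in an unsigned type
So if you want a symmetric "how different are these pixels" measure on uint8 images, absdiff is the safer choice; plain subtraction only makes sense once you have converted to a signed or floating-point type, as in your second snippet.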

How is Spark reading my image using the image format?

It might be a silly question, but I can't figure out how Spark reads my image using the spark.read.format("image").load(....) call.
After loading my image, which gives me the following:
>>> image_df.select("image.height","image.width","image.nChannels", "image.mode", "image.data").show()
+------+-----+---------+----+--------------------+
|height|width|nChannels|mode| data|
+------+-----+---------+----+--------------------+
| 430| 470| 3| 16|[4D 55 4E 4C 54 4...|
+------+-----+---------+----+--------------------+
I arrive at the conclusion that:
my image is 430x470 pixels,
my image is colored (RGB due to nChannels = 3), which is an OpenCV-compatible type,
my image mode is 16, which corresponds to a particular OpenCV type (CV_8UC3, i.e. 8-bit unsigned, 3 channels).
Does someone know which website/documentation I could browse to learn more about it?
the data in the data column is of type Binary, but:
when I run image_df.select("image.data").take(1) I get an output which seems to be only one array (see below).
>>> image_df.select("image.data").take(1)
# 1/ Here are the last elements of the result
....<<One Eternity Later>>....x92\x89\x8a\x8d\x84\x86\x89\x80\x84\x87~'))]
# 2/ I also got several parts of the result which look like:
.....\x89\x80\x80\x83z|\x7fvz}tpsjqtkrulsvmsvmsvmrulrulrulqtkpsjnqhnqhmpgmpgmpgnqhnqhn
qhnqhnqhnqhnqhnqhmpgmpgmpgmpgmpgmpgmpgmpgnqhnqhnqhnqhnqhnqhnqhnqhknejmdilcilchkbh
kbilcilckneloflofmpgnqhorioripsjsvmsvmtwnvypx{ry|sz}t{~ux{ry|sy|sy|sy|sz}tz}tz}tz}
ty|sy|sy|sy|sz}t{~u|\x7fv|\x7fv}.....
What comes next is linked to the results displayed above. It might be due to my lack of knowledge of OpenCV (or something else). Nonetheless:
1/ I don't understand why, if I have an RGB image, I don't see 3 matrices: the output finishes with .......\x84\x87~'))], whereas I was expecting something more like [(...),(...),(...\x87~')].
2/ Does this part have a special meaning, like a separator between each matrix or something?
To be clearer about what I'm trying to achieve: I want to process images to do pixel comparisons between images. Therefore, I want to know the pixel values for a given position in my image (I assume that if I have an RGB image, I shall have 3 pixel values for a given position).
Example: let's say that I have a webcam pointing at the sky only during the day, and I want to know the values of a pixel at a position corresponding to the top-left sky part. I found out that the concatenation of those values gives the colour Light Blue, which says that the photo was taken on a sunny day. Let's say that the only possibility is that a sunny day takes the colour Light Blue.
Next I want to compare the previous concatenation with another concatenation of pixel values at the exact same position, but from a picture taken the next day. If I find out that they are not equal, then I conclude that the given picture was taken on a cloudy/rainy day; if equal, then a sunny day.
Any help on that would be highly appreciated. I have simplified my example for better understanding, but my goal is pretty much the same. I know that ML models exist to achieve this kind of thing, but I would be happy to try this first. My first goal is to split this column into 3 columns corresponding to each colour code: a red matrix, a green matrix, a blue matrix.
I think I have the logic. I used the keras.preprocessing.image.img_to_array() function to understand how the values are organised (since I have an RGB image, I must have 3 matrices: one for each colour R G B). Posting this in case someone wonders how it works; I might be wrong, but I think I have something:
from keras.preprocessing import image
import numpy as np
from PIL import Image
# Using spark built-in data source
first_img = spark.read.format("image").schema(imageSchema).load(".....")
raw = first_img.select("image.data").take(1)[0][0]
np.shape(raw)
(606300,) # which is 470*430*3
# Using keras function
img = image.load_img(".../path/to/img")
yy = image.img_to_array(img)
>>> np.shape(yy)
(430, 470, 3) # the form is good but I have a problem of order since:
>>> raw[0], raw[1], raw[2]
(77, 85, 78)
>>> yy[0][0]
array([78., 85., 77.], dtype=float32)
# Therefore I used the numpy reshape function directly on raw
# to get 430 matrices, each with 470 lines and 3 columns:
array = np.reshape(raw, (430,470,3))
xx = image.img_to_array(array) # OPTIONAL and not used here
>>> array[0][0] == (raw[0],raw[1],raw[2])
array([ True, True, True])
>>> array[0][1] == (raw[3],raw[4],raw[5])
array([ True, True, True])
>>> array[0][2] == (raw[6],raw[7],raw[8])
array([ True, True, True])
>>> array[0][3] == (raw[9],raw[10],raw[11])
array([ True, True, True])
So if I understood well, Spark reads the image as one big array - (606300,) here - where the elements are stored in order and each corresponds to its respective colour channel (in OpenCV-compatible order, which explains the reversed triplet compared to Keras above).
After doing my little transformations, I obtain 430 matrices of 470 lines x 3 columns. Since my image is 470x430 (width x height), each matrix corresponds to a pixel height position, and inside each one there are 470 lines for the width positions and 3 columns for the colours.
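For the channel split mentioned at the end of the question, a small sketch along the same lines (note: the B-G-R ordering below is an assumption, based on the OpenCV-compatible layout and the reversed triplet above):
import numpy as np

# 'raw' is the flat (606300,) array obtained above from image_df.select("image.data").take(1)[0][0]
pixels = np.reshape(np.asarray(raw, dtype=np.uint8), (430, 470, 3))

blue  = pixels[:, :, 0]   # 430 x 470 matrix, first channel (assumed blue)
green = pixels[:, :, 1]   # 430 x 470 matrix, second channel (green)
red   = pixels[:, :, 2]   # 430 x 470 matrix, third channel (assumed red)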
Hope that helps someone :)!

Torch tensor equivalent function to matlab's "find"?

In a nutshell, I would like to know if there is a tensor command in torch that gives me the indices of elements in a tensor that satisfy a certain criteria.
Here is matlab code that illustrates what I would like to be able to do in torch:
my_mat = magic(3); % returns a 3 by 3 matrix with the numbers 1 through 9
greater_than_fives = find(my_mat > 5); % find indices of all values greater than 5, the " > 5" is a logical elementwise operator that returns a matrix of all 0's and 1's and finally the "find" command picks out the indices with a "1" in them
my_mat(greater_than_fives) = 0; % set all values greater than 5 equal to 0
I understand that I could do this in torch using a for loop, but is there some equivalent to matlab's find command that would allow me to do this more compactly?
x[x:gt(5)] = 0
In general there are x:gt :lt :ge :le :eq
There is also the general :apply function that takes in an anonymous function and applies it to each element.
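If you happen to be on PyTorch rather than Lua Torch (an assumption on my part), the same idea is a one-liner there too, and torch.nonzero is the closest analogue of MATLAB's find:
import torch

x = torch.tensor([[8, 1, 6], [3, 5, 7], [4, 9, 2]])  # same values as magic(3)
idx = torch.nonzero(x > 5)   # indices of all elements greater than 5
x[x > 5] = 0                 # boolean-mask assignment, like my_mat(greater_than_fives) = 0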

OpenCV: Essential Matrix Decomposition

I am trying to extract Rotation matrix and Translation vector from the essential matrix.
SVD svd(E,SVD::MODIFY_A);
Mat svd_u = svd.u;
Mat svd_vt = svd.vt;
Mat svd_w = svd.w;
Matx33d W(0,-1,0,
1,0,0,
0,0,1);
Mat_<double> R = svd_u * Mat(W).t() * svd_vt; //or svd_u * Mat(W) * svd_vt;
Mat_<double> t = svd_u.col(2); //or -svd_u.col(2)
However, when I use R and T (e.g. to obtain rectified images), the result does not seem to be right (black images or some obviously wrong outputs), even though I tried the different combinations of possible R and T.
I suspect E. According to the textbooks, my calculation is right if we have:
E = U*diag(1, 1, 0)*Vt
In my case svd.w, which is supposed to be diag(1, 1, 0) (at least up to scale), is not. Here is an example of my output:
svd.w = [21.47903827647813; 20.28555196246256; 5.167099204708699e-010]
Also, two of the eigenvalues of E should be equal and the third one should be zero. In the same case the result is:
eigenvalues of E = 0.0000 + 0.0000i, 0.3143 +20.8610i, 0.3143 -20.8610i
As you see, two of them are complex conjugates.
Now, the questions are:
Is the decomposition of E and the calculation of R and T done the right way?
If the calculation is right, why are the internal constraints of the essential matrix not satisfied by the results?
If everything about E, R, and T is fine, why are the rectified images obtained from them not correct?
I get E from the fundamental matrix, which I assume to be right. I draw epipolar lines on both the left and right images and they all pass through the related points (for all 16 points used to calculate the fundamental matrix).
Any help would be appreciated.
Thanks!
I see two issues.
First, discounting the negligible value of the third diagonal term, your E is about 6% off the ideal one: err_percent = (21.48 - 20.29) / 20.29 * 100. That sounds small, but translated into pixel error it may be an altogether larger amount.
So I'd start by replacing E with the ideal one after SVD decomposition: Er = U * diag(1,1,0) * Vt.
Second, the textbook decomposition admits 4 solutions, only one of which is physically plausible (i.e. with the 3D points in front of the camera). You may be hitting one of the non-physical ones. See http://en.wikipedia.org/wiki/Essential_matrix#Determining_R_and_t_from_E .
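If it is useful, here is a rough Python/OpenCV sketch of both suggestions (replace E with its ideal projection, then let recoverPose pick the physically valid (R, t) via the cheirality check). E, pts1, pts2 and K are assumed to be your essential matrix, matched point sets and camera matrix:
import numpy as np
import cv2

# Project E onto the essential-matrix manifold: force the singular values to (1, 1, 0).
U, s, Vt = np.linalg.svd(E)
Er = U @ np.diag([1.0, 1.0, 0.0]) @ Vt

# recoverPose tests the four (R, t) candidates and keeps the one that puts
# the triangulated points in front of both cameras.
retval, R, t, mask = cv2.recoverPose(Er, pts1, pts2, K)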
