How to detect contiguous images - opencv

I am trying to detect when two images are adjacent pieces of the same original image, i.e. they match along a shared edge but do not overlap.
That is, suppose we have the Lenna image:
Someone unknown to me has split it vertically in two, and I must determine whether the two pieces are connected or not (they could be independent images, or one could be a piece of the other).
A:
B:
The positive part is that I know the order of the pieces, the negative part is that there may be other images and I must know which of them fit or not to join them.
My first idea was to check whether the MAE between the last row of A and the first row of B is low.
import numpy as np

def mae(a, b):
    # try small horizontal shifts to tolerate slight misalignment
    min_mae = 256
    for i in range(-5, 5):
        a_s = np.roll(a, i, axis=1)
        # cast to float so uint8 subtraction doesn't wrap around
        value_mae = np.mean(np.abs(a_s.astype(np.float32) - b.astype(np.float32)))
        min_mae = min(min_mae, value_mae)
    return min_mae

# compare the last row of A with the first row of B
if mae(im_a[-1:, ...], im_b[:1, ...]) < threshold:
    # join images a and b
The problem is that this metric is not very robust.
I have tried the same with the horizontal derivative, as well as after applying various smoothing filters, but I end up in the same situation.
Is there a way to solve this problem?

Your method seems like a decent one. Even on visual inspection it looks reasonable:
Top (Bottom row expanded)
Bottom (Top row expanded)
Diff of the images:
It might be even clearer if you also check neighboring columns, but this already looks like the images are similar enough.
Code
import cv2
import numpy as np

# load images
top = cv2.imread("top.png")
bottom = cv2.imread("bottom.png")

# convert to grayscale
tgray = cv2.cvtColor(top, cv2.COLOR_BGR2GRAY)
bgray = cv2.cvtColor(bottom, cv2.COLOR_BGR2GRAY)

# expand the boundary rows: broadcast the last row of the top image and the
# first row of the bottom image into 100-pixel-tall strips for inspection
trow = np.zeros_like(tgray)
brow = np.zeros_like(bgray)
trow[:] = tgray[-1, :]
brow[:] = bgray[0, :]
trow = trow[:100, :]
brow = brow[:100, :]

# difference in both directions; with uint8 wrap-around, the elementwise
# minimum recovers the absolute difference for gaps up to 128
ldiff = trow - brow
rdiff = brow - trow
diff = np.minimum(ldiff, rdiff)

# show
cv2.imshow("top", trow)
cv2.imshow("bottom", brow)
cv2.imshow("diff", diff)
cv2.waitKey(0)

# save
cv2.imwrite("top_out.png", trow)
cv2.imwrite("bottom_out.png", brow)
cv2.imwrite("diff_out.png", diff)

Related

How to rotate a non-squared image in frequency domain

I want to rotate an image in the frequency domain. Inspired by the answers in Image rotation and scaling the frequency domain? I managed to rotate square images. (See the following Python script using OpenCV.)
import cv2
import numpy as np
from numpy.fft import fftshift, ifftshift

angle = 30  # placeholder rotation angle; getRotationMatrix2D expects degrees

# read as a single channel so the Hanning window matches the image shape
M = cv2.imread("lenna.png", cv2.IMREAD_GRAYSCALE)
M = np.float32(M)
hanning = cv2.createHanningWindow((M.shape[1], M.shape[0]), cv2.CV_32F)
M = hanning * M
sM = fftshift(M)
rotation_center = (M.shape[1] / 2, M.shape[0] / 2)
rot_matrix = cv2.getRotationMatrix2D(rotation_center, angle, 1.0)
FsM = fftshift(cv2.dft(sM, flags=cv2.DFT_COMPLEX_OUTPUT))
rFsM = cv2.warpAffine(FsM, rot_matrix, (FsM.shape[1], FsM.shape[0]), flags=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT)
IrFsM = ifftshift(cv2.idft(ifftshift(rFsM), flags=cv2.DFT_REAL_OUTPUT))
This works fine with square images. (Better results could be achieved by padding the image.)
However, when using only a non-square portion of the image, the rotation in the frequency domain shows a kind of shearing effect.
Any idea how to achieve this? Obviously I could pad the image to make it square, but the final purpose of all this is to rotate FFTs as fast as possible inside an iterative image-registration algorithm, and padding would slightly slow it down.
Following the suggestion of @CrisLuengo, I found the affine transform needed to avoid padding the image. Obviously it depends on the image size and the application, but in my case avoiding the padding is very interesting.
The modified script now looks like:
# rot_matrix = cv2.getRotationMatrix2D(rotation_center, angle, 1.0)
kx = 1.0
ky = 1.0
if M.shape[0] > M.shape[1]:
    kx = float(M.shape[0]) / M.shape[1]
else:
    ky = float(M.shape[1]) / M.shape[0]

# note: np.cos/np.sin expect radians here, unlike getRotationMatrix2D,
# which takes degrees; convert with np.deg2rad if needed
affine_transform = np.zeros([2, 3])
affine_transform[0, 0] = np.cos(angle)
affine_transform[0, 1] = np.sin(angle) * ky / kx
affine_transform[0, 2] = (1 - np.cos(angle)) * rotation_center[0] - ky / kx * np.sin(angle) * rotation_center[1]
affine_transform[1, 0] = -np.sin(angle) * kx / ky
affine_transform[1, 1] = np.cos(angle)
affine_transform[1, 2] = kx / ky * np.sin(angle) * rotation_center[0] + (1 - np.cos(angle)) * rotation_center[1]

FsM = fftshift(cv2.dft(sM, flags=cv2.DFT_COMPLEX_OUTPUT))
rFsM = cv2.warpAffine(FsM, affine_transform, (FsM.shape[1], FsM.shape[0]), flags=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT)
IrFsM = ifftshift(cv2.idft(ifftshift(rFsM), flags=cv2.DFT_REAL_OUTPUT))
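For reference, my reading of the code above is that the linear part of this transform is a rotation conjugated by the anisotropic scaling between the non-square grid and a square one, with kx and ky as defined there:

$$A = S^{-1} R(\theta) S, \quad S = \mathrm{diag}(k_x, k_y), \quad \text{i.e.}\quad A = \begin{pmatrix} \cos\theta & \frac{k_y}{k_x}\sin\theta \\ -\frac{k_x}{k_y}\sin\theta & \cos\theta \end{pmatrix}$$

and the translation column is $(I - A)\,c$ with $c$ = rotation_center, which is the standard way to rotate about a chosen center. This is what undoes the shearing effect on non-square images.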

Is it possible to vectorize this calculation in numpy?

Can the following expression of numpy arrays be vectorized for speed-up?
k_lin1x = [2*k_lin[i]*k_lin[i+1]/(k_lin[i]+k_lin[i+1]) for i in range(len(k_lin)-1)]
import numpy as np

x1 = k_lin
x2 = np.roll(k_lin, -1)  # shift everything one position left, so x2[i] == x1[i+1]
s = len(k_lin) - 1
result1 = x2[:s] + x1[:s]  # your divisor: everything except the wrapped last element
result2 = x2[:s] * x1[:s]  # your numerator
# in one line
result = 2 * x2[:s] * x1[:s] / (x2[:s] + x1[:s])
Your last element won't be taken into the calculation; the shift is done with np.roll, so that x2[0] == x1[1], x2[1] == x1[2], and so on.
This is just a demo of the approach; look up numpy.roll for the details. Also, instead of slicing x2 with s you could simply drop the last element, since it's useless for the calculation.
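Equivalently, plain slicing avoids the roll altogether. A small self-contained check against the original list comprehension (the sample data here is made up):

import numpy as np

k_lin = np.array([1.0, 2.0, 4.0, 8.0])  # made-up sample data
# pairwise harmonic-mean-style combination of neighboring elements
result = 2 * k_lin[:-1] * k_lin[1:] / (k_lin[:-1] + k_lin[1:])
# matches the original list comprehension
expected = [2 * k_lin[i] * k_lin[i + 1] / (k_lin[i] + k_lin[i + 1])
            for i in range(len(k_lin) - 1)]
assert np.allclose(result, expected)
print(result)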

How to estimate? "simple" Nonlinear Regression + Parameter Constraints + AR residuals

I am new to this site, so please bear with me. I want to estimate the nonlinear model shown in the link: https://i.stack.imgur.com/cNpWt.png, imposing the constraints a > 0, b > 0 and gamma1 in [0, 1].
In the nonlinear model, the dependent variable is X(t), the regressors are R(t) and F(t), and ξ(t) is the error term.
An example of the dataset can be seen here: https://i.stack.imgur.com/2Vf0j.png (68 rows of time series).
To estimate the nonlinear regression I use the nls() function with no problem as shown below:
NLM1 = nls(Xt ~ (a*Rt - b*Ft)/(1 - gamma1*Rt), start = list(a = 10, b = 10, gamma1 = 0.5), algorithm = "port", lower = c(0, 0, 0), upper = c(Inf, Inf, 1), data = temp2)
I want to estimate NLM1 while also allowing for an AR(1) structure in the residuals.
Basically I want the same step as going from lm() to gls(). My problem is that in the gnls() function I don't know how to put constraints on the model parameters a, b and gamma1, so it estimates wrong values for them.
nls() has options for lower and upper bounds; I can't do the same in gnls().
In gnls() I need to add the constraints, something like nls()'s lower = c(0, 0, 0), upper = c(Inf, Inf, 1):
NLM1_AR1 = gnls(model = Xt ~ (a*Rt - b*Ft)/(1 - gamma1*Rt), data = temp2, start = list(a = 13, b = 10, gamma1 = 0.5), correlation = corARMA(p = 1))
Does anyone know how to do this?
Thank you.

Zero padding / median filtering

I'm trying to implement median filtering using ImageJ.
I am having trouble with the zero padding, as it adds extra zeros to the bottom and far left of the picture.
This is what I have done so far; I would appreciate any help:
Dialog.create("9x9 median filtering");
Dialog.addMessage("9x9 median filtering");
Dialog.show();
setBatchMode(true);
median_filter_9();
setBatchMode("exit and display");

// Produce the 9x9 median image
function median_filter_9()
{
    width = getWidth();
    height = getHeight();
    // if you want to apply this median filter to 16bit
    depth = bitDepth();
    nBin = pow(2, depth);
    // nBin holds the max gray intensity value
    filteHisto = newArray(nBin);
    //filteHisto = newArray(255);
    fiveBYFive = newArray(81);
    // this is what I used for the middle position of the array to get the median
    middlePos = round(81/2);
    // -3, -3 will get you position 0,0 of a 9x9 matrix if you start in the middle
    for (j = -2; j < width - 2; j++) {
        for (i = -2; i < height - 2; i++) {
            z = 0;
            for (r = 0; r < 9; r++) {
                for (c = 0; c < 9; c++) {
                    // Extend outside image boundaries using zero padding.
                    // error here: adds extra to bottom and far left of picture
                    if (j + r < 0 || j + r >= width || i + c < 0 || i + c >= height) {
                        fiveBYFive[z] = 0;
                        z++;
                    } else {
                        v = getPixel(j + r, i + c);
                        fiveBYFive[z] = v;
                        z++;
                    }
                }
            }
            // sort the array to find the median
            Array.sort(fiveBYFive);
            median = fiveBYFive[middlePos];
            setPixel(j, i, median);
        }
    }
    updateDisplay();
}
One problem you're seeing at the edges of your image is that you pad your 9x9 window with zeroes, but you still take the median as the middle of the 81-item window.
So, for example, in the first column of the image you zero-pad at least 36 elements (more at the top and bottom), which means you only need to find 4 or 5 more zero pixels in the image to make the median element zero.
The easiest fix is to adjust your median element's index (initialised to 81/2 on each iteration) upward according to how many zeroes you added, or simply count how many real (non-padded) pixels you used and take the element midway through that range in your sorted array (taking account of the sort order, since the padded zeroes sort to the front).
In this way, you take the median of the actual pixels you found and ignore the padded zeroes.
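A minimal sketch of that idea in Python/NumPy rather than ImageJ macro code (the image and radius are placeholders): take the median over only the in-image pixels of each window, so no padded zeroes enter the calculation at all.

import numpy as np

def median_filter_valid(img, radius=4):
    # 9x9 (radius 4) median filter that clips the window to the image
    # bounds instead of zero-padding
    h, w = img.shape
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            window = img[max(y - radius, 0):min(y + radius + 1, h),
                         max(x - radius, 0):min(x + radius + 1, w)]
            out[y, x] = np.median(window)  # cast back to the image dtype
    return out

# example on made-up data
img = np.random.randint(0, 256, (32, 32)).astype(np.uint8)
filtered = median_filter_valid(img)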
Probably you missed changing your code from the original 5x5 to 9x9, because the start/end indices are wrong in any case and should be
for(j=-4;j<width;j++){
for(i=-4;i<height;i++){
The other possible source of confusion later is this line, where it looks like you've swapped width and height:
if(j+r<0||j+r>=width||i+c<0||i+c>=height)
If j is the column index and i is the row index, it should be
if(j+c<0||j+c>=width||i+r<0||i+r>=height)
Although for a square window this doesn't actually make any difference in practice.

Improving detection of the orange colour in MATLAB

One of my tasks is to detect certain colours in ant colonies across 16000 images. I've already done this well for blue, pink and green, but now I need to improve the detection of the orange colour. It's a bit tricky for me, since I am new to the field of image processing. Below are some examples of what I have done and where the problem is.
Raw image:http://img705.imageshack.us/img705/2257/img4263u.jpg
Detection of the orange colour:http://img72.imageshack.us/img72/8197/orangedetection.jpg
Detection of the green colour:http://img585.imageshack.us/img585/1347/greendetection.jpg
I used selectPixelsAndGetHSV.m to get the HSV value, and after that I used colorDetectHSV.m to detect pixels with the same HSV value.
Could you give me any suggestion on how to improve the detection of the orange colour without also detecting the whole ants and the brood around them?
Thank you in advance!
function [K] = colorDetectHSV(RGB, hsvVal, tol)

HSV = rgb2hsv(RGB);

% find the difference between required and real H value:
diffH = abs(HSV(:,:,1) - hsvVal(1));

[M,N,t] = size(RGB);
I1 = zeros(M,N); I2 = zeros(M,N); I3 = zeros(M,N);

T1 = tol(1);
I1( find(diffH < T1) ) = 1;

if (length(tol) > 1)
    % find the difference between required and real S value:
    diffS = abs(HSV(:,:,2) - hsvVal(2));
    T2 = tol(2);
    I2( find(diffS < T2) ) = 1;
    if (length(tol) > 2)
        % find the difference between required and real V value:
        difV = HSV(:,:,3) - hsvVal(3);
        T3 = tol(3);
        I3( find(difV < T3) ) = 1;  % note: the original tested diffS here, a typo
        I = I1.*I2.*I3;
    else
        I = I1.*I2;
    end
else
    I = I1;
end

K = ~I;

subplot(2,1,1),
figure, imshow(RGB); title('Original Image');
subplot(2,1,2),
figure, imshow(~I,[]); title('Detected Areas');
You don't show what you are using as target HSV values; these may be the problem.
In the example you provided, a lot of the wrongly selected areas have a hue ranging from 30 to 40. These areas correspond to the ants' body parts. The orange parts you want to select actually have a hue ranging from approximately 7 to 15, so it shouldn't be difficult to separate them from the ants.
Try adjusting your target values (especially hue) and you should get better results. You can probably even disregard brightness and saturation; hue seems to be sufficient in this case.
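To illustrate the hue-band idea, here is a Python/OpenCV sketch rather than MATLAB. Note that OpenCV uses H in [0, 179] while MATLAB's rgb2hsv uses [0, 1], so the band limits below are assumptions to tune on your own images, as is the filename:

import cv2
import numpy as np

img = cv2.imread("ants.jpg")  # placeholder filename
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# hypothetical orange band: tune these limits; the S and V lower bounds
# just reject very dull or dark pixels
lower = np.array([7, 80, 80])
upper = np.array([15, 255, 255])
mask = cv2.inRange(hsv, lower, upper)

cv2.imshow("orange mask", mask)
cv2.waitKey(0)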
