How to stitch two images that have different absolute coordinates? - opencv

for example, stitch
first image
1 1 1
1 1 1
1 1 1
second image
2 2 2 2
2 2 2 2
2 2 2 2
and what I want is
0 0 0 2 2 2 2
1 1 1 2 2 2 2
1 1 1 2 2 2 2
1 1 1 0 0 0 0
or
1 1 1 0 0 0 0
1 1 1 2 2 2 2
1 1 1 2 2 2 2
0 0 0 2 2 2 2
In Python, that is easy to do, something like:
temp_panorama = np.zeros((img1_height + abs(img2_offset), img1_width + img2_width))
temp_panorama[img2_offset : img2_offset + img1_height, 0 : img1_width] = img1
temp_panorama[0 : img2_height, img1_width :] = img2
where img2_offset is the length of the part of image 2 that sticks out above image 1.
but how can I implement the same thing with OpenCV in C++?

Use subimages:
// ROIs where the first and second images will be placed
cv::Rect firstROI = cv::Rect(x1, y1, first.cols, first.rows);
cv::Rect secondROI = cv::Rect(x2, y2, second.cols, second.rows);
// create an image big enough to hold the result
cv::Mat canvas = cv::Mat::zeros(cv::Size(std::max(x1+first.cols, x2+second.cols), std::max(y1+first.rows, y2+second.rows)), first.type());
// use subimages:
first.copyTo(canvas(firstROI));
second.copyTo(canvas(secondROI));
in your example:
x1 = 0,
y1 = 1,
x2 = 3,
y2 = 0
first.cols == 3
first.rows == 3
second.cols == 4
second.rows == 3
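For reference, here is a minimal NumPy sketch of the same canvas/ROI placement with those example values plugged in (the image contents are just the toy values from the question):
import numpy as np
img1 = np.ones((3, 3), dtype=np.uint8)     # the "1" image
img2 = np.full((3, 4), 2, dtype=np.uint8)  # the "2" image
x1, y1, x2, y2 = 0, 1, 3, 0
# canvas big enough to hold both images at their offsets
canvas = np.zeros((max(y1 + img1.shape[0], y2 + img2.shape[0]),
                   max(x1 + img1.shape[1], x2 + img2.shape[1])), dtype=img1.dtype)
canvas[y1:y1 + img1.shape[0], x1:x1 + img1.shape[1]] = img1
canvas[y2:y2 + img2.shape[0], x2:x2 + img2.shape[1]] = img2
print(canvas)
# [[0 0 0 2 2 2 2]
#  [1 1 1 2 2 2 2]
#  [1 1 1 2 2 2 2]
#  [1 1 1 0 0 0 0]]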

Related

How can I label connected components in APL?

I'm trying to do the LeetCode puzzle https://leetcode.com/problems/max-area-of-island/, which requires labelling connected (by sides, not corners) components.
How can I transform something like
0 0 1 0 0
0 0 0 0 0
0 1 1 0 1
0 1 0 0 1
0 1 0 0 1
into
0 0 1 0 0
0 0 0 0 0
0 2 2 0 3
0 2 0 0 3
0 2 0 0 3
I've played with the stencil ⌺ operator and also tried using scan operators, but I'm still not quite there. Can somebody help?
We can start off by enumerating the ones. We do this by applying the function ⍸ (where; but since all the selected elements are 1s, it is equivalent to 1,2,3,…) @ at the subset masked by ⊢ the bits themselves, i.e. ⍸@⊢:
⍸@⊢m
0 0 1 0 0
0 0 0 0 0
0 2 3 0 4
0 5 0 0 6
0 7 0 0 8
Now we need to flood-fill the lowest number in each component. We do this with repeated application until the fix-point ⍣≡ of processing Moore neighbourhoods ⌺3 3. To get the von Neumann neighbours, we reshape the first 8 of the 9 elements in the Moore neighbourhood into a 4-row 2-column matrix with 4 2⍴ and use ⊢/ to select the right column. We remove any 0s with 0~⍨, prepend , the original value ⍵[2;2] (even if it is 0), and have ⌊/ select the smallest value:
{⌊/⍵[2;2],0~⍨⊢/4 2⍴⍵}⌺3 3⍣≡⍸@⊢m
0 0 1 0 0
0 0 0 0 0
0 2 2 0 4
0 2 0 0 4
0 2 0 0 4
We map the values to indices by finding their ⊢ indices ⍳⍨ in the unique elements of ∘∪ 0 followed by , the ravelled matrix ,:
(⊢⍳⍨∘∪0,,){⌊/⍵[2;2],0~⍨⊢/4 2⍴⍵}⌺3 3⍣≡⍸@⊢m
1 1 2 1 1
1 1 1 1 1
1 3 3 1 4
1 3 1 1 4
1 3 1 1 4
And decrement, which adjusts the labels to begin at zero:
¯1+(⊢⍳⍨∘∪0,,){⌊/⍵[2;2],0~⍨⊢/4 2⍴⍵}⌺3 3⍣≡⍸@⊢m
0 0 1 0 0
0 0 0 0 0
0 2 2 0 3
0 2 0 0 3
0 2 0 0 3
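For readers who don't know APL, here is a rough NumPy sketch of the same idea (enumerate the ones, repeatedly propagate the minimum nonzero von Neumann neighbour until nothing changes, then remap to consecutive labels); the helper name label_components is made up for illustration:
import numpy as np

def label_components(m):
    # enumerate the 1s: each gets a provisional label 1, 2, 3, ... in row-major order
    lab = np.zeros_like(m, dtype=int)
    lab[m == 1] = np.arange(1, int((m == 1).sum()) + 1)
    rows, cols = lab.shape
    while True:  # repeat until fix-point
        new = lab.copy()
        for r in range(rows):
            for c in range(cols):
                if lab[r, c] == 0:
                    continue
                neigh = [lab[r + dr, c + dc]
                         for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                         if 0 <= r + dr < rows and 0 <= c + dc < cols]
                new[r, c] = min([v for v in neigh if v != 0] + [lab[r, c]])
        if np.array_equal(new, lab):
            break
        lab = new
    # map the surviving labels (plus 0) to consecutive indices starting at 0
    return np.searchsorted(np.unique(np.concatenate(([0], lab.ravel()))), lab)

m = np.array([[0, 0, 1, 0, 0],
              [0, 0, 0, 0, 0],
              [0, 1, 1, 0, 1],
              [0, 1, 0, 0, 1],
              [0, 1, 0, 0, 1]])
print(label_components(m))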

Transform string variable into 0-1 columns

As a complete beginner in SPSS, I would like to ask for your help with a transformation from table A into table B. I have to recode the values of the "brand" variable into columns and make 0-1 variables.
#table A#
nr brand
1 GREEN CARE PROFESSIONAL
1 GREEN CARE PROFESSIONAL
1 GREEN CARE PROFESSIONAL
2 HENKEL
3 HENKEL
3 HENKEL
3 HENKEL
3 VIZIR
4 BIEDRONKA
4 BOBINI
4 BOBINI
4 BOBINI
4 BOBINI
4 BOBINI
4 HENKEL
5 VIZIR
6 HENKEL
#table B#
nr GREEN HENKEL VIZIR BIEDR BOBINI
1 1 0 0 0 0
1 1 0 0 0 0
1 1 0 0 0 0
2 0 1 0 0 0
3 0 1 0 0 0
3 0 1 0 0 0
3 0 1 0 0 0
3 0 0 1 0 0
4 0 0 0 1 0
4 0 0 0 0 1
4 0 0 0 0 1
4 0 0 0 0 1
4 0 0 0 0 1
4 0 0 0 0 1
4 0 1 0 0 0
5 0 0 1 0 0
6 0 1 0 0 0
I can do it in this particular case in this simple way:
compute HENKEL=0.
...
do if BRAND='GREEN_CARE' .
compute GREEN_CARE=1.
else if ....
but the loop has to be usable with another variable and a different number of values, etc. I was trying to make it work all day and gave up.
Do you have any idea how to do it in an easy way?
Thanks!
The following syntax does the job on the sample data you provided.
First, let's recreate the sample data to demonstrate on:
Data list list/nr (f1) brand (a30).
begin data
1 "GREEN CARE PROFESSIONAL"
1 "GREEN CARE PROFESSIONAL"
1 "GREEN CARE PROFESSIONAL"
2 "HENKEL"
3 "HENKEL"
3 "HENKEL"
3 "HENKEL"
3 "VIZIR"
4 "BIEDRONKA"
4 "BOBINI"
4 "BOBINI"
4 "BOBINI"
4 "BOBINI"
4 "BOBINI"
4 "HENKEL"
5 "VIZIR"
6 "HENKEL"
end data.
dataset name originalDataset.
Now for the restructure.
sort cases by nr brand.
* creating an index to enumerate cases for each combination of `nr` and `brand`.
* This is necessary for the `casestovars` command to work later.
compute ind=1.
if $casenum>1 and lag(nr)=nr and lag(brand)=brand ind=lag(ind)+1.
exe.
* variable names can't have spaces in them, so changing the category names accordingly.
compute brand=replace(rtrim(brand)," ","_").
sort cases by nr ind brand.
compute exist=1.
casestovars /id=nr ind /index=brand /autofix=no.
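For comparison only (outside SPSS), the same 0-1 layout as table B can be sketched in Python with pandas; the DataFrame below is a hypothetical recreation of table A:
import pandas as pd

# hypothetical recreation of table A
df = pd.DataFrame({
    "nr":    [1, 1, 1, 2, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 4, 5, 6],
    "brand": ["GREEN CARE PROFESSIONAL"] * 3 + ["HENKEL"] * 4 + ["VIZIR"]
             + ["BIEDRONKA"] + ["BOBINI"] * 5 + ["HENKEL", "VIZIR", "HENKEL"],
})

# one row per original case, one 0/1 column per brand, keeping nr
dummies = pd.get_dummies(df["brand"]).astype(int)
table_b = pd.concat([df[["nr"]], dummies], axis=1)
print(table_b)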

Torch: Concatenating tensors of different dimensions

I have an x_at_i = torch.Tensor(1, i) that grows at every iteration, where i = 0 to n. I would like to concatenate all the tensors of different sizes into a matrix and fill the remaining cells with zeroes. What is the most idiomatic way to do this? For example:
x_at_1 = 1
x_at_2 = 1 2
x_at_3 = 1 2 3
x_at_4 = 1 2 3 4
X = torch.cat(x_at_1, x_at_2, x_at_3, x_at_4)
X = [ 1 0 0 0
1 2 0 0
1 2 3 0
1 2 3 4 ]
If you know n, and assuming you have easy access to your x_at_i at each iteration, I would try something like:
X = torch.Tensor(n, n):zero()
for i = 1, n do
X[i]:narrow(1, 1, i):copy(x_at[i])
end
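In current PyTorch (rather than the Lua Torch used above), a minimal sketch of the same zero-padding approach, assuming the x_at_i are kept in a Python list, could look like:
import torch

n = 4
x_at = [torch.arange(1, i + 1, dtype=torch.float32) for i in range(1, n + 1)]

X = torch.zeros(n, n)          # pre-allocate the padded result
for i, row in enumerate(x_at):
    X[i, :row.numel()] = row   # copy each growing row into place

print(X)
# tensor([[1., 0., 0., 0.],
#         [1., 2., 0., 0.],
#         [1., 2., 3., 0.],
#         [1., 2., 3., 4.]])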

OneVsRestClassifier(svm.SVC()).predict() gives continuous values

I am trying to use y_scores = OneVsRestClassifier(svm.SVC()).predict() on datasets
like iris and titanic. The trouble is that I am getting y_scores as continuous values. For example, for the iris dataset I am getting:
[[ -3.70047231 -0.74209097 2.29720159]
[ -1.93190155 0.69106231 -2.24974856]
.....
I am using the OneVsRestClassifier for other classifier models like knn, random forest, and naive bayes, and they are giving appropriate results in the form of
[[ 0 1 0]
[ 1 0 1]...
etc. on the iris dataset. Please help.
Well this is simply not true.
>>> from sklearn.multiclass import OneVsRestClassifier
>>> from sklearn.svm import SVC
>>> from sklearn.datasets import load_iris
>>> iris = load_iris()
>>> clf = OneVsRestClassifier(SVC())
>>> clf.fit(iris['data'], iris['target'])
OneVsRestClassifier(estimator=SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0, degree=3, gamma=0.0,
kernel='rbf', max_iter=-1, probability=False, random_state=None,
shrinking=True, tol=0.001, verbose=False),
n_jobs=1)
>>> print clf.predict(iris['data'])
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 2 1 1 1 1 1 2 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2
2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
2 2]
Maybe you called decision_function instead (which would match your output dimensions, as predict is supposed to return a vector, not a matrix). In that case the SVM returns signed distances to each hyperplane, which is its decision function from a mathematical perspective.
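A quick way to see the difference with a current scikit-learn (a sketch; the exact decision values will vary):
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC
from sklearn.datasets import load_iris

iris = load_iris()
clf = OneVsRestClassifier(SVC()).fit(iris['data'], iris['target'])

print(clf.predict(iris['data']).shape)            # (150,)   -> discrete class labels
print(clf.decision_function(iris['data']).shape)  # (150, 3) -> signed distances per class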

Making a big matrix using sub-matrices in OpenCV

I would like to make an 11 x 11 matrix using 5 x 5 matrices as follows.
Is there any way better than this?
int csz = 5;
Mat zz = Mat::zeros(csz, csz, CV_32FC1);
Mat oo = Mat::ones(csz, csz, CV_32FC1);
Mat hh = 0.5 * Mat::ones((csz*2 + 1), 1, CV_32FC1);//column matrix
cv::Mat chkpat1((csz * 2 + 1), (csz * 2 + 1), CV_32FC1);
chkpat1(Range(0, 5),Range(0, 5)) = zz;//first quadrant
chkpat1(Range(0, 5),Range(6, 11)) = oo;//second quadrant
chkpat1(Range(5, 11),Range(0, 5)) = oo;//third quadrant
chkpat1(Range(6, 11),Range(6, 11)) = oo;//fourth quadrant
chkpat1(Range(0, 11),Range(5, 6)) = hh;//middle column
chkpat1(Range(5, 6),Range(0, 11)) = hh.t();//middle row
This is shorter, so in that sense it is better:
cv::Mat chkpat1(11, 11, CV_32FC1, cv::Scalar(1.0f));
chkpat1(cv::Rect(0, 0, 5, 5)) = cv::Scalar(0.0f);
chkpat1(cv::Rect(0, 5, 11, 1)) = cv::Scalar(0.5f);
chkpat1(cv::Rect(5, 0, 1, 11)) = cv::Scalar(0.5f);
This produces (which is what I think you wanted):
0 0 0 0 0 0.5 1 1 1 1 1
0 0 0 0 0 0.5 1 1 1 1 1
0 0 0 0 0 0.5 1 1 1 1 1
0 0 0 0 0 0.5 1 1 1 1 1
0 0 0 0 0 0.5 1 1 1 1 1
0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5
1 1 1 1 1 0.5 1 1 1 1 1
1 1 1 1 1 0.5 1 1 1 1 1
1 1 1 1 1 0.5 1 1 1 1 1
1 1 1 1 1 0.5 1 1 1 1 1
1 1 1 1 1 0.5 1 1 1 1 1
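If you happen to be working from Python, a minimal NumPy sketch of the same pattern (just for illustration) would be:
import numpy as np

chkpat1 = np.ones((11, 11), dtype=np.float32)
chkpat1[0:5, 0:5] = 0.0   # top-left 5 x 5 block of zeros
chkpat1[5, :] = 0.5       # middle row
chkpat1[:, 5] = 0.5       # middle column
print(chkpat1)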
