Making a big matrix using sub-matrices in OpenCV

I would like to make an 11 x 11 matrix out of 5 x 5 matrices as follows.
Is there any way better than this?
int csz = 5;
Mat zz = Mat::zeros(csz, csz, CV_32FC1);
Mat oo = Mat::ones(csz, csz, CV_32FC1);
Mat hh = 0.5 * Mat::ones((csz*2 + 1), 1, CV_32FC1);//column matrix
cv::Mat chkpat1((csz * 2 + 1), (csz * 2 + 1), CV_32FC1);
zz.copyTo(chkpat1(Range(0, 5), Range(0, 5)));//first quadrant
oo.copyTo(chkpat1(Range(0, 5), Range(6, 11)));//second quadrant
oo.copyTo(chkpat1(Range(6, 11), Range(0, 5)));//third quadrant
oo.copyTo(chkpat1(Range(6, 11), Range(6, 11)));//fourth quadrant
hh.copyTo(chkpat1(Range(0, 11), Range(5, 6)));//middle column
Mat(hh.t()).copyTo(chkpat1(Range(5, 6), Range(0, 11)));//middle row

This is shorter, so in that sense it is better:
cv::Mat chkpat1(11, 11, CV_32FC1, cv::Scalar(1.0f));
chkpat1(cv::Rect(0, 0, 5, 5)) = cv::Scalar(0.0f);
chkpat1(cv::Rect(0, 5, 11, 1)) = cv::Scalar(0.5f);
chkpat1(cv::Rect(5, 0, 1, 11)) = cv::Scalar(0.5f);
This produces (which is what I think you wanted):
0 0 0 0 0 0.5 1 1 1 1 1
0 0 0 0 0 0.5 1 1 1 1 1
0 0 0 0 0 0.5 1 1 1 1 1
0 0 0 0 0 0.5 1 1 1 1 1
0 0 0 0 0 0.5 1 1 1 1 1
0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5
1 1 1 1 1 0.5 1 1 1 1 1
1 1 1 1 1 0.5 1 1 1 1 1
1 1 1 1 1 0.5 1 1 1 1 1
1 1 1 1 1 0.5 1 1 1 1 1
1 1 1 1 1 0.5 1 1 1 1 1
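For comparison (this is not part of the original answer), the same pattern is just as compact with NumPy slicing if you happen to be working from Python with cv2; a minimal sketch:
import numpy as np

chkpat1 = np.ones((11, 11), dtype=np.float32)  # start from all ones
chkpat1[:5, :5] = 0.0   # top-left 5 x 5 block of zeros
chkpat1[5, :] = 0.5     # middle row
chkpat1[:, 5] = 0.5     # middle column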

Related

Query data based on condition in InfluxDB

I'm at a beginner level with InfluxDB, and I strongly believe this query is a little hard for me. So before starting my question, I'm first showing my data:
Measurement: Power
time id level wp1 wp2 wp3
---- -- ----- --- --- ---
1674012918143941800 1 0.2 0 0 0
1674012946442606600 1 0.3 0 0 0
1674012956373793000 1 0.4 0 0 0
1674012960513270300 1 0.5 0 0 0
1674012964636921000 1 0.6 0 0 0
1674012969387566800 1 0.7 0 0 0
1674012975236928500 1 0.8 0 0 0
1674012979176158400 1 0.9 0 0 0
1674012985735524600 1 1 0 0 0
1674013002642438500 1 1.1 0.9 0 0
1674013014405825000 1 1.2 17.3 0 0
1674013024368173700 1 1.2 17.2 0 0
1674013030284261700 1 1.1 17.5 0 0
1674013035202912500 1 1 17.5 0 0
1674013048221657300 1 0.9 17.1 0 0
1674013055478454000 1 0.8 17.7 0 0
1674013063071284300 1 0.7 17.9 0 0
1674013071595783200 1 0.6 17.5 0 0
1674013121251525800 1 0.5 17.4 0 0
1674013149886737100 1 0.4 0.4 0 0
1674013162976838600 1 0.5 0 0 0
1674013171731741300 1 0.6 0 0 0
1674013175329837000 1 0.7 0 0 0
1674013179396594100 1 0.8 0 0 0
1674013185464567200 1 0.9 0 0 0
1674013202283232100 1 1 0 6.2 0
1674013215395312900 1 1.1 0 22.3 0
1674013222458146000 1 1.1 0 22.1 0
1674013242581765500 1 1 0 22.6 0
1674013249941544200 1 0.9 0 22.6 0
1674013257400012100 1 0.8 0 22.3 0
1674013263325355200 1 0.7 0 22.5 0
1674013269931344900 1 0.6 0 22.2 0
1674013287223505700 1 0.5 0 22 0
1674013296618633300 1 0.4 0 0.2 0
1674013306827048400 1 0.4 0 0 0
If you look at row 1674013002642438500, wp1 is 0.9, which is greater than 0.5. The wp1 value stays above 0.5 over time and falls back down at row 1674013149886737100 with the value 0.4, which is less than 0.5. If we take the middle row between those two times we get 1674013048221657300 1 0.9 17.1 0 0. So we can call this start, mid and stop a cycle.
We can get multiple cycles like this in a day, and I want to fetch those rows.
So if I search cycles for wp1 then the result data will be:
time id level wp1 wp2 wp3
---- -- ----- --- --- ---
1674013002642438500 1 1.1 0.9 0 0
1674013048221657300 1 0.9 17.1 0 0
1674013149886737100 1 0.4 0.4 0 0
Likewise, if we search based on wp2, then the retrieved data should be:
time id level wp1 wp2 wp3
---- -- ----- --- --- ---
1674013202283232100 1 1 0 6.2 0
1674013249941544200 1 0.9 0 22.6 0
1674013296618633300 1 0.4 0 0.2 0
And if the search condition is based on both wp1 & wp2, then the result will be:
time id level wp1 wp2 wp3
---- -- ----- --- --- ---
1674013002642438500 1 1.1 0.9 0 0
1674013048221657300 1 0.9 17.1 0 0
1674013149886737100 1 0.4 0.4 0 0
1674013202283232100 1 1 0 6.2 0
1674013249941544200 1 0.9 0 22.6 0
1674013296618633300 1 0.4 0 0.2 0
Could you please help me with this? I'll be so thankful to you.
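No InfluxDB query is recorded here, but the start/mid/stop logic itself can be sketched once the rows are on the client side. Below is a minimal illustration in Python with pandas, assuming the measurement has already been read into a time-ordered DataFrame df with the columns shown above; the function name find_cycles and the 0.5 threshold are assumptions made for illustration, not an InfluxDB feature:
import pandas as pd

def find_cycles(df, col, threshold=0.5):
    # 1 while the watched column is above the threshold, 0 otherwise
    active = (df[col] > threshold).astype(int)
    # +1 on the row where the value first rises above the threshold (start),
    # -1 on the first row after it drops back below it (stop)
    edges = active.diff().fillna(active.iloc[0])
    starts = df.index[edges == 1].tolist()
    stops = df.index[edges == -1].tolist()
    rows = []
    for start, stop in zip(starts, stops):
        span = df.loc[start:stop]
        mid = span.index[(len(span) - 1) // 2]  # middle row of the cycle
        rows.extend([start, mid, stop])
    return df.loc[rows]

# cycles based on wp1, on wp2, or on both
wp1_cycles = find_cycles(df, "wp1")
wp2_cycles = find_cycles(df, "wp2")
both = pd.concat([wp1_cycles, wp2_cycles])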

How to stitch two images that have different absolute coordinates?

For example, stitch
first image
1 1 1
1 1 1
1 1 1
second image
2 2 2 2
2 2 2 2
2 2 2 2
and this is what I want:
0 0 0 2 2 2 2
1 1 1 2 2 2 2
1 1 1 2 2 2 2
1 1 1 0 0 0 0
or
1 1 1 0 0 0 0
1 1 1 2 2 2 2
1 1 1 2 2 2 2
0 0 0 2 2 2 2
In Python, that is easy to do, for example:
import numpy as np
h1, w1 = img1.shape[:2]; h2, w2 = img2.shape[:2]  # 3 x 3 and 3 x 4
off = 1  # "2's upper part length": how far img2 starts above img1
temp_panorama = np.zeros((h1 + off, w1 + w2))
temp_panorama[off:off + h1, :w1] = img1
temp_panorama[:h2, w1:] = img2
but how can I implement the same thing with OpenCV in C++?
Use sub-images:
// ROI where the first image will be placed
cv::Rect firstROI = cv::Rect(x1, y1, first.cols, first.rows);
cv::Rect secondROI = cv::Rect(x2, y2, second.cols, second.rows);
// create an image big enough to hold the result
cv::Mat canvas = cv::Mat::zeros(cv::Size(std::max(x1+first.cols, x2+second.cols), std::max(y1+first.rows, y2+second.rows)), first.type());
// use subimages:
first.copyTo(canvas(firstROI));
second.copyTo(canvas(secondROI));
in your example:
x1 = 0,
y1 = 1,
x2 = 3,
y2 = 0
first.cols == 3
first.rows == 3
second.cols == 4
second.rows == 3

Extra zeros appended in confusion matrix making it 3x3 instead of 2x2 using IsolationForest for Anomaly detection

I am using the code below for anomaly detection. It is a binary classification, so the confusion matrix should be 2x2, but instead it is 3x3 with extra zeros appended in a T shape. A similar thing happened with OneClassSVM a few weeks back as well, but I thought I was doing something wrong. Could you please help me fix this?
import numpy as np
import pandas as pd
import os
from sklearn.ensemble import IsolationForest
from sklearn.metrics import confusion_matrix, accuracy_score, classification_report
from sklearn import metrics
from sklearn.metrics import roc_auc_score
data = pd.read_csv('opensky_train.csv')
#to make sure that normal data contains no anomaly
sortedData = data.sort_values(by=['class'])
target = pd.DataFrame(sortedData['class'])
Y = target.replace(['surveill', 'other'], [1,0])
X = sortedData.drop(['class'], axis = 1)
x_normal = X.iloc[:200,:]
y_normal = Y.iloc[:200,:]
x_anomaly = X.iloc[200:,:]
y_anomaly = Y.iloc[200:,:]
Edited:
column_values = y_anomaly.values.ravel()
unique_values = pd.unique(column_values)
print(unique_values)
Output : [0 1]
clf = IsolationForest(random_state=0).fit(x_normal)
pred = clf.predict(x_anomaly)
print(pred)
Output : [ 1 1 1 1 1 1 -1 1 -1 1 1 1 1 1 1 1 1 1 1 -1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 -1 1 1 1 1 1 1 -1 1 1 -1 1 1 -1 1 1 -1 1 -1 1
-1 1 1 -1 -1 1 -1 -1 1 1 1 1 -1 1 1 -1 -1 1 1 1 1 1 1 1
-1 1 1 1 1 1 1 1 1 1 -1]
#printing the results
print(confusion_matrix(y_anomaly, pred))
print (classification_report(y_anomaly, pred))
Result:
Confusion Matrix :
[[ 0 0 0]
[ 7 0 60]
[12 0 28]]
precision recall f1-score support
-1 0.00 0.00 0.00 0
0 0.00 0.00 0.00 67
1 0.32 0.70 0.44 40
accuracy 0.26 107
macro avg 0.11 0.23 0.15 107
weighted avg 0.12 0.26 0.16 107
Inliers are labeled 1, while outliers are labeled -1.
Source: scikit-learn Anomaly and Outlier detection.
Your example has transformed the classes to 0 and 1, so the three possible labels are -1, 0 and 1.
You need to change from
Y = target.replace(['surveill', 'other'], [1,0])
to
Y = target.replace(['surveill', 'other'], [1,-1])
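Alternatively (not part of the original answer), you can keep the 0/1 target and remap the IsolationForest output instead, since the mismatch is purely between the two label spaces; a minimal sketch:
import numpy as np
from sklearn.metrics import confusion_matrix, classification_report

# IsolationForest returns 1 for inliers and -1 for outliers;
# map -1 -> 0 so the predictions use the same 0/1 labels as y_anomaly
pred_01 = np.where(pred == -1, 0, 1)

print(confusion_matrix(y_anomaly, pred_01))      # now 2x2
print(classification_report(y_anomaly, pred_01))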

OneVsRestClassifier(svm.SVC()).predict() gives continuous values

I am trying to use y_scores = OneVsRestClassifier(svm.SVC()).predict() on datasets
like iris and titanic. The trouble is that I am getting y_scores as continuous values. For example, for the iris dataset I am getting:
[[ -3.70047231 -0.74209097 2.29720159]
[ -1.93190155 0.69106231 -2.24974856]
.....
I am using OneVsRestClassifier with other classifier models like KNN, random forest and naive Bayes, and they give appropriate results in the form of
[[ 0 1 0]
[ 1 0 1]...
etc. on the iris dataset. Please help.
Well this is simply not true.
>>> from sklearn.multiclass import OneVsRestClassifier
>>> from sklearn.svm import SVC
>>> from sklearn.datasets import load_iris
>>> iris = load_iris()
>>> clf = OneVsRestClassifier(SVC())
>>> clf.fit(iris['data'], iris['target'])
OneVsRestClassifier(estimator=SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0, degree=3, gamma=0.0,
kernel='rbf', max_iter=-1, probability=False, random_state=None,
shrinking=True, tol=0.001, verbose=False),
n_jobs=1)
>>> print clf.predict(iris['data'])
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 2 1 1 1 1 1 2 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2
2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
2 2]
Maybe you called decision_function instead (which would match your output dimensions, as predict is supposed to return a vector, not a matrix). In that case, SVM returns signed distances to each hyperplane, which is its decision function from a mathematical perspective.
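To see the difference between the two calls yourself, here is a minimal sketch (written against a current scikit-learn with Python 3 syntax, unlike the transcript above):
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC
from sklearn.datasets import load_iris

iris = load_iris()
clf = OneVsRestClassifier(SVC()).fit(iris['data'], iris['target'])

print(clf.predict(iris['data'][:2]))            # class labels, e.g. [0 0]
print(clf.decision_function(iris['data'][:2]))  # signed distance per class, shape (2, 3)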

Delphi 7 boolean equations aren't working

I have a program written in Delphi 7 that appears to be experiencing some logic issues. The following line never gives a true value, even when my watch window says it should.
Seq^.step[1] :=
(PlcStart^ and (not Seq^.Step[2])) or
(RetryDelay^.Done and (not Seq^.Step[2])) or
(Seq^.Step[1] and (not Seq^.Step[reset_]));
My watch window shows that (PlcStart^ and (not Seq^.Step[2])) or (RetryDelay^.Done and (not Seq^.Step[2])) or (Seq^.Step[1] and (not Seq^.Step[reset_])) is true, but the value of Seq^.Step[1] never gets set to true.
The really strange part is that I have a number of programs with this exact same line that all appear to be working correctly.
Seq^.step[1] :=
(PlcStart^ and (not Seq^.Step[2])) or
(RetryDelay^.Done and (not Seq^.Step[2])) or
(Seq^.Step[1] and (not Seq^.Step[reset_]));
I'm not familiar with Delphi, but I am familiar with boolean logic. If I'm reading this right, you're saying:
(A ∧ ¬B) ∨ (C ∧ ¬B) ∨ (D ∧ ¬E)
In JavaScript that's:
(a && !b) || (c && !b) || (d && !e)
Using http://mustpax.github.io/Truth-Table-Generator/ to generate a truth table and converting "false" to "0" and "true" to "1", we get the truth table:
a b c d e (a & !b) | (c & !b) | (d & !e)
1 1 1 1 1 0
0 1 1 1 1 0
1 0 1 1 1 1
0 0 1 1 1 1
1 1 0 1 1 0
0 1 0 1 1 0
1 0 0 1 1 1
0 0 0 1 1 0
1 1 1 0 1 0
0 1 1 0 1 0
1 0 1 0 1 1
0 0 1 0 1 1
1 1 0 0 1 0
0 1 0 0 1 0
1 0 0 0 1 1
0 0 0 0 1 0
1 1 1 1 0 1
0 1 1 1 0 1
1 0 1 1 0 1
0 0 1 1 0 1
1 1 0 1 0 1
0 1 0 1 0 1
1 0 0 1 0 1
0 0 0 1 0 1
1 1 1 0 0 0
0 1 1 0 0 0
1 0 1 0 0 1
0 0 1 0 0 1
1 1 0 0 0 0
0 1 0 0 0 0
1 0 0 0 0 1
0 0 0 0 0 0
This table may or may not be correct, I haven't verified it. You can go through it and decide for yourself. Anyway, assuming it is correct, you could check the expected output for your given input and verify whether your expectations are correct.
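If you would rather verify the table than trust the generator, the expression can be brute-forced in a few lines; a quick sketch in Python (not part of the original answer):
from itertools import product

# enumerate all 32 combinations of a, b, c, d, e and evaluate the expression
for a, b, c, d, e in product([1, 0], repeat=5):
    result = (a and not b) or (c and not b) or (d and not e)
    print(a, b, c, d, e, int(bool(result)))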
