Can't Figure out Logic Error involving Setting Rotation Matrix - opencv

I'm trying to extract the 3x3 rotation matrix from the 3x4 pose matrix I have. However, two values are differing even though I have very simple code setting one to the other. I'm banging my head against the wall because I have no idea why this is happening. Here is the code:
std::cout << "Camera pose matrix from optical flow homography" << std::endl;
for (int e = 0; e < pose.rows; e++) {
    for (int f = 0; f < pose.cols; f++) {
        std::cout << pose.at<double>(e,f) << " " << e << " " << f;
        std::cout << " ";
    }
    std::cout << "\n" << std::endl;
}
std::cout << "Creating rotation matrix" << std::endl;
Mat rotvec = Mat::eye(3, 3, CV_32FC1);
for (int s = 0; s < pose.rows; s++) {
    for (int g = 0; g < pose.cols-1; g++) {
        rotvec.at<double>(s, g) = pose.at<double>(s,g);
        std::cout << rotvec.at<double>(s,g) << " " << s << " " << g;
        std::cout << " ";
    }
    std::cout << "\n" << std::endl;
}
std::cout << "Rotation matrix" << std::endl;
for (int e = 0; e < pose.rows; e++) {
    for (int f = 0; f < pose.cols-1; f++) {
        std::cout << rotvec.at<double>(e,f) << " " << e << " " << f;
        std::cout << " ";
        std::cout << pose.at<double>(e,f) << " " << e << " " << f;
        std::cout << " ";
    }
    std::cout << "\n" << std::endl;
}
Here is the output:
Camera pose matrix from optical flow homography
5.26354e-315 0 0 0 0 1 0.0078125 0 2 0 0 3
0.0078125 1 0 0 1 1 0 1 2 5.26354e-315 1 3
0 2 0 5.26354e-315 2 1 1.97626e-323 2 2 7.64868e-309 2 3
Creating rotation matrix
5.26354e-315 0 0 0 0 1 0.0078125 0 2
0.0078125 1 0 0 1 1 0 1 2
0 2 0 5.26354e-315 2 1 1.97626e-323 2 2
Rotation matrix
5.26354e-315 0 0 5.26354e-315 0 0 0 0 1 0 0 1 5.26354e-315 0 2 0.0078125 0 2
0.0078125 1 0 0.0078125 1 0 0 1 1 0 1 1 0.0078125 1 2 0 1 2
0 2 0 0 2 0 5.26354e-315 2 1 5.26354e-315 2 1 1.97626e-323 2 2 1.97626e-323 2 2
Here you can see I'm trying to copy the first three columns of pose into the rotvec matrix. When I actually assign those columns, I get the correct matrix: the second printout matches the first three columns of the first. However, when I check the rotation matrix once again (third printout), it no longer matches at coordinates (0, 2) and (1, 2). (I printed each rotvec value next to the corresponding pose value, and you can see that at those coordinates the numbers do not match.) I am not sure why this is happening; could someone please help me out?

Solved my problem, for anyone else who stumbles upon this later: I just changed the Mat type to CV_64F (so the elements really are doubles) for both rotvec and pose, and printed everything again to check. Credit to berak for pointing me in the right direction.
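For later readers, here is a minimal sketch of the fixed version (my reconstruction, assuming the surrounding code stays the same): the Mat element type has to match the type used with at<>().

// CV_32FC1 stores 32-bit floats, but at<double>() reinterprets those bytes
// as 64-bit doubles, which produces the garbage values seen above.
// Allocating as CV_64F makes the access type-correct (pose must also be
// CV_64F, as the answer above notes).
Mat rotvec = Mat::eye(3, 3, CV_64F);
for (int s = 0; s < pose.rows; s++) {
    for (int g = 0; g < pose.cols - 1; g++) {
        rotvec.at<double>(s, g) = pose.at<double>(s, g);
    }
}
std::cout << rotvec << std::endl;  // Mat has an operator<< that prints the whole matrix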

Related

Something wrong when using autoDiffToGradientMatrix

I use the following code to generate the discrete dynamics gradient matrix. It produces different values in some matrix elements when the same get_dynamics_gradient2() is run several times. I can't find any obvious mistake after a lot of testing. Could you tell me how to get it correct?
#include "drake/common/find_resource.h"
#include "drake/multibody/parsing/parser.h"
#include "drake/multibody/plant/multibody_plant.h"
#include "drake/systems/framework/diagram_builder.h"
#include <Eigen/Dense>
#include "drake/math/autodiff.h"
#include "drake/math/autodiff_gradient.h"
#include "iostream"
using namespace std;
using drake::FindResourceOrThrow;
using drake::multibody::MultibodyPlant;
using drake::multibody::Parser;
using drake::systems::Context;
using drake::systems::InputPort;
using drake::systems::DiagramBuilder;
using Eigen::VectorXd;
using drake::AutoDiffVecXd;
using Eigen::MatrixXd;
using drake::AutoDiffXd;
using drake::math::initializeAutoDiff;
using drake::math::autoDiffToGradientMatrix;
using drake::math::autoDiffToValueMatrix;
MatrixXd get_dynamics_gradient2(std::unique_ptr<MultibodyPlant<AutoDiffXd>>& plant_ad,std::unique_ptr<Context<AutoDiffXd>>& context_ad,const VectorXd& x_val,const VectorXd& u_val) {
AutoDiffVecXd x_ad = initializeAutoDiff(x_val);
context_ad -> SetContinuousState(x_ad);
AutoDiffVecXd u_ad = initializeAutoDiff(u_val);
const InputPort<Eigen::AutoDiffScalar<Eigen::VectorXd>>& actuation_port = plant_ad -> get_actuation_input_port();
actuation_port.FixValue(context_ad.get(), u_ad);
auto derivatives = plant_ad -> AllocateTimeDerivatives();
plant_ad -> CalcTimeDerivatives(*context_ad, derivatives.get());
AutoDiffVecXd xdot_ad = derivatives -> get_vector().CopyToVector();
AutoDiffVecXd x_next_ad = xdot_ad * 0.1 + x_ad;
MatrixXd x_next = autoDiffToValueMatrix(x_next_ad);
AutoDiffVecXd x_next_ad_t = x_next_ad.transpose();
AutoDiffVecXd u_ad_t = u_ad.transpose();
AutoDiffVecXd xu_next_ad_t(x_next_ad_t.rows()+u_ad_t.rows(),x_next_ad_t.cols());
xu_next_ad_t << x_next_ad_t,u_ad_t;
MatrixXd AB = autoDiffToGradientMatrix(xu_next_ad_t.transpose());
return AB;
}
int main(int argc, char* argv[]) {
const double time_step = 0;
DiagramBuilder<double> builder;
const std::string relative_name = "drake/wc/ll0/acrobot.sdf";
const std::string full_name = FindResourceOrThrow(relative_name);
MultibodyPlant<double>& acrobot = *builder.AddSystem<MultibodyPlant>(time_step);
Parser parser(&acrobot);
parser.AddModelFromFile(full_name);
acrobot.Finalize();
std::unique_ptr<MultibodyPlant<AutoDiffXd>> plant_ad = MultibodyPlant<double>::ToAutoDiffXd(acrobot);
std::unique_ptr<Context<AutoDiffXd>> context_ad = plant_ad -> CreateDefaultContext();
VectorXd x_valp = VectorXd(4);
x_valp << 1.3, 5.8, 1.5, 0.02;
VectorXd u_valp= VectorXd(1);
u_valp << 0.01;
MatrixXd AB1 = get_dynamics_gradient2(plant_ad, context_ad, x_valp, u_valp);
MatrixXd AB2 = get_dynamics_gradient2(plant_ad, context_ad, x_valp, u_valp);
MatrixXd AB3 = get_dynamics_gradient2(plant_ad, context_ad, x_valp, u_valp);
MatrixXd AB4 = get_dynamics_gradient2(plant_ad, context_ad, x_valp, u_valp);
cout <<"AB2-AB1" << endl << AB2-AB1 <<endl;
cout <<"AB3-AB1" << endl << AB3-AB1 <<endl;
cout <<"AB4-AB1" << endl << AB4-AB1 <<endl;
return 0;
}
Here is the result:
AB2-AB1
0 0 0 0
0 0 0 0
0 0 0.735961 0.377986
0 0 -1.47292 -0.756483
0 1.73059e-77 1.96356 0
AB3-AB1
0 0 0 0
0 0 0 0
0 0 -0.35439 -0.35439
0 0 0.70926 0.70926
0 1.73059e-77 0.963558 0
AB4-AB1
0 0 0 0
0 0 0 0
0 0 1.28067 1.50475
0 0 -2.56308 -3.01153
0 2.90227e-157 0.963558 0
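No answer was included in this thread, but one plausible cause (my assumption, not confirmed here): initializeAutoDiff(x_val) and initializeAutoDiff(u_val) are called independently, so x_ad carries 4 partials and u_ad carries 1, both numbered from zero. Mixing the two in one expression combines derivative vectors of different sizes, and Eigen's AutoDiffScalar can then read uninitialized memory, which would explain the run-to-run differences. A sketch of taking the derivatives jointly with respect to [x; u], assuming the optional num_derivatives / deriv_num_start arguments of drake::math::initializeAutoDiff:

// Give x and u one shared derivative vector of size nx + nu = 5,
// with x occupying derivative slots 0..3 and u occupying slot 4.
const int nd = x_val.size() + u_val.size();
AutoDiffVecXd x_ad = initializeAutoDiff(x_val, nd, 0);
AutoDiffVecXd u_ad = initializeAutoDiff(u_val, nd, x_val.size());
// The rest of get_dynamics_gradient2() stays the same; AB then holds
// d[x_next; u]/d[x; u] with consistent, fully initialized dimensions.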

opencv2 covariance matrix strange results

The following code gives inconsistent covariance matrix sizes.
cv::Mat A = (cv::Mat_<float>(3,2) << -1, 1, -2, 3, 4, 0);
cv::Mat covar1, covar2, covar3, covar4, mean;
calcCovarMatrix(A, covar1, mean, CV_COVAR_NORMAL | CV_COVAR_ROWS);
calcCovarMatrix(A, covar2, mean, CV_COVAR_SCRAMBLED | CV_COVAR_ROWS);
calcCovarMatrix(A, covar3, mean, CV_COVAR_NORMAL | CV_COVAR_COLS);
calcCovarMatrix(A, covar4, mean, CV_COVAR_SCRAMBLED | CV_COVAR_COLS);
std::cout << "size: " << covar1.size() << "\n";
std::cout << "size: " << covar2.size() << "\n";
std::cout << "size: " << covar3.size() << "\n";
std::cout << "size: " << covar4.size() << "\n";
covar1 and covar2 should have the same size because they both describe the covariance over the rows, and covar3 and covar4 should have the same size because they both describe the covariance over the columns. However, the output is:
size: [2 x 2]
size: [3 x 3]
size: [3 x 3]
size: [2 x 2]
The calcCovarMatrix() docs specifically say that when using CV_COVAR_SCRAMBLED, "The covariance matrix will be nsamples x nsamples."
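For what it's worth, working the sizes out from the documented rules (a sketch of the arithmetic, not an authoritative answer) suggests the output is actually consistent with the docs: with CV_COVAR_ROWS each row of A is a sample, so nsamples = 3 and nfeatures = 2, and it is the scrambled variant that is nsamples x nsamples.

// A is 3x2. Under CV_COVAR_ROWS: nsamples = 3 (rows), nfeatures = 2 (cols).
CV_Assert(covar1.size() == cv::Size(2, 2));  // NORMAL    -> nfeatures x nfeatures
CV_Assert(covar2.size() == cv::Size(3, 3));  // SCRAMBLED -> nsamples  x nsamples
// Under CV_COVAR_COLS the roles swap: nsamples = 2 (cols), nfeatures = 3 (rows).
CV_Assert(covar3.size() == cv::Size(3, 3));  // NORMAL    -> nfeatures x nfeatures
CV_Assert(covar4.size() == cv::Size(2, 2));  // SCRAMBLED -> nsamples  x nsamples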

Horn-Schunck optical flow implementation issue

I am trying to implement the Horn-Schunck optical flow algorithm with NumPy and OpenCV.
I use the Horn-Schunck method as described on the wiki and in the original paper.
But my implementation fails on the following simple example.
Frame1:
[[ 0 0 0 0 0 0 0 0 0 0]
[ 0 255 255 0 0 0 0 0 0 0]
[ 0 255 255 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0]]
Frame2:
[[ 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 255 255 0 0 0 0 0]
[ 0 0 0 255 255 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0]]
This is just a small white rectangle that moves by 2 pixels in frame 2.
My implementation produces the following flow.
U part of flow (I apply np.round to every part of the flow; the original values are nearly the same):
[[ 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]
[ 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]
[ 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]
[ 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]
[ 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]]
V part of flow:
[[ 0. 1. 0. -1. -0. 0. 0. 0. 0. 0.]
[-0. -0. 0. 0. 0. 0. 0. 0. 0. 0.]
[-0. -1. -0. 1. 0. 0. 0. 0. 0. 0.]
[-0. -0. -0. 0. 0. 0. 0. 0. 0. 0.]
[-0. -0. -0. 0. 0. 0. 0. 0. 0. 0.]]
It looks like this flow is incorrect, because if I move every pixel of frame 2 in the direction of the corresponding flow component, I never get frame 1.
My implementation also fails on real images.
But if I move the rectangle by 1 pixel right (or left, or up, or down), my implementation produces:
U part of flow:
[[1 1 1 .....]
[1 1 1 .....]
......
[1 1 1 .....]]
V part of flow:
[[0 0 0 .....]
[0 0 0 .....]
......
[0 0 0 .....]]
I suppose that this flow is correct, because I can reconstruct frame 1 with the following procedure:
def translateBrute(img, u, v):
    res = np.zeros_like(img)
    u = np.round(u).astype(np.int)
    v = np.round(v).astype(np.int)
    for i in xrange(img.shape[0]):
        for j in xrange(img.shape[1]):
            res[i, j] = takePixel(img, i + v[i, j], j + u[i, j])
    return res
where takePixel is a simple function that returns the pixel intensity if the input coordinates lie inside the image, and the intensity at the nearest border pixel otherwise.
This is my implementation:
import cv2
import sys
import numpy as np

def takePixel(img, i, j):
    i = i if i >= 0 else 0
    j = j if j >= 0 else 0
    i = i if i < img.shape[0] else img.shape[0] - 1
    j = j if j < img.shape[1] else img.shape[1] - 1
    return img[i, j]

# Numerical derivatives from original paper: http://people.csail.mit.edu/bkph/papers/Optical_Flow_OPT.pdf
def xDer(img1, img2):
    res = np.zeros_like(img1)
    for i in xrange(res.shape[0]):
        for j in xrange(res.shape[1]):
            sm = 0
            sm += takePixel(img1, i,     j + 1) - takePixel(img1, i,     j)
            sm += takePixel(img1, i + 1, j + 1) - takePixel(img1, i + 1, j)
            sm += takePixel(img2, i,     j + 1) - takePixel(img2, i,     j)
            sm += takePixel(img2, i + 1, j + 1) - takePixel(img2, i + 1, j)
            sm /= 4.0
            res[i, j] = sm
    return res

def yDer(img1, img2):
    res = np.zeros_like(img1)
    for i in xrange(res.shape[0]):
        for j in xrange(res.shape[1]):
            sm = 0
            sm += takePixel(img1, i + 1, j    ) - takePixel(img1, i, j    )
            sm += takePixel(img1, i + 1, j + 1) - takePixel(img1, i, j + 1)
            sm += takePixel(img2, i + 1, j    ) - takePixel(img2, i, j    )
            sm += takePixel(img2, i + 1, j + 1) - takePixel(img2, i, j + 1)
            sm /= 4.0
            res[i, j] = sm
    return res

def tDer(img, img2):
    res = np.zeros_like(img)
    for i in xrange(res.shape[0]):
        for j in xrange(res.shape[1]):
            sm = 0
            for ii in xrange(i, i + 2):
                for jj in xrange(j, j + 2):
                    sm += takePixel(img2, ii, jj) - takePixel(img, ii, jj)
            sm /= 4.0
            res[i, j] = sm
    return res

averageKernel = np.array([[ 0.08333333, 0.16666667, 0.08333333],
                          [ 0.16666667, 0.        , 0.16666667],
                          [ 0.08333333, 0.16666667, 0.08333333]], dtype=np.float32)

# average intensity around flow in point i,j. I use filter2D to improve performance.
def average(img):
    return cv2.filter2D(img.astype(np.float32), -1, averageKernel)

def translateBrute(img, u, v):
    res = np.zeros_like(img)
    u = np.round(u).astype(np.int)
    v = np.round(v).astype(np.int)
    for i in xrange(img.shape[0]):
        for j in xrange(img.shape[1]):
            res[i, j] = takePixel(img, i + v[i, j], j + u[i, j])
    return res

# Core of algorithm. Iterative scheme from wiki: https://en.wikipedia.org/wiki/Horn%E2%80%93Schunck_method#Mathematical_details
def hornShunckFlow(img1, img2, alpha):
    img1 = img1.astype(np.float32)
    img2 = img2.astype(np.float32)
    Idx = xDer(img1, img2)
    Idy = yDer(img1, img2)
    Idt = tDer(img1, img2)
    u = np.zeros_like(img1)
    v = np.zeros_like(img1)
    # 100 iterations enough for small example
    for iteration in xrange(100):
        u0 = np.copy(u)
        v0 = np.copy(v)
        uAvg = average(u0)
        vAvg = average(v0)
        # '*', '+', '/' operations in numpy work component-wise
        u = uAvg - 1.0/(alpha**2 + Idx**2 + Idy**2) * Idx * (Idx * uAvg + Idy * vAvg + Idt)
        v = vAvg - 1.0/(alpha**2 + Idx**2 + Idy**2) * Idy * (Idx * uAvg + Idy * vAvg + Idt)
        if iteration % 10 == 0:
            print 'iteration', iteration, np.linalg.norm(u - u0) + np.linalg.norm(v - v0)
    return u, v

if __name__ == '__main__':
    img1c = cv2.imread(sys.argv[1])
    img2c = cv2.imread(sys.argv[2])
    img1g = cv2.cvtColor(img1c, cv2.COLOR_BGR2GRAY)
    img2g = cv2.cvtColor(img2c, cv2.COLOR_BGR2GRAY)
    u, v = hornShunckFlow(img1g, img2g, 0.1)
    imgRes = translateBrute(img2g, u, v)
    cv2.imwrite('res.png', imgRes)
    print img1g
    print translateBrute(img2g, u, v)
The optimization scheme is taken from Wikipedia, and the numerical derivatives from the original paper.
Does anyone have an idea why my implementation produces an incorrect flow?
I can provide any additional info if necessary.
P.S. Sorry for my poor English.
UPD:
I implemented the Horn-Schunck cost function:
def grad(img):
    Idx = cv2.filter2D(img, -1, np.array([
        [-1, -2, -1],
        [ 0,  0,  0],
        [ 1,  2,  1]], dtype=np.float32))
    Idy = cv2.filter2D(img, -1, np.array([
        [-1, 0, 1],
        [-2, 0, 2],
        [-1, 0, 1]], dtype=np.float32))
    return Idx, Idy

def hornShunckCost(Idx, Idy, Idt, u, v, alpha):
    #return sum(sum(It**2))
    udx, udy = grad(u)
    vdx, vdy = grad(v)
    return (sum(sum((Idx*u + Idy*v + Idt)**2)) +
            (alpha**2)*(sum(sum(udx**2)) +
                        sum(sum(udy**2)) +
                        sum(sum(vdx**2)) +
                        sum(sum(vdy**2))))
and checked the value of this function inside the iterations:
if iteration % 10 == 0:
    print 'iter', iteration, np.linalg.norm(u - u0) + np.linalg.norm(v - v0)
    print hornShunckCost(Idx, Idy, Idt, u, v, alpha)
If I use the simple example with the rectangle moved by one pixel, everything is OK: the value of the cost function decreases at every step.
But on the example with the rectangle moved by two pixels, the value of the cost function increases at every step.
This behaviour of the algorithm is really strange.
Maybe I chose an incorrect way to calculate the cost function.
I had missed the fact that the classic Horn-Schunck scheme uses a linearized data term I1(x, y) - I2(x + u(x, y), y + v(x, y)). This linearization makes the optimization easy but disallows large displacements.
To handle big displacements, the usual approach is pyramidal Horn-Schunck, as sketched below.
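Here is a minimal coarse-to-fine sketch of that idea, reusing hornShunckFlow and translateBrute from above (the level count and the resize-based upsampling are my assumptions, not part of the original post):

def pyramidalHornSchunck(img1, img2, alpha, levels=3):
    # build Gaussian pyramids, coarsest level last
    pyr1 = [img1.astype(np.float32)]
    pyr2 = [img2.astype(np.float32)]
    for _ in xrange(levels - 1):
        pyr1.append(cv2.pyrDown(pyr1[-1]))
        pyr2.append(cv2.pyrDown(pyr2[-1]))
    u = np.zeros(pyr1[-1].shape, np.float32)
    v = np.zeros(pyr1[-1].shape, np.float32)
    for lvl in xrange(levels - 1, -1, -1):
        if lvl != levels - 1:
            # upsample the coarse flow and double it to match the finer grid
            h, w = pyr1[lvl].shape
            u = cv2.resize(u, (w, h)) * 2.0
            v = cv2.resize(v, (w, h)) * 2.0
        # warp frame 2 by the current flow so only a small residual
        # displacement remains, which the linearized scheme can handle
        warped = translateBrute(pyr2[lvl], u, v)
        du, dv = hornShunckFlow(pyr1[lvl], warped, alpha)
        u += du
        v += dv
    return u, v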

How to loop over array in Z3Py

As part of a reverse engineering exercise, I'm trying to write a Z3 solver to find a username and password that satisfy the program below. This is especially tough because the z3py tutorial that everyone refers to (rise4fun) is down.
#include <iostream>
#include <string>
using namespace std;

int main() {
    string name, pass;
    cout << "Name: ";
    cin >> name;
    cout << "Pass: ";
    cin >> pass;
    int sum = 0;
    for (size_t i = 0; i < name.size(); i++) {
        char c = name[i];
        if (c < 'A') {
            cout << "Lose: char is less than A" << endl;
            return 1;
        }
        if (c > 'Z') {
            sum += c - 32;
        } else {
            sum += c;
        }
    }
    int r1 = 0x5678 ^ sum;
    int r2 = 0;
    for (size_t i = 0; i < pass.size(); i++) {
        char c = pass[i];
        c -= 48;
        r2 *= 10;
        r2 += c;
    }
    r2 ^= 0x1234;
    cout << "r1: " << r1 << endl;
    cout << "r2: " << r2 << endl;
    if (r1 == r2) {
        cout << "Win" << endl;
    } else {
        cout << "Lose: r1 and r2 don't match" << endl;
    }
}
I got that code from the assembly of a binary, and while it may be wrong I want to focus on writing the solver. I'm starting with the first part, just calculating r1, and this is what I have:
from z3 import *

s = Solver()
sum = Int('sum')
name = Array('name', IntSort(), IntSort())
for c in name:
    s.add(c < 65)
    if c > 90:
        sum += c - 32
    else:
        sum += c
r1 = Xor(sum, 0x5678)
print s.check()
print s.model()
All I'm asserting is that there are no letters less than 'A' in the array, so I expect to get back an array of some size whose entries are numbers of at least 65.
Obviously this is completely wrong, mainly because it loops forever. Also, I'm not sure I'm calculating sum correctly, because I don't know whether it's initialized to 0. Could someone help me figure out how to get this first loop working?
EDIT: I was able to get a z3 script that is close to the C++ code shown above:
from z3 import *

s = Solver()
sum = 0
name = Array('name', BitVecSort(32), BitVecSort(32))
i = Int('i')
for i in xrange(0, 1):
    s.add(name[i] >= 65)
    s.add(name[i] < 127)
    if name[i] > 90:
        sum += name[i] - 32
    else:
        sum += name[i]
r1 = sum ^ 0x5678

passwd = Array('passwd', BitVecSort(32), BitVecSort(32))
r2 = 0
for i in xrange(0, 5):
    s.add(passwd[i] < 127)
    s.add(passwd[i] >= 48)
    c = passwd[i] - 48
    r2 *= 10
    r2 += c
r2 ^= 0x1234
s.add(r1 == r2)
print s.check()
print s.model()
This code was able to give me a correct username and password. However, I hardcoded the lengths of one for the username and five for the password. How would I change the script so I wouldn't have to hard code the lengths? And how would I generate a different solution each time I run the program?
Arrays in Z3 do not necessarily have any bounds. In this case the index sort is Int, which means unbounded integers (not machine integers). Consequently, for c in name will run forever, because it enumerates name[0], name[1], name[2], ...
It seems that you actually have a bound in the original program (name.size()), so it would suffice to enumerate up to that limit. Otherwise you might need a quantifier, e.g., ∀x of Int sort: name[x] < 65. This comes with all the usual warnings about quantifiers, of course (see, e.g., the Z3 Guide).
Suppose the length is to be determined. Here is what I think you could do:
length = Int('length')
x = Int('x')
s.add(ForAll(x, Implies(And(x >= 0, x < length), And(passwd[x] < 127, passwd[x] >= 48))))
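As for getting a different solution each time: after check() returns sat, you can add a blocking constraint that forbids the model you just got, so the next check() must produce a new one. A sketch (the choice of variables to block on, here the five password cells, is my assumption):

def next_solution(s, variables):
    if s.check() != sat:
        return None
    m = s.model()
    # forbid this exact assignment so a later check() yields a new model
    s.add(Or([v != m.eval(v, model_completion=True) for v in variables]))
    return m

m = next_solution(s, [passwd[i] for i in xrange(5)])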

Issue with cv::Mat::zeros initialization

My problem is just astonishing. This is the code:
#define NCHANNEL 3
#define NFRAME 100
Mat RR = Mat::zeros(NCHANNEL, NFRAME-1, CV_64FC1);
double *p_0 = RR.ptr<double>(0);
double *p_1 = RR.ptr<double>(1);
double *p_2 = RR.ptr<double>(2);
cout << p_0[NFRAME-1] << endl << p_1[NFRAME-1] << endl << p_2[NFRAME-1] << endl;
And the output is: 0 0 -6.27744e+066.
Where does that awful number come from? It seems I'm printing a pointer or something random from memory. (And 0 is the value of all the other elements, of course.)
You are accessing past the last element of the Mat. If you create it with NFRAME-1 columns, then the last valid element of each row has index NFRAME-2, so p_2[NFRAME-1] reads one element past the end of the last row.
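A minimal sketch of the in-bounds version of that last line (same matrix as above):

// Each row has NFRAME-1 columns, so the last valid column index is NFRAME-2.
cout << p_0[NFRAME - 2] << endl   // last element of row 0
     << p_1[NFRAME - 2] << endl   // last element of row 1
     << p_2[NFRAME - 2] << endl;  // last element of row 2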
