The whole picture of the problem is as below:
I cloned ORB-SLAM3 from https://github.com/rexdsp/ORB_SLAM3_Windows. It builds and runs well. Then I tried to evaluate its accuracy on EuRoC MH_01_easy, but found it was at least 10 times worse in Release mode than the accuracy reported in the paper. After several days of debugging, I found the cause was in the function below:
void EdgeSE3ProjectXYZ::linearizeOplus() {
    g2o::VertexSE3Expmap *vj = static_cast<g2o::VertexSE3Expmap *>(_vertices[1]);
    g2o::SE3Quat T(vj->estimate());
    g2o::VertexSBAPointXYZ *vi = static_cast<g2o::VertexSBAPointXYZ *>(_vertices[0]);
    Eigen::Vector3d xyz = vi->estimate();
    Eigen::Vector3d xyz_trans = T.map(xyz);

    double x = xyz_trans[0];
    double y = xyz_trans[1];
    double z = xyz_trans[2];

    auto projectJac = -pCamera->projectJac(xyz_trans);

    _jacobianOplusXi = projectJac * T.rotation().toRotationMatrix();
    double *buf = (double *)_jacobianOplusXi.data();

    Eigen::Matrix<double, 3, 6> SE3deriv;
    SE3deriv << 0.f,  z,  -y, 1.f, 0.f, 0.f,
                -z,  0.f,  x, 0.f, 1.f, 0.f,
                 y,  -x, 0.f, 0.f, 0.f, 1.f;

    buf = (double *)SE3deriv.data();
    printf("S: %f %f %f %f %f %f\n", buf[0], buf[3], buf[6], buf[9], buf[12], buf[15]);
    printf("S: %f %f %f %f %f %f\n", buf[1], buf[4], buf[7], buf[10], buf[13], buf[16]);
    printf("S: %f %f %f %f %f %f\n", buf[2], buf[5], buf[8], buf[11], buf[14], buf[17]);

    _jacobianOplusXj = projectJac * SE3deriv;

    buf = (double *)_jacobianOplusXj.data();
    printf("j: %f %f %f %f %f %f\n", buf[0], buf[2], buf[4], buf[6], buf[8], buf[10]);
    printf("j: %f %f %f %f %f %f\n", buf[1], buf[3], buf[5], buf[7], buf[9], buf[11]);
}
The function does not work correctly in Release mode, as shown in the log below (Jac is the content of projectJac, printed in another function):
Jac: 6.586051 0.000000 5.789042
Jac: 0.000000 6.566550 -0.414389
S: 0.000000 69.640217 -4.394719 1.000000 0.000000 0.000000
S: -69.640217 0.000000 -61.212726 0.000000 1.000000 0.000000
S: 4.394719 61.212726 0.000000 0.000000 0.000000 1.000000
j: 25.478939 354.888530 0.000000 -0.000000 -0.000000 5.797627
j: -2.092870 -29.150968 0.000000 -0.000000 -0.000000 -0.476224
The whole system works correctly in Debug mode; the accuracy is similar to the results reported in the paper. The log from the same function in Debug mode is as follows:
Jac: 6.586051 0.000000 5.789042
Jac: 0.000000 6.566550 -0.414389
S: 0.000000 69.640217 -4.394719 1.000000 0.000000 0.000000
S: -69.640217 0.000000 -61.212726 0.000000 1.000000 0.000000
S: 4.394719 61.212726 0.000000 0.000000 0.000000 1.000000
j: -25.441210 -813.017008 28.943840 -6.586051 -0.000000 -5.789042
j: 459.117113 25.365882 401.956448 0.000000 -6.566550 0.414389
Have you met this kind of problem before? Could you tell me the reason for it? I guess it is caused by compiler settings. Could anyone tell me how to solve this problem?
Thanks a lot.
Hi, I changed the declaration of projectJac and the whole system works correctly. My changes are as follows:
original:
auto projectJac = -pCamera->projectJac(xyz_trans);
modification:
Eigen::Matrix<double, 2, 3> projectJac = -pCamera->projectJac(xyz_trans);
The definition of function projectJac is as follows:
Eigen::Matrix<double, 2, 3> Pinhole::projectJac(const Eigen::Vector3d &v3D) {
    Eigen::Matrix<double, 2, 3> Jac;
    Jac(0, 0) = mvParameters[0] / v3D[2];
    Jac(0, 1) = 0.f;
    Jac(0, 2) = -mvParameters[0] * v3D[0] / (v3D[2] * v3D[2]);
    Jac(1, 0) = 0.f;
    Jac(1, 1) = mvParameters[1] / v3D[2];
    Jac(1, 2) = -mvParameters[1] * v3D[1] / (v3D[2] * v3D[2]);
    printf("v3D: %f %f %f\n", v3D[0], v3D[1], v3D[2]);
    printf("Jac: %f %f %f %f %f %f\n", Jac(0, 0), Jac(0, 1), Jac(0, 2), Jac(1, 0), Jac(1, 1), Jac(1, 2));
    return Jac;
}
Now it is clear the bug is caused by "auto". Could you tell me the reason for this? How should I change the settings in Visual Studio 2019 to make "auto" work correctly? Thanks a lot.
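My current guess (unconfirmed) is the well-known Eigen expression-template pitfall with auto: the negation returns an expression object that keeps a reference to the temporary matrix returned by projectJac, and that temporary is already destroyed when the expression is later evaluated. A minimal standalone sketch of that pitfall, using my own toy function rather than the ORB-SLAM3 code:

#include <Eigen/Dense>
#include <iostream>

// Stand-in for pCamera->projectJac(): returns a 2x3 matrix by value.
Eigen::Matrix<double, 2, 3> makeJac() {
    Eigen::Matrix<double, 2, 3> J;
    J << 1, 2, 3,
         4, 5, 6;
    return J;
}

int main() {
    // With auto, 'bad' is deduced as an Eigen expression (CwiseUnaryOp) that
    // refers to the temporary returned by makeJac(); that temporary is destroyed
    // at the end of this statement, so later reads of 'bad' are undefined behaviour.
    auto bad = -makeJac();

    // Forcing evaluation into a concrete matrix copies the values immediately.
    Eigen::Matrix<double, 2, 3> good = -makeJac();

    std::cout << good << std::endl;   // always fine
    // std::cout << bad << std::endl; // may "work" in Debug and break in Release
    return 0;
}

If that is the cause, it would also explain the Debug/Release difference: the stale memory may still hold the old values in an unoptimized build but gets reused under optimization.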
I have some images with fisheye distortion and their corresponding CAHVORE calibration files. I want to undistort the images using OpenCV (namely cv2.fisheye.undistortImage) for now, which needs the intrinsic matrix K and the distortion coefficients D.
I have been reading about camera models and their conversions. It seems constructing K is pretty easy (Section 2.2.4) when there is no radial distortion, but getting the distortion coefficients D and solving for KRCr is not straightforward. As an experiment, I assumed no radial distortion and constructed K from the given H_* and V_* parameters (see the sketch after the example file below). The result is undistorted, but not perfectly.
The question is: given a calibration file like the one below, is there any formula or approximation to obtain the distortion coefficients? Or is there an easier way to undistort using the CAHVORE parameters directly?
Codebase, formula, pointer, anything is appreciated, thanks.
Example CAHVORE file:
C = -0.000000 -0.000000 -0.000000
A = 0.000000 -0.000000 1.000000
H = 2080.155870 0.000000 3010.375794
V = -0.000000 2078.727106 1932.069537
O = 0.000096 0.000068 1.000000
R = 0.000000 -0.040627 -0.004186
E = -0.003159 0.004129 -0.001279
...
Hs = 2080.155870
Hc = 3010.375794
Vs = 2078.727106
Vc = 1932.069537
Theta = -1.570796 (-90.000000 deg)
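For reference, this is the no-distortion K I assembled from the Hs/Hc/Vs/Vc values above, written out in C++/OpenCV form for concreteness. Interpreting Hs/Vs as the focal lengths in pixels and Hc/Vc as the principal point is my assumption from the CAHV convention:

#include <opencv2/core.hpp>
#include <iostream>

int main() {
    // Intrinsics assembled from the CAHVORE file above, assuming no radial distortion:
    // fx = Hs, fy = Vs, cx = Hc, cy = Vc.
    cv::Mat K = (cv::Mat_<double>(3, 3) <<
        2080.155870, 0.0,         3010.375794,
        0.0,         2078.727106, 1932.069537,
        0.0,         0.0,         1.0);
    std::cout << "K =" << std::endl << K << std::endl;
    return 0;
}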
I am trying to apply a certain rotation to an image, but it doesn't work as expected. The rotation I have is:
[0.109285 0.527975 0.000000
-0.527975 0.109285 0.000000
0.000000 0.000000 1.000000]
Which should be a rotation of ~78 degrees around the camera center (or the Z axis if you prefer).
To build a homography, as there is no translation component, I use the formula: K * R * K^-1 (infinite homography).
The code I use to transform the image (320x240) is:
cv::warpPerspective(image1, image2, K * R * K.inv(), image1.size());
where K is:
[276.666667 0.000000 160.000000
0.000000 276.666667 120.000000
0.000000 0.000000 1.000000]
The resulting matrix from K * R * K.inv() is:
[0.109285 0.527975 79.157461
-0.527975 0.109285 191.361865
0.000000 0.000000 1.000000]
The result should just be a rotation of the image, but instead the image gets "zoomed out".
What am I doing wrong?
Apparently my rotation matrix was wrong.
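Indeed, the matrix at the top is not orthonormal: its rows have norm of roughly 0.54 instead of 1, so K * R * K.inv() scales the image as well as rotating it. A minimal sketch of building a proper Z-axis rotation and warping with it (the file names and exact code structure are my own):

#include <opencv2/opencv.hpp>
#include <cmath>
#include <iostream>

int main() {
    double theta = 78.0 * CV_PI / 180.0;   // ~78 degrees about the Z axis

    // Proper (orthonormal) rotation with the same sign pattern as the matrix above.
    cv::Mat R = (cv::Mat_<double>(3, 3) <<
         std::cos(theta), std::sin(theta), 0.0,
        -std::sin(theta), std::cos(theta), 0.0,
         0.0,             0.0,             1.0);

    cv::Mat K = (cv::Mat_<double>(3, 3) <<
        276.666667, 0.0,        160.0,
        0.0,        276.666667, 120.0,
        0.0,        0.0,        1.0);

    // Sanity check: R * R^T should be the identity for a pure rotation.
    std::cout << R * R.t() << std::endl;

    cv::Mat image1 = cv::imread("image1.png");   // hypothetical 320x240 input
    if (image1.empty()) return 1;
    cv::Mat image2;
    cv::warpPerspective(image1, image2, K * R * K.inv(), image1.size());
    cv::imwrite("rotated.png", image2);
    return 0;
}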
I have a scene with one physical body.
self.player.physicsBody = [SKPhysicsBody bodyWithCircleOfRadius:self.player.size.height / 2];
self.player.physicsBody.dynamic = YES;
self.player.physicsBody.allowsRotation = NO;
self.player.physicsBody.mass = 1;
[self addChild:self.player];
The gravity is set to be -9.8:
self.physicsWorld.gravity = CGVectorMake( 0.0, -9.8 );
I try to check if that's true:
-(void)update:(CFTimeInterval)currentTime {
    /* Called before each frame is rendered */
    double delta = currentTime - _pasttime;
    _pasttime = currentTime;
    float ax = (self.player.physicsBody.velocity.dx - self.oldVelocity.dx) / delta;
    float ay = (self.player.physicsBody.velocity.dy - self.oldVelocity.dy) / delta;
    self.oldVelocity = self.player.physicsBody.velocity;
    NSLog(@"The acceleration is: %f %f", ax, ay);
}
And I get the following output:
2014-04-10 17:12:12.469 PhysicsTest[4508:60b] The acceleration is: 0.000000 -413.161133
2014-04-10 17:12:12.485 PhysicsTest[4508:60b] The acceleration is: 0.000000 -1732.006226
2014-04-10 17:12:12.502 PhysicsTest[4508:60b] The acceleration is: 0.000000 -829.603760
2014-04-10 17:12:12.518 PhysicsTest[4508:60b] The acceleration is: 0.000000 -860.666260
2014-04-10 17:12:12.534 PhysicsTest[4508:60b] The acceleration is: 0.000000 -831.616455
2014-04-10 17:12:12.568 PhysicsTest[4508:60b] The acceleration is: 0.000000 -514.745300
2014-04-10 17:12:12.585 PhysicsTest[4508:60b] The acceleration is: 0.000000 -1667.948853
2014-04-10 17:12:12.601 PhysicsTest[4508:60b] The acceleration is: 0.000000 -839.192383
2014-04-10 17:12:12.618 PhysicsTest[4508:60b] The acceleration is: 0.000000 -807.683167
2014-04-10 17:12:12.651 PhysicsTest[4508:60b] The acceleration is: 0.000000 -523.592285
What the hell?
I have a thresholded image of size 320x320 pixels. I loop through the entire image in blocks of 20x20 pixels by setting ROI. I need to find the average value of each block. So I pass these blocks of images to the function 'cvAvg'. I am facing the below problems.
The return type of 'cvAvg' is 'CvScalar', which holds 4 doubles, and I could not work out from the docs how to interpret it. I only need a single average value per block, as a 'float' or 'double', on which I will base other decisions. How do I extract a single value from the return value? I do not want to iterate through all the pixels to find the average; I want to process the image in blocks of 20x20.
For example: if 200 pixels are white and 200 pixels are black in a 20x20 block, I want a single value that tells me the block is 50% white, and I thought the mean/average would be a good way to get that.
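To make that concrete (my own arithmetic, assuming the binary image uses the values 0 and 255): the mean of such a block is (200 * 255 + 200 * 0) / 400 = 127.5, so the white fraction is simply mean / 255 = 0.5.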
I created a variable of type CvScalar to retrieve and print the values returned by 'cvAvg', but all the printed values are the same for every block:
0.000000 255.000000 0.000000 0.000000
0.000000 255.000000 0.000000 0.000000
0.000000 255.000000 0.000000 0.000000
0.000000 255.000000 0.000000 0.000000
and this goes on 256 times, for all blocks of the image, which cannot be right because the thresholded image has different white and black regions. What's going on here? Code below; imgGreenThresh is the binary image thresholded for green.
IplImage* imgDummy = cvCreateImage(cvGetSize(imgGreenThresh), 8, 1); // Create a dummy image of the same size as thresholded image
cvCopy(imgGreenThresh, imgDummy); // Copy the thresholded image for further operations

CvRect roi;               // Rectangular ROI
CvSize size;
int r, c, N = 20;
int count = 0;
float LaserState[16][16]; // Create 16x16 matrix to hold the laser state values.
CvScalar meanValue;       // individual windows mean value

size = cvGetSize(imgDummy); // returns image ROI, in other words, height and width of matrix

// Iteratively send the different ROIs for processing.
for (r = 0; r < size.height; r += N)
    for (c = 0; c < size.width; c += N)
    {
        count++;
        roi.x = c;
        roi.y = r;
        roi.width  = (c + N > size.width)  ? (size.width - c)  : N;
        roi.height = (r + N > size.height) ? (size.height - r) : N;
        cvSetImageROI(imgDummy, roi);
        meanValue = cvAvg(imgDummy);
        printf("%f\t%f\t%f\t%f\n", meanValue);
        cvResetImageROI(imgDummy);
    }
//cvResetImageROI(imgDummy); // Do not forget to reset ROIs
Thanks!
sudhir.
My solution for computing the average of a single channel can be found in the following link:
Average values of a single channel
For your case, you have a ROI instead of a single channel. In OpenCV, an ROI is also a Mat object (if you are using OpenCV's C++ code) which you can pass to the cv::mean(...) function to get back a Scalar object. Only the first entry of that object will be set properly to the mean of your ROI and that is the value you want. See the link above for details.
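A minimal sketch of that approach with the C++ API (the file name and loop bounds are my own, assuming a single-channel thresholded image whose dimensions are multiples of 20):

#include <opencv2/opencv.hpp>
#include <cstdio>

int main() {
    cv::Mat img = cv::imread("thresholded.png", cv::IMREAD_GRAYSCALE); // hypothetical input
    if (img.empty()) return 1;
    const int N = 20;
    for (int r = 0; r + N <= img.rows; r += N) {
        for (int c = 0; c + N <= img.cols; c += N) {
            cv::Mat block = img(cv::Rect(c, r, N, N)); // the ROI is just another Mat header
            double blockMean = cv::mean(block)[0];     // first channel holds the mean
            std::printf("%f\n", blockMean);
        }
    }
    return 0;
}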
I created another variable SingleMeanValue and assigned it the 0th value of meanValue (CvScalar type). For some reason meanValue[0] was not working as proposed by @lightalchemist. I don't know if that is supposed to work; I am quite new to this. Maybe it works in C++, but in C this may be the correct way:
double SingleMeanValue = meanValue.val[0]; worked, and below is the output from the updated code.
0.000000
0.000000
35.062500
247.350000
109.012500
137.700000
243.525000
51.000000
0.000000
0.000000 etc..
Thanks for the help @lightalchemist