I came across an error while executing stereoCalibrate in OpenCV 2.4.11, which says:
OpenCV Error: Assertion failed (!fixedSize() || ((Mat*)obj)->size.operator()() == Size(cols, rows)) in cv::_OutputArray::create,
I think this must be a size mismatch somewhere among these parameters, so I went through them one by one, but the error is still there. I hope someone could spot the mistake, even if it takes digging into the assembly. Here is the method call in my code:
double error = cv::stereoCalibrate(
objPoints, cali0.imgPoints, cali1.imgPoints,
camera0.intr.cameraMatrix, camera0.intr.distCoeffs,
camera1.intr.cameraMatrix, camera1.intr.distCoeffs,
cv::Size(1920,1080), m.rvec, m.tvec, m.evec, m.fvec,
cv::TermCriteria(CV_TERMCRIT_ITER + CV_TERMCRIT_EPS, 100, 1e-5)
,CV_CALIB_FIX_INTRINSIC + CV_CALIB_USE_INTRINSIC_GUESS
);
In my code, m.rvec is (3,3,CV_64F), m.tvec is (3,1,CV_64F), and m.evec and m.fvec are not preallocated, which matches the stereoCalibrate example. intr.cameraMatrix is (3,3,CV_64F) and intr.distCoeffs is (8,1,CV_64F). objPoints is computed from the checkerboard and stores the 3D positions of the corners; every point's z value is zero.
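For reference, a minimal sketch of how such object points are typically built (boardSize, squareSize and numViews are illustrative names, not from the original code):
// Hypothetical sketch: checkerboard corners on the z = 0 plane,
// replicated once per calibration view.
std::vector<std::vector<cv::Point3f> > objPoints;
std::vector<cv::Point3f> corners;
for (int i = 0; i < boardSize.height; ++i)
    for (int j = 0; j < boardSize.width; ++j)
        corners.push_back(cv::Point3f(j * squareSize, i * squareSize, 0.0f));
objPoints.assign(numViews, corners);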
After reading the advice from @Josh, I modified the code to pass plain output Mat objects, but it still throws this assertion.
cv::Mat R, t, e, f;
double error = cv::stereoCalibrate(
objPoints, cali0.imgPoints, cali1.imgPoints,
camera0.intr.cameraMatrix, camera0.intr.distCoeffs,
camera1.intr.cameraMatrix, camera1.intr.distCoeffs,
cali0.imgSize, R, t, e, f,
cv::TermCriteria(CV_TERMCRIT_ITER + CV_TERMCRIT_EPS, 100, 1e-5));
Finally I solved this problem. As a reminder: make sure the camera parameters you pass in are not const....
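A hedged note on why const matters here: in the 2.4 sources, wrapping a const Mat as an output argument marks it FIXED_SIZE and FIXED_TYPE, so any internal create() call that disagrees with the stored size or type trips exactly the assertion above. Sketch (names are illustrative):
// Pitfall: a const Mat bound to an InputOutputArray becomes "fixed".
const cv::Mat constK = camera0.intr.cameraMatrix.clone();
// Inside stereoCalibrate, a create() on the wrapped array whose requested
// size/type differs from constK's fires the fixedSize()/fixedType() check.
cv::Mat K = camera0.intr.cameraMatrix.clone();  // non-const: create() may
                                                // reallocate freely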
Why go for assembly? OpenCV is open source and you can check the code you're calling here: https://github.com/opencv/opencv/blob/master/modules/calib3d/src/calibration.cpp#L3523
If you get assertion failures in OpenCV, it's usually because you've passed a matrix with an incorrect shape; OpenCV is extremely picky. The failing assertion is on an OutputArray, so checking the function signature, there are four possible culprits:
OutputArray _Rmat, OutputArray _Tmat, OutputArray _Emat, OutputArray _Fmat
The sizing is done inside cv::stereoCalibrate here:
https://github.com/opencv/opencv/blob/master/modules/calib3d/src/calibration.cpp#L3550
_Rmat.create(3, 3, rtype);
_Tmat.create(3, 1, rtype);
<-- snipped -->
if( _Emat.needed() )
{
_Emat.create(3, 3, rtype);
p_matE = &(c_matE = _Emat.getMat());
}
if( _Fmat.needed() )
{
_Fmat.create(3, 3, rtype);
p_matF = &(c_matF = _Fmat.getMat());
}
The assertion is triggered in one of these calls; the code is here:
https://github.com/opencv/opencv/blob/master/modules/core/src/matrix.cpp#L2241
Try passing in plain Mat objects without preallocating their shape.
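To illustrate the fixedSize() half of the assertion (a sketch, not from the question's code): a plain cv::Mat can be resized by create(), whereas fixed-size wrappers such as cv::Matx set FIXED_SIZE on the OutputArray and assert on any mismatch.
cv::Mat R;           // empty Mat: create(3, 3, CV_64F) just allocates it
cv::Matx31d Tfixed;  // fixed 3x1 wrapper: FIXED_SIZE is set
// A create(3, 3, ...) on the OutputArray wrapping Tfixed fails the check
// (!fixedSize() || current size == requested size) and raises the same
// "Assertion failed (!fixedSize() || ...)" message.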
Related
I have Mathematica code that works perfectly. I need to save the results and load them back in another program, but when I do so and try to recall the results I get an error (see the attached image). It seems the part of the code that saves the results is not working properly. Here is the code for saving:
filename = "solutions/soltest_26aout";
Block[{FullDefinition = Definition},
Save[filename, {interP2CM, inter\[Xi]a2CM, inter\[Xi]b2CM, solp,
sol\[Xi], sol\[Zeta], h, gridx, gridv, Nx, Nv, minx, maxx, minva,
maxva, minvb, maxvb, S, L, LRva, LRvb, a, A,
B, \[Sigma]a, \[Sigma]b, \[Rho], \[Gamma], \[Psi], \[Phi], \[Tau], \[Delta], Veq, Ueq, inter\[Sigma]ax2CM, inter\[Sigma]bx2CM,
interpiab2CM, interpia2CM, interpib2CM, interpiba2CM,
inter\[Sigma]ap2CM, inter\[Sigma]bp2CM, interr2CM, inter\[Mu]x2CM,
interg2CM, intersa2CM, SSx2CM}]
];
Then I load the solution back like this:
<< "solutions/soltest_26aout";
and I get the error message shown in the linked screenshot.
Thanks for helping!
I have a folder of positive images and another of negative images in JPG format, and I want to train an SVM on those images. I've done the following, but I receive an error:
Mat classes = new Mat();
Mat trainingData = new Mat();
Mat trainingImages = new Mat();
Mat trainingLabels = new Mat();
CvSVM clasificador;
for (File file : new File(path + "positives/").listFiles()) {
Mat img = Highgui.imread(file.getAbsolutePath());
img.reshape(1, 1);
trainingImages.push_back(img);
trainingLabels.push_back(Mat.ones(new Size(1, 1), CvType.CV_32FC1));
}
for (File file : new File(path + "negatives/").listFiles()) {
Mat img = Highgui.imread(file.getAbsolutePath());
img.reshape(1, 1);
trainingImages.push_back(img);
trainingLabels.push_back(Mat.zeros(new Size(1, 1), CvType.CV_32FC1));
}
trainingImages.copyTo(trainingData);
trainingData.convertTo(trainingData, CvType.CV_32FC1);
trainingLabels.copyTo(classes);
CvSVMParams params = new CvSVMParams();
params.set_kernel_type(CvSVM.LINEAR);
clasificador = new CvSVM(trainingData, classes, new Mat(), new Mat(), params);
When I try to run that I obtain:
OpenCV Error: Bad argument (train data must be floating-point matrix) in cvCheckTrainData, file ..\..\..\src\opencv\modules\ml\src\inner_functions.cpp, line 857
Exception in thread "main" CvException [org.opencv.core.CvException: ..\..\..\src\opencv\modules\ml\src\inner_functions.cpp:857: error: (-5) train data must be floating-point matrix in function cvCheckTrainData
]
at org.opencv.ml.CvSVM.CvSVM_1(Native Method)
at org.opencv.ml.CvSVM.<init>(CvSVM.java:80)
I can't manage to train the SVM. Any ideas? Thanks.
Assuming that you know what you are doing by reshaping an image and using it to train an SVM, the most probable cause of this is that your
Mat img = Highgui.imread(file.getAbsolutePath());
fails to actually read an image, producing a matrix img with a null data pointer, which will eventually trigger the following check in the OpenCV code:
// check parameter types and sizes
if( !CV_IS_MAT(train_data) || CV_MAT_TYPE(train_data->type) != CV_32FC1 )
CV_ERROR( CV_StsBadArg, "train data must be floating-point matrix" );
Basically train_data fails the first condition (being a valid matrix) rather than failing the second condition (being of type CV_32FC1).
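A minimal sketch of the missing guard (in C++ for illustration; the Java binding exposes the same Mat.empty() test):
#include <opencv2/opencv.hpp>
#include <iostream>

// Illustrative helper: imread signals failure with an empty Mat, not an
// exception, so check before pushing the data into the training set.
static bool readImageChecked(const std::string& path, cv::Mat& img)
{
    img = cv::imread(path);
    if (img.empty())
    {
        std::cerr << "failed to read " << path << std::endl;
        return false;
    }
    return true;
}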
In addition, reshape does not modify *this in place; it returns a new Mat header, so if it's called in a statement of its own without the result being used or assigned, it does nothing useful. Change the following lines in your code:
img.reshape(1, 1);
trainingImages.push_back(img);
to:
trainingImages.push_back(img.reshape(1, 1));
Just as the error says, you need to change the type of your matrix from an integer type, probably CV_8U, to a floating-point one, CV_32F or CV_64F. To do this you can use cv::Mat::convertTo(). Here is a bit about depths and types of matrices.
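A short sketch of that conversion (C++ shown; the Java binding's Mat.convertTo mirrors it, and the sizes here are placeholders):
// Convert integer pixel data to the floating-point depth the SVM expects.
cv::Mat trainingData = cv::Mat::zeros(10, 100, CV_8U);  // placeholder rows
cv::Mat floatData;
trainingData.convertTo(floatData, CV_32F);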
I've been looking for hours but I can't find the problem.
I get the following error when I want to stitch two images together:
OpenCV Error: assertion failed (y == 0 || (data && dims >= 1 && (unsigned)y < (unsigned)size.p[0])) in unknown function...
This is the code (pano.jpg was already stitched together in a previous run of the algorithm, where the same algorithm did work...):
cv::Mat img1 = imread("input2.jpg");
cv::Mat img2 = imread("pano.jpg");
std::vector<cv::Mat> vectest;
vectest.push_back(img2);
vectest.push_back(img1);
cv::Mat result;
cv::Stitcher stitcher = cv::Stitcher::createDefault( false );
stitcher.setPanoConfidenceThresh(0.01);
detail::BestOf2NearestMatcher *matcher = new detail::BestOf2NearestMatcher(false, 0.001/*=match_conf*/);
detail::SurfFeaturesFinder *featureFinder = new detail::SurfFeaturesFinder(100);
stitcher.setFeaturesMatcher(matcher);
stitcher.setFeaturesFinder(featureFinder);
cv::Stitcher::Status status = stitcher.stitch( vectest, result );
You can find the images here:
pano.jpg: https://dl.dropbox.com/u/5276376/pano.jpg
input2.jpg: https://dl.dropbox.com/u/5276376/input2.jpg
Edit:
I compiled OpenCV 2.4.2 myself, but the problem remains...
The system crashes in the stitcher.cpp file on the following line:
blender_->feed(img_warped_s, mask_warped, corners[img_idx]);
Inside this feed function it crashes at these lines:
int y_ = y - y_tl;
const Point3_<short>* src_row = src_pyr_laplace[i].ptr<Point3_<short> >(y_);
Point3_<short>* dst_row = dst_pyr_laplace_[i].ptr<Point3_<short> >(y);
And finally this assertion in mat.hpp:
template<typename _Tp> inline _Tp* Mat::ptr(int y)
{
CV_DbgAssert( y == 0 || (data && dims >= 1 && (unsigned)y < (unsigned)size.p[0]) );
return (_Tp*)(data + step.p[0]*y);
}
Strange that everything works fine for some people here...
I am stitching images at the moment too, but not with the high-level Stitcher functionality; instead I code every step myself with OpenCV 2.4.2. As far as I know, you could try running SurfFeaturesFinder first and BestOf2NearestMatcher second, as sketched below. Just a suggestion, good luck!
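A rough sketch of those first two manual steps with the OpenCV 2.4 detail API (the match confidence and other values are illustrative, not tuned):
#include <opencv2/stitching/detail/matchers.hpp>

// img1, img2 are assumed to be already loaded, non-empty cv::Mat images.
std::vector<cv::detail::ImageFeatures> features(2);
cv::detail::SurfFeaturesFinder finder;
finder(img1, features[0]); features[0].img_idx = 0;
finder(img2, features[1]); features[1].img_idx = 1;

std::vector<cv::detail::MatchesInfo> pairwise;
cv::detail::BestOf2NearestMatcher matcher(false /*try_use_gpu*/, 0.3f /*match_conf*/);
matcher(features, pairwise);  // inspect pairwise[k].confidence here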
I have a problem over here with some simple stitching tool test using OpenCV.
Here is my code:
IplImage *pLeft,
*pRight;
pLeft = cvLoadImage( "left.jpg" );
pRight = cvLoadImage( "right.jpg" );
cv::Mat cvMatLeft( pLeft, true ),
cvMatRight( pRight, true );
std::vector<cv::Mat> imgs;
imgs.push_back( cvMatLeft );
imgs.push_back( cvMatRight );
cv::Mat cvMatOutput;
cv::Stitcher myStitcher = cv::Stitcher::createDefault( true );
cv::Stitcher::Status myStatus = myStitcher.stitch( imgs, cvMatOutput );
I get back the enum ERR_NEED_MORE_IMGS while running this code.
When I debugged into the functions called by OpenCV, I noticed the following:
stitch()'s first argument is a cv::InputArray named images. A closer look shows that its sz.width and sz.height are 0.
Further on, while running through estimateTransform(), the function matchImages() is called, where the member imgs_ is checked. It is derived from the InputArray, and its size() (the number of images) turns out to be 0.
This leads to the mentioned enum.
What am I doing wrong? Something in the initialization of the stitcher or of the cv::Mat?
Thanks in advance
I think this occurs when you use images that are too similar, or images from which only a small number of feature points can be extracted. A quick way to verify is to count the keypoints yourself, as in the sketch below.
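A hedged diagnostic, assuming the OpenCV 2.4 nonfree module is available (the Hessian threshold is illustrative):
#include <opencv2/nonfree/features2d.hpp>
#include <iostream>
#include <vector>

// Prints the keypoint count; very low numbers on either input would
// explain ERR_NEED_MORE_IMGS from the stitcher.
void countKeypoints(const cv::Mat& image)
{
    cv::SurfFeatureDetector detector(300.0);
    std::vector<cv::KeyPoint> keypoints;
    detector.detect(image, keypoints);
    std::cout << keypoints.size() << " keypoints found" << std::endl;
}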
I'm trying to extract SURF descriptors, but I want to supply the keypoints myself, so I set the last parameter, useProvidedKeypoints, to true.
When I create each KeyPoint I use the default constructor (so default values everywhere); I only set the point pt and the octave, which I set to 3.
I'm using the C++ interface to SURF, but I know the problem is right at cvExtractSURF, because I copied that part of the code into mine to help me debug.
When I call that function, with the last parameter set to true, I got this error:
OpenCV Error: Bad argument (Unknown array type) in cvarrToMat, file /home/widgg/opencv/trunk/modules/core/src/matrix.cpp, line 651
terminate called after throwing an instance of 'cv::Exception'
what(): /home/widgg/opencv/trunk/modules/core/src/matrix.cpp:651: error: (-5) Unknown array type in function cvarrToMat
I really don't know what I'm doing wrong!
EDIT:
Here's some code. First, how I create the keypoints (I left in a couple of details, like the layer_id bits, but you get the main idea):
for (json_pt_info_vector::iterator b_beg = beg->points.begin(); b_beg != b_end; ++b_beg)
{
int layer_id = b_beg->layer_id;
json_point_info_coord &jpic = b_beg->coord;
jpic.feature_id = features[layer_id].keypoints.size();
KeyPoint kp;
kp.octave = 3;
kp.pt.x = jpic.x;
kp.pt.y = jpic.y;
features[layer_id].keypoints.push_back(kp);
}
Here's the call to SURF:
SURF surf(300, 3, 4);
for (int i = 0; i < nb_img; ++i)
{
debug_msg("extract_features #4.1");
cv::detail::ImageFeatures &cdif = features[i];
Mat gray_image = imread(param.layer_images[i], 0); // 0 = force to gray scale!
debug_msg("extract_features #4.2");
vector<float> descriptors;
debug_msg("extract_features #4.3");
surf(gray_image, Mat(), cdif.keypoints, descriptors, true); // MUST BE TRUE TO FORCE THE PROVIDED KEYPOINTS
debug_msg("extract_features #4.4");
cdif.descriptors = Mat(descriptors, true).reshape(1, (int)cdif.keypoints.size());
debug_msg("extract_features #4.5");
gray_image.release();
debug_msg("extract_features #4.6");
images[i] = imread(param.layer_images[i]); // keep the image open
}
It crashes right after the #4.3 debug message!
Hope that helps!
EDIT 2:
I replaced part of the code with cv::SurfDescriptorExtractor, swapping everything from #4.3 to #4.5 for the following line:
extractor.compute(gray_image, cdif.keypoints, cdif.descriptors);
So there's still a bug now, but it's located somewhere else and not necessarily related to this question!
I'm surprised that the call to surf(gray_image, Mat(), cdif.keypoints, descriptors, true) even compiles. The descriptors argument should be a cv::Mat, not a vector.
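If your build does accept the call via an OutputArray overload, a hedged fix is simply to hand it a cv::Mat (sketch, reusing the question's variable names):
cv::Mat descriptors;  // let SURF size and type it; no reshape needed after
surf(gray_image, cv::Mat(), cdif.keypoints, descriptors, true);
cdif.descriptors = descriptors;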