I have a vector of images and a vector of descriptor values extracted with the HOG descriptor in OpenCV:
vector<Mat> images;
vector< vector < float> > v_descriptorsValues;
These vectors are previously initialized with proper images and values. This is the part of the code that causes an OpenCV error:
Mat reData(images.size(), v_descriptorsValues[0].size(),true);
for (int i=0; i< images.size(); i++)
Mat(v_descriptorsValues[i]).copyTo(reData.row(i));
And the OpenCV error I get:
OpenCV Error: Assertion failed (!fixedSize() || ((Mat*)obj)->size.operator()() == _sz) in unknown function, file ..\..\..\src\opencv\modules\core\src\matrix.cpp, line 1344
In the last line I want to copy all the elements of v_descriptorsValues into the reData Mat.
Any idea how to solve the problem?
I'm trying to import my own pre-trained Caffe GoogLeNet model using OpenCV v3.4.3. Running a Caffe test after training, using the model deploy file, everything worked fine. However, when I feed the loaded OpenCV net an image blob, I get an exception.
OpenCV code:
Net net = Dnn.readNetFromCaffe("deploy.prototxt","bvlc_googlenet.caffemodel");
Mat image = Imgcodecs.imread(input.getAbsolutePath(), Imgcodecs.IMREAD_COLOR);
Mat blob = Dnn.blobFromImage(image);
System.out.println(image);
System.out.println(blob);
net.setInput(blob);
Mat result = net.forward().reshape(1);
Output Error:
Mat [ 24*15*CV_8UC3, isCont=true, isSubmat=false, nativeObj=0x1bcd0740, dataAddr=0x1a9d1880 ]
Mat [ -1*-1*CV_32FC1, isCont=true, isSubmat=false, nativeObj=0x1bcd0eb0, dataAddr=0x1a4e4340 ]
Exception in thread "main" CvException [org.opencv.core.CvException: cv::Exception: OpenCV(3.4.3) Z:\build tools\opencv-3.4.3\modules\dnn\src\layers\fully_connected_layer.cpp:73: error: (-215:Assertion failed) 1 <= blobs.size() && blobs.size() <= 2 in function 'cv::dnn::FullyConnectedLayerImpl::FullyConnectedLayerImpl'
]
at org.opencv.dnn.Net.forward_1(Native Method)
at org.opencv.dnn.Net.forward(Net.java:62)
at test.OpenCVTests.main(OpenCVTests.java:54)
Caffe-train-val-model.prototxt
Caffe-deploy-model.prototxt
Thanks in advance!
This issue was solved for me here:
https://github.com/opencv/opencv/issues/12578#issuecomment-422304736
"there is no loss3/classifier_retrain from Caffe-deploy-model.prototxt
in Caffe-train-val-model.prototxt. If you tried to run this model
several times in Caffe you'll get different outputs for the same input
because Caffe fills missed weights randomly."
Credit: github.com/dkurt
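Based on that diagnosis, a sketch of the fix: the classifier layer must carry the same name in both prototxt files, so that the weights trained under that name are found when OpenCV loads the deploy file. The bottom blob name and num_output below are hypothetical placeholders, not values taken from the actual model files:

```
layer {
  name: "loss3/classifier_retrain"  # must match the name in the deploy prototxt
  type: "InnerProduct"
  bottom: "pool5/7x7_s1"            # hypothetical bottom blob
  top: "loss3/classifier_retrain"
  inner_product_param {
    num_output: 1000                # hypothetical number of classes
  }
}
```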
According to the documentation, a SIFT object can use the function below to compute descriptors for multiple images:
virtual void compute (InputArrayOfArrays images, std::vector< std::vector< KeyPoint > > &keypoints, OutputArrayOfArrays descriptors)
I'm trying to compute the SIFT descriptors for multiple images with the code below:
Ptr<Feature2D> f2d = xfeatures2d::SIFT::create();
vector<vector<KeyPoint>> train_keypoints;
f2d->detect(train_imgs, train_keypoints);
vector<Mat> train_descriptors;
f2d->compute(train_imgs, train_keypoints, train_descriptors);
It compiles under Mac OS 10.10.5 with OpenCV 3, but it terminates during execution with this error:
libc++abi.dylib: terminating with uncaught exception of type std::length_error: vector
If I change the type of train_descriptors to Mat (instead of vector<Mat>), it still fails during execution with another error:
OpenCV Error: Assertion failed (_descriptors.kind() == _InputArray::STD_VECTOR_MAT) in compute, file /tmp/opencv320151228-32931-2p5ggk/opencv-3.1.0/modules/features2d/src/feature2d.cpp, line 126 libc++abi.dylib: terminating with uncaught exception of type cv::Exception: /tmp/opencv320151228-32931-2p5ggk/opencv-3.1.0/modules/features2d/src/feature2d.cpp:126: error: (-215) _descriptors.kind() == _InputArray::STD_VECTOR_MAT in function compute
What type of train_descriptors should I use to make this code compile and run correctly?
Could anyone tell me the difference between vector<Mat> and OutputArrayOfArrays?
Your code
Ptr<Feature2D> f2d = xfeatures2d::SIFT::create();
vector<vector<KeyPoint>> train_keypoints;
f2d->detect(train_imgs, train_keypoints);
vector<Mat> train_descriptors;
f2d->compute(train_imgs, train_keypoints, train_descriptors);
works well if train_imgs is a vector<Mat>.
You don't need to create a vector of 50000 elements, simply use vector<Mat> train_descriptors;.
OutputArrayOfArrays, like InputArray, OutputArray and the like, is an abstraction layer that OpenCV uses to allow both cv::Mat and std::vector to be passed to a function. You should never use these classes explicitly. From the OpenCV docs:
The class is designed solely for passing parameters. That is, normally you should not declare class members, local and global variables of this type.
Also, note that OutputArrayOfArrays is just a typedef of OutputArray.
Alternatively, pre-allocating the output vector should also work:
Ptr<Feature2D> f2d = xfeatures2d::SIFT::create();
vector<vector<KeyPoint>> train_keypoints;
f2d->detect(train_imgs, train_keypoints);
vector<Mat> train_descriptors(50000);
f2d->compute(train_imgs, train_keypoints, train_descriptors);
I am using BackgroundSubtractorMOG() to extract a mask separating out the foreground, and then calling convexHull() on the mask to locate the position of a moving object.
But I am getting the following error:
openCV Error: Assertion failed (nelems >= 0 && (depth == CV_32F || depth == CV_32S)) in convexHull, file /home/ameya/OpenCV2.4.2/modules/imgproc/src/contours.cpp, line 1947
terminate called after throwing an instance of 'cv::Exception'
what(): /home/ameya/OpenCV2.4.2/modules/imgproc/src/contours.cpp:1947: error: (-215) nelems >= 0 && (depth == CV_32F || depth == CV_32S) in function convexHull
I have checked the number of elements and type-cast the mask matrix, but the error persists.
Has anyone encountered a similar problem before? I am using OpenCV 2.4.2.
Use this format, it will help (note the cast to Mat):
convexHull(Mat(inputarray), hull, 0, 0)
Are you calling convexHull on your mask image there?
It is supposed to work with vectors of points (or indices), e.g. from findContours().
//Open the image
Mat img_rgb = imread("sudoku2.png", CV_LOAD_IMAGE_GRAYSCALE);
if (img_rgb.empty())
{
cout<<"Cannot open the image"<<endl;
return;
}
Mat img_bw = img_rgb > 128;
imwrite("image_bw.jpg", img_bw);
Now I want to get all the pixels of img_bw and save them into a matrix M (int[img_bw.rows][img_bw.cols]). How can I do this in C++?
What format?
The raw byte data in cv::Mat is available from the .ptr() member function, i.e. img_bw.ptr().
OpenCV also has XML and YAML read and write functions for matrices via cv::FileStorage and the << operator; see the OpenCV tutorial on XML and YAML I/O.
EDIT: In C++ you can access individual pixels with the .at<>() member function (note the (row, col) argument order).
Use img_bw.at<uchar>(row, col) for an unsigned char (CV_8U) pixel and img_bw.at<float>(row, col) for a CV_32F image.
I'm trying to implement Fourier transformation in the frequency domain.
I used getOptimalDFTSize accordingly, and copied the image and mask into bigger images suitable for the Fourier transformation. I used the sample code from here as a reference.
Now I have to separate the real and imaginary parts, and perform pixelwise multiplication of the image's imaginary part with the mask's imaginary part, and the same for the real parts. But when I try to do so, I get the following error message:
OpenCV Error: Assertion failed (type == srcB.type() && srcA.size() == srcB.size()) in mulSpectrums, file /build/buildd/opencv-2.1.0/src/cxcore/cxdxt.cpp, line 1855
/build/buildd/opencv-2.1.0/src/cxcore/cxdxt.cpp:1855: error: (-215) type == srcB.type() && srcA.size() == srcB.size() in function mulSpectrums
The code is following:
//fourier transfromation of real and imaginary part
Mat complex_image, real_image, complex_mask, real_mask;
cv::dft(new_image, complex_image, DFT_COMPLEX_OUTPUT);
cv::dft(new_image, real_image, DFT_REAL_OUTPUT);
cv::dft(new_mask, complex_mask, DFT_COMPLEX_OUTPUT);
cv::dft(new_mask, real_mask, DFT_REAL_OUTPUT);
//pixelwise multiplication
Mat multiplied_complex, multiplied_real;
cv::mulSpectrums(complex_image, complex_mask, multiplied_complex, DFT_COMPLEX_OUTPUT );
cv::mulSpectrums(real_image, real_mask, multiplied_real, DFT_REAL_OUTPUT);
What am I doing wrong here?
The image and mask should have the same size (width and height) and, most probably the problem here, the same type. If the types differ, you need to convert one of them so that they match.