Why is my cv::Mat matrix sparse instead of dense? - opencv

Due to my current work with OpenVINO I have to use OpenCV. I have to convert a std::vector to a cv::Mat array. My example code looks like this:
std::vector<float> inputvector(10*10, 1.1111);
cv::Mat image = cv::Mat(10, 10, CV_32FC1);
for (int i = 0; i < 10; i++)
{
    for (int j = 0; j < 10; j++)
    {
        image.at<float>(i, j) = inputvector.at(10*i + j);
    }
}
Now I have to wrap my data in a Blob::Ptr without allocating new memory:
Blob::Ptr imgBlob = wrapMat2Blob(image);
For the last line above, I get the following error message from the OpenVINO inference engine:
Doesn't support conversion from not dense cv::Mat
I do not understand this, as my 10*10 array holds the value 1.1111 in every position. Can somebody explain that? Thanks!
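As an aside (my addition, not part of the original question): the copy loop can be avoided entirely, since cv::Mat can wrap an existing buffer without allocating, and cv::Mat::isContinuous() is the usual way to check whether a Mat's data is stored densely. A minimal sketch, assuming the vector outlives the Mat:

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    std::vector<float> inputvector(10*10, 1.1111f);

    // Wrap the vector's buffer without copying; the Mat does not own the data,
    // so inputvector must stay alive as long as the Mat is used.
    cv::Mat image(10, 10, CV_32FC1, inputvector.data());

    // A Mat is dense/continuous when its rows are stored back to back with no gaps.
    std::cout << std::boolalpha << image.isContinuous() << std::endl; // true

    // A sub-matrix view is not continuous, even though its parent is.
    cv::Mat roi = image(cv::Rect(0, 0, 5, 5));
    std::cout << roi.isContinuous() << std::endl; // false
    return 0;
}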

Related

Avoiding memory leaks while using vector<Mat>

I am trying to write some code that uses OpenCV Mat objects. It goes something like this:
Mat img;
vector<Mat> images;
for (int i = 0; i < 5; i++)
{
    img.create(h, w, type); // h, w and type are given correctly
    // input an image from somewhere to img correctly.
    images.push_back(img);
    img.release();
}
for (size_t i = 0; i < images.size(); i++)
    images[i].release();
However, I still seem to have a memory leak. Can anyone tell me why?
I thought that when the refcount of a Mat object reaches 0, the memory is automatically deallocated.
You rarely need to call release explicitly, since OpenCV Mat objects automatically take care of their internal memory.
Also be aware that copying a Mat only creates a new header pointing to the same data; no pixels are copied. If you later modify or reuse the original, every copy sees the change. So when you push the image into the vector, push a deep copy (clone()) so the element in the vector keeps its own data.
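To see the difference between a header copy and a deep copy, here is a standalone illustration (my sketch, not from the original answer):

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::Mat a(2, 2, CV_8UC1, cv::Scalar(0));
    cv::Mat shallow = a;      // header copy: shares a's pixel data
    cv::Mat deep = a.clone(); // deep copy: owns an independent buffer

    a.setTo(255);
    std::cout << (int)shallow.at<uchar>(0, 0) << std::endl; // 255, follows a
    std::cout << (int)deep.at<uchar>(0, 0) << std::endl;    // 0, unaffected
    return 0;
}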
Since you mentioned:
I have a large 3D image stored in a Mat object. I am running over it using for loops, creating a 2D Mat called "image", putting the slices into image, pushing back image to the vector images, and releasing the image. Later I do a for loop on the images vector, releasing all the matrices one by one.
You can store all slices into the vector with the following code. To release the images in the vector, just clear the vector.
#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;
using namespace std;

int main()
{
    // Init the multidimensional image
    int sizes[] = { 10, 7, 5 };
    Mat data(3, sizes, CV_32F);
    randu(data, Scalar(0, 0, 0), Scalar(1, 1, 1));

    // Put slices into images
    vector<Mat> images;
    for (int z = 0; z < data.size[2]; ++z)
    {
        // Create the slice
        Range ranges[] = { Range::all(), Range::all(), Range(z, z + 1) };
        Mat slice(data(ranges).clone()); // with clone the slice is continuous, but still 3d
        Mat slice2d(2, &data.size[0], data.type(), slice.data); // make the slice a 2d image

        // Clone the slice into the vector, or it becomes invalid when slice goes out of scope.
        images.push_back(slice2d.clone());
    }

    // You can deallocate the multidimensional matrix now, if needed
    data.release();

    // Work with slices....

    // Release the vector of slices
    images.clear();
    return 0;
}
Please try this code, which is basically what you do:
void testFunction()
{
    // image width/height => 80MB images
    int size = 5000;
    cv::Mat img = cv::Mat(size, size, CV_8UC3);
    std::vector<cv::Mat> images;
    for (int i = 0; i < 5; i++)
    {
        // since the image size for i==0 is the same as the initial image, no new data will be allocated in the first iteration.
        img.create(size + i, size + i, img.type()); // h, w and type are given correctly
        // input an image from somewhere to img correctly.
        images.push_back(img);
        // release the created image.
        img.release();
    }
    // instead of manual releasing, images.clear() would have been enough here.
    for (size_t i = 0; i < images.size(); i++)
        images[i].release();
    images.clear();
}

int main()
{
    cv::namedWindow("bla");
    cv::waitKey(0);
    for (unsigned int i = 0; i < 100; ++i)
    {
        testFunction();
        std::cout << "another iteration finished" << std::endl;
        cv::waitKey(0);
    }
    std::cout << "end of main" << std::endl;
    cv::waitKey(0);
    return 0;
}
After the first call of testFunction, memory will be "leaked" so that the application consumes 4 KB more memory on my device, but no further "leaks" appear on additional calls for me.
So your code looks OK, and the "memory leak" isn't related to that matrix creation and releasing, but perhaps to some "global" things happening within the OpenCV library or the C++ runtime to optimize future function calls or memory allocations.
I encountered the same problem when iterating over an OpenCV Mat. Memory consumption grew to 1.1 GB, and then the program stopped with an out-of-memory warning. My program contained a macro #define new new(__FILE__, __LINE__) that crashed inside some standard library code, so I deleted all overloads of operator new/delete. When debugging there was no error, but at runtime I got "Debug Assertion Failed! Expression: _pFirstBlock == pHead". Following the instructions in
Debug Assertion Error in OpenCV
I changed the runtime library setting from Multi-threaded (/MT) for Release and Multi-threaded Debug (/MTd) for Debug, under Project Properties >> Configuration Properties >> C/C++ >> Code Generation >> Runtime Library, to:
Multi-threaded Debug DLL (/MDd), if you are building the Debug version of your code.
Multi-threaded DLL (/MD), if you are building the Release version of your code.
The memory leak is gone, and memory consumption is 38 MB.

OpenCV MultiBandBlender doesn't work

I am trying to blend my images into a panorama with MultiBandBlender, but it returns a black panorama. FeatherBlender works fine. What am I doing wrong?
cv::Mat blendImages(const std::vector<cv::Point> &corners, std::vector<cv::Mat> images)
{
    std::vector<cv::Size> sizes;
    for (size_t i = 0; i < images.size(); i++)
        sizes.push_back(images[i].size());

    float blend_strength = 5;
    cv::Size dst_sz = cv::detail::resultRoi(corners, sizes).size();
    float blend_width = sqrt(static_cast<float>(dst_sz.area())) * blend_strength / 100.f;

    cv::Ptr<cv::detail::Blender> blender = cv::detail::Blender::createDefault(cv::detail::Blender::MULTI_BAND);
    //cv::detail::FeatherBlender* fb = dynamic_cast<cv::detail::FeatherBlender*>(blender.get());
    //fb->setSharpness(1.f/blend_width);
    cv::detail::MultiBandBlender* mb = dynamic_cast<cv::detail::MultiBandBlender*>(blender.get());
    mb->setNumBands(static_cast<int>(ceil(log(blend_width)/log(2.)) - 1.));

    blender->prepare(corners, sizes);
    for (size_t i = 0; i < images.size(); i++)
    {
        cv::Mat image_s;
        images[i].convertTo(image_s, CV_16SC3);
        blender->feed(image_s, cv::Mat::ones(image_s.size(), CV_8UC1), corners[i]);
    }

    cv::Mat pano;
    cv::Mat panoMask = cv::Mat::ones(dst_sz, CV_8UC1);
    blender->blend(pano, panoMask);
    return pano;
}
Three possible causes:
1. Try keeping all image_s and masks in a vector, and feed the blender with the following structure:
   for (size_t i = 0; i < images_s.size(); ++i)
       blender->feed(images_s[i], masks[i], corners[i]);
2. Don't initialize panoMask to ones before blending.
3. Make sure the corners are well defined.
Actually, I can't compile your code with OpenCV 2.4 because of the blender.get function; there is no such function in my build of OpenCV 2.4.
Anyway, if you wish to make a panorama, you'd better not use the resultRoi function; you need boundingRect. I suppose it is really hard to get all images horizontally aligned for one panorama.
Also, look at my answer here; it demonstrates how to use MultiBandBlender.
Hey, I was getting the same black panorama while using MultiBandBlender in OpenCV. The issue was resolved by changing
cv::Mat::ones(image_s.size(), CV_8UC1)
to
cv::Mat::ones(image_s.size(), CV_8UC1)*255
This is because Mat::ones initializes all pixels to the numerical value 1, so we need to multiply by 255 in order to get a proper black & white mask.
And thanks, your issue solved my problem :)
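To illustrate the point, here is a standalone check (my sketch, not part of the original answer):

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    // Mat::ones fills an 8-bit mask with 1 (nearly black), not 255 (white).
    cv::Mat m1 = cv::Mat::ones(2, 2, CV_8UC1);
    cv::Mat m255 = cv::Mat::ones(2, 2, CV_8UC1) * 255; // a proper white mask

    std::cout << (int)m1.at<uchar>(0, 0) << std::endl;   // prints 1
    std::cout << (int)m255.at<uchar>(0, 0) << std::endl; // prints 255
    return 0;
}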

How to use MatVector in JavaCV

Hi, I'm trying to write some camera calibration code, and I'm having a hard time using MatVectors in JavaCV, which should be the equivalent of std::vector in C++.
This is how I generate my image and object points:
Mat objectPoints = new Mat(allImagePoints.rows(), 1, opencv_core.CV_32FC3);
float x = 0;
float y = 0;
for (int h = 0; h < patternHeight; h++) {
    y = h * rectangleSize;
    for (int w = 0; w < patternWidth; w++) {
        x = w * rectangleSize;
        objectPoints.getFloatBuffer().put(3 * (patternWidth * h + w), x);
        objectPoints.getFloatBuffer().put(3 * (patternWidth * h + w) + 1, y);
        objectPoints.getFloatBuffer().put(3 * (patternWidth * h + w) + 2, 0);
    }
}
MatVector allObjectPointsVec = new MatVector(allImagePoints.cols());
MatVector allImagePointsVec = new MatVector(allImagePoints.cols());
for (int i = 0; i < allImagePoints.cols(); i++) {
    allObjectPointsVec.put(i, objectPoints);
    allImagePointsVec.put(i, allImagePoints.col(i));
}
My image points are given in the Mat allImagePoints, and as you can see I create the corresponding vectors allObjectPointsVec and allImagePointsVec accordingly. When I try to do a camera calibration with these points, I get the following error:
OpenCV Error: Assertion failed (ni > 0 && ni == ni1) in cv::collectCalibrationData, file ..\..\..\..\opencv\modules\calib3d\src\calibration.cpp, line 3193
java.lang.reflect.InvocationTargetException
...
which suggests that the lengths of the image and object points don't coincide, but I'm pretty sure I got this right. Printing the MatVector objects gives
org.bytedeco.javacpp.opencv_core$MatVector[address=0x2237b8a0,position=0,limit=1,capacity=1,deallocator=org.bytedeco.javacpp.Pointer$NativeDeallocator#4d353a7a]
org.bytedeco.javacpp.opencv_core$MatVector[address=0x2237acd0,position=0,limit=1,capacity=1,deallocator=org.bytedeco.javacpp.Pointer$NativeDeallocator#772f4d0]
which also confuses me, as I would have expected the capacity to correspond to the length (number of matrices in the vector). If I print the size field, I get the expected value. If I access a random element in the vector (e.g. allObjectPointsVec.get(i)) and print it as a string, I receive the following:
AbstractArray[width=1,height=77,depth=32,channels=3] (for object points)
AbstractArray[width=1,height=77,depth=32,channels=2] (for image points)
which is what I would expect... Any ideas? To me this seems like a bug, also because I don't understand what the capacity represents if not the vector length...

OpenCV Mat object array dynamic allocation with initialization

I am new to OpenCV and have little knowledge of C++. I need to dynamically create an array of Mat objects with given initial values, but this gives me an error:
Mat *M = new Mat[variable](rows, cols, CV_8UC1, Scalar(0));
error: ISO C++ forbids initialization in array new [-fpermissive]
Please suggest the correct syntax for my semantics.
You need to initialize them all in a loop:
Mat *M = new Mat[variable];
for (int i = 0; i < variable; i++)
    M[i] = Mat(rows, cols, CV_8UC1, Scalar(0)); // create() takes no initial value, so assign a constructed Mat
Or use a 3-dimensional Mat:
int dims[3] = {variable,rows,cols};
Mat M(3,dims,CV_8UC1,Scalar(0));
But when you want to read/write images with imread() or imwrite(), I suggest using the first solution.
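As a side note (my addition, not part of the original answers): a std::vector<cv::Mat> avoids raw new[]/delete[] entirely, and the matrices are released automatically. A minimal sketch:

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    int variable = 4, rows = 3, cols = 3;

    // The fill constructor copy-constructs every element from one Mat,
    // so at first all elements share the same pixel buffer...
    std::vector<cv::Mat> M(variable, cv::Mat(rows, cols, CV_8UC1, cv::Scalar(0)));

    // ...clone each element if independent buffers are needed.
    for (auto &m : M)
        m = m.clone();

    return 0; // the vector releases all Mats automatically
}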

Downsampling without smoothing

Is there a built-in way to downsample an image in OpenCV 2.3.1 without the prior Gaussian smoothing that is performed by the pyrDown C++ function?
Thanks.
Maybe you're looking for resize().
# Python code:
import cv2
large_img = cv2.imread('our_large_image.jpg')
small_to_large_image_size_ratio = 0.2
small_img = cv2.resize(large_img,  # original image
                       (0, 0),     # set fx and fy, not the final size
                       fx=small_to_large_image_size_ratio,
                       fy=small_to_large_image_size_ratio,
                       interpolation=cv2.INTER_NEAREST)
Instead of interpolation=cv2.INTER_NEAREST you can use any of these interpolation methods.
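Since the question targets the C++ API, the equivalent call in C++ would look like this (a sketch; the file name is just the placeholder from the Python example above):

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat large_img = cv::imread("our_large_image.jpg");
    double ratio = 0.2;
    cv::Mat small_img;
    // Passing an empty Size and fx/fy scale factors mirrors the Python call above.
    // INTER_NEAREST copies the nearest source pixel, so no smoothing is applied.
    cv::resize(large_img, small_img, cv::Size(), ratio, ratio, cv::INTER_NEAREST);
    return 0;
}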
resize() with interpolation=INTER_NEAREST.
EDIT
hmmm, what if you write the function yourself?
double factor;  // scale factor, e.g. 0.2 to downsample
int newrows = round(mat.rows * factor);
int newcols = round(mat.cols * factor);
Mat newmat = Mat(newrows, newcols, mat.type());
for (int i = 0; i < mat.rows; i++) {
    for (int j = 0; j < mat.cols; j++) {
        newmat.at<yourtype>(static_cast<int>(round(i * factor)), static_cast<int>(round(j * factor))) = mat.at<yourtype>(i, j);
    }
}
I haven't checked whether the code works or not (most likely not), but you get the idea.
You can use image pyramids: pyrDown. The link to the OpenCV documentation is:
http://docs.opencv.org/2.4/doc/tutorials/imgproc/pyramids/pyramids.html
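For reference, a minimal pyrDown call looks like this (my sketch; note that pyrDown blurs with a Gaussian kernel before decimating, which is exactly the smoothing the question wanted to avoid):

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat src = cv::imread("our_large_image.jpg");
    cv::Mat dst;
    // Gaussian blur, then drop every other row and column.
    cv::pyrDown(src, dst); // default output size: ((cols+1)/2) x ((rows+1)/2)
    return 0;
}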
