Using cvReshape after conversion from IplImage to CvMat by cvGetMat - opencv

I need to get 1D vectors from input grayscale images in order to calculate covariance matrix. So I'm trying to convert IplImage to CvMat and then reshape it.
At first I used the following code:
CvMat *image_matrix = cvCreateMat(image->width, image->height, CV_32FC1);
cvConvert(image, image_matrix);
CvMat iv_p, *image_vector = cvCreateMat(image->width * image->height, 1, CV_32FC1);
image_vector = cvReshape(image_matrix, &iv_p, 1, image->width * image->height);
But it gave me:
Assertion failed (src.size == dst.size && src.channels() == dst.channels()) in cvConvertScale
So I found another way here:
CvMat i_p, *image_matrix;
image_matrix = cvGetMat(image, &i_p, 0, 0);
CvMat iv_p, *image_vector = cvCreateMat(image->width * image->height, 1, CV_32FC1);
image_vector = cvReshape(image_matrix, &iv_p, 1, image->width * image->height);
But this time I get:
Image step is wrong (The matrix is not continuous, thus the number of rows can not be changed) in cvReshape.
Could anybody please suggest any solution to my problem?

Related

OpenCV fisheye::projectPoints assertion failed

I want to project a single point (-1450, -1660) onto an image.
I am using OpenCV 4.0.1 with C++.
I have the camera matrix and distortion coefficients, and my code is:
vector <Point3f> inputpoints;
Point3f myPoint;
myPoint.x = -1450;
myPoint.y = -1660;
myPoint.z = 0;
inputpoints.push_back(myPoint);
vector<Point2f> outputpoints;
vector<Point3f> tvec;
tvec.push_back(Point3f(0, 0, 0));
vector<Point3f> rvec;
rvec.push_back(Point3f(0, 0, 0));
double mydata[9] = { 3.3202343554882879e+02, 1., 6.4337059696010670e+02, 0, 3.3196938477610536e+02, 5.3844814394773562e+02, 0., 0., 1. };
Mat mycameraMatrix = Mat(3, 3, CV_64F, mydata);
double mydata2[4] = { -1.1129472191078109e-03, 4.9443845791693870e-02,
-7.2244333582166609e-03, -1.7309984187889034e-03 };
Mat mydiscoff = Mat{ 4,1, CV_64F ,mydata2 };
Mat newCamMat1= Mat(3, 3, CV_64F);
cv::fisheye::projectPoints(inputpoints, rvec, tvec, mycameraMatrix, mydiscoff, outputpoints);
When I run the program I get this exception:
OpenCV(4.0.1) Error: Assertion failed (mtype == type0 || (CV_MAT_CN(mtype) == CV_MAT_CN(type0) && ((1 << type0) & fixedDepthMask) != 0)) in cv::debug_build_guard::_OutputArray::create, file c:\build\master_winpack-build-win64-vc15\opencv\modules\core\src\matrix_wrap.cpp, line 1395
I changed the type of the camera matrix and distortion coefficients to CV_32F, but I still got the same error. I am a complete beginner in OpenCV, so can anyone tell me what caused this exception?
I know rvec is usually a 3x3 rotation matrix, but I just followed someone else's code which said it can also be written this way.
OK, the problem was that projectPoints and fisheye::projectPoints differ in the order of their parameters, so I was using the order that belongs to projectPoints. In fisheye::projectPoints, the output imagePoints is the second parameter rather than the last.

cvCalibrateCamera2 - how to properly define rotation matrix?

I am trying to use cvCalibrateCamera2, but I get an error saying that the rotation matrix is not properly defined:
...calibration.cpp:1495: error: (-5) the output array of rotation vectors must be 3-channel 1xn or nx1 array or 1-channel nx3 or nx9 array, where n is the number of views
I have already tried all possibilities from that info but I still get this error.
My code:
CvMat *object_points = cvCreateMat((int)pp.object_points.size(), 1, CV_32FC3);
CvMat *image_points = cvCreateMat((int)pp.image_points.size(), 1, CV_32FC2);
const CvMat point_counts = cvMat((int)pp.point_counts.size(), 1, CV_32SC1, &pp.point_counts[0]);
for (size_t i=0; i<pp.object_points.size(); i++)
{
object_points->data.fl[i*3+0] = (float)pp.object_points[i].x;
object_points->data.fl[i*3+1] = (float)pp.object_points[i].y;
object_points->data.fl[i*3+2] = (float)pp.object_points[i].z;
image_points->data.fl[i*2+0] = (float)pp.image_points[i].x;
image_points->data.fl[i*2+1] = (float)pp.image_points[i].y;
}
CvMat* tempR = cvCreateMat(1, 3, CV_32F);
cvCalibrateCamera2(object_points, image_points, &point_counts,
cvSize(pp.width, pp.height), camera->m_calib_K,
camera->m_calib_D, tempR, &tempData->m_calib_T,
CV_CALIB_USE_INTRINSIC_GUESS);
// camera->calib_T is defined as:
// double m_calib_T_data[3];
// cvMat(3, 1, CV_64F, camera->m_calib_T_data);
I thought that rotation matrix used in cvCalibrateCamera2 should be 1x3 (then I want to use Rodrigues function to get 3x3 matrix) but it doesn't work as any other combination mentioned in error.
Any ideas?
And I use opencv 2.4.0 (maybe there is bug in that method, but for some reasons I can't use later version of opencv)
I think the error message is clear. I am not confident with the C API, but I know it requires careful initialization.
The problem in line
CvMat* tempR = cvCreateMat(1, 3, CV_32F);
is that tempR needs one 1x3 row for each of the N views (calibration images) you use. Read that way, the error message becomes clear:
...calibration.cpp:1495: error: (-5) the output array of rotation
vectors must be 3-channel 1xn or nx1 array or 1-channel nx3 or nx9
array, where n is the number of views
You must create tempR like this, where N is the number of views:
CvMat* tempR = cvCreateMat(N, 3, CV_32F);
Try to take N from pp.point_counts.size() (one entry per view); if that does not work, try the number of calibration images you use.

Passing OpenCV mat to OpenCL kernel for execution

I am trying to combine OpenCV with OpenCL to create an image buffer and pass it to the GPU.
I have an i.MX6 board whose Vivante GPU core does not support the OCL module of OpenCV.
I am using OpenCV to read an image into a Mat, and then want to convert it to a float array and pass it to a kernel for execution.
But I'm getting a segmentation fault while running the CL program.
Probably I am not converting cv::Mat to cl_float2 correctly; please help.
Snippet of code:
/* Load Image using opencv in opencl*/
cv::Mat shore_mask = cv::imread("1.jpg", 1);
cl_float2 *im = (cl_float2*)shore_mask.data;
cl_int h = shore_mask.rows;
cl_int w = shore_mask.cols;
/* Transfer data to memory buffer */
ret = clEnqueueWriteBuffer(queue , inimg, CL_TRUE, 0, h*w*sizeof(cl_float2), im, 0, NULL, &ev_writ);
How do I convert mat to float matrix and pass it to opencl kernel for execution?
cv::Mat shore_mask = cv::imread("1.jpg", 1);
This gives you an 8-bit BGR image, i.e. 3 * width * height bytes. Now, you do
ret = clEnqueueWriteBuffer(queue , inimg, CL_TRUE, 0, h*w*sizeof(cl_float2), im, 0, NULL, &ev_writ);
The transfer size is width * height * sizeof(cl_float2), i.e. width * height * 8 bytes, so you are reading past the array bounds. Convert your cv::Mat to the proper type, or change the transfer size.

opencv (SVM) - the parameter nu is out of range

The following code generates an exception saying:
One of arguments' values is out of range- the parameter nu must be between 0 and 1
I wonder why this is happening when I've already set it to something between 0 and 1.
CvSVM svm;
CvParamGrid CvParamGrid_C(pow(2.0,-5), pow(2.0,15), pow(2.0,2));
CvParamGrid CvParamGrid_gamma(pow(2.0,-15), pow(2.0,3), pow(2.0,2));
CvParamGrid CvParamGrid_nu(0.4, 0.8,0.1);
const cv::Mat labelsMat(250, 1, CV_32FC1, labels);
const cv::Mat trainingDataMat(250,35, CV_32FC1, trainingData);
CvSVMParams paramz;
paramz.kernel_type = CvSVM::RBF; paramz.svm_type = CvSVM::NU_SVR;
paramz.term_crit = cvTermCriteria(CV_TERMCRIT_ITER,100,0.000001);
svm.train_auto(trainingDataMat, labelsMat, cv::Mat(), cv::Mat(), paramz, 5,
CvParamGrid_C, CvParamGrid_gamma, CvSVM::get_default_grid(CvSVM::P),
CvParamGrid_nu, CvSVM::get_default_grid(CvSVM::COEF),
CvSVM::get_default_grid(CvSVM::DEGREE), true);
paramz = svm.get_params();
Can someone help?
From the OpenCV documentation: the grid is logarithmic, so step must always be greater than 1. Your CvParamGrid_nu(0.4, 0.8, 0.1) has step 0.1, which violates that requirement.

3D matrix multiplication in opencv for RGB color mixing

I'm trying to perform an RGB color mixing operation in OpenCV. I have the image contained in an MxNx3 Mat, and I would like to multiply it by a 3x3 matrix. In MATLAB I do the following:
* flatten the image from MxNx3 to MNx3
* multiply the MNx3 matrix by the 3x3 color mixing matrix
* reshape back to MxNx3
In Opencv I would like to do the following:
void RGBMixing::mixColors(Mat &imData, Mat &rgbMixData)
{
float rgbmix[] = {1.4237, -0.12364, -0.30003, -0.65221, 2.1936, -0.54141, -0.38854, -0.47458, 1.8631};
Mat rgbMixMat(3, 3, CV_32F, rgbmix);
// Scale the coefficients
multiply(rgbMixMat, 1, rgbMixMat, 256);
Mat temp = imData.reshape(0, 1);
temp = temp.t();
multiply(temp, rgbMixMat, rgbMixData);
}
This compiles but generates an exception:
OpenCV Error: Sizes of input arguments do not match (The operation is neither 'array op array' (where arrays have the same size and the same number of channels), nor 'array op scalar', nor 'scalar op array') in arithm_op, file C:/slave/WinInstallerMegaPack/src/opencv/modules/core/src/arithm.cpp, line 1253
terminate called after throwing an instance of 'cv::Exception'
what(): C:/slave/WinInstallerMegaPack/src/opencv/modules/core/src/arithm.cpp:1253: error: (-209) The operation is neither 'array op array' (where arrays have the same size and the same number of channels), nor 'array op scalar', nor 'scalar op array' in function arithm_op
This application has requested the Runtime to terminate it in an unusual way. Please contact the application's support team for more information.
Update 1:
This is code that appears to work:
void RGBMixing::mixColors(Mat &imData, Mat&rgbMixData)
{
Size tempSize;
uint32_t channels;
float rgbmix[] = {1.4237, -0.12364, -0.30003, -0.65221, 2.1936, -0.54141, -0.38854, -0.47458, 1.8631};
Mat rgbMixMat(3, 3, CV_32F, rgbmix);
Mat flatImage = imData.reshape(1, 3);
tempSize = flatImage.size();
channels = flatImage.channels();
cout << "temp channels: " << channels << " Size: " << tempSize.width << " x " << tempSize.height << endl;
Mat flatFloatImage;
flatImage.convertTo(flatFloatImage, CV_32F);
Mat mixedImage = flatFloatImage.t() * rgbMixMat;
mixedImage = mixedImage.t();
rgbMixData = mixedImage.reshape(3, 1944);
channels = rgbMixData.channels();
tempSize = rgbMixData.size();
cout << "temp channels: " << channels << " Size: " << tempSize.width << " x " << tempSize.height << endl;
}
But the resulting image is distorted. If I skip the multiplication of the two matrices and just assign
mixedImage = flatFloatImage
The resulting image looks fine (just not color mixed). So I must be doing something wrong, but am getting close.
I see a couple of things here:
For scaling the coefficients, OpenCV supports multiplication by scalar so instead of multiply(rgbMixMat, 1, rgbMixMat, 256); you should do directly rgbMixMat = 256 * rgbMixMat;.
If that is all your code, you don't properly initialize or assign values to imData, so the line Mat temp = imData.reshape(0, 1); is probably going to crash.
Assuming that imData is MxNx3 (a 3-channel Mat), you want to reshape it into MNx3 (1-channel). According to the documentation, when you write Mat temp = imData.reshape(0, 1); you are saying that you want the number of channels to remain the same and the number of rows to be 1. Instead, it should be:
Mat myData = Mat::ones(100, 100, CV_32FC3); // 100x100x3 matrix
Mat myDataReshaped = myData.reshape(1, myData.rows*myData.cols); // 10000x3 matrix
Why do you take the transpose temp = temp.t(); ?
When you write multiply(temp, rgbMixMat, rgbMixData);, this is the per-element product. You want the matrix product, so you just have to do rgbMixData = myDataReshaped * rgbMixMat; (and then reshape that).
Edit: It crashes if you don't use the transpose, because you do imData.reshape(1, 3); instead of imData.reshape(1, imData.rows*imData.cols);
Try
void RGBMixing::mixColors(Mat &imData, Mat&rgbMixData)
{
Size tempSize;
uint32_t channels;
float rgbmix[] = {1.4237, -0.12364, -0.30003, -0.65221, 2.1936, -0.54141, -0.38854, -0.47458, 1.8631};
Mat rgbMixMat(3, 3, CV_32F, rgbmix);
Mat flatImage = imData.reshape(1, imData.rows*imData.cols);
Mat flatFloatImage;
flatImage.convertTo(flatFloatImage, CV_32F);
Mat mixedImage = flatFloatImage * rgbMixMat;
rgbMixData = mixedImage.reshape(3, imData.rows);
}
