Cannot convert with cvMerge, DFT - OpenCV

I am trying to compute the DFT of a single-channel image, and since cvDft expects complex values, I was advised to merge the original image with a second image of all 0's, so that the latter is treated as the imaginary part.
My problem comes when using the cvMerge function:
Mat tmp = imread(filename, 0);
if (tmp.empty())
{
    cout << "Usage: dft <image_name>" << endl;
    return -1;
}
Mat Result(tmp.rows, tmp.cols, CV_64F, 2);
Mat tmp1(tmp.rows, tmp.cols, CV_64F, 0);
Mat image(tmp.rows, tmp.cols, CV_64F, 2);
cvMerge(tmp, tmp1, image);
It gives me the following error: cannot convert cv::Mat to CvArr
Could anyone help me? Thanks!

1) It seems like you're mixing up two different styles of OpenCV code.
cv::Mat (i.e. Mat) is a C++ class from the new version of OpenCV; cvMerge is a C function from the old version of OpenCV.
Instead of using cvMerge, use merge.
2) You're trying to merge a matrix (tmp) of type CV_8U (probably) with a CV_64F one.
Use convertTo to get tmp as CV_64F.
3) Why are your Result and image mats (the destination mats) initialized to cv::Scalar(2)? I think you're misusing the constructor parameters. See here for more info.
4) Your image mat is a single-channel mat, but you want it as a 2-channel mat (as mentioned in the question). Change the declaration as shown below; a sketch combining all four fixes follows after it.
Mat image(tmp.rows,tmp.cols,CV_64FC2);
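Putting the four fixes together, a minimal sketch (variable names follow the question; the goal assumed here is a two-channel real/imaginary matrix ready for dft):
Mat tmp = imread(filename, 0);              // loads as CV_8UC1
Mat real;
tmp.convertTo(real, CV_64F);                // match depths before merging
Mat imag = Mat::zeros(tmp.size(), CV_64F);  // all-zero imaginary plane
Mat planes[] = { real, imag };
Mat image;                                  // becomes CV_64FC2 after merge
merge(planes, 2, image);
dft(image, image);                          // in-place complex DFT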

Related

OpenCV ConvertTo CV_32SC1 from CV_8UC1

Hello, I am using OpenCV version 3.4 and want to read an image (*.pgm) and then convert it to CV_32SC1. Therefore I use the following code (in part):
img = imread(f, CV_LOAD_IMAGE_GRAYSCALE);
img.convertTo(imgConv, CV_32SC1);
The problem is the following: all pixels are converted to zero, and I don't understand why. I'm checking with the following (and with imshow("Image", imgConv)):
cout << static_cast<int>(img.at<uchar>(200,100));
cout << static_cast<int32_t>(imgConv.at<int32_t>(200,100)) << endl;
In my example this results in
74
74
I tested several points of the image; the pixel values are simply the same. But shouldn't they be converted automatically to the 32-bit range, or do I have to manage that manually?
You have to manage that manually. This is why cv::Mat::convertTo() has another parameter, a scale. For instance, if you want to convert from CV_8U to CV_32F you'd typically
img.convertTo(img2, CV_32F, 1.0/255.0);
to scale to the typical float-valued range. I'm not sure what your expected range for CV_32SC1 is, since you're going from unsigned to signed, but just add the scale factor you feel is right.
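For example, since imshow() divides 32-bit integer pixels by 256 for display (likely why imgConv looks black even though the values copied over correctly), one possible sketch, with the scale factor being an assumption rather than anything from the question, is:
img.convertTo(imgConv, CV_32SC1, 256.0); // 0..255 -> 0..65280, which imshow's /256 display mapping renders like the original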

Opencv Read Mat with double precision YAML

I am trying to read data from a YAML file using the tutorials available at the OpenCV website. I am using the ">>" operator as suggested.
cv::Mat R;
cv::FileStorage fs;
fs.open(filename, cv::FileStorage::READ);
fs["matrix"] >> R;
It basically works, but I want the matrix to be in double precision, not in float precision. Declaring the matrix R as a double matrix does not do the job. What would be the right way to achieve this?
To load the matrix in "double precision", aka of type CV_64F, the matrix needs to also be stored as CV_64F.
If it is saved as "single precision", aka of type CV_32F, you can however load it as CV_32F and then convert it to CV_64F with:
R.convertTo(R, CV_64F);
You can check the format of the saved matrix by opening the .yml file and looking at the field dt: for CV_32F formats it will contain an f, for CV_64F formats a d.
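A minimal sketch of this load-then-convert approach (the node name "matrix" is taken from the question; the filename is an assumption):
cv::FileStorage fs("calib.yml", cv::FileStorage::READ);
cv::Mat R;
fs["matrix"] >> R;          // loads with whatever depth the dt field specifies
if (R.depth() != CV_64F)
    R.convertTo(R, CV_64F); // promote single to double precision
fs.release();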

OpenCV FaceRecognizer wrong shapes for given matrices

I'm trying to make a FisherFaceRecognizer's predict() method work, but I keep getting an error
Bad argument (Wrong shapes for given matrices. Was size(src) =
(1,108000), size(W) = (36000,1).) in subspaceProject, file
/tmp/opencv-DCb7/OpenCV-2.4.3/modules/contrib/src/lda.cpp, line 187
This is similar to a question that was asked at Wrong shapes for given matrices in OPENCV, but in my case both source and training images are the same data type: full color.
My code is adapted from the tutorial at http://docs.opencv.org/modules/contrib/doc/facerec/facerec_tutorial.html#fisherfaces
However, my test image is larger than the training images, so I needed to work on a region of interest (ROI) of the right size.
Here's how I read the images and converted the sizes. I cloned the ROI matrix because an earlier error message told me the target matrix must be contiguous:
vector<Mat> images;
images.push_back( cvLoadImage( trainingList[i].c_str() ) );
IplImage* img;
img = cvLoadImage( imgName.c_str() );
// take ROI and clone into a new Mat
Mat testSample1(img, Rect( xLoc, yLoc, images[0].cols, images[0].rows ));
Mat testSample = testSample1.clone();
// Create a FisherFaceRecognizer in OpenCV
Ptr<FaceRecognizer> model = createFisherFaceRecognizer(0, DBL_MAX);
model->train(images, labels);
cout << " check of data type testSample is " << testSample.type() << " images is " << images[0].type() << endl;
int predictedLabel = model->predict(testSample);
I get an exception message at the predict statement.
The cout statement tells me both matrices have type 16, yet somehow it still doesn't believe the matrices are the same size and data type...
You should check the shapes, not the types.
Try:
cout << testSample.rows << " " << testSample.cols << " " << images[0].rows << " " << images[0].cols << endl;
Also, ensure that both the training and test images are in the same color space. If not, try:
cvtColor(testSample, testSample_inSameSpaceOfTraining, CV_BGR2***); // default opencv colors are "BGR"
I found out that the FisherFaceRecognizer requires grayscale images, so I should have loaded both training and test images like this:
trainingImages.push_back( imread( trainingList[i].c_str(), CV_LOAD_IMAGE_GRAYSCALE));
and
Mat img;
img = imread( imgName.c_str(), CV_LOAD_IMAGE_GRAYSCALE );
(I also reconciled the type of img for consistency.) The grayscale-only requirement is documented in the OpenCV reference manual (PDF available online) but apparently not in any of the online tutorials or other documents for FisherFaceRecognizer.
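For reference, a hedged end-to-end sketch combining the ROI and grayscale fixes (OpenCV 2.4-era API; trainingList, labels, imgName, xLoc and yLoc are assumed from the question):
vector<Mat> images;
for (size_t i = 0; i < trainingList.size(); i++)
    images.push_back( imread( trainingList[i], CV_LOAD_IMAGE_GRAYSCALE ) );
Mat img = imread( imgName, CV_LOAD_IMAGE_GRAYSCALE );
// clone the ROI so the test sample is a contiguous matrix
Mat testSample = img( Rect(xLoc, yLoc, images[0].cols, images[0].rows) ).clone();
Ptr<FaceRecognizer> model = createFisherFaceRecognizer(0, DBL_MAX);
model->train(images, labels);
int predictedLabel = model->predict(testSample);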

get matrix of image in opencv

// Open the image
Mat img_rgb = imread("sudoku2.png", CV_LOAD_IMAGE_GRAYSCALE);
if (img_rgb.empty())
{
    cout << "Cannot open the image" << endl;
    return;
}
Mat img_bw = img_rgb > 128;
imwrite("image_bw.jpg", img_bw);
Now, I want to get all pixels of img_bw and save them into a matrix M (int[img_bw.rows][img_bw.cols]). How can I do this in C++?
What format?
The raw byte data in cv::Mat is available from the .ptr() function, i.e. img_bw.ptr().
OpenCV also has XML and YAML read and write functions for matrices, just by using the << operator; see the OpenCV tutorial on XML and YAML I/O.
EDIT: In C++ you can access pixels with the at<>() accessor.
Use img_bw.at<uchar>(row, col) for an unsigned char (CV_8U) pixel and img_bw.at<float>(row, col) for a CV_32F image.
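A minimal sketch of copying the thresholded image into an int matrix as asked (a vector of vectors stands in for the fixed-size array, which is my assumption about the intended container):
std::vector<std::vector<int> > M(img_bw.rows, std::vector<int>(img_bw.cols));
for (int r = 0; r < img_bw.rows; r++)
    for (int c = 0; c < img_bw.cols; c++)
        M[r][c] = img_bw.at<uchar>(r, c); // at<>() takes (row, col)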

Convert 1 channel image to 3 channel

I am trying to convert a 1-channel image (16 bit) to a 3-channel image in OpenCV 2.3.1. I am having trouble using the merge function; my code and the resulting error are below:
Mat temp, tmp2;
Mat hud;
tmp2 = cv_ptr->image;
tmp2.convertTo(temp, CV_16UC1);
temp = temp.t();
cv::flip(temp, temp, 1);
resize(temp, temp, Size(320, 240));
merge(temp, 3, hud);
error: no matching function for call to ‘merge(cv::Mat&, int, cv::Mat&)’
Can anyone help me with this? Thanks in advance!
If temp is the 1 channel matrix that you want to convert to 3 channels, then the following will work:
cv::Mat out;
cv::Mat in[] = {temp, temp, temp};
cv::merge(in, 3, out);
Check the documentation for more info.
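One caveat worth noting: cv::merge requires all inputs to share the same size and depth, and it preserves that depth, so with a CV_16UC1 temp the merged output will be CV_16UC3.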
Here is a solution that does not require replicating the single-channel image before creating a 3-channel image from it. Its memory footprint is about a third of that of the merge-based solution (by volting above).
See the OpenCV documentation for cv::mixChannels if you want to understand why this works:
// copy channel 0 from the first image to all channels of the second image
int from_to[] = { 0,0, 0,1, 0,2};
Mat threeChannelImage(singleChannelImage.size(), CV_8UC3);
mixChannels(&singleChannelImage, 1, &threeChannelImage, 1, from_to, 3);
It looks like you aren't quite using merge correctly. You need to specify all of the channels that are to be 'merged'. I think you want a three-channel frame with all the channels identical; in Python I would write this:
cv.Merge(temp, temp, temp, None, hud)
From the OpenCV documentation:
cvMerge: Composes a multi-channel array from several single-channel arrays or inserts a single channel into the array.