OpenCV: How to convert CV_8UC1 Mat to CV_8UC3

How to convert CV_8UC1 Mat to CV_8UC3 with OpenCV?
Mat dst;
Mat src(height, width, CV_8UC1, (unsigned char*) captureClient->data());
src.convertTo(dst, CV_8UC3);
but dst.channels() is still 1

I've found that the best way to do this is:
cvtColor(src, dst, COLOR_GRAY2RGB);
The image will look the same as when it was grayscale CV_8UC1 but it will be a 3 channel image of type CV_8UC3.
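A minimal sketch putting it together (reusing the src from the question):
cv::Mat src(height, width, CV_8UC1, (unsigned char*) captureClient->data());
cv::Mat dst;
cv::cvtColor(src, dst, cv::COLOR_GRAY2RGB); // copies the single gray plane into all three channels
CV_Assert(dst.type() == CV_8UC3 && dst.channels() == 3);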

From the documentation on convertTo:
void Mat::convertTo(Mat& m, int rtype, double alpha=1, double beta=0) const
rtype – The desired destination matrix type, or rather, the depth (since the number of channels will be the same with the source one). If rtype is negative, the destination matrix will have the same type as the source.
You want to create a matrix for each of the 3 channels you want to create and then use the merge function. See the answers to this question
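If you do want the merge() route for this question, a short sketch (assuming src is the CV_8UC1 Mat from the question):
std::vector<cv::Mat> planes{ src, src, src }; // one source Mat per desired output channel
cv::Mat dst;
cv::merge(planes, dst); // dst is CV_8UC3 with the same gray values in every channel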

The convention is that for type CV_8UC3 the pixel values range from 0 to 255, while for type CV_32FC3 they range from 0.0 to 1.0. Thus, when converting a float image back to 8-bit, you need a scaling factor of 255.0 instead of the default 1.0:
floatImage.convertTo(newImage, CV_8UC3, 255.0);
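Put differently, the scale factor depends on the direction of the conversion; a quick sketch with illustrative names:
cv::Mat asFloat, backToBytes;
byteImage.convertTo(asFloat, CV_32FC3, 1.0 / 255.0); // 8-bit [0,255] -> float [0.0,1.0]
asFloat.convertTo(backToBytes, CV_8UC3, 255.0);      // float [0.0,1.0] -> 8-bit [0,255]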

Related

opencv c++ inverse fourier transformation does not give same image

I have a BGR image and convert it to the Lab color space.
I tried to check whether the iDFT of the DFT of the L-channel image gives back the same image.
// MARK: Split LAB Channel each
cv::Mat lab_resized_host_image;
cv::cvtColor(resized_host_image, lab_resized_host_image, cv::COLOR_BGR2Lab);
imshow("lab_resized_host_image", lab_resized_host_image);
cv::Mat channel_L_host_image, channel_A_host_image, channel_B_host_image;
std::vector<cv::Mat> channel_LAB_host_image(3);
cv::split(lab_resized_host_image, channel_LAB_host_image);
// MARK: DFT the channel_L host image.
channel_L_host_image = channel_LAB_host_image[0];
imshow("channel_L_host_image", channel_L_host_image);
cv::Mat padded_L;
int rows_L = getOptimalDFTSize(channel_L_host_image.rows);
int cols_L = getOptimalDFTSize(channel_L_host_image.cols);
copyMakeBorder(channel_L_host_image, padded_L, 0, rows_L - channel_L_host_image.rows, 0, cols_L - channel_L_host_image.cols, BORDER_CONSTANT, Scalar::all(0));
Mat planes_L[] = {Mat_<float>(padded_L), Mat::zeros(padded_L.size(), CV_32F)};
Mat complexI_L;
merge(planes_L, 2, complexI_L);
dft(complexI_L, complexI_L);
// MARK: iDFT Channel_L.
Mat complexI_channel_L = complexI_L;
Mat complexI_channel_L_idft;
cv::dft(complexI_L, complexI_channel_L_idft, cv::DFT_INVERSE|cv::DFT_REAL_OUTPUT);
normalize(complexI_channel_L_idft, complexI_channel_L_idft, 0, 1, NORM_MINMAX);
imshow("complexI_channel_L_idft", complexI_channel_L_idft);
Each imshow gives me a different image... I think the normalization may be the problem.
What is wrong? Help!
[screenshots: original image, idft result]
OpenCV’s FFT is not normalized by default. One of the forward/backward transform pair must be normalized for the pair to reproduce the input values. Simply add cv::DFT_SCALE to the options:
cv::dft(complexI_L, complexI_channel_L_idft, cv::DFT_INVERSE|cv::DFT_REAL_OUTPUT|cv::DFT_SCALE);
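As a sketch of the round trip, reusing the variable names from the question, the scaled inverse should reproduce the padded input up to floating-point error:
cv::Mat restored;
cv::dft(complexI_L, restored, cv::DFT_INVERSE | cv::DFT_REAL_OUTPUT | cv::DFT_SCALE);
std::cout << cv::norm(cv::Mat_<float>(padded_L), restored, cv::NORM_INF) << std::endl; // prints ~0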

(opencv) imread with CV_LOAD_IMAGE_GRAYSCALE yields a 4 channels Mat

The following code reads an image from a file into a cv::Mat object.
#include <iostream>
#include <string>
#include <opencv2/opencv.hpp>
cv::Mat load_image(std::string img_path)
{
cv::Mat img = cv::imread(img_path, CV_LOAD_IMAGE_GRAYSCALE);
cv::Scalar intensity = img.at<uchar>(0, 0);
std::cout << intensity << std::endl;
return img;
}
I would expect the cv::Mat to have only one channel (namely, the intensity of the image) but it has 4.
$ ./test_load_image
[164, 0, 0, 0]
I also tried converting the image with
cv::Mat gray(img.size(), CV_8UC1);
img.convertTo(gray, CV_8UC1);
but the gray matrix is also a 4-channel one.
I'd like to know if it's possible to have a single channel cv::Mat. Intuitively, that's what I would expect to have when dealing with a grayscale (thus, single channel) image.
The matrix is single channel. You're just reading the values in the wrong way.
Scalar is a struct of 4 values. Constructing a Scalar from a single value sets the first element and leaves the rest at zero.
In your case only the first value is meaningful; the zeros are just Scalar's defaults.
However, you don't need to use a Scalar:
uchar intensity = img.at<uchar>(0, 0);
std::cout << int(intensity) << std::endl; // Print the value, not the ASCII character

incorrect size of vector when converting Mat to Vector

I'm trying to convert the result Mat of matchTemplate with the following code (which was found at: this question):
void convertmatVec(const cv::Mat& m, std::vector<uchar>& v) {
if (m.isContinuous()) {
v.assign(m.datastart, m.dataend);
}
else {
printf("failed to convert / not continuous");
return;
}
}
and when I check the size of the output, it is not the same as the product of result's columns and rows (whereas it is when I convert another Mat that I created):
result: [screenshot]
Another Mat, created with:
cv::Mat test(result.size(), false);
test.setTo(cv::Scalar(255));
and then converted, shows that the size is the same as the product: [screenshot]
So my question is: how can I get the result's data so I can process it further? I'm assuming the size of the vector should be the same as the product, which it clearly isn't.
EDIT1: Added template matching code
void matchTemplatenoRotation(cv::Mat src, cv::Mat templ) {
cv::Mat img_display, result;
src.copyTo(img_display);
int result_cols = src.cols - templ.cols + 1;
int result_rows = src.rows - templ.rows + 1;
result.create(result_rows, result_cols, CV_32FC1);
cv::matchTemplate(src, templ, result, CV_TM_SQDIFF_NORMED);
cv::normalize(result, result, 0, 1, cv::NORM_MINMAX, -1, cv::Mat());
cv::Point minLoc, maxLoc;
double minVal, maxVal;
cv::minMaxLoc(result, &minVal, &maxVal, &minLoc, &maxLoc, cv::Mat());
cv::Point matchLoc = minLoc;
//end of template matching
}
EDIT2: follow-up question
How come when I create another Mat with the following code:
cv::Mat test(cv::Size(result.cols, result.rows),true);
test.setTo(cv::Scalar(255));
//cv::imshow("test3", test);
std::vector<float> testVector;
convertmatVec(test, testVector);
the vector size is as follows: [screenshot]
You have 4 times the expected number of elements in your vector because your matrix is of type CV_32FC1. If you look at the type of m.datastart and m.dataend you will see that they are uchar* and not float* as you expect.
To correct this, change v.assign(m.datastart, m.dataend); to v.assign((float*)m.datastart, (float*)m.dataend);. And you will need to pass a vector of float instead of a vector<uchar>.
Of course, your conversion function will only work for float-type matrices. You could add some tests to detect the type of the matrix inside the function.
To answer your follow-up question: it appears that you have the inverse problem. You are passing a CV_8U matrix to a function that expects a CV_32F one. Use your old conversion function for 8-bit matrices and the fix I suggested for 32-bit floating-point matrices. You can also add a test inside the conversion function to automatically choose the right conversion. I also advise you to read a bit about the OpenCV Mat class to understand better what type of data your matrices hold.
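A sketch of such an automatic test, assuming only single-channel CV_8U and CV_32F matrices need to be handled (a vector<float> can hold both, since uchar widens to float):
void convertmatVec(const cv::Mat& m, std::vector<float>& v) {
    if (!m.isContinuous() || m.channels() != 1) {
        printf("failed to convert / not continuous or not single-channel");
        return;
    }
    if (m.depth() == CV_32F)
        v.assign((const float*)m.datastart, (const float*)m.dataend); // reinterpret the bytes as floats
    else if (m.depth() == CV_8U)
        v.assign(m.datastart, m.dataend); // each uchar is converted to float on assignment
}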
It looks like your const cv::Mat& m has four channels.
Another thing to consider: if this Mat is the result of a matchTemplate(), then you should be using a vector<float>, instead of vector<uchar>.

opencv split vs mixChannels

To separate the hue channel from an HSV image, here is the code using the mixChannels function:
/// Transform it to HSV
cvtColor( src, hsv, CV_BGR2HSV );
/// Use only the Hue value
hue.create( hsv.size(), hsv.depth() );
int ch[] = { 0, 0 };
mixChannels( &hsv, 1, &hue, 1, ch, 1 );
But I know split function can also do this:
vector<Mat> chs;
split(hsv, chs);
Mat hue = chs[0];
Is that OK?
If these are the same, I think the split method is cleaner. Am I right?
You are pretty much right. split() splits all the channels of a multi-channel matrix into single-channel matrices. On the other hand, if you are interested in only one channel, you can use mixChannels(), so you don't have to allocate memory for the other channels as you would with split().
Keep things simple and use extractChannel, which wraps mixChannels for you.
cv::Mat hue;
int cn = 0; // hue
cv::extractChannel(hsv, hue, cn);

Image created does not display correctly

Hi, I am using the following code to create an image.
Mat im(584, 565, CV_8UC1, imgg);
imwrite("Output_Image.tif", im);
but the problem is that when I display "Output_Image.tif", the right-hand side portion is overlapped onto the left-hand side portion. I am not able to understand what is happening. Please explain, as I am a beginner with OpenCV. Thanks
Are you sure that the image is CV_8UC1 (grayscale color space)?
It looks like the image is a BGR image (Blue, Green, Red), and when you use a CV_8UC3 image as a CV_8UC1 image, it does exactly that.
Change this line:
Mat im(584, 565, CV_8UC1, imgg);
To this line:
Mat im(584, 565, CV_8UC3, imgg);
EDIT for mixChannels() (see comments):
C++: void mixChannels(const Mat* src, size_t nsrcs, Mat* dst, size_t ndsts, const int* fromTo, size_t npairs)
The Mat* dst is an array of Mats which must be allocated with the right size and depth before calling mixChannels. In that array, your cv::Mats will each have 1 channel.
Code Example:
cv::Mat rgb(500, 500, CV_8UC3, cv::Scalar(255, 255, 255));
cv::Mat redChannel(rgb.rows, rgb.cols, CV_8UC1);
cv::Mat greenChannel(rgb.rows, rgb.cols, CV_8UC1);
cv::Mat blueChannel(rgb.rows, rgb.cols, CV_8UC1);
cv::Mat outArray[] = { redChannel, greenChannel, blueChannel };
// from_to pairs: source channel index -> destination channel index
int from_to[] = { 0,0, 1,1, 2,2 }; // channel 0 -> redChannel, 1 -> greenChannel, 2 -> blueChannel
cv::mixChannels(&rgb, 1, outArray, 3, from_to, 3);
It's a little more complex than the split() function, so here's a link to the documentation; the from_to array in particular is hard to understand at first.
Link to documentation:
http://docs.opencv.org/modules/core/doc/operations_on_arrays.html#mixchannels
