Hi, I am using the following code to create an image:
Mat im(584, 565, CV_8UC1, imgg);
imwrite("Output_Image.tif", im);
but the problem is that when I display "Output_Image.tif", the right-hand portion of the image is overlapped onto the left-hand portion. I am not able to understand what is happening. Please explain, as I am a beginner to OpenCV. Thanks.
Are you sure that the image is CV_8UC1 (colorspace grayscale)?
It looks like the image is a BGR image (Blue, Green, Red), and when you use a CV_8UC3 image as a CV_8UC1 image it does exactly that.
Change this line:
Mat im(584, 565, CV_8UC1, imgg);
To this line:
Mat im(584, 565, CV_8UC3, imgg);
EDIT for mixChannels() (see comments):
C++: void mixChannels(const Mat* src, size_t nsrcs, Mat* dst, size_t ndsts, const int* fromTo, size_t npairs)
The Mat* dst is an array of Mats which must be allocated with the right size and depth before calling mixChannels. In that array, your cv::Mats will really have 1 channel.
Code Example:
cv::Mat rgb(500, 500, CV_8UC3, cv::Scalar(255,255,255));
cv::Mat redChannel(rgb.rows, rgb.cols, CV_8UC1);
cv::Mat greenChannel(rgb.rows, rgb.cols, CV_8UC1);
cv::Mat blueChannel(rgb.rows, rgb.cols, CV_8UC1);
cv::Mat outArray[] = {redChannel, greenChannel, blueChannel };
int from_to[] = { 0,0, 1,1, 2,2 }; // pairs of (source channel, destination channel)
cv::mixChannels(&rgb, 1, outArray, 3, from_to, 3);
It's a little bit more complex than the split() function, so here's a link to the documentation; the from_to array in particular is hard to understand at first.
Link to documentation:
http://docs.opencv.org/modules/core/doc/operations_on_arrays.html#mixchannels
I'm trying to extract and display the Y channel from a YUV-converted image.
My code is as follows:
Mat src, src_resized, src_gray;
src = imread("11.jpg", 1);
resize(src, src_resized, Size(400, 320));
cvtColor(src_resized, src_resized, cv::COLOR_BGR2RGB);
/*
I've tried both with and without the upper conversion
(mentioned here as bug
http://stackoverflow.com/questions/7954416/converting-yuv-into-bgr-or-rgb-in-opencv
in an opencv 2.4.* version - mine is 2.4.10 )
*/
cvtColor(src_resized, src_gray, CV_RGB2YUV); //YCrCb
vector<Mat> yuv_planes(3);
split(src_gray,yuv_planes);
Mat g, fin_img;
g = Mat::zeros(Size(src_gray.cols, src_gray.rows),0);
// same result with g = Mat::zeros(Size(src_gray.cols, src_gray.rows), CV_8UC1);
vector<Mat> channels;
channels.push_back(yuv_planes[0]);
channels.push_back(g);
channels.push_back(g);
merge(channels, fin_img);
imshow("Y ", fin_img);
waitKey(0);
return 0;
As a result I was expecting a gray image showing luminance.
Instead I get a B/G/R-tinted image depending on the position (first/second/third) of
channels.push_back(yuv_planes[0]);
as shown here:
What am I missing? (I plan to use the luminance to do a sum of rows/columns and extract the license plate later using the data obtained.)
The problem was that the luminance was placed in only one channel instead of filling all three channels with it.
If anyone else hits the same problem just change
Mat g, fin_img;
g = Mat::zeros(Size(src_gray.cols, src_gray.rows),0);
vector<Mat> channels;
channels.push_back(yuv_planes[0]);
channels.push_back(g);
channels.push_back(g);
to this (fill all channels with the desired channel):
Mat fin_img;
vector<Mat> channels;
channels.push_back(yuv_planes[0]);
channels.push_back(yuv_planes[0]);
channels.push_back(yuv_planes[0]);
I cannot properly convert and/or display an ROI using a QRect in a QImage, then create a cv::Mat image out of that QImage.
The problem is symmetric, i.e., I cannot properly get the ROI by using a cv::Rect in a cv::Mat and creating a QImage out of the Mat. Surprisingly, everything works fine whenever the width and the height of the cv::Rect or the QRect are equal.
In what follows, my full-size image is the cv::Mat matImage. It is of type CV_8U and has a square size of 2048x2048.
int x = 614;
int y = 1156;
// buggy
int width = 234;
int height = 278;
//working
// int width = 400;
// int height = 400;
QRect ROI(x, y, width, height);
QImage imageInit(matImage.data, matImage.cols, matImage.rows, QImage::Format_Grayscale8);
QImage imageROI = imageInit.copy(ROI);
createNewImage(imageROI);
unsigned char* dataBuffer = imageROI.bits();
cv::Mat tempImage(cv::Size(imageROI.width(), imageROI.height()), CV_8UC1, dataBuffer, cv::Mat::AUTO_STEP);
cv::namedWindow( "openCV imshow() from a cv::Mat image", cv::WINDOW_AUTOSIZE );
cv::imshow( "openCV imshow() from a cv::Mat image", tempImage);
The screenshot below illustrates the issue.
(Left) The full-size cv::Mat matImage.
(Middle) the expected result from the QImage and the QRect (which roughly corresponds to the green rectangle drawn by hand).
(Right) the messed-up result from the cv::Mat matImageROI
By exploring other issues regarding conversion between cv::Mat and QImage, it seems the stride becomes "non-standard" in some particular ROI sizes. For example, from this post, I found out one just needs to change cv::Mat::AUTO_STEP to imageROI.bytesPerLine() in
cv::Mat tempImage(cv::Size(imageROI.width(), imageROI.height()), CV_8UC1, dataBuffer, cv::Mat::AUTO_STEP);
so I end up instead with:
cv::Mat matImageROI(cv::Size(imageROI.width(), imageROI.height()), CV_8UC1, dataBuffer, imageROI.bytesPerLine());
For the other way round, i.e, creating the QImage from a ROI of a cv::Mat, one would use the property cv::Mat::step.
For example:
QImage imageROI(matImageROI.data, matImageROI.cols, matImageROI.rows, matImageROI.step, QImage::Format_Grayscale8);
instead of:
QImage imageROI(matImageROI.data, matImageROI.cols, matImageROI.rows, QImage::Format_Grayscale8);
Update
For the case of odd ROI sizes, although it is not a problem when using either imshow() with a cv::Mat or a QImage in a QGraphicsScene, it becomes an issue when using OpenGL (with QOpenGLWidget). I guess the simplest workaround is just to constrain ROIs to have even sizes.
I'm using OpenCV for Windows Phone 8.1 (Windows runtime) in C++ with the release from MS Open Tech https://github.com/MSOpenTech/opencv.
This version is based on OpenCV 3, and the medianBlur function seems to have a problem.
When I use a square image, medianBlur works perfectly, but when I use a rectangular image, medianBlur produces strange effects...
Here the result: http://fff.azurewebsites.net/opencv.png
The code that I use:
// get the pixels from the WriteableBitmap
byte* pPixels = GetPointerToPixelData(m_bitmap->PixelBuffer);
int height = m_bitmap->PixelHeight;
int width = m_bitmap->PixelWidth;
// create a matrix the size and type of the image
cv::Mat mat(width, height, CV_8UC4);
memcpy(mat.data, pPixels, 4 * height*width);
cv::Mat timg(mat);
cv::medianBlur(mat, timg, 9);
cv::Mat gray0(timg.size(), CV_8U), gray;
// copy processed image back to the WriteableBitmap
memcpy(pPixels, timg.data, 4 * height*width);
// update the WriteableBitmap
m_bitmap->Invalidate();
I didn't find where the problem is... Is it a bug in my code? Or a bug in OpenCV 3, or in the code from MS Open Tech?
Thanks for your help!
You are inverting the height and the width when creating the cv::Mat.
OpenCV doc on Mat
According to the doc, you should create it like this:
Mat img(height, width, CV_8UC3);
When you use cv::Size, however, you give the width first:
Mat img(Size(width, height), CV_8UC3);
It is a bit confusing, but there is certainly a reason.
Try this code; I changed a few lines:
// get the pixels from the WriteableBitmap
byte* pPixels = GetPointerToPixelData(m_bitmap->PixelBuffer);
int height = m_bitmap->PixelHeight;
int width = m_bitmap->PixelWidth;
// create a matrix the size and type of the image
cv::Mat mat(cv::Size(width, height), CV_8UC4);
memcpy(mat.data, pPixels, 4 * height * width);
cv::Mat timg(mat.size(), CV_8UC4);
cv::medianBlur(mat, timg, 9);
// copy processed image back to the WriteableBitmap
memcpy(pPixels, timg.data, 4 * height * width);
// update the WriteableBitmap
m_bitmap->Invalidate();
How to convert CV_8UC1 Mat to CV_8UC3 with OpenCV?
Mat dst;
Mat src(height, width, CV_8UC1, (unsigned char*) captureClient->data());
src.convertTo(dst, CV_8UC3);
but dst.channels() = 1
I've found that the best way to do this is:
cvtColor(src, dst, COLOR_GRAY2RGB);
The image will look the same as when it was grayscale CV_8UC1 but it will be a 3 channel image of type CV_8UC3.
From the documentation on convertTo
void Mat::convertTo(Mat& m, int rtype, double alpha=1, double beta=0) const
rtype – The desired destination matrix type, or rather, the depth (since the number of channels will be the same with the source one). If rtype is negative, the destination matrix will have the same type as the source.
You want to create a matrix for each of the 3 channels you want to create and then use the merge function. See the answers to this question
The convention is that for the type CV_8UC3 the pixel values range from 0 to 255, and for the type CV_32FC3 from 0.0 to 1.0. Thus you need to use a scaling factor of 1/255.0, instead of 1.0:
src.convertTo(newImage, CV_32FC1, 1.0 / 255.0);
I have an RGB large-image, and an RGB small-image.
What is the fastest way to replace a region in the larger image with the smaller one?
Can I define a multi-channel ROI and then use copyTo? Or must I split each image to channels, replace the ROI and then recombine them again to one?
Yes, a multi-channel ROI and copyTo will work. Something like:
int main(int argc, char** argv)
{
cv::Mat src = cv::imread("c:/src.jpg");
//create a canvas with a 10 pixel border on each side. Set all pixels to yellow.
cv::Mat canvas(src.rows + 20, src.cols + 20, CV_8UC3, cv::Scalar(0, 255, 255));
//create an ROI that will map to the location we want to copy the image into
cv::Rect roi(10, 10, src.cols, src.rows);
//initialize the ROI in the canvas. canvasROI now points to the location we want to copy to.
cv::Mat canvasROI(canvas(roi));
//perform the copy.
src.copyTo(canvasROI);
cv::namedWindow("original", 256);
cv::namedWindow("canvas", 256);
cv::imshow("original", src);
cv::imshow("canvas", canvas);
cv::waitKey();
}