I have an array of RGB values from ImageMagick and I would like to create an IplImage structure in OpenCV. How can I set the contents of the IplImage to the RGB array values without writing the ImageMagick image to disk and then rereading it?
So you want to convert an ImageMagick Image object to an IplImage. It is just a matter of writing the pixels to a buffer and then creating an IplImage from that buffer. The code from here (http://www.imagemagick.org/discourse-server/viewtopic.php?f=23&t=18183):
void Magick2Ipl(Image magicImage, IplImage* cvImage)
{
    int width = magicImage.size().width();
    int height = magicImage.size().height();
    // write the pixels as BGRA into a temporary buffer, then copy into the IplImage
    byte* blob = new byte[cvImage->imageSize];
    magicImage.write(0, 0, width, height, "BGRA", MagickCore::CharPixel, blob);
    memcpy(cvImage->imageData, blob, cvImage->imageSize);
    delete[] blob;
}
Best regards,
Daniel
You can also do it this way:
Image image("Image_Path");
int width = image.size().width();
int height = image.size().height();
IplImage* mat = cvCreateImage(cvSize(width, height), IPL_DEPTH_8U, 3);
// write BGR pixels directly into the IplImage's buffer, no intermediate blob needed
image.write(0, 0, width, height, "BGR", Magick::CharPixel, (char*)mat->imageData);
cvShowImage("image", mat);
cvWaitKey(0);
I cannot properly convert and/or display an ROI using a QRect in a QImage and then create a cv::Mat out of that QImage.
The problem is symmetric: I also cannot properly extract an ROI using a cv::Rect in a cv::Mat and create a QImage out of the Mat. Surprisingly, everything works fine whenever the width and the height of the cv::Rect or the QRect are equal.
In what follows, my full-size image is the cv::Mat matImage. It is of type CV_8U and has a square size of 2048x2048:
int x = 614;
int y = 1156;
// buggy
int width = 234;
int height = 278;
// working
// int width = 400;
// int height = 400;
QRect ROI(x, y, width, height);
QImage imageInit(matImage.data, matImage.cols, matImage.rows, QImage::Format_Grayscale8);
QImage imageROI = imageInit.copy(ROI);
createNewImage(imageROI);
unsigned char* dataBuffer = imageROI.bits();
cv::Mat tempImage(cv::Size(imageROI.width(), imageROI.height()), CV_8UC1, dataBuffer, cv::Mat::AUTO_STEP);
cv::namedWindow( "openCV imshow() from a cv::Mat image", cv::WINDOW_AUTOSIZE );
cv::imshow( "openCV imshow() from a cv::Mat image", tempImage);
The screenshot below illustrates the issue.
(Left) The full-size cv::Mat matImage.
(Middle) The expected result from the QImage and the QRect (which roughly corresponds to the green rectangle drawn by hand).
(Right) The messed-up result from the cv::Mat matImageROI.
By exploring other issues regarding conversion between cv::Mat and QImage, it seems the stride becomes "non-standard" for certain ROI sizes. For example, from this post, I found out one just needs to change cv::Mat::AUTO_STEP to imageROI.bytesPerLine() in
cv::Mat tempImage(cv::Size(imageROI.width(), imageROI.height()), CV_8UC1, dataBuffer, cv::Mat::AUTO_STEP);
so I end up instead with:
cv::Mat matImageROI(cv::Size(imageROI.width(), imageROI.height()), CV_8UC1, dataBuffer, imageROI.bytesPerLine());
For the other way round, i.e, creating the QImage from a ROI of a cv::Mat, one would use the property cv::Mat::step.
For example:
QImage imageROI(matImageROI.data, matImageROI.cols, matImageROI.rows, matImageROI.step, QImage::Format_Grayscale8);
instead of:
QImage imageROI(matImageROI.data, matImageROI.cols, matImageROI.rows, QImage::Format_Grayscale8);
Update
For odd ROI sizes: although this is not a problem when using either imshow() with a cv::Mat or a QImage in a QGraphicsScene, it becomes an issue when using OpenGL (with QOpenGLWidget). I guess the simplest workaround is just to constrain ROIs to have even sizes.
I'm using OpenCV for Windows Phone 8.1 (Windows runtime) in c++ with the release from MS Open Tech https://github.com/MSOpenTech/opencv.
This version is based on OpenCV 3, and the medianBlur function seems to have a problem.
When I use a square image, medianBlur works perfectly, but when I use a rectangular image, medianBlur produces strange effects...
Here the result: http://fff.azurewebsites.net/opencv.png
The code that I use:
// get the pixels from the WriteableBitmap
byte* pPixels = GetPointerToPixelData(m_bitmap->PixelBuffer);
int height = m_bitmap->PixelHeight;
int width = m_bitmap->PixelWidth;
// create a matrix the size and type of the image
cv::Mat mat(width, height, CV_8UC4);
memcpy(mat.data, pPixels, 4 * height*width);
cv::Mat timg(mat);
cv::medianBlur(mat, timg, 9);
cv::Mat gray0(timg.size(), CV_8U), gray;
// copy processed image back to the WriteableBitmap
memcpy(pPixels, timg.data, 4 * height*width);
// update the WriteableBitmap
m_bitmap->Invalidate();
I didn't find where the problem is... Is it a bug in my code? A bug in OpenCV 3? Or in the code from MS Open Tech?
Thanks for your help!
You are inverting the height and the width when creating the cv::Mat.
Opencv Doc on Mat
According to the doc, you should create it like this:
Mat img(height, width, CV_8UC3);
When you use cv::Size, however, you give the width first:
Mat img(Size(width,height),CV_8UC3);
It is a bit confusing, but there is certainly a reason.
Try this code; a few lines are changed:
// get the pixels from the WriteableBitmap
byte* pPixels = GetPointerToPixelData(m_bitmap->PixelBuffer);
int height = m_bitmap->PixelHeight;
int width = m_bitmap->PixelWidth;
// create a matrix the size and type of the image; cv::Size takes width first,
// and the WriteableBitmap pixels are 4 bytes each (BGRA), so use 4 channels
cv::Mat mat(cv::Size(width, height), CV_8UC4);
memcpy(mat.data, pPixels, 4 * height * width);
cv::Mat timg(mat.size(), CV_8UC4);
cv::medianBlur(mat, timg, 9);
// cv::Mat gray0(timg.size(), CV_8U), gray;
// copy processed image back to the WriteableBitmap
memcpy(pPixels, timg.data, 4 * height * width);
// update the WriteableBitmap
m_bitmap->Invalidate();
Hi, I am using the following code to create an image:
Mat im(584, 565, CV_8UC1, imgg);
imwrite("Output_Image.tif", im);
but the problem is that when I display the image "Output_Image.tif", the right-hand side portion is overlapped onto the left-hand side portion. I am not able to understand what is happening. Please explain, as I am a beginner to OpenCV. Thanks.
Are you sure that the image is CV_8UC1 (grayscale colorspace)?
It looks like the image is a BGR image (blue, green, red), and when you use a CV_8UC3 image as a CV_8UC1 image, you get exactly this effect.
Change this line:
Mat im(584, 565, CV_8UC1, imgg);
To this line:
Mat im(584, 565, CV_8UC3, imgg);
EDIT for mixChannels() (see comments):
C++: void mixChannels(const Mat* src, size_t nsrcs, Mat* dst, size_t ndsts, const int* fromTo, size_t npairs)
the Mat* dst is an array of Mats which must be allocated with the right size and depth before calling mixChannels. In that array, each cv::Mat really has 1 channel.
Code Example:
cv::Mat rgb(500, 500, CV_8UC3, cv::Scalar(255,255,255));
cv::Mat redChannel(rgb.rows, rgb.cols, CV_8UC1);
cv::Mat greenChannel(rgb.rows, rgb.cols, CV_8UC1);
cv::Mat blueChannel(rgb.rows, rgb.cols, CV_8UC1);
cv::Mat outArray[] = { redChannel, greenChannel, blueChannel };
// copy source channel 0 to output channel 0, 1 to 1, 2 to 2
int from_to[] = { 0,0, 1,1, 2,2 };
cv::mixChannels(&rgb, 1, outArray, 3, from_to, 3);
It's a little bit more complex than the split() function, so here's a link to the documentation; especially the from_to array is hard to understand at first.
Link to documentation:
http://docs.opencv.org/modules/core/doc/operations_on_arrays.html#mixchannels
Currently I have the following:
[cameraFeed] -> [gaussianBlur] -> [sobel] -> [luminanceThreshold] -> [rawDataOutput]
the rawDataOutput I would like to pass to the OpenCV findContours function. Unfortunately, I can't figure out the right way to do this. The following is the callback block that gets the rawDataOutput and should pass it to the OpenCV function, but it does not work. I am assuming a few things are involved, such as converting the BGRA image given by GPUImage to CV_8UC1 (single channel), but I am not able to figure them out. Some help would be much appreciated, thanks!
// Callback on raw data output
__weak GPUImageRawDataOutput *weakOutput = rawDataOutput;
[rawDataOutput setNewFrameAvailableBlock:^{
    [weakOutput lockFramebufferForReading];
    GLubyte *outputBytes = [weakOutput rawBytesForImage];
    NSInteger bytesPerRow = [weakOutput bytesPerRowInOutput];

    // OpenCV stuff
    int width = videoSize.width;
    int height = videoSize.height;
    size_t step = bytesPerRow;
    cv::Mat mat(height, width, CV_8UC1, outputBytes, step); // outputBytes should be converted to type CV_8UC1
    cv::Mat workingCopy = mat.clone();
    // PASS mat TO OPENCV FUNCTION!!!
    [weakOutput unlockFramebufferAfterReading];

    // Update rawDataInput if we want to display the result
    [rawDataInput updateDataFromBytes:outputBytes size:videoSize];
    [rawDataInput processData];
}];
Try changing CV_8UC1 to CV_8UC4 and then converting to gray.
In your code, replace the lines
cv::Mat mat(height, width, CV_8UC1, outputBytes, step);
cv::Mat workingCopy = mat.clone();
with
cv::Mat mat(height, width, CV_8UC4, outputBytes, step);
cv::Mat workingCopy;
cv::cvtColor(mat, workingCopy, CV_RGBA2GRAY);
I am trying to segment all shades of red from an image using hue/saturation values, and I use the cvInRangeS function to create a mask which should have all red areas white and all others black (a new 1-channel image). Then I inpaint them to kind of obscure the segmented portions.
My code is given below.
However, I am unable to get the output I expect; it doesn't segment the desired color range.
Any pointers on my approach and why it isn't working?
int main() {
    IplImage *img1 = cvLoadImage("/home/techrascal/projects/test1/image2.jpeg");
    //IplImage *img3;
    IplImage *imghsv;
    IplImage *img4;
    CvSize sz = cvGetSize(img1);
    imghsv = cvCreateImage(sz, IPL_DEPTH_8U, 3);
    img4 = cvCreateImage(sz, IPL_DEPTH_8U, 1);
    int width = img1->width;
    int height = img1->height;
    int bpp = img1->nChannels;
    //int w = img4->width;
    //int h = img4->height;
    //int bn = img4->nChannels;
    cvNamedWindow("original", 1);
    cvNamedWindow("hsv", 1);
    cvNamedWindow("Blurred", 1);
    int r, g, b;
    // create inpaint mask: img4 will behave as the mask
    cvCvtColor(img1, imghsv, CV_BGR2HSV);
    CvScalar hsv_min = cvScalar(0, 0, 0, 0);
    CvScalar hsv_max = cvScalar(255, 0, 0, 0);
    //cvShowImage("hsv", imghsv);
    cvInRangeS(imghsv, hsv_min, hsv_max, img4);
    cvInpaint(img1, img4, img1, 3, CV_INPAINT_NS);
    cvShowImage("Blurred", img1);
    cvReleaseImage(&img1);
    cvReleaseImage(&imghsv);
    cvReleaseImage(&img4);
    //cvReleaseImage(&img3);
    char d = cvWaitKey(10000);
    cvDestroyAllWindows();
    return 0;
}
Your code logic seems correct, but you will definitely need to adjust your HSV range values (hsv_min and hsv_max).
Read this detailed guide that shows the HSV ranges defined in OpenCV:
http://www.shervinemami.co.cc/colorConversion.html