I have a pointer to an image:
IplImage *img;
which has been converted to a Mat:
Mat mt(img);
The Mat is then passed to a function that takes a reference to a Mat as input, void f(Mat &m);
f(mt);
Now I want to copy the Mat data back to the original image.
Do you have any suggestions?
Your answer can be found in the documentation here: http://opencv.willowgarage.com/documentation/cpp/c++_cheatsheet.html
Edit:
The first half of the first code block on that page covers the copy constructor, which you already have. The second half of the same block answers your question; it is reproduced below for clarity.
//Convert to IplImage or CvMat, no data copying
IplImage ipl_img = img;
CvMat cvmat = img; // convert cv::Mat -> CvMat
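For the copy-back part of the question, here is a minimal sketch of one way to do it (assuming f() modifies the image in place and does not change its size or type); the names are just for illustration:

#include <opencv2/opencv.hpp>

void f(cv::Mat &m); // assumed to modify m in place

void process(IplImage *img)
{
    cv::Mat mt(img);   // header only: mt shares img's pixel data, no copy
    f(mt);             // if f() writes in place, img already contains the result
    // If f() replaced mt's data (e.g. mt = something.clone()), copy the result
    // back explicitly; this only works if the size and type still match img:
    // cv::Mat wrapper(img);
    // mt.copyTo(wrapper);
}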
For the following case:
double algorithm(IplImage* imgin)
{
//blabla
return erg;
}
I use the following way to call the function:
cv::Mat image = cv::imread("image.bmp");
double erg = algorithm(&image.operator IplImage());
From the tests I have made, it looks like the image object keeps managing the memory; operator IplImage() only constructs an IplImage header around it. Maybe this could be useful?
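If taking the address of the temporary returned by operator IplImage() feels fragile, a slightly more explicit variant of the same idea (just a sketch) stores the header in a named variable first:

cv::Mat image = cv::imread("image.bmp");
IplImage ipl = image;            // header only, shares image's pixel data
double erg = algorithm(&ipl);    // valid as long as 'image' stays alive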
You can use this form:
Your Code:
IplImage *img;
Mat mt(img);
f(mt);
Now copy back the Mat data to the original image.
img->imageData = (char *) mt.data;
You can also copy the data instead of pointer:
memcpy(img->imageData, mt.data, mt.total()*mt.elemSize());
mt.total()*mt.elemSize() (i.e. rows × cols × bytes per pixel) is the size you should use to copy all of mt's data into img.
Hope this helps.
I am doing image registration with the ITK library. I read the source images with OpenCV, then convert them to ITK images; after registration, I convert the result back to a cv::Mat and use imwrite to store it.
However, ITKImageToCVMat always gives a white image (shown by imshow), and after imwrite the result isn't stored in the folder. That's so strange...
Below is my code:
cv::Mat img1 = imread(argv[1], IMREAD_GRAYSCALE);
cv::Mat img2 = imread(argv[2], IMREAD_GRAYSCALE);
typedef float PixelType;
const unsigned int Dimension = 2;
typedef itk::Image< PixelType, Dimension > FixedImageType;
typedef itk::Image< PixelType, Dimension > MovingImageType;
typedef itk::OpenCVImageBridge BridgeType;
FixedImageType::Pointer fixedImage = BridgeType::CVMatToITKImage<FixedImageType>(img1);
MovingImageType::Pointer movingImage = BridgeType::CVMatToITKImage<MovingImageType>(img2);
Mat img3 = itk::OpenCVImageBridge::ITKImageToCVMat<MovingImageType>(movingImage);
display("moving image", img3);
string filename3 = "img3";
imwrite(filename3, img3);
Even without registration, just converting an image from cv::Mat to an ITK image and back doesn't work... Do you have any idea? Thank you :)
Your code is almost fine and it should work, but you have to consider two things. The first is your images' type: when you read an image from disk, the pixel values are between 0 and 255 in "uchar" type, but you defined the ITK images with a float pixel type (typedef float PixelType;). So when you convert them back to cv::Mat they are still float, with values in the 0-255 range, while imshow expects a float image to lie in the 0-1 range. You just need to divide your image by 255:
imshow("moving image", img3/255);
The second problem is the filename: string filename3 = "img3"; you have to include the image format (file extension) so imwrite knows how to encode it, e.g. string filename3 = "img3.bmp";
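Alternatively, a small sketch of the same idea: convert the float result back to 8-bit first, so both imshow and imwrite see the value range they expect:

cv::Mat img8u;
img3.convertTo(img8u, CV_8U);    // values are already in 0-255, so no extra scaling is needed
cv::imshow("moving image", img8u);
cv::imwrite("img3.bmp", img8u);  // note the extension, so imwrite knows the output format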
I have grabbed an image from a VideoCapture object and converted it to a QImage to send it to a server. After I receive it on the server side, I want to do some image processing on the received image, which is a QImage. So before performing any processing I have to convert it back to a cv::Mat.
I have a function converting cv::Mat to QImage:
// Copy input Mat
const uchar *qImageBuffer = (const uchar*)mat.data;
// Create QImage with same dimensions as input Mat
QImage img(qImageBuffer, mat.cols, mat.rows, mat.step, QImage::Format_RGB888);
// calling the non-const bits() detaches the QImage, forcing a deep copy of the buffer
img.bits();
return img.rgbSwapped();
and a function converting QImage to cv::Mat:
Mat QImageToMat(const QImage& src){
cv::Mat tmp(src.height(),src.width(),CV_8UC3,(uchar*)src.bits(),src.bytesPerLine());
cv::Mat result = tmp ; // deep copy just in case (my lack of knowledge with open cv)
for (int i=0;i<src.height();i++) {
memcpy(result.ptr(i),src.scanLine(i),src.bytesPerLine());
}
cvtColor(result, result,CV_RGB2BGR);
return result;
}
I have been searching for about 2 days for how to convert QImage to cv::Mat, but with no luck; none of the code snippets works for me. I don't know why, but the image after conversion looks bad. You can see it on the left.
Does someone have any idea what could be causing the problem? Thanks in advance.
LEFT: the image after conversion from QImage to Mat. RIGHT: the original image, which is in QImage format.
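One thing worth checking, as a sketch (not necessarily the exact cause here): the incoming QImage may not actually be in Format_RGB888, and bytesPerLine() may include per-row padding, so a conversion that normalizes the format first and deep-copies the wrapped buffer would look like this:

#include <QImage>
#include <opencv2/opencv.hpp>

cv::Mat QImageToMat(const QImage& src){
    // make sure the data really is 3 bytes per pixel in RGB order
    QImage rgb = src.convertToFormat(QImage::Format_RGB888);
    // wrap the QImage buffer (honouring its row stride), then deep-copy it
    cv::Mat wrapped(rgb.height(), rgb.width(), CV_8UC3,
                    const_cast<uchar*>(rgb.constBits()), rgb.bytesPerLine());
    cv::Mat result = wrapped.clone();          // result now owns its own data
    cv::cvtColor(result, result, CV_RGB2BGR);  // OpenCV expects BGR order
    return result;
}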
I am trying to capture video into a Mat type from two or more MSFT LifeCam HD-3000s using the videoInput library, OpenCV 2.4.3, and VS2010 Express.
I followed the example at: Most efficient way to capture and send images from a webcam in a network and it worked great.
Now I want to replace the IplImage type with a C++ Mat type. I tried to follow the example at: opencv create mat from camera data
That gave me the following:
VI = new videoInput;
int CurrentCam = 0;
VI->setupDevice(CurrentCam,WIDTH,HEIGHT);
int width = VI->getWidth(CurrentCam);
int height = VI->getHeight(CurrentCam);
unsigned char* yourBuffer = new unsigned char[VI->getSize(CurrentCam)];
cvNamedWindow("test",1);
while(1)
{
VI->getPixels(CurrentCam, yourBuffer, false, true);
cv::Mat image(width, height, CV_8UC3, yourBuffer, Mat::AUTO_STEP);
imshow("test", image);
if(cvWaitKey(15)==27) break;
}
The output is a lined image (i.e., it looks like the first line is correct but the second line seems off, third correct, fourth off, etc). That suggests that either the step part is wrong or there is some difference between the IplImage type and the Mat type that I am not getting. I have tried looking at/altering all the parameters, but I can't find anything.
Hopefully, an answer will help those facing what appears to be a fairly common issue with loading an image from the videoInput library into the Mat type.
Thanks in advance!
Try
cv::Mat image(height, width, CV_8UC3, yourBuffer, Mat::AUTO_STEP);
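For reference (and why the swap matters): the constructor being used here is Mat(int rows, int cols, int type, void* data, size_t step=AUTO_STEP), so rows (height) comes first. Passing width as the row count makes OpenCV read each row with the wrong length, which produces exactly the interleaved, "lined" image described above.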
I need to process images which I get from OpenCV.
Here is what I have so far:
IplImage* img=0;
img=cvLoadImage("paket2.tif");
api.SetRectangle(0,0,img->width, img->height);
api.SetImage((uchar*)img->imageData,img->width,img->height,img->depth/8,img->width*(img->depth/8));
//i tried also below line
//api.SetImage((uchar*)img->imageData,img->width,img->height,img->depth/8,img->widthStep);
int left,top,right,bottom;
left=0;top=0;right=0;bottom=0;
api.Recognize(NULL);
tesseract::ResultIterator *ri=api.GetIterator();
char * sonuc=(*ri).GetUTF8Text(tesseract::RIL_SYMBOL);
if((*ri).BoundingBox(tesseract::RIL_SYMBOL,&left,&top,&right,&bottom))
{printf("bb dogru\n");printf("%d,%d,%d,%d",left,top,right,bottom);}
printf("sonuc:%s",sonuc);
If I pass IplImage->widthStep as bytes_per_line, I get "wrong" BoundingBox left and right values and cannot read all of the text in the image.
If I pass IplImage->width*(IplImage->depth/8), the BoundingBox function returns false.
I hope you have some idea.
Thanks in advance.
Copy your submatrix to a new IplImage. Create a Tesseract image header with the correct info (width, height, step), and link the Tesseract data pointer to the IplImage data pointer.
I can't remember how to access the Tesseract pointer, but for OpenCV it is image->data.ptr.
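A minimal sketch of that idea, assuming an 8-bit IplImage named sub holding the copied submatrix (the variable names here are just placeholders):

// 'sub' is the IplImage holding the copied submatrix (8-bit depth assumed)
api.SetImage((uchar*)sub->imageData,
             sub->width, sub->height,
             sub->nChannels,     // bytes per pixel for an 8-bit image
             sub->widthStep);    // bytes per line, including any row padding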
This code here worked for me:
tesseract::TessBaseAPI tess;
tess.Init(argv[0], "eng", tesseract::OEM_DEFAULT);
cv::Mat image = cv::imread("...");
tess.SetImage((uchar*)image.data, image.size().width, image.size().height, image.channels(), image.step1());
tess.Recognize(0);
const char* out = tess.GetUTF8Text();
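A small usage note: the char* returned by GetUTF8Text() is allocated by Tesseract and should be freed by the caller when you are done with it, e.g.:

printf("%s\n", out);
delete [] out;   // GetUTF8Text() allocates the buffer; the caller must free it
tess.End();      // release Tesseract's internal resources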
Is there a way to convert an IplImage pointer to a float pointer? Basically, converting the image data to float.
Appreciate any help on this.
Use cvConvert(src, dst), where src is the source image and dst is a preallocated floating-point image.
E.g.
dst = cvCreateImage(cvSize(src->width,src->height),IPL_DEPTH_32F,1);
cvConvert(src,dst);
// Original image gets loaded as IPL_DEPTH_8U
IplImage* colored = cvLoadImage("coins.jpg", CV_LOAD_IMAGE_UNCHANGED);
if (!colored)
{
printf("cvLoadImage failed!\n");
return;
}
// Allocate a new IPL_DEPTH_32F image with the same dimensions as the original
IplImage* img_32f = cvCreateImage(cvGetSize(colored),
IPL_DEPTH_32F,
colored->nChannels);
if (!img_32f)
{
printf("cvCreateImage failed!\n");
return;
}
cvConvertScale(colored, img_32f);
// scale to the 0-1 range expected for float images; without it, this image would not be displayed properly
cvScale(img_32f, img_32f, 1.0/255);
cvNamedWindow("test", CV_WINDOW_AUTOSIZE);
cvShowImage ("test", img_32f);
You can't convert the image to float by simply casting the pointer. You need to loop over every pixel and calculate the new value.
Note that most float image types assume a range of 0-1 so you need to divide each pixel by whatever you want the maximum to be.
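Tying this back to the original question: once the data lives in an IPL_DEPTH_32F image, as in the examples above, the float pointer is simply imageData reinterpreted. A small access sketch (using the img_32f from the example, and remembering that widthStep is in bytes):

float* data = (float*)img_32f->imageData;
int floatsPerRow = img_32f->widthStep / sizeof(float);
// example: read the first channel of the pixel at row 10, column 20
float v = data[10 * floatsPerRow + 20 * img_32f->nChannels];
printf("pixel value: %f\n", v);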