OpenCV IplImage data to float

Is there a way to convert IplImage pointer to float pointer? Basically converting the imagedata to float.
Appreciate any help on this.

Use cvConvert(src,dst) where src is the source image and dst is the preallocated floating point image.
E.g.
dst = cvCreateImage(cvSize(src->width,src->height),IPL_DEPTH_32F,1);
cvConvert(src,dst);

// Original image gets loaded as IPL_DEPTH_8U
IplImage* colored = cvLoadImage("coins.jpg", CV_LOAD_IMAGE_UNCHANGED);
if (!colored)
{
printf("cvLoadImage failed!\n");
return;
}
// Allocate a new IPL_DEPTH_32F image with the same dimensions as the original
IplImage* img_32f = cvCreateImage(cvGetSize(colored),
IPL_DEPTH_32F,
colored->nChannels);
if (!img_32f)
{
printf("cvCreateImage failed!\n");
return;
}
cvConvertScale(colored, img_32f);
// Scale into the 0-1 range expected by cvShowImage for floating-point images;
// without this, the image would not be displayed properly
cvScale(img_32f, img_32f, 1.0/255);
cvNamedWindow("test", CV_WINDOW_AUTOSIZE);
cvShowImage("test", img_32f);
cvWaitKey(0);

You can't convert the image to float simply by casting the pointer; you need to loop over every pixel and compute the new value.
Note that most float image types assume a range of 0-1, so you need to divide each pixel by whatever you want the maximum to be.

Related

ITKImageToCVMat returns white image

I am now doing image registration with the ITK library. I read the source images with OpenCV, then convert them to ITK images; after registration, I convert the result to a CvMat and use imwrite to store it.
However, ITKImageToCVMat always gives a white image (shown by imshow), and after imwrite the result isn't stored in the folder. That's so strange...
Below is my code:
cv::Mat img1 = imread(argv[1], IMREAD_GRAYSCALE);
cv::Mat img2 = imread(argv[2], IMREAD_GRAYSCALE);
typedef float PixelType;
const unsigned int Dimension = 2;
typedef itk::Image< PixelType, Dimension > FixedImageType;
typedef itk::Image< PixelType, Dimension > MovingImageType;
typedef itk::OpenCVImageBridge BridgeType;
FixedImageType::Pointer fixedImage = BridgeType::CVMatToITKImage<FixedImageType>(img1);
MovingImageType::Pointer movingImage = BridgeType::CVMatToITKImage<MovingImageType>(img2);
Mat img3 = itk::OpenCVImageBridge::ITKImageToCVMat<MovingImageType>(movingImage);
display("moving image", img3);
string filename3 = "img3";
imwrite(filename3, img3);
Even without registration - just converting an image from CVMat to ITKImage and then back - it doesn't work.... Do you have any idea? Thank you :)
Your code is almost fine and it should work, but you have to consider two things. The first is your images' type: when you read an image from the hard disk, pixel values are between 0 and 255 in "uchar" type, but you defined the ITK images with a float type { typedef float PixelType; }. So when you convert them back to cv::Mat they are still float, but their values span 0~255, while the maximum value "imshow" expects for a float image is 1. You just need to divide your image by 255:
imshow("moving image", img3/255);
The second problem is the filename: string filename3 = "img3"; - you have to specify the image format in the name so imwrite knows how to encode it, like string filename3 = "img3.bmp";

Convert gray image to binary image in OpenCV

I would like to know what the problem is in the code below, since only part of the gray image appears as a binary image!
cv::Mat gry = cv::imread("image_gray.jpg");
cv::Mat bin(gry.size(), gry.type());
for (int i=0; i<gry.rows ;i++)
{
for (int j=0; j<gry.cols ;j++)
{
if (gry.at<uchar>(i,j)>=100)
bin.at<uchar>(i,j)=255;
else
bin.at<uchar>(i,j)=0;
}
}
cv::namedWindow("After", cv::WINDOW_AUTOSIZE);
cv::imshow("After",bin);
waitKey(0);
cvDestroyWindow( "After" );
imwrite("binary_image.bmp", bin);
Your problem is in cv::imread.
The function assumes it should load the image as a color image; if you want to load it as a grayscale image, you should call the function as follows:
cv::imread(fileName, CV_LOAD_IMAGE_GRAYSCALE)
By the way, the reason you only see part of the image is that each pixel of a color image is bigger than a uchar (3 bytes instead of 1), so indexing with .at<uchar> ends up iterating over only part of it.
It would be easier to use the OpenCV function:
cv::threshold(image_src, image_dst, 200, 255, cv::THRESH_BINARY);
This call sets to white (255) every pixel whose original value is above the threshold 200, and to black (0) all the others.

Converting CvMat* to CvMat

Is there a way I can convert a CvMat* to a CvMat? I am stuck at a point in the code where I have to clone a CvMat using cvCloneMat(). This gives me a CvMat*, whereas I need a CvMat.
I have tried dereferencing, but somehow it doesn't work. I am writing a Jitter/Max external that takes a matrix of an image as input and a matrix as output. Here is the piece of code:
//Convert input and output matrices to OpenCV matrices
cvJitter2CvMat(in_matrix, &source);
cvJitter2CvMat(out_matrix, &edges);
//Calculate threshold values
thresh1 = x->threshold - x->range;
thresh2 = x->threshold + x->range;
CLIP(thresh1,0,255);
CLIP(thresh2,0,255);
//calculate
//cvCanny( &source, &edges, thresh1, thresh2, 3 );
tempo = cvCloneMat(&source);
edges = (*tempo);
} else {
return JIT_ERR_INVALID_PTR;
}
out:
jit_object_method(out_matrix,gensym("lock"),out_savelock);
jit_object_method(in_matrix,gensym("lock"),in_savelock);
return err;
}
The problem is that when I use cvCanny() instead of cvCloneMat() it works: the output is displayed as the edges of the video stream. But if I use cvCloneMat(), it displays a blank image.
This is true for any pointer-related stuff:
CvMat* pMat = cvCloneMat(...);
CvMat mat = (*pMat);
functionThatNeedsMat(*pMat);
otherFunctionThatNeedsMat(mat);
Check also this article about pointer dereferencing

OpenCV C++/Obj-C: goodFeaturesToTrack inside specific blob

Is there a quick solution to specify the ROI only within the contours of the blob I'm interested in?
My ideas so far:
Using the boundingRect, but it contains too much stuff I don't want to analyse.
Applying goodFeaturesToTrack to the whole image and then looping through the output coordinates to eliminate the ones outside my blob's contour.
Thanks in advance!
EDIT
I found what I need: cv::pointPolygonTest() seems to be the right thing, but I'm not sure how to implement it …
Here's some code:
// ...
IplImage forground_ipl = result;
IplImage *labelImg = cvCreateImage(forground.size(), IPL_DEPTH_LABEL, 1);
CvBlobs blobs;
bool found = cvb::cvLabel(&forground_ipl, labelImg, blobs);
IplImage *imgOut = cvCreateImage(cvGetSize(&forground_ipl), IPL_DEPTH_8U, 3);
if (found) {
cvb::CvBlob *greaterBlob = blobs[cvb::cvGreaterBlob(blobs)];
cvb::cvRenderBlob(labelImg, greaterBlob, &forground_ipl, imgOut);
CvContourPolygon *polygon = cvConvertChainCodesToPolygon(&greaterBlob->contour);
}
"polygon" contains the contour I need.
goodFeaturesToTrack is implemented this way:
- (std::vector<cv::Point2f>)pointsFromGoodFeaturesToTrack:(cv::Mat &)_image
{
std::vector<cv::Point2f> corners;
cv::goodFeaturesToTrack(_image,corners, 100, 0.01, 10);
return corners;
}
So next I need to loop through the corners and check each point with cv::pointPolygonTest(), right?
You can create a mask over your interest region:
EDIT
How to make a mask:
Mat mask(origImg.size(), CV_8UC1);
mask.setTo(Scalar::all(0));
// here I assume your contour is extracted with findContours,
// and is stored in a vector<vector<Point>>
// and that you know which contour is the blob
// if it's not the case, use fillPoly instead of drawContours();
Scalar color(255,255,255); // white; actually, the mask is single-channel.
drawContours(mask, contours, contourIdx, color );
// fillPoly(Mat& img, const Point** pts, const int* npts,
// int ncontours, const Scalar& color)
And now you're ready to use it. BUT, look carefully at the result - I have heard about some bugs in OpenCV regarding the mask parameter for feature extractors, and I am not sure if it's about this one.
// note the mask parameter:
void goodFeaturesToTrack(InputArray image, OutputArray corners, int maxCorners,
double qualityLevel, double minDistance,
InputArray mask=noArray(), int blockSize=3,
bool useHarrisDetector=false, double k=0.04 )
This will also improve the speed of your application - goodFeaturesToTrack eats a huge amount of time, and if you apply it only to a smaller region, the overall gain is significant.

OpenCV Mat to IplImage* conversion

I have a pointer to image:
IplImage *img;
which has been converted to Mat
Mat mt(img);
Then, the Mat is sent to a function that gets a reference to Mat as input void f(Mat &m);
f(mt);
Now I want to copy back the Mat data to the original image.
Do you have any suggestion?
Best
Ali
Your answer can be found in the documentation here: http://opencv.willowgarage.com/documentation/cpp/c++_cheatsheet.html
Edit:
The first half of the first code area indeed talks about the copy constructor which you already have.
The second half of the first code area answers your question. Reproduced below for clarity.
//Convert to IplImage or CvMat, no data copying
IplImage ipl_img = img;
CvMat cvmat = img; // convert cv::Mat -> CvMat
For the following case:
double algorithm(IplImage* imgin)
{
//blabla
return erg;
}
I use the following way to call the function:
cv::Mat image = cv::imread("image.bmp");
double erg = algorithm(&image.operator IplImage());
I have made some tests, and it looks like the image object will manage the memory. The operator IplImage() will only construct the header for the IplImage. Maybe this could be useful?
You can use this form:
Your Code:
IplImage *img;
Mat mt(img);
f(mt);
Now copy back the Mat data to the original image.
img->imageData = (char *) mt.data;
You can also copy the data instead of pointer:
memcpy(img->imageData, mt.data, mt.total()*mt.elemSize());
Note the argument order: the destination (img->imageData) comes first. mt.total()*mt.elemSize() is the full buffer size in bytes; mt.rows*mt.cols would only be correct for a single-channel 8-bit image.
Hope I helped
