Is there a way I can convert a CvMat * to a CvMat? I am stuck at a place in the code where I have to clone a CvMat using cvCloneMat(). This gives me a CvMat *, whereas I need a CvMat.
I have tried dereferencing it, but somehow it doesn't work. I am writing a Jitter/Max external that takes an image matrix as input and a matrix as output. Here is the piece of code:
//Convert input and output matrices to OpenCV matrices
cvJitter2CvMat(in_matrix, &source);
cvJitter2CvMat(out_matrix, &edges);
//Calculate threshold values
thresh1 = x->threshold - x->range;
thresh2 = x->threshold + x->range;
CLIP(thresh1,0,255);
CLIP(thresh2,0,255);
//calculate
//cvCanny( &source, &edges, thresh1, thresh2, 3 );
tempo = cvCloneMat(&source);
edges = (*tempo);
} else {
    return JIT_ERR_INVALID_PTR;
}
out:
jit_object_method(out_matrix, gensym("lock"), out_savelock);
jit_object_method(in_matrix, gensym("lock"), in_savelock);
return err;
}
The problem is that when I use cvCanny() instead of cvCloneMat(), it works: the output is displayed as the edges of the video stream. But if I use cvCloneMat(), it displays a blank image.
This is true for any pointer-related stuff:
CvMat* pMat = cvCloneMat(...);
CvMat mat = (*pMat);
functionThatNeedsMat(*pMat);
otherFunctionThatNeedsMat(mat);
Also check this article about pointer dereferencing.
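To be explicit about what the dereference does: it copies only the CvMat header, so both headers still point at the same pixel buffer. If the destination has to keep writing into its own buffer (as a Jitter output matrix does), copy the pixel data instead of reassigning the header. A minimal sketch, assuming the source and edges matrices from the question above:
CvMat* tempo = cvCloneMat(&source); // deep copy: new header and new data
CvMat header = *tempo;              // shallow: header copy, pixel data is shared
cvCopy(tempo, &edges);              // copies pixels into edges' existing buffer
cvReleaseMat(&tempo);               // free the clone when done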
I am now doing image registration with the ITK library. I read the source images with OpenCV, then convert them to ITK images; after registration, I convert the result back to cv::Mat and use imwrite to store it.
However, ITKImageToCVMat always gives a white image (shown by imshow), and after imwrite, the result isn't stored in the folder. That's so strange...
Below is my code:
cv::Mat img1 = imread(argv[1], IMREAD_GRAYSCALE);
cv::Mat img2 = imread(argv[2], IMREAD_GRAYSCALE);
typedef float PixelType;
const unsigned int Dimension = 2;
typedef itk::Image< PixelType, Dimension > FixedImageType;
typedef itk::Image< PixelType, Dimension > MovingImageType;
typedef itk::OpenCVImageBridge BridgeType;
FixedImageType::Pointer fixedImage = BridgeType::CVMatToITKImage<FixedImageType>(img1);
MovingImageType::Pointer movingImage = BridgeType::CVMatToITKImage<MovingImageType>(img2);
Mat img3 = itk::OpenCVImageBridge::ITKImageToCVMat<MovingImageType>(movingImage);
display("moving image", img3);
string filename3 = "img3";
imwrite(filename3, img3);
Even without registration, just converting an image from cv::Mat to an ITK image and then converting it back doesn't work... Do you have any idea? Thank you :)
Your code is almost fine and it should work, but you have to consider two things. The first is your images' type. When you read an image from disk, the pixel values are between 0 and 255 in "uchar" type, but you defined the ITK images with float pixels { typedef float PixelType; }. So when you convert them back to cv::Mat they are still float, with values in the 0-255 range, while imshow expects float images to be in the range 0 to 1. You just need to divide your image by 255:
imshow("moving image", img3/255);
The second problem is the filename: string filename3 = "img3"; you have to specify the image format in order to save, like string filename3 = "img3.bmp";
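Putting both fixes together, a minimal sketch of the corrected display and save steps, assuming img3 is the float cv::Mat returned by ITKImageToCVMat as in the question:
cv::imshow("moving image", img3 / 255.0); // imshow expects float pixels in [0,1]
cv::Mat img8u;
img3.convertTo(img8u, CV_8U);             // imwrite wants an 8-bit image
cv::imwrite("img3.bmp", img8u);           // the extension selects the format
cv::waitKey(0);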
Hi, I'm trying to write some camera calibration code, and I'm having a hard time using MatVectors in JavaCV, which should be the equivalent of std::vector in C++.
This is how I generate my image and object points:
Mat objectPoints = new Mat(allImagePoints.rows(), 1, opencv_core.CV_32FC3);
float x = 0;
float y = 0;
for (int h = 0; h < patternHeight; h++) {
    y = h * rectangleSize;
    for (int w = 0; w < patternWidth; w++) {
        x = w * rectangleSize;
        objectPoints.getFloatBuffer().put(3 * (patternWidth * h + w), x);
        objectPoints.getFloatBuffer().put(3 * (patternWidth * h + w) + 1, y);
        objectPoints.getFloatBuffer().put(3 * (patternWidth * h + w) + 2, 0);
    }
}
MatVector allObjectPointsVec = new MatVector(allImagePoints.cols());
MatVector allImagePointsVec = new MatVector(allImagePoints.cols());
for (int i = 0; i < allImagePoints.cols(); i++) {
    allObjectPointsVec.put(i, objectPoints);
    allImagePointsVec.put(i, allImagePoints.col(i));
}
My image points are given in the Mat allImagePoints, and as you can see I create the corresponding vectors allObjectPointsVec and allImagePointsVec accordingly. When I try to do a camera calibration with these points I get the following error:
OpenCV Error: Assertion failed (ni > 0 && ni == ni1) in cv::collectCalibrationData, file ..\..\..\..\opencv\modules\calib3d\src\calibration.cpp, line 3193
java.lang.reflect.InvocationTargetException
...
which suggests that the lengths of the image and object points don't coincide, but I'm pretty sure I got this right. Printing the MatVector objects gives:
org.bytedeco.javacpp.opencv_core$MatVector[address=0x2237b8a0,position=0,limit=1,capacity=1,deallocator=org.bytedeco.javacpp.Pointer$NativeDeallocator@4d353a7a]
org.bytedeco.javacpp.opencv_core$MatVector[address=0x2237acd0,position=0,limit=1,capacity=1,deallocator=org.bytedeco.javacpp.Pointer$NativeDeallocator@772f4d0]
which also confuses me, as I would have expected the capacity to correspond to the length (the number of matrices in the vector). If I print the size field I get the expected value. If I access a random element in the vector (e.g. allObjectPointsVec.get(i)) and print it to a string, I receive the following:
AbstractArray[width=1,height=77,depth=32,channels=3] (for object points)
AbstractArray[width=1,height=77,depth=32,channels=2] (for image points)
which is what I would expect... Any ideas? To me this seems like a bug, also because I don't understand what the capacity represents if not the vector length...
I have tried the cvMatchTemplate function to compare two images (a template and an image).
IplImage img = cvLoadImage("thumbnail.jpg");
IplImage template = cvLoadImage("temp.jpg");
IplImage result = cvCreateImage(cvSize(img.width()-template.width()+1, img.height()-template.height()+1), IPL_DEPTH_32F, 1);
int method = CV_TM_SQDIFF;
cvMatchTemplate(img,template,result,method);
cvShowImage("res",result);
double[] min_val = new double[2];
double[] max_val = new double[2];
//Where are located our max and min correlation points
CvPoint minLoc = new CvPoint();
CvPoint maxLoc = new CvPoint();
cvMinMaxLoc(result, min_val, max_val, minLoc, maxLoc, null); // the last null is for the optional mask
CvPoint point = new CvPoint();
point.x(minLoc.x()+template.width());
point.y(minLoc.y()+template.height());
cvRectangle(img, minLoc, point, CvScalar.WHITE, 2, 8, 0); //Draw the rectangle result in original img.
cvShowImage("Image", img);
cvWaitKey(0);
//Release
cvReleaseImage(img);
cvReleaseImage(template);
cvReleaseImage(result);
I got the desired result, but could not find a way of comparing two or more images with a template.
I converted the result image to a matrix using asCvMat and got the matrix of match probabilities for every pixel of the original image.
I came across the determinant function in OpenCV for comparing two matrices to understand which of the images is a closer match to the template, but could not find a corresponding function in JavaCV.
Is there any way by which I could compare the results and determine which image is a closer match? I did come across ObjectFinder but could not find proper documentation on how to use it.
Please point out any links or examples which may help me solve my problem.
You can compare image matching results by comparing the scores that cvMinMaxLoc returns for each image.
I would even change the method to CV_TM_SQDIFF_NORMED, whose values fall between 0 and 1; since SQDIFF measures difference, the image with the lowest min_val is the closest match, and you can also set a threshold on it.
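A minimal sketch of that comparison, in C++ rather than JavaCV (the OpenCV C API calls map one-to-one); the file names and the closestMatch helper are hypothetical:
#include <opencv/cv.h>
#include <opencv/highgui.h>

// Score each candidate image against one template with CV_TM_SQDIFF_NORMED
// and return the index of the best match (lower min_val = closer match).
int closestMatch(const char** files, int count, const char* templFile) {
    IplImage* tmpl = cvLoadImage(templFile);
    double bestScore = 2.0; // CV_TM_SQDIFF_NORMED scores lie in [0, 1]
    int bestIndex = -1;
    for (int i = 0; i < count; i++) {
        IplImage* img = cvLoadImage(files[i]);
        IplImage* res = cvCreateImage(
            cvSize(img->width - tmpl->width + 1, img->height - tmpl->height + 1),
            IPL_DEPTH_32F, 1);
        cvMatchTemplate(img, tmpl, res, CV_TM_SQDIFF_NORMED);
        double minVal, maxVal;
        cvMinMaxLoc(res, &minVal, &maxVal, NULL, NULL, NULL);
        if (minVal < bestScore) { bestScore = minVal; bestIndex = i; }
        cvReleaseImage(&res);
        cvReleaseImage(&img);
    }
    cvReleaseImage(&tmpl);
    return bestIndex;
}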
I'm trying to convert frames captured from a Basler camera to OpenCV's Mat format. There isn't a lot of information in the Basler API documentation, but these are the two lines in the Basler example that should be useful in determining the format of the output:
// Get the pointer to the image buffer
const uint8_t *pImageBuffer = (uint8_t *) Result.Buffer();
cout << "Gray value of first pixel: " << (uint32_t) pImageBuffer[0] << endl << endl;
I know what the image format is (currently set to mono 8-bit), and have tried doing:
img = cv::Mat(964, 1294, CV_8UC1, &pImageBuffer);
img = cv::Mat(964, 1294, CV_8UC1, Result.Buffer());
Neither of which works. Any suggestions/advice would be much appreciated, thanks!
EDIT: I can access the pixels in the Basler image by:
for (int i = 0; i < 1294*964; i++)
    (uint8_t) pImageBuffer[i];
If that helps with converting it to OpenCV's Mat format.
You are creating the cv::Mat images to use the camera's memory rather than having the images own their own memory. The problem may be that the camera is locking that pointer, or that it expects to reallocate and move the buffer on each new image.
Try creating the images without the last parameter and then copying the pixel data from the camera into the image using memcpy().
// Danger! Result.Buffer() may be changed by the Basler driver without your knowing
const uint8_t *pImageBuffer = (uint8_t *) Result.Buffer();

// This is using memory that you have no control over (inside the Result object);
// note also that &pImageBuffer passes the address of the local pointer, not the pixels
img = cv::Mat(964, 1294, CV_8UC1, &pImageBuffer);

// Instead do this
img = cv::Mat(964, 1294, CV_8UC1); // manages its own memory

// copies from Result.Buffer into img
memcpy(img.ptr(), Result.Buffer(), 1294*964);

// edit: an IplImage stores its rows aligned on a 4-byte boundary,
// so if the source data isn't aligned the same way, copy row by row:
for (int irow = 0; irow < 964; irow++) {
    memcpy(img.ptr(irow), (uint8_t*)Result.Buffer() + irow*1294, 1294);
}
C++ code to get a Mat frame from a Pylon cam
Pylon::DeviceInfoList_t devices;
... get pylon devices if you have more than a camera connected ...
pylonCam = new CInstantCamera(tlFactory->CreateDevice(devices[selectedCamId]));
Pylon::CGrabResultPtr ptrGrabResult;
Pylon::CImageFormatConverter formatConverter;
Pylon::CPylonImage pylonImage; // added: needed by formatConverter.Convert below
formatConverter.OutputPixelFormat = Pylon::PixelType_BGR8packed;
pylonCam->MaxNumBuffer = 30;
pylonCam->StartGrabbing(GrabStrategy_LatestImageOnly);
std::cout << " trying to get width and height from pylon device " << std::endl;
pylonCam->RetrieveResult(5000, ptrGrabResult, TimeoutHandling_ThrowException);
formatConverter.Convert(pylonImage, ptrGrabResult);
Mat temp = Mat(ptrGrabResult->GetHeight(), ptrGrabResult->GetWidth(), CV_8UC3, (uint8_t*)pylonImage.GetBuffer());
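One caveat worth noting (an assumption based on how the format converter reuses its buffer, not something stated above): temp wraps pylonImage's buffer, which may be overwritten on the next Convert call, so clone the Mat if it must outlive the grab:
Mat frame = temp.clone(); // deep copy that owns its own pixel buffer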
Is there a way to convert an IplImage pointer to a float pointer? Basically, converting the image data to float.
I'd appreciate any help on this.
Use cvConvert(src, dst), where src is the source image and dst is a preallocated floating-point image.
E.g.
dst = cvCreateImage(cvSize(src->width,src->height),IPL_DEPTH_32F,1);
cvConvert(src,dst);
// Original image gets loaded as IPL_DEPTH_8U
IplImage* colored = cvLoadImage("coins.jpg", CV_LOAD_IMAGE_UNCHANGED);
if (!colored)
{
printf("cvLoadImage failed!\n");
return;
}
// Allocate a new IPL_DEPTH_32F image with the same dimensions as the original
IplImage* img_32f = cvCreateImage(cvGetSize(colored),
IPL_DEPTH_32F,
colored->nChannels);
if (!img_32f)
{
printf("cvCreateImage failed!\n");
return;
}
cvConvertScale(colored, img_32f);
// scale to the 0-1 range; without it, a 32-bit float image is not displayed properly
cvScale(img_32f, img_32f, 1.0/255);
cvNamedWindow("test", CV_WINDOW_AUTOSIZE);
cvShowImage("test", img_32f);
cvWaitKey(0);
// release the images when done
cvReleaseImage(&colored);
cvReleaseImage(&img_32f);
You can't convert the image to float by simply casting the pointer. You need to loop over every pixel and calculate the new value.
Note that most float image types assume a range of 0 to 1, so you need to divide each pixel by whatever you want the maximum to be (usually 255 for 8-bit input).
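For illustration, a minimal sketch of that per-pixel loop, assuming a single-channel IPL_DEPTH_8U source image named src (the names here are mine, not from the question):
IplImage* dst = cvCreateImage(cvGetSize(src), IPL_DEPTH_32F, 1);
for (int y = 0; y < src->height; y++) {
    const uchar* srow = (const uchar*)(src->imageData + y * src->widthStep);
    float* drow = (float*)(dst->imageData + y * dst->widthStep);
    for (int x = 0; x < src->width; x++)
        drow[x] = srow[x] / 255.0f; // map 0-255 into the 0-1 float range
}
In practice cvConvert/cvConvertScale, as shown above, do this loop for you.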