Convert an IPL_DEPTH_16S image to IPL_DEPTH_8U in JavaCV - opencv

I have an image with a depth of IPL_DEPTH_16S:
IplImage result = cvCreateImage(cvGetSize(smoothImage), IPL_DEPTH_16S, 1);
cvSobel(smoothImage, result, 0, 1, 3);
I want to pass that result image to another object which needs an IPL_DEPTH_8U image. So, is there any way to convert IPL_DEPTH_16S to IPL_DEPTH_8U in JavaCV?
I already tried the cvConvertScale() method, but I can't figure out what the exact parameters for that method are.
Thanks.

Using the same style as your code, this should work:
IplImage i = cvCreateImage(cvGetSize(result), IPL_DEPTH_8U, 1);
cvConvertScale(result, i, 1, 0);
The last two arguments are the scale and the shift: each source pixel is multiplied by the scale, the shift is added, and the result is saturated to the destination depth, so scale = 1 and shift = 0 simply clamps the 16-bit values into the 0..255 range.
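If the Sobel result is meant for display, a hedged alternative (a sketch, not part of the original answer) is cvConvertScaleAbs, because the 16S output contains negative gradients that scale = 1, shift = 0 would clip to 0:
IplImage abs8u = cvCreateImage(cvGetSize(result), IPL_DEPTH_8U, 1);
// take the absolute value of each pixel, then saturate to 0..255
cvConvertScaleAbs(result, abs8u, 1, 0);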

Related

OpenCV 2.4 : C API for converting cvMat to IplImage

I have a YUYV image buffer in a CvMat object (snippet shown below). I need to convert this CvMat object to an IplImage for color conversion.
CvMat cvmat = cvMat(480, 640, CV_8UC2, yuyv_buff);
I tried the options below to convert this CvMat object to an IplImage object (src: https://medium.com/#zixuan.wang/mat-cvmat-iplimage-2f9603b43909 ).
//cvGetImage()
CvMat M;
IplImage* img = cvCreateImageHeader(M.size(), M.depth(), M.channels());
cvGetImage(&M, img); // fills the header; the data is shared, not deep-copied
//Or
CvMat M;
IplImage* img = cvGetImage(&M, cvCreateImageHeader(M.size(), M.depth(), M.channels()));
//cvConvert()
CvMat M;
IplImage* img = cvCreateImage(M.size(), M.depth(), M.channels());
cvConvert(&M, img); //Deep Copy
But nothing worked. cvGetImage() and cvConvert() expect a CvArr* as input, and passing &cvmat to them throws an exception.
Is there any other way to convert a CvMat object to an IplImage object in OpenCV 2.4?
Note: I cannot use the C++ API or any other version of OpenCV; I'm limited to OpenCV 2.4.
Edit 1: My objective is to convert this YUYV buffer to an RGB image object.
Instead of creating a CvMat, I was able to create an IplImage directly from the YUYV image buffer, like below:
IplImage* frame = cvCreateImage(cvSize(640,480), IPL_DEPTH_8U, 2);
frame->imageData=(char*)yuyv_buffer;
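To get from there to the stated objective (an RGB image object), a hedged sketch using the C API's color conversion; it assumes your OpenCV 2.4.x build defines the CV_YUV2BGR_YUYV conversion code:
// convert the packed 2-channel YUYV frame to a 3-channel BGR image
IplImage* bgr = cvCreateImage(cvSize(640, 480), IPL_DEPTH_8U, 3);
cvCvtColor(frame, bgr, CV_YUV2BGR_YUYV);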

Passing OpenCV mat to OpenCL kernel for execution

I am trying to combine OpenCV with OpenCL: I create an image buffer and pass it to the GPU.
I have an i.MX6 board whose GPU is a Vivante core.
It does not support the OCL module of OpenCV.
I am using OpenCV to read an image into a Mat, and I then want to convert it to a float array and pass it to the kernel for execution.
But I'm getting a segmentation fault while running the CL program.
I am probably not converting cv::Mat to cl_float2 correctly; please help.
Snippet of code:
/* Load Image using opencv in opencl*/
cv::Mat shore_mask = cv::imread("1.jpg", 1);
cl_float2 *im = (cl_float2*)shore_mask.data;
cl_int h = shore_mask.rows;
cl_int w = shore_mask.cols;
/* Transfer data to memory buffer */
ret = clEnqueueWriteBuffer(queue , inimg, CL_TRUE, 0, h*w*sizeof(cl_float2), im, 0, NULL, &ev_writ);
How do I convert the Mat to a float matrix and pass it to the OpenCL kernel for execution?
cv::Mat shore_mask = cv::imread("1.jpg", 1);
This gives you a BGR image of size width*height*3 bytes (8 bits per channel). Now, you do
ret = clEnqueueWriteBuffer(queue , inimg, CL_TRUE, 0, h*w*sizeof(cl_float2), im, 0, NULL, &ev_writ);
The read size is width*height*64 bits, so you run past the bounds of that array. Convert your cv::Mat to the proper type, or change the read size.
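A hedged sketch of the first option, written as a continuation of the question's snippet (it reuses queue, inimg, ret and ev_writ; the name shore_f is illustrative): convert the 8-bit BGR data to floats first, then write a buffer whose size matches the converted data.
cv::Mat shore_mask = cv::imread("1.jpg", 1);            // CV_8UC3, BGR
cv::Mat shore_f;
shore_mask.convertTo(shore_f, CV_32FC3, 1.0 / 255.0);   // 3 floats per pixel, values in [0,1]
cl_int h = shore_mask.rows;
cl_int w = shore_mask.cols;
size_t bytes = (size_t)h * w * 3 * sizeof(cl_float);    // matches the converted data exactly
ret = clEnqueueWriteBuffer(queue, inimg, CL_TRUE, 0, bytes,
                           shore_f.ptr<float>(0), 0, NULL, &ev_writ);
The kernel then has to read three floats per pixel; cl_float2 only fits a 2-channel layout, so either adjust the kernel or convert to a 2- or 4-channel image before uploading.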

Convert CvSeq to vector<cv::Point>?

I am new to OpenCV and am working on an image processing application. I need to convert a CvSeq to vector<cv::Point>.
void find_squares(IplImage* img, std::vector<std::vector<cv::Point> >& squares) {
    IplImage* newimg = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 1);
    IplImage* cannyimg = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 1);
    IplImage* greyimg = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 1);
    IplImage* testimg = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 1);
    CvSeq* contours = NULL;
    CvSeq* canny_contours = NULL;
    // convert the loaded image to a canny image
    cvCvtColor(img, greyimg, CV_BGR2GRAY);
    cvCanny(greyimg, cannyimg, 50, 150, 3);
    // necessary to convert loaded image to an image with channel depth of 1
    cvConvertImage(cannyimg, newimg);
    cvConvertImage(img, testimg);
    // allocate necessary memory to store the contours
    CvMemStorage* storage = cvCreateMemStorage(0);
    CvMemStorage* canny_storage = cvCreateMemStorage(0);
    // find the contours in both the loaded image and the canny filtered image
    cvFindContours(testimg, storage, &contours, sizeof(CvContour),
                   CV_RETR_EXTERNAL, CV_CHAIN_CODE);
    cvFindContours(newimg, canny_storage, &canny_contours, sizeof(CvContour),
                   CV_RETR_EXTERNAL, CV_CHAIN_CODE);
    // draw the contours on both the loaded image and the canny filtered image
    cvDrawContours(testimg, contours, cvScalar(255,255,255), cvScalarAll(255), 100);
    cvDrawContours(newimg, canny_contours, cvScalar(255,255,255), cvScalarAll(255), 100);
}
I want to convert the contours to std::vector<std::vector<cv::Point>>, but I don't know what to do next. Please give me an idea.
The answer to your question is too long to write here. It takes a whole chapter of a book to describe how CvSeq works and why ("Learning OpenCV" by Gary Bradski and Adrian Kaehler, chapter 8).
More importantly, you shouldn't learn this now. The C interface is already deprecated, and when OpenCV 3.0 (currently under development) is released, this interface will be removed completely. That means using Mat instead of IplImage* and using the functions without the 'cv' prefix in their names. See the documentation of findContours. Your code will look like this:
std::vector<std::vector<cv::Point> > contours;
cv::findContours(testimg, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE); // the C++ findContours outputs Points, not Freeman chain codes
Edit (answer to comment):
Your drawing function will be:
drawContours(testimg, contours, -1, 255, CV_FILLED);
See the documentation of drawContours.
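Putting it together, a hedged sketch of find_squares on the C++ API (a sketch under the assumption that the Canny-based contours are what you actually need; it is not a drop-in replacement for the original function):
#include <opencv2/opencv.hpp>
#include <vector>

// grayscale -> Canny -> contours, entirely with the C++ API (no CvSeq)
void find_squares(const cv::Mat& img, std::vector<std::vector<cv::Point> >& squares)
{
    cv::Mat grey, edges;
    cv::cvtColor(img, grey, CV_BGR2GRAY);
    cv::Canny(grey, edges, 50, 150, 3);

    // findContours fills the vector<vector<Point>> directly
    cv::findContours(edges, squares, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

    // draw all contours filled in white on a copy of the input, for inspection
    cv::Mat vis = img.clone();
    cv::drawContours(vis, squares, -1, cv::Scalar(255, 255, 255), CV_FILLED);
}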

Convert SDL_Surface to IplImage

I converted an IplImage to an SDL_Surface with reference to this link:
SDL_Surface *single_channel_ipl_to_surface(IplImage *opencvimg)
{
    SDL_Surface *surface = SDL_CreateRGBSurfaceFrom((void*)opencvimg->imageData,
                                                    opencvimg->width,
                                                    opencvimg->height,
                                                    opencvimg->depth * opencvimg->nChannels,
                                                    opencvimg->widthStep,
                                                    1, 1, 1, 0);
    return surface;
}
How can I convert an SDL_Surface to an IplImage?
There are four pieces of information you'll have to retrieve from the SDL_Surface, so investigate the API to find out how to get:
The size of the image (width/height);
The bit depth of the image;
The number of channels;
And the data (pixels) of the image.
After that you can create an IplImage from scratch with cvCreateImageHeader() followed by cvSetData().
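A hedged sketch of that recipe, assuming an 8-bit-per-channel surface (the function name is illustrative). Note that cvSetData only points the header at the surface's pixels, so the surface must outlive the IplImage and stay locked while it is used, and the channel order may still need a cvCvtColor:
IplImage* surface_to_ipl(SDL_Surface* surface)
{
    // size and channel count come from the surface; depth assumed to be 8 bits per channel
    IplImage* img = cvCreateImageHeader(cvSize(surface->w, surface->h),
                                        IPL_DEPTH_8U,
                                        surface->format->BytesPerPixel);
    // share the surface's pixel buffer; pitch is the row stride in bytes
    cvSetData(img, surface->pixels, surface->pitch);
    return img;
}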

OpenCv Image From RGB Array

I have an array of RGB values from ImageMagick, and I would like to create an IplImage structure in OpenCV. How can I set the contents of the IplImage to the RGB array values without writing the ImageMagick image to disk and then rereading it?
So you want to convert an ImageMagick Image object to an IplImage. It is just a matter of writing it to a buffer and then creating an IplImage out of that buffer. The code from here (http://www.imagemagick.org/discourse-server/viewtopic.php?f=23&t=18183):
void Magick2Ipl(Image magicImage, IplImage* cvImage)
{
    int width = magicImage.size().width();
    int height = magicImage.size().height();

    byte* blob = new byte[cvImage->imageSize];
    magicImage.write(0, 0, width, height, "BGRA", MagickCore::CharPixel, blob);
    memcpy(cvImage->imageData, blob, cvImage->imageSize);
    delete[] blob;
}
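A hedged usage sketch (the file name and variable names are illustrative): since the function writes "BGRA", the destination IplImage should be a 4-channel, 8-bit image of the same size:
Image magicImage("input.jpg");
IplImage* cvImage = cvCreateImage(cvSize(magicImage.size().width(),
                                         magicImage.size().height()),
                                  IPL_DEPTH_8U, 4); // 4 channels to hold the BGRA data
Magick2Ipl(magicImage, cvImage);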
You can also do it this way:
Image image("Image_Path");
int width = image.size().width();
int height = image.size().height();
IplImage* mat = cvCreateImage(cvSize(width, height), IPL_DEPTH_8U, 3);
image.write(0, 0, width, height, "BGR", Magick::CharPixel, (char*)mat->imageData);
cvShowImage("image", mat);
cvWaitKey(0); // without this, HighGUI never gets a chance to render the window
