How can I create Emgu.CV.Image<,> from a pointer - opencv

I simply want to create an Emgu.CV.Image<,> from a pointer. I am using the following code:
Size img = CvInvoke.cvGetSize(frame);
Image<Bgr, Byte> tImg = new Image<Bgr, byte>(img.Width, img.Height, 0, frame);
I don't know what value to give for the 3rd parameter of the Image<,> constructor that takes a pointer. It says "Size of aligned image row in bytes" - what does that mean?

Note that the image width has to be a multiple of 4, since some OpenCV code optimization is based on this assumption when a CVImage is constructed from a 1D memory array.
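In other words, the third parameter is the stride (OpenCV's widthStep): the number of bytes from the start of one pixel row to the start of the next, padded up to a multiple of 4. A minimal sketch of how you might compute it, assuming frame points to a 3-channel Bgr byte buffer whose rows are 4-byte aligned:
int channels = 3; // Image<Bgr, byte> stores one byte per channel
int stride = (img.Width * channels + 3) & ~3; // round the row size up to a multiple of 4
Image<Bgr, Byte> tImg = new Image<Bgr, byte>(img.Width, img.Height, stride, frame);
If the buffer has no row padding at all, the stride is simply img.Width * channels.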

Related

Problem Understanding OpenCV Convert Mat to BufferedImage

I am a new user of Java OpenCV, and I am just learning through the official tutorial today how to convert a Mat object to a BufferedImage.
From the demo code, I can understand that the input image source is in Matrix form, and sourcePixels seems to be a byte-array representation of the image, so we need to copy the values from the original matrix into sourcePixels. Here sourcePixels has the length of the whole image in bytes (size: w * h * channels), so it holds all the image byte values at once.
Then comes the part that is not intuitive to me. The System.arraycopy() seems to copy the values from sourcePixels to targetPixels, but what is actually returned is image. I can guess from the code that targetPixels has a relationship with image, but I don't see how copying values from sourcePixels to targetPixels actually affects the values of image.
Here's the demo code. Thanks!
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
import org.opencv.core.Mat;

private static BufferedImage matToBufferedImage(Mat original)
{
    BufferedImage image = null;
    int width = original.width(), height = original.height(), channels = original.channels();
    byte[] sourcePixels = new byte[width * height * channels];
    original.get(0, 0, sourcePixels);
    if (original.channels() > 1)
    {
        image = new BufferedImage(width, height, BufferedImage.TYPE_3BYTE_BGR);
    }
    else
    {
        image = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY);
    }
    final byte[] targetPixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
    System.arraycopy(sourcePixels, 0, targetPixels, 0, sourcePixels.length);
    return image;
}
Each BufferedImage is backed by a byte array, just like the Mat class from OpenCV. The call to ((DataBufferByte) image.getRaster().getDataBuffer()).getData() returns this underlying byte array and assigns it to targetPixels. In other words, targetPixels points to the byte array that the BufferedImage image is currently wrapping, so when you call System.arraycopy you are actually copying from the source byte array into the byte array backing the BufferedImage. That is why image is returned: at that point, the underlying byte array that image encapsulates contains the pixel data from original. It is like this small example: after making b point to the same array as a, modifications through b are also reflected in a. Just like targetPixels, which points to the byte array image is encapsulating, so copying from sourcePixels into targetPixels also changes the image:
int[] a = new int[1];
int[] b = a;
// Because b references the same array that a does
// Modifying b will actually change the array a is pointing to
b[0] = 1;
System.out.println(a[0] == 1);

Converting contours found using EMGU.CV

I am new to EMGU.CV and I am struggling a bit. Let me start by giving some background on the project: I am trying to track a user's fingers, i.e. calculate the user's fingertips. I have created a set of code which filters the depth information to only a certain range and generates a Bitmap image, tempBitmap. I then convert this image to a grayscale image using EMGU.CV so it can be used by cvCanny. Once this is done, I apply a dilate filter to the Canny image to thicken up the outline of the hand and better improve the chance of generating a successful contour, and I then try to get the contours of the hand. Now, what I have managed to do is draw a box around the hand, but I am struggling to find a way to convert the contours generated by FindContours to a set of Points I can use to draw the contour with. The variable depthImage2 is a Bitmap variable I use to draw on before assigning it to the PictureBox on my C# form-based application. Any information or guidance you can provide will be greatly appreciated; also, if my code isn't correct, maybe guide me in a direction where I can calculate the fingertips. I think I am almost there and am just missing something, so any help of any kind will be appreciated.
Image<Bgr, Byte> currentFrame = new Image<Bgr, Byte>(tempBitmap);
Image<Gray, Byte> grayImage = currentFrame.Convert<Gray, Byte>().PyrDown().PyrUp();
Image<Gray, Byte> cannyImage = new Image<Gray, Byte>(grayImage.Size);
CvInvoke.cvCanny(grayImage, cannyImage, 10, 60, 3);
StructuringElementEx kernel = new StructuringElementEx(
3, 3, 1, 1, Emgu.CV.CvEnum.CV_ELEMENT_SHAPE.CV_SHAPE_ELLIPSE);
CvInvoke.cvDilate(cannyImage, cannyImage, kernel, 1);
IntPtr cont = IntPtr.Zero;
Graphics graphicsBitmap = Graphics.FromImage(depthImage2);
using (MemStorage storage = new MemStorage()) // allocate storage for contour approximation
    for (Contour<Point> contours = cannyImage.FindContours(
             Emgu.CV.CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_SIMPLE,
             Emgu.CV.CvEnum.RETR_TYPE.CV_RETR_EXTERNAL);
         contours != null; contours = contours.HNext)
    {
        IntPtr seq = CvInvoke.cvConvexHull2(contours, storage.Ptr,
            Emgu.CV.CvEnum.ORIENTATION.CV_CLOCKWISE, 0);
        IntPtr defects = CvInvoke.cvConvexityDefects(contours, seq, storage);
        Seq<Point> tr = contours.GetConvexHull(Emgu.CV.CvEnum.ORIENTATION.CV_CLOCKWISE);
        Seq<Emgu.CV.Structure.MCvConvexityDefect> te = contours.GetConvexityDefacts(
            storage, Emgu.CV.CvEnum.ORIENTATION.CV_CLOCKWISE);
        graphicsBitmap.DrawRectangle(
            new Pen(new SolidBrush(Color.Red)), tr.BoundingRectangle);
    }
Contour<Point> contours = cannyImage.FindContours(
    Emgu.CV.CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_NONE); // to return all points
then:
List<Point[]> convertedContours = new List<Point[]>();
while (contours != null)
{
    var contourPoints = contours.ToArray(); // converts the Seq<Point> to Point[]; ToList() is also available
    convertedContours.Add(contourPoints);
    contours = contours.HNext;
}
You can draw a contour with the image's Draw function overload; just find the signature that contains a Seq<> parameter.
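For example, a minimal sketch (assuming Emgu 2.x, with the currentFrame image and the contours variable from the code above):
for (Contour<Point> c = contours; c != null; c = c.HNext)
{
    // the Draw overload that takes a Seq<Point> traces the contour points onto the image
    currentFrame.Draw(c, new Bgr(Color.Red), 2);
}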

Processing IplImage (OpenCV) data in Tesseract

I need to process images which I get from OpenCV.
This is what I wrote so far:
IplImage* img=0;
img=cvLoadImage("paket2.tif");
api.SetRectangle(0,0,img->width, img->height);
api.SetImage((uchar*)img->imageData,img->width,img->height,img->depth/8,img->width*(img->depth/8));
//i tried also below line
//api.SetImage((uchar*)img->imageData,img->width,img->height,img->depth/8,img->widthStep);
int left,top,right,bottom;
left=0;top=0;right=0;bottom=0;
api.Recognize(NULL);
tesseract::ResultIterator *ri=api.GetIterator();
char * sonuc=(*ri).GetUTF8Text(tesseract::RIL_SYMBOL);
if((*ri).BoundingBox(tesseract::RIL_SYMBOL,&left,&top,&right,&bottom))
{printf("bb dogru\n");printf("%d,%d,%d,%d",left,top,right,bottom);}
printf("sonuc:%s",sonuc);
If I pass IplImage->widthStep as bytes per line, I get a "wrong" BoundingBox in the left and right values and cannot read all the text in the image.
If I pass IplImage->width*(IplImage->depth/8), the BoundingBox function returns false.
I hope you have some idea.
Thanks in advance.
Copy your submatrix to a new IplImage. Create a Tesseract image header with the correct info (width, height, step). Link the Tesseract data pointer to the IplImage data pointer.
I can't remember how to access the Tesseract pointer, but for OpenCV it is image->data.ptr.
This code here worked for me:
tesseract::TessBaseAPI tess;
tess.Init(argv[0], "eng", tesseract::OEM_DEFAULT);
cv::Mat image = cv::imread("...");
tess.SetImage((uchar*)image.data, image.size().width, image.size().height, image.channels(), image.step1());
tess.Recognize(0);
const char* out = tess.GetUTF8Text();

OpenCV IplImage data to float

Is there a way to convert an IplImage pointer to a float pointer? Basically, converting the image data to float.
Appreciate any help on this.
Use cvConvert(src, dst), where src is the source image and dst is a preallocated floating-point image.
E.g.
dst = cvCreateImage(cvSize(src->width,src->height),IPL_DEPTH_32F,1);
cvConvert(src,dst);
A more complete example:
// The original image gets loaded as IPL_DEPTH_8U
IplImage* colored = cvLoadImage("coins.jpg", CV_LOAD_IMAGE_UNCHANGED);
if (!colored)
{
printf("cvLoadImage failed!\n");
return;
}
// Allocate a new IPL_DEPTH_32F image with the same dimensions as the original
IplImage* img_32f = cvCreateImage(cvGetSize(colored),
IPL_DEPTH_32F,
colored->nChannels);
if (!img_32f)
{
printf("cvCreateImage failed!\n");
return;
}
cvConvertScale(colored, img_32f);
// Scale to the 0-1 range expected for 32-bit float display. Without it, this img would not be displayed properly
cvScale(img_32f, img_32f, 1.0/255);
cvNamedWindow("test", CV_WINDOW_AUTOSIZE);
cvShowImage("test", img_32f);
cvWaitKey(0); // wait for a key press so the window is actually rendered
You can't convert the image to float by simply casting the pointer. You need to loop over every pixel and calculate the new value.
Note that most float image types assume a range of 0-1 so you need to divide each pixel by whatever you want the maximum to be.
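If you are using Emgu.CV (as in the first question above), the same convert-then-scale idea looks roughly like this; Convert<> performs the per-pixel conversion and the division rescales into the 0-1 range (a sketch, assuming a grayscale source file):
Image<Gray, byte> src = new Image<Gray, byte>("coins.jpg");
Image<Gray, float> dst = src.Convert<Gray, float>() / 255.0;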

Using cvGet2D OpenCV function

I'm trying to get information from an image using the function cvGet2D in OpenCV.
I created an array of 10 IplImage pointers:
IplImage *imageArray[10];
and I'm saving 10 images from my webcam:
imageArray[numPicture] = cvQueryFrame(capture);
when I call the function:
info = cvGet2D(imageArray[0], 250, 100);
where info is declared as:
CvScalar info;
I got the error:
OpenCV Error: Bad argument (unrecognized or unsupported array type) in cvPtr2D, file /build/buildd/opencv-2.1.0/src/cxcore/cxarray.cpp, line 1824
terminate called after throwing an instance of 'cv::Exception'
what(): /build/buildd/opencv-2.1.0/src/cxcore/cxarray.cpp:1824: error: (-5) unrecognized or unsupported array type in function cvPtr2D
If I use the function cvLoadImage to initialize an IplImage pointer and then I pass it to the cvGet2D function, the code works properly:
IplImage* imagen = cvLoadImage("test0.jpg");
info = cvGet2D(imagen, 250, 100);
however, I want to use the information already stored in my array.
Do you know how can I solve it?
Even though this is a very late response, I guess someone might still be searching for a solution with cvGet2D, so here it is.
For cvGet2D, we need to pass the arguments in the order of Y first and then X.
Example:
CvScalar s = cvGet2D(img, Y, X);
It's not mentioned anywhere in the documentation; you only find it inside core.h / core_c.h. Go to the declaration of cvGet2D(), and above the function prototypes there are a few comments that explain this.
Yeah, the error message is correct.
If you want to store a pixel value you need to do something like this.
int value = 0;
value = ((uchar *)(img->imageData + i*img->widthStep))[j*img->nChannels +0];
cout << "pixel value for Blue Channel and (i,j) coordinates: " << value << endl;
Summarizing: to plot or store data you must create an integer value (a pixel value varies between 0 and 255). But if you only want to test a pixel value (in an if clause or something similar) you can access it directly without using an integer variable.
I think that's a little bit weird when you start, but after you work with it two or three times you will manage without difficulties.
Sorry, but cvGet2D is not the best way to obtain a pixel value. I know it's the shortest and clearest way, because in only one line of code, knowing the coordinates, you obtain the pixel value.
I suggest this option instead. When you first see this code you will think it is complicated, but it is more efficient.
#include <cstdio>
#include <opencv/cv.h>
#include <opencv/highgui.h>

int main()
{
    // Acquire the image (I'm reading it from a file)
    IplImage* img = cvLoadImage("image.bmp", 1);
    int i, j, k;
    // Variables to store image properties
    int height, width, step, channels;
    uchar *data;
    // Variables to store the number of white pixels and a flag
    int WhiteCount, bWhite;
    // Acquire image info
    height = img->height;
    width = img->width;
    step = img->widthStep;
    channels = img->nChannels;
    data = (uchar *)img->imageData;
    // Begin
    WhiteCount = 0;
    for (i = 0; i < height; i++)
    {
        for (j = 0; j < width; j++)
        {   // Go through each channel of the image (R, G, and B) to see if it's equal to 255
            bWhite = 0;
            for (k = 0; k < channels; k++)
            {   // This checks if the pixel's kth channel is 255 - it can be faster
                if (data[i*step + j*channels + k] == 255) bWhite = 1;
                else
                {
                    bWhite = 0;
                    break;
                }
            }
            if (bWhite == 1) WhiteCount++;
        }
    }
    printf("Percentage: %f%%\n", 100.0*WhiteCount/(height*width));
    cvReleaseImage(&img); // release the image loaded by cvLoadImage
    return 0;
}
This code counts the white pixels and gives you the percentage of white pixels in the image.
