I want to convert between Matrix and Image in EmguCV 3.0.0.
I saw in this video (https://www.youtube.com/watch?v=DfTS5a9xmwo) that you can do this with the CvInvoke.cvConvert method, but it seems this method no longer exists in EmguCV 3.0.0. I did find the method CvInvoke.ConvertMaps, but it requires two input and two output arrays. Is that method equivalent if I pass empty arrays as the second pair?
Try the .ToImage() method; it converts a Matrix to an Image. A working example in C# is:
Image<Bgr,Byte> img1 = imgMat.ToImage<Bgr, Byte>();
You can also convert to a grayscale image by using <Gray, Byte> as the generic parameters.
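For example, here is a rough sketch of the grayscale variant, reusing the imgMat name from the example above (the Mat property for going back the other way is assumed to be available in EmguCV 3.0):

// Sketch only: imgMat is assumed to already hold image data, as above.
Image<Gray, Byte> grayImg = imgMat.ToImage<Gray, Byte>();   // single-channel 8-bit image
Mat backToMat = grayImg.Mat;                                 // the Image exposes its underlying Mat again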
You can also find an example in VB at (http://www.emgu.com/forum/viewtopic.php?t=5209).
In OpenCV we have access to the CV_XX types which allow you to create a matrix with, for example, CV_32SC1. How do I do this in EmguCV?
The reason for asking is:
I am currently using EmguCV and getting an error where I need to create a specific type of Image and am unable to find those values.
Here is my code:
Emgu::CV::Image<Emgu::CV::Structure::Gray, byte>^ mask = gcnew Emgu::CV::Image<Emgu::CV::Structure::Gray, byte>(gray->Size);
try { CvInvoke::cvDistTransform(255-gray, tmp, CvEnum::DIST_TYPE::CV_DIST_L1, 3, nullptr, mask); }
Which gives the error:
OpenCV: the output array of labels must be 32sC1
So I believe I need to change the byte type to 32sC1. How do I do this?
I am using EmguCV 2.0
The Working with images page, specifically the section on EmguCV 2.0, provides the following clarification on image depth:
Image Depth: Image Depth is specified using the second generic parameter Depth. The types of depth supported in Emgu CV 1.4.0.0 include:
Byte
SByte
Single (float)
Double
UInt16
Int16
Int32 (int)
I believe this means EmguCV does not use the CV_XXX types at all, only the depths listed above.
For my issue I set the depth type to Int32 and the error went away.
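In other words, the labels array passed to cvDistTransform has to be a single-channel 32-bit signed image. A minimal sketch of the change in the question's C++/CLI syntax (reusing the gray and mask names from the question) would be:

// The labels output must be 32SC1, i.e. Image<Gray, Int32> instead of Image<Gray, Byte>.
// In C++/CLI, int maps to System::Int32.
Emgu::CV::Image<Emgu::CV::Structure::Gray, int>^ mask =
    gcnew Emgu::CV::Image<Emgu::CV::Structure::Gray, int>(gray->Size);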
The recent version 7 of ImageMagick gives strange results when doing ordered dithering. The Posterized Ordered Dither Expansion example:
magick convert gradient.png -ordered-dither o8x8,6 od_o8x8_6.gif
yields just a 2bpp bitmap.
(Input, preferred output, and actual output images omitted.)
Is this a bug, or has the syntax changed?
It seems that the OrderedPosterizeImage feature hasn't yet been ported from IM6 to IM7. IM7 falls back on the original bi-level OrderedDitherImage method, ignoring the ",6" part of the specification.
I've posted a feature request on the ImageMagick discourse server.
I have recently been developing an Android app using OpenCV, and I have run into a problem:
Imgproc.findContours(grayMat, contours1, new Mat(), Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);
After this call, I want to use Imgproc.matchShapes to detect whether two images match, but in the Java API matchShapes requires parameters of type Mat.
How can I convert List<MatOfPoint> to Mat?
The function you use to detect contours returns a list of MatOfPoint objects. Each contour (and there can be many) has its own MatOfPoint.
You have to find a way to choose which contour you want to use with Imgproc.matchShapes. If you know there is only one, just use the first entry in the List<MatOfPoint>. If you want the biggest one, use contour properties such as the area to find the largest contour. If you have time, you can check every single contour.
Then, once you have found the single contour you want to compare, you can use that MatOfPoint directly. According to this StackOverflow question, MatOfPoint and Mat are perfectly compatible (MatOfPoint extends Mat).
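For example, here is a rough sketch of the largest-contour approach (the grayMat1/grayMat2 names are just illustrative; the match-method constant is spelled CONTOURS_MATCH_I1 in newer OpenCV releases):

import java.util.ArrayList;
import java.util.List;

import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.imgproc.Imgproc;

public class ShapeMatcher {
    // Returns the contour with the largest area, or null if no contour was found.
    static MatOfPoint largestContour(Mat binaryImage) {
        List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
        Imgproc.findContours(binaryImage, contours, new Mat(),
                Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);
        MatOfPoint largest = null;
        double maxArea = 0;
        for (MatOfPoint contour : contours) {
            double area = Imgproc.contourArea(contour);
            if (area > maxArea) {
                maxArea = area;
                largest = contour;
            }
        }
        return largest;
    }

    // MatOfPoint extends Mat, so the selected contours can be passed straight to matchShapes.
    static double compareShapes(Mat grayMat1, Mat grayMat2) {
        MatOfPoint contour1 = largestContour(grayMat1);
        MatOfPoint contour2 = largestContour(grayMat2);
        return Imgproc.matchShapes(contour1, contour2, Imgproc.CV_CONTOURS_MATCH_I1, 0);
    }
}

The lower the value returned by matchShapes, the more similar the two shapes are.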
What is the proper way of using the cvSplit function? I have seen different versions of it.
should it be
cvSplit(oriImg, r,g,b, NULL);
or
cvSplit(oriImg, b,g,r, NULL);
Both of them are OK; it depends on the channel ordering. By default OpenCV uses BGR, so in that case it would be cvSplit(oriImg, b,g,r, NULL);, but you can convert the image to RGB first and then use the other ordering, as in the sketch below.
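A minimal sketch of both options (assuming oriImg is an 8-bit, 3-channel BGR IplImage*; all variable names here are just placeholders):

#include <opencv2/core/core_c.h>
#include <opencv2/imgproc/imgproc_c.h>

/* Single-channel destination images for the split. */
IplImage *b = cvCreateImage(cvGetSize(oriImg), IPL_DEPTH_8U, 1);
IplImage *g = cvCreateImage(cvGetSize(oriImg), IPL_DEPTH_8U, 1);
IplImage *r = cvCreateImage(cvGetSize(oriImg), IPL_DEPTH_8U, 1);

/* Default case: the image is BGR, so list the planes in BGR order. */
cvSplit(oriImg, b, g, r, NULL);

/* Alternative: convert to RGB first, then split in RGB order. */
IplImage *rgbImg = cvCreateImage(cvGetSize(oriImg), IPL_DEPTH_8U, 3);
cvCvtColor(oriImg, rgbImg, CV_BGR2RGB);
cvSplit(rgbImg, r, g, b, NULL);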
It is exactly the same thing that puzzled me when I started using OpenCV. OpenCV uses BGR instead of RGB, so you should use
cvSplit(img,b,g,r,NULL);
The newer OpenCV documentation here says you can convert an IplImage to a Numpy array just like this:
arr = numpy.asarray( im )
but that doesn't work for my needs, because it apparently doesn't support math:
x = arr/0.01
TypeError: unsupported operand type(s) for /: 'cv2.cv.iplimage' and 'float'
If I try to specify data type, I can't even get that far:
arr = numpy.asarray( im, dtype=numpy.float32 )
TypeError: float() argument must be a string or a number
So I'm using the code provided in the older documentation here. Basically, it does this:
arr = numpy.fromstring( im.tostring(), dtype=numpy.float32 )
But the tostring call is really slow, perhaps because it's copying the data? I need this conversion to be really fast and not copy any buffers it doesn't need to. I don't think the data are inherently incompatible; I'm creating my IplImage with cv.fromarray in the first place, which is extremely fast and accepted by the OpenCV functions.
Is there a way I can make the newer asarray method work for me, or else can I get direct access to the data pointer in the IplImage in a way that numpy.fromstring will accept it? I'm using OpenCV 2.3.1 prepackaged for Ubuntu Precise.
Fun Fact:
Say you call:
import cv2.cv as cv #Just a formality!
Capture = cv.CaptureFromCAM(0)
Img = cv.QueryFrame(Capture)
The object Img is an iplimage, and numpy.asarray(Img) is erratic at best. However, Img[:,:] is a cvmat type, and numpy.asarray(Img[:,:]) works fantastically and, more importantly, quickly!
This is by far the fastest way I've found to grab a frame and make it an ndarray for numpy processing.
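Putting that together, a minimal sketch (assuming a camera is available at index 0) might look like:

import cv2.cv as cv
import numpy

capture = cv.CaptureFromCAM(0)
frame = cv.QueryFrame(capture)        # frame is an iplimage

# Slicing gives a cvmat view, which numpy.asarray handles quickly.
arr = numpy.asarray(frame[:, :])

# Ordinary numpy math now works on the array.
scaled = arr / 0.01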
That page does not talk about IplImage; it talks about CvMat, which is different.
Anyway, you'd better use the wrappers from the newer cv2 namespace. It natively uses numpy arrays instead of its own image containers. Also, the whole cv module is considered deprecated and will be dropped completely in the next major release.
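For instance, with the cv2 interface a frame comes back as a numpy ndarray directly, so no conversion step is needed. A rough sketch (again assuming camera index 0):

import cv2

cap = cv2.VideoCapture(0)
ok, frame = cap.read()        # frame is already a numpy ndarray (BGR)
if ok:
    scaled = frame / 0.01     # numpy math works directly
cap.release()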