I am converting a numpy array to a cvMat using the fromarray() function. Now when I try to apply a threshold to it, I get the error below:
OpenCV Error: Unsupported format or combination of formats () in threshold
From checking on Stack Overflow I see that it might be an issue with the channels or depth of my image, but I am not sure how to check that for a cvMat. Could somebody tell me how to check the depth and number of channels of a cvMat in Python?
Well, you can't get them directly from a cvMat, because cvMats have a combined type instead of separate depth/channels, so
print mymat.type
returns the type code.
If you want to get the depth and channel number, the easiest way I've found is to generate the IplImage header with cv.GetImage, like this:
print cv.GetImage(mymat).depth,cv.GetImage(mymat).nChannels
I believe cv2 does away with all of that IplImage/cvMat stuff and rolls it all into Mat, though.
I am using ROS Kinetic. I have a point cloud of type pcl::PointCloud<pcl::PointXYZI>. I have projected this point cloud onto a plane, and I would like to convert the planar point cloud to an image of type sensor_msgs/Image.
toROSMsg(cloud, image);
is throwing this error:
error: ‘const struct pcl::PointXYZI’ has no member named ‘rgb’
memcpy (pixel, &cloud (x, y).rgb, 3 * sizeof(uint8_t));
Kindly enlighten me in this regard, if possible with a code snippet.
Thanks in advance.
If toROSMsg() is complaining that your input cloud does not have an 'rgb' member, try passing in a cloud of type pcl::PointXYZRGB. This is another type of point cloud handled by PCL; you can look at the documentation of PCL point types.
Convert to type pcl::PointXYZRGB with these lines:
pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloudrgb (new pcl::PointCloud<pcl::PointXYZRGB>);
pcl::copyPointCloud(*cloud, *cloudrgb);
Then call your function, dereferencing the Ptr:
toROSMsg(*cloudrgb, image);
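For reference, here is a minimal sketch of the whole conversion (the function name cloudToImage is made up, and the include paths, especially the one providing this toROSMsg() overload, vary between PCL/ROS versions, so treat them as assumptions):

#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/common/io.h>                     // pcl::copyPointCloud
#include <pcl_conversions/pcl_conversions.h>   // pcl::toROSMsg (location varies by version)
#include <sensor_msgs/Image.h>

// Copy the XYZI cloud into an XYZRGB cloud (x, y, z are copied, rgb stays
// at its default value) and then convert it to a sensor_msgs/Image.
sensor_msgs::Image cloudToImage(const pcl::PointCloud<pcl::PointXYZI>::Ptr& cloud)
{
  pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloudrgb(new pcl::PointCloud<pcl::PointXYZRGB>);
  pcl::copyPointCloud(*cloud, *cloudrgb);

  sensor_msgs::Image image;
  pcl::toROSMsg(*cloudrgb, image);   // as far as I remember, this overload expects an organized cloud
  return image;
}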
What you are trying to achieve is a kind of 2D voxelization, and I assume you want to implement an "inverse sensor model" (ISM) as explained by Thrun, right?
This approach is commonly implemented directly inside a mapping algorithm to avoid the exhaustive calculation of the plain ISM.
Therefore, you will hardly find an out-of-the-box solution.
Anyway, you could do it in several ways, for example:
Use pointcloud_to_laserscan for the 2D projection (but you have that already)
Use the ISM algorithm explained in the book
or
Transform the point cloud to an octree
Downsample it to a quadtree and convert that to an image (a rough sketch of this idea follows below)
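To make that last idea concrete, here is a rough sketch that skips the octree/quadtree machinery and simply bins the points of a cloud into a fixed-resolution 2D grid image (all names, the resolution parameter, and the bounds are placeholders I made up, not part of any of the packages above):

#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <opencv2/core/core.hpp>

// Flatten a cloud onto the x/y plane: every point marks the grid cell it
// falls into as occupied (255), everything else stays free (0).
cv::Mat cloudToGridImage(const pcl::PointCloud<pcl::PointXYZ>& cloud,
                         float resolution,               // meters per pixel
                         float min_x, float max_x,
                         float min_y, float max_y)
{
  const int cols = static_cast<int>((max_x - min_x) / resolution);
  const int rows = static_cast<int>((max_y - min_y) / resolution);
  cv::Mat grid = cv::Mat::zeros(rows, cols, CV_8UC1);

  for (const pcl::PointXYZ& p : cloud.points)
  {
    const int col = static_cast<int>((p.x - min_x) / resolution);
    const int row = static_cast<int>((p.y - min_y) / resolution);
    if (col >= 0 && col < cols && row >= 0 && row < rows)
      grid.at<unsigned char>(row, col) = 255;
  }
  return grid;
}

A proper inverse sensor model would additionally distinguish free from unknown space along each sensor ray, which is what the mapping algorithms from the book handle for you.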
In OpenCV we have access to the CV_XX types which allow you to create a matrix with, for example, CV_32SC1. How do I do this in EmguCV?
The reason for asking is:
I am currently using EmguCV and getting an error where I need to create a specific type of Image and am unable to find those values.
Here is my code:
Emgu::CV::Image<Emgu::CV::Structure::Gray, byte>^ mask = gcnew Emgu::CV::Image<Emgu::CV::Structure::Gray, byte>(gray->Size);
try { CvInvoke::cvDistTransform(255-gray, tmp, CvEnum::DIST_TYPE::CV_DIST_L1, 3, nullptr, mask); }
Which gives the error:
OpenCV: the output array of labels must be 32sC1
So I believe I need to change the byte type to 32sC1. How do I do this?
I am using EmguCV 2.0
The Working with images page, specifically the section on Emgu CV 2.0, provides the following clarification on image depth:
Image Depth
Image Depth is specified using the second generic parameter Depth. The types of depth supported in Emgu CV 1.4.0.0 include:
Byte
SByte
Single (float)
Double
UInt16
Int16
Int32 (int)
I believe this means it does not use the CV_XXX types at all, only the ones above.
For my issue I set the depth type to Int32 and the error went away (see the example below).
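For illustration, the declaration from the question changed to an Int32 depth would look roughly like this (adapted from the question's own C++/CLI snippet, not tested against Emgu CV 2.0):

// Gray colour + Int32 depth gives a single-channel 32-bit signed image (32sC1),
// which is what cvDistTransform expects for the labels output.
Emgu::CV::Image<Emgu::CV::Structure::Gray, System::Int32>^ mask =
    gcnew Emgu::CV::Image<Emgu::CV::Structure::Gray, System::Int32>(gray->Size);

CvInvoke::cvDistTransform(255 - gray, tmp, CvEnum::DIST_TYPE::CV_DIST_L1, 3, nullptr, mask);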
I used the CPU version as follows:
vector<float> descriptors;
cv::HOGDescriptor hog(cv::Size(24,32),cv::Size(12,12),cv::Size(4,4),cv::Size(6,6),6);
hog.compute(img, descriptors,cv::Size(8,8), cv::Size(0,0));
My question is: how can I get the 'descriptors' using the GPU?
I tried the following code (it doesn't work):
cv::gpu::GpuMat gpu_value, gpu_descriptors;
cv::gpu::HOGDescriptor hog_gpu(Size(24,32),Size(12,12),Size(4,4),Size(6,6),6);
gpu_value.upload(img);
hog_gpu.getDescriptors(gpu_value,cv::Size(8,8),gpu_descriptors);
How can I get the 'descriptors' from 'gpu_descriptors'?
Can anyone help me solve this? Many thanks!
You can download gpu_descriptors to CPU memory using the gpu::GpuMat member function download(), as follows:
Mat cpu_descriptors;
gpu_descriptors.download(cpu_descriptors);
However, the descriptors may be stored differently on the GPU than on the CPU; that is, cpu_descriptors may not be laid out exactly the same as the descriptors computed in your code above. But you can give it a try.
Edit
There doesn't seem to be a method on gpu::HOGDescriptor to download descriptors to CPU memory in vector<float> format. As a side note, I know that you can download descriptors for the gpu::SURF_GPU feature detector, using its member function
void downloadDescriptors(const GpuMat& descriptorsGPU,
vector<float>& descriptors);
which is exactly what you want. But, unfortunately, for some reason this function doesn't exist for cv::gpu::HOGDescriptor. You can attempt to figure out how the data is laid out and then convert the downloaded Mat to a vector<float> yourself, as sketched below.
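A minimal sketch of that idea, assuming gpu_descriptors has already been filled by getDescriptors() as in the question and holds 32-bit floats (the element order may still differ from the CPU layout, so that has to be checked):

#include <opencv2/core/core.hpp>
#include <opencv2/gpu/gpu.hpp>
#include <vector>

// Download the GPU result into a cv::Mat, then flatten it into a std::vector<float>.
cv::Mat cpu_descriptors;
gpu_descriptors.download(cpu_descriptors);

std::vector<float> descriptors;
if (cpu_descriptors.isContinuous())
{
    const float* data = cpu_descriptors.ptr<float>(0);
    descriptors.assign(data, data + cpu_descriptors.total());
}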
I have code for computing the histogram of HSV and YUV images. As I am trying to obtain values corresponding to brightness alone, I want the 'V' channel from the HSV image and the luma ('Y') channel from the YUV image. This is the code I have used:
int channels[] = {0};
calcHist(&src_yuv,1,channels,Mat(),hist,1,histSize,ranges,true,false);
This sample code is for YUV; I just change {0} to {2} to obtain the 'V' channel values from HSV. I am getting results, but I am not sure if I am choosing the right channels. Could you please help me confirm whether those numbers select the exact channels I want? Thanks in advance.
To be absolutely sure that channel number X corresponds to the channel you are after, consult the channelSeq attribute of the IplImage structure. If channelSeq[X] gives the name (a character) of the channel you are after, then you have found it.
But, given how this attribute is documented (along with other interesting ones), even if you were always using IplImage there is no guarantee that the information contained there would be accurate. Thus, to be absolutely sure about the channel sequence in your image, you have to trust the conversion specification and remember it yourself. So, if you start with an image in BGR and convert it using BGR2YUV, then you trust that the Y channel is the first one, and so on. If OpenCV ever changes BGR2YUV to put Y in the last channel instead, then too bad for you.
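One way to sidestep the index bookkeeping in calcHist altogether is to split the converted images into single-channel planes and pick the planes you need explicitly. A small sketch, assuming src_yuv and src_hsv are the already converted images from the question:

#include <opencv2/core/core.hpp>
#include <vector>

cv::Mat src_yuv, src_hsv;            // the BGR2YUV / BGR2HSV converted images from the question

std::vector<cv::Mat> yuv_planes, hsv_planes;
cv::split(src_yuv, yuv_planes);      // yuv_planes[0] is Y (luma)
cv::split(src_hsv, hsv_planes);      // hsv_planes[2] is V (value, i.e. brightness)

// Each plane is a single-channel Mat that can be passed to calcHist with channels = {0}.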
The newer OpenCV documentation here says you can convert an IplImage to a Numpy array just like this:
arr = numpy.asarray( im )
but that doesn't work for my needs, because it apparently doesn't support math:
x = arr/0.01
TypeError: unsupported operand type(s) for /: 'cv2.cv.iplimage' and 'float'
If I try to specify data type, I can't even get that far:
arr = numpy.asarray( im, dtype=numpy.float32 )
TypeError: float() argument must be a string or a number
So I'm using the code provided in the older documentation here. Basically, it does this:
arr = numpy.fromstring( im.tostring(), dtype=numpy.float32 )
But the tostring call is really slow, perhaps because it's copying the data? I need this conversion to be really fast and not copy any buffers it doesn't need to. I don't think the data are inherently incompatible; I'm creating my IplImage with cv.fromarray in the first place, which is extremely fast and accepted by the OpenCV functions.
Is there a way I can make the newer asarray method work for me, or else can I get direct access to the data pointer in the IplImage in a way that numpy.fromstring will accept it? I'm using OpenCV 2.3.1 prepackaged for Ubuntu Precise.
Fun Fact:
Say you call:
import cv2.cv as cv #Just a formality!
Capture = cv.CaptureFromCAM(0)
Img = cv.QueryFrame(Capture)
The object Img is an iplimage, and numpy.asarray(Img) is erratic at best. However, Img[:,:] is a cvmat, and numpy.asarray(Img[:,:]) works fantastically and, more importantly, quickly!
This is by far the fastest way I've found to grab a frame and make it an ndarray for numpy processing.
That page does not talk about IplImage; it talks about CvMat, which is different.
Anyway, you'd be better off using the wrappers from the newer cv2 namespace, which natively uses numpy arrays instead of its own image containers. Also, the whole cv module is considered deprecated and will be completely dropped in the next major release.