I ran into a problem when I want to scan through the H channel and print its pixel values after splitting an HSV image. The problem is that the output is not numbers but garbled characters.
Following is my code (using OpenCV):
Mat hsv;
cvtColor(saveImage, hsv, CV_BGR2HSV); // convert BGR to HSV
vector<cv::Mat> v_channel;
split(hsv, v_channel); // split into three channels
if (v_channel[0].data == 0) // channel[0] is Hue
{
    cout << "Error getting the Hue***********" << endl;
}
for (int i = 0; i < v_channel[0].rows; i++) // scan through Hue
{
    for (int j = 0; j < v_channel[0].cols; j++)
    {
        cout << v_channel[0].at<uchar>(i, j) << endl;
    }
}
Hope someone can help. Thanks very much!
The data is stored as bytes, i.e. chars; the output stream interprets each char as a character and tries to print its symbol. Simply tell it they are integers:
cout<< (int) v_channel[0].at<uchar>(i,j)<<endl;
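A static_cast<int> works just as well, and as a side note, the unary plus trick also promotes the uchar to int before it reaches the stream:
cout << static_cast<int>(v_channel[0].at<uchar>(i, j)) << endl;
cout << +v_channel[0].at<uchar>(i, j) << endl; // unary + promotes uchar to int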
Related
How do I use the value from OpenCV matchShapes output? We used the OpenCV matchShapes function to compare two images, specifically their shapes, but now that we have the output we are confused about how to interpret these values.
The code is
- (bool) someMethod:(UIImage *)image :(UIImage *)temp {
    RNG rng(12345);
    cv::Mat src_base, hsv_base;
    cv::Mat src_test1, hsv_test1;
    src_base = [self cvMatWithImage:image];
    src_test1 = [self cvMatWithImage:temp];
    int thresh = 150;
    double ans = 0, result = 0;
    Mat imageresult1, imageresult2;
    cv::cvtColor(src_base, hsv_base, cv::COLOR_BGR2HSV);
    cv::cvtColor(src_test1, hsv_test1, cv::COLOR_BGR2HSV);
    std::vector<std::vector<cv::Point>> contours1, contours2;
    std::vector<Vec4i> hierarchy1, hierarchy2;
    Canny(hsv_base, imageresult1, thresh, thresh*2);
    Canny(hsv_test1, imageresult2, thresh, thresh*2);
    findContours(imageresult1, contours1, hierarchy1, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, cvPoint(0,0));
    for (int i = 0; i < contours1.size(); i++)
    {
        //cout << contours1[i] << endl;
        Scalar color = Scalar(rng.uniform(0,255), rng.uniform(0,255), rng.uniform(0,255));
        drawContours(imageresult1, contours1, i, color, 1, 8, hierarchy1, 0, cv::Point());
    }
    findContours(imageresult2, contours2, hierarchy2, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, cvPoint(0,0));
    for (int i = 0; i < contours2.size(); i++)
    {
        Scalar color = Scalar(rng.uniform(0,255), rng.uniform(0,255), rng.uniform(0,255));
        drawContours(imageresult2, contours2, i, color, 1, 8, hierarchy2, 0, cv::Point());
    }
    for (int i = 0; i < contours1.size(); i++)
    {
        ans = matchShapes(contours1[i], contours2[i], CV_CONTOURS_MATCH_I1, 0);
        cout << " " << ans << endl;
    }
    std::cout << "The answer is " << ans << endl;
    if (ans <= 20) {
        return true;
    }
    return false;
}
The output values are
0.225069
0.234417
0
7.63599
0
7.06392
0.335966
0.211358
0.327552
0.842969
0.761659
0.614039
The image is
See my comment on imoutidi's answer. Here is a visual explanation:
The first column shows the two original images, the second the Canny edges. The third column is an arbitrary selection of detected shapes that have the same index in both images. As you can see, it is not even guaranteed that they correspond to the same image parts as a human would perceive them. What you end up comparing here are different triangles, which says little about the overall shape similarity. The two contour arrays are not even the same size, since there are more structures in the bottom drawing, for example (like small shapes between a thick line). The fourth column is the last shape in each array; this is the best bet you can make to compare the images. In this example, I get a value of 0.0920794532771 for their similarity.
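If you want to follow that fourth-column idea programmatically, here is a minimal sketch of a variation that compares the largest contour (by area) from each image rather than contours paired by index. The names imgA/imgB are assumptions, and the Canny thresholds just mirror the question; this is not the poster's exact pipeline:
#include <opencv2/opencv.hpp>
#include <vector>

// Hypothetical helper: extract the largest contour (by area) from a grayscale image
static std::vector<cv::Point> largestContour(const cv::Mat& gray)
{
    cv::Mat edges;
    cv::Canny(gray, edges, 150, 300); // thresholds borrowed from the question
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(edges, contours, cv::RETR_TREE, cv::CHAIN_APPROX_SIMPLE);
    std::vector<cv::Point> best;
    double bestArea = -1.0;
    for (const auto& c : contours)
    {
        double area = cv::contourArea(c);
        if (area > bestArea) { bestArea = area; best = c; }
    }
    return best;
}

// Usage: a value near 0 means the dominant shapes are similar
// double d = cv::matchShapes(largestContour(imgA), largestContour(imgB),
//                            CV_CONTOURS_MATCH_I1, 0);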
If I understand your question correctly, you want to know what the return value of matchShapes() stands for.
In your case, given the two contours (shapes), the function returns a similarity metric (value): a small value indicates that the two shapes are similar and a large value that they are not.
A good explanation is here: http://docs.opencv.org/3.1.0/d5/d45/tutorial_py_contours_more_functions.html (check the third paragraph).
Also check out the documentation: http://docs.opencv.org/3.1.0/d3/dc0/group__imgproc__shape.html#gaadc90cb16e2362c9bd6e7363e6e4c317
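As a rough illustration of how the returned value might be used (contourA/contourB and the 0.1 cutoff are assumptions for the sake of example, not recommended values):
double d = cv::matchShapes(contourA, contourB, CV_CONTOURS_MATCH_I1, 0);
// 0 means the Hu moments match exactly; the value grows as the shapes diverge
bool similar = (d < 0.1); // pick a threshold empirically for your data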
I'm having a strange problem. I wrote a couple of functions to convert a Mat into a 2D int array and vice versa. I first wrote 3-channel 8-bit versions, which work fine, but the 16-bit grayscale versions seem to be skipping indices in one of the dimensions.
Basically every second row is blank (only every second one is written to). The only thing I can think is that it has something to do with the 16-bit representation.
The following is the code:
// Convert a Mat image to a standard int array
void matToArrayGS(cv::Mat imgIn, unsigned int **array)
{
    int i, j;
    for (i = 0; i < imgIn.rows; i++)
    {
        for (j = 0; j < imgIn.cols; j++)
            array[i][j] = imgIn.at<unsigned int>(i,j);
    }
}
// Convert an array into a Greyscale Mat image
void arrayToMatGS(unsigned int **arrayin, cv::Mat imgIn)
{
    int i, j;
    for (i = 0; i < imgIn.rows; i++)
    {
        for (j = 0; j < imgIn.cols; j++)
            imgIn.at<unsigned int>(i,j) = arrayin[i][j];
    }
}
I can't help thinking it has something to do with the 16-bit representation in Mat, but I can't find info on this. It's also strange that it works fine in one dimension and not the other.
Anyone have an idea?
Thanks in advance
I think this is caused by the "unsigned int" usage: at<unsigned int> addresses 4-byte elements, while a 16-bit grayscale pixel is only 2 bytes, so each access spans two pixels and the indexing walks through the data at twice the correct stride. Try "unsigned short" for a 16-bit grayscale image.
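For reference, a sketch of how the corrected pair of functions might look, assuming the Mat is of type CV_16UC1 (the function names are illustrative):
// Convert a CV_16UC1 Mat to a 2D array of 16-bit values
void matToArrayGS16(const cv::Mat& imgIn, unsigned short** array)
{
    for (int i = 0; i < imgIn.rows; i++)
        for (int j = 0; j < imgIn.cols; j++)
            array[i][j] = imgIn.at<unsigned short>(i, j);
}

// Convert a 2D array of 16-bit values back into a CV_16UC1 Mat
void arrayToMatGS16(unsigned short** arrayIn, cv::Mat& imgIn)
{
    for (int i = 0; i < imgIn.rows; i++)
        for (int j = 0; j < imgIn.cols; j++)
            imgIn.at<unsigned short>(i, j) = arrayIn[i][j];
}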
Is there a way to convert an IplImage pointer to a float pointer? Basically, converting the image data to float.
Appreciate any help on this.
Use cvConvert(src, dst), where src is the source image and dst is a preallocated floating-point image.
E.g.
dst = cvCreateImage(cvSize(src->width, src->height), IPL_DEPTH_32F, 1);
cvConvert(src, dst);
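If you are not tied to the C API, the cv::Mat equivalent is convertTo; a minimal sketch (the file name is an assumption):
cv::Mat src = cv::imread("image.jpg");
cv::Mat dst;
src.convertTo(dst, CV_32F, 1.0 / 255.0); // convert to float, scaled into the 0-1 range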
// Original image gets loaded as IPL_DEPTH_8U
IplImage* colored = cvLoadImage("coins.jpg", CV_LOAD_IMAGE_UNCHANGED);
if (!colored)
{
    printf("cvLoadImage failed!\n");
    return;
}
// Allocate a new IPL_DEPTH_32F image with the same dimensions as the original
IplImage* img_32f = cvCreateImage(cvGetSize(colored),
                                  IPL_DEPTH_32F,
                                  colored->nChannels);
if (!img_32f)
{
    printf("cvCreateImage failed!\n");
    return;
}
cvConvertScale(colored, img_32f);
// Scale into the 0-1 range for display; without this, the 32-bit image would not be displayed properly
cvScale(img_32f, img_32f, 1.0/255);
cvNamedWindow("test", CV_WINDOW_AUTOSIZE);
cvShowImage("test", img_32f);
cvWaitKey(0); // keep the window open until a key is pressed
You can't convert the image to float by simply casting the pointer; you need to loop over every pixel and calculate the new value (or let cvConvert/cvConvertScale do it for you).
Note that most float image types assume a range of 0-1, so you need to divide each pixel by whatever you want the maximum to be.
I'm trying to get information from an image using the function cvGet2D in OpenCV.
I created an array of 10 IplImage pointers:
IplImage *imageArray[10];
and I'm saving 10 images from my webcam:
imageArray[numPicture] = cvQueryFrame(capture);
when I call the function:
info = cvGet2D(imageArray[0], 250, 100);
where info is declared as:
CvScalar info;
I got the error:
OpenCV Error: Bad argument (unrecognized or unsupported array type) in cvPtr2D, file /build/buildd/opencv-2.1.0/src/cxcore/cxarray.cpp, line 1824
terminate called after throwing an instance of 'cv::Exception'
what(): /build/buildd/opencv-2.1.0/src/cxcore/cxarray.cpp:1824: error: (-5) unrecognized or unsupported array type in function cvPtr2D
If I use the function cvLoadImage to initialize an IplImage pointer and then I pass it to the cvGet2D function, the code works properly:
IplImage* imagen = cvLoadImage("test0.jpg");
info = cvGet2D(imagen, 250, 100);
however, I want to use the information already stored in my array.
Do you know how I can solve it?
Even though it's a very late response, I guess someone might still be searching for a solution with cvGet2D, so here it is.
For cvGet2D, you need to pass the arguments in the order of Y first and then X.
Example:
CvScalar s = cvGet2D(img, Y, X);
It's not mentioned anywhere in the documentation; you only find it inside core.h / core_c.h. Go to the declaration of cvGet2D(), and above the function prototypes there are a few comments that explain this.
Yeah, the message is correct.
If you want to store a pixel value you need to do something like this:
int value = 0;
value = ((uchar *)(img->imageData + i*img->widthStep))[j*img->nChannels + 0];
cout << "pixel value for the Blue channel at coordinates (i,j): " << value << endl;
Summarizing: to print or store the data you must put it into an integer value (a pixel value varies between 0 and 255). But if you only want to test a pixel value (in an if clause or something similar), you can access it directly without going through an integer variable.
It may seem a little odd at first, but after you work with it two or three times it becomes straightforward.
Sorry, but cvGet2D is not the best way to obtain a pixel value. I know it is the shortest and clearest way, since it gives you the pixel value in a single line of code given the coordinates, but I suggest the following option. When you first see this code you will think it is complicated, but it is more efficient.
#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <stdio.h>

int main()
{
    // Acquire the image (I'm reading it from a file)
    IplImage* img = cvLoadImage("image.bmp", 1);
    int i, j, k;
    // Variables to store image properties
    int height, width, step, channels;
    uchar *data;
    // Variables to store the number of white pixels and a flag
    int WhiteCount, bWhite;
    // Acquire image info
    height = img->height;
    width = img->width;
    step = img->widthStep;
    channels = img->nChannels;
    data = (uchar *)img->imageData;
    // Begin
    WhiteCount = 0;
    for (i = 0; i < height; i++)
    {
        for (j = 0; j < width; j++)
        { // Go through each channel of the image (R, G, and B) to see if it's equal to 255
            bWhite = 0;
            for (k = 0; k < channels; k++)
            { // The pixel counts as white only if every channel is 255 (this could be made faster)
                if (data[i*step + j*channels + k] == 255) bWhite = 1;
                else
                {
                    bWhite = 0;
                    break;
                }
            }
            if (bWhite == 1) WhiteCount++;
        }
    }
    printf("Percentage: %f%%\n", 100.0*WhiteCount/(height*width));
    cvReleaseImage(&img);
    return 0;
}
This code counts the white pixels and gives you the percentage of white pixels in the image.
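For comparison, the C++ API can do the same count without an explicit loop; a minimal sketch, assuming a BGR image already loaded into img:
cv::Mat mask;
// mask becomes 255 wherever all three channels equal 255
cv::inRange(img, cv::Scalar(255, 255, 255), cv::Scalar(255, 255, 255), mask);
double pct = 100.0 * cv::countNonZero(mask) / (img.rows * img.cols);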
I would like to extract the most used colors inside an image, or at least the primary tones.
Could you recommend how I can start with this task, or point me to similar code? I have been looking for it but with no success.
You can get very good results using an Octree Color Quantization algorithm. Other quantization algorithms can be found on Wikipedia.
I agree with the comments: a programming solution would definitely need more information. But until then, assuming you'll obtain the RGB values of each pixel in your image, you should consider the HSV colorspace, where the Hue can be said to represent the "tone" of each pixel. You can then use a histogram to identify the most used tones in your image.
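A minimal sketch of that idea using cv::calcHist (the 30-bin resolution and the file name are assumptions):
cv::Mat bgr = cv::imread("input.jpg");
cv::Mat hsv;
cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);

int histSize = 30;            // number of hue bins (arbitrary choice)
float hueRange[] = {0, 180};  // OpenCV stores 8-bit Hue in 0-179
const float* ranges[] = {hueRange};
int channels[] = {0};         // channel 0 of an HSV image is Hue
cv::Mat hist;
cv::calcHist(&hsv, 1, channels, cv::Mat(), hist, 1, &histSize, ranges);

double maxVal = 0;
cv::Point maxLoc;
cv::minMaxLoc(hist, 0, &maxVal, 0, &maxLoc);
// maxLoc.y is the index of the most frequent hue bin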
Well, I assume you can access each pixel's RGB color. There are two ways to do this, depending on what you want.
First, you may simply compute the sum of all pixels' R, G, and B values, like this.
Pseudocode:
int Red = 0;
int Green = 0;
int Blue = 0;
foreach (Pixels as aPixel) {
    Red += aPixel.getRed();
    Green += aPixel.getGreen();
    Blue += aPixel.getBlue();
}
Then see which sum is largest.
This only tells you whether the picture is more red, green, or blue overall.
Another way also gives you statistics for combined colors (like orange): simply create a histogram of each RGB combination.
Pseudocode:
Map ColorCounts = new();
foreach (Pixels as aPixel) {
    const aRGB = aPixel.getRGB();
    var aCount = ColorCounts.get(aRGB); // treat a missing entry as 0
    aCount++;
    ColorCounts.put(aRGB, aCount);
}
Then see which one has the highest count.
You may also want to reduce the color resolution, since regular RGB gives you up to 16.7 million distinct colors.
This can be done easily by quantizing the RGB values into ranges. For example, let's say each channel has 8 steps instead of 256.
Pseudocode:
function Reduce(Color) {
    return (Color/32)*32; // 32 is 256/8, for 8 ranges
}
function ReduceRGB(RGB) {
    return new RGB(Reduce(RGB.getRed()), Reduce(RGB.getGreen()), Reduce(RGB.getBlue()));
}
Map ColorCounts = new();
foreach (Pixels as aPixel) {
    const aRGB = ReduceRGB(aPixel.getRGB());
    var aCount = ColorCounts.get(aRGB); // treat a missing entry as 0
    aCount++;
    ColorCounts.put(aRGB, aCount);
}
Then you can see which range has the highest count.
I hope these techniques make sense to you.
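For what it's worth, here is a sketch of the reduced-color histogram in C++ with OpenCV, following the pseudocode above (the file name and the 8-steps-per-channel reduction are assumptions):
#include <opencv2/opencv.hpp>
#include <cstdio>
#include <map>
#include <tuple>

int main()
{
    cv::Mat img = cv::imread("input.jpg");
    std::map<std::tuple<int, int, int>, int> colorCounts;
    for (int i = 0; i < img.rows; i++)
    {
        for (int j = 0; j < img.cols; j++)
        {
            cv::Vec3b px = img.at<cv::Vec3b>(i, j); // OpenCV stores pixels as BGR
            // Reduce each channel from 256 levels to 8 ranges of width 32
            colorCounts[std::make_tuple(px[2] / 32 * 32,     // R
                                        px[1] / 32 * 32,     // G
                                        px[0] / 32 * 32)]++; // B
        }
    }
    // Find the most frequent reduced color
    std::tuple<int, int, int> best;
    int bestCount = 0;
    for (const auto& kv : colorCounts)
        if (kv.second > bestCount) { bestCount = kv.second; best = kv.first; }
    std::printf("Most used tone (R,G,B): (%d, %d, %d)\n",
                std::get<0>(best), std::get<1>(best), std::get<2>(best));
    return 0;
}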