Hi, I'm trying to write some camera calibration code and I'm having a hard time using MatVector in JavaCV, which should be the equivalent of std::vector in C++.
This is how I generate my image and object points:
Mat objectPoints = new Mat(allImagePoints.rows(), 1, opencv_core.CV_32FC3);
float x = 0;
float y = 0;
for (int h = 0; h < patternHeight; h++) {
    y = h * rectangleSize;
    for (int w = 0; w < patternWidth; w++) {
        x = w * rectangleSize;
        objectPoints.getFloatBuffer().put(3 * (patternWidth * h + w), x);
        objectPoints.getFloatBuffer().put(3 * (patternWidth * h + w) + 1, y);
        objectPoints.getFloatBuffer().put(3 * (patternWidth * h + w) + 2, 0);
    }
}

MatVector allObjectPointsVec = new MatVector(allImagePoints.cols());
MatVector allImagePointsVec = new MatVector(allImagePoints.cols());
for (int i = 0; i < allImagePoints.cols(); i++) {
    allObjectPointsVec.put(i, objectPoints);
    allImagePointsVec.put(i, allImagePoints.col(i));
}
My image points are given in the Mat allImagePoints, and as you can see I create the corresponding vectors allObjectPointsVec and allImagePointsVec from them. When I try to do a camera calibration with these points I get the following error:
OpenCV Error: Assertion failed (ni > 0 && ni == ni1) in cv::collectCalibrationData, file ..\..\..\..\opencv\modules\calib3d\src\calibration.cpp, line 3193
java.lang.reflect.InvocationTargetException
...
which suggests that the lengths of the image and object points don't coincide, but I'm pretty sure that I got this right. Printing the MatVector objects gives
org.bytedeco.javacpp.opencv_core$MatVector[address=0x2237b8a0,position=0,limit=1,capacity=1,deallocator=org.bytedeco.javacpp.Pointer$NativeDeallocator@4d353a7a]
org.bytedeco.javacpp.opencv_core$MatVector[address=0x2237acd0,position=0,limit=1,capacity=1,deallocator=org.bytedeco.javacpp.Pointer$NativeDeallocator@772f4d0]
which also confuses me, as I would have expected the capacity to correspond to the length (number of matrices in the vector). If I print the size field I get the expected value. If I access a random element in the vector (e.g. allObjectPointsVec.get(i)) and print it to a string, I receive the following:
AbstractArray[width=1,height=77,depth=32,channels=3] (for object points)
AbstractArray[width=1,height=77,depth=32,channels=2] (for image points)
which is what I would expect... Any ideas? To me this seems sort of like a bug, also because I don't understand what the capacity represents if not the vector length...
Picture histogram
I'm quite new to the Processing language. I am trying to create an image comparison tool.
The idea is to get a histogram of a picture (see screenshot below, size is 600x400), which is then compared to 10 other histograms of similar pictures (all size 600x400). The histogram shows the frequency distribution of the gray levels with the number of pure black values displayed on the left and number of pure white values on the right.
In the end I should get a "winning" picture (the one that has the most similar histogram).
Below you can see the code for the image histogram, similar to the Processing tutorial example.
My idea was to create a PImage [] for the 10 other pictures to create histograms and then an if statement, but I'm not sure how to code it.
Does anyone have a tip on how to proceed or where to look? I couldn't find a similar post.
Thanks in advance and sorry if the question is very basic!
size(600, 400);

// Load an image from the data directory
// Load a different image by modifying the comments
PImage img = loadImage("image4.jpg");
image(img, 0, 0);

int[] hist = new int[256];

// Calculate the histogram
for (int i = 0; i < img.width; i++) {
  for (int j = 0; j < img.height; j++) {
    int bright = int(brightness(get(i, j)));
    hist[bright]++;
  }
}

// Find the largest value in the histogram
int histMax = max(hist);

stroke(255);
// Draw half of the histogram (skip every second value)
for (int i = 0; i < img.width; i += 2) {
  // Map i (from 0..img.width) to a location in the histogram (0..255)
  int which = int(map(i, 0, img.width, 0, 255));
  // Convert the histogram value to a location between
  // the bottom and the top of the picture
  int y = int(map(hist[which], 0, histMax, img.height, 0));
  line(i, img.height, i, y);
}
I'm not sure if your problem is the implementation in Processing or if you don't know how to compare histograms. I assume it is the comparison, as the rest is pretty straightforward: calculate the similarity for every candidate and pick the winner.
Search the web for histogram comparison and, among others, you will find:
http://docs.opencv.org/2.4/doc/tutorials/imgproc/histograms/histogram_comparison/histogram_comparison.html
OpenCV implements four measures for histogram similarity.
Correlation:

d(H_1,H_2) = \frac{\sum_I \left(H_1(I)-\bar{H}_1\right)\left(H_2(I)-\bar{H}_2\right)}{\sqrt{\sum_I \left(H_1(I)-\bar{H}_1\right)^2 \sum_I \left(H_2(I)-\bar{H}_2\right)^2}}

where \bar{H}_k = \frac{1}{N}\sum_J H_k(J) and N is the number of histogram bins,

or Chi-Square:

d(H_1,H_2) = \sum_I \frac{\left(H_1(I)-H_2(I)\right)^2}{H_1(I)}

or Intersection:

d(H_1,H_2) = \sum_I \min\left(H_1(I), H_2(I)\right)

or Bhattacharyya distance:

d(H_1,H_2) = \sqrt{1 - \frac{1}{\sqrt{\bar{H}_1 \bar{H}_2 N^2}} \sum_I \sqrt{H_1(I)\cdot H_2(I)}}
You can use these measures, but I'm sure you'll find something else as well.
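For example, a minimal C++ sketch along the lines of that tutorial could look like the following; the image names and the helper function are hypothetical, it assumes two grayscale cv::Mat inputs, and in OpenCV 2.4 the comparison constant is CV_COMP_CORREL rather than cv::HISTCMP_CORREL:

#include <opencv2/opencv.hpp>

// Sketch: similarity between the histograms of two grayscale images.
// A higher correlation score means the histograms are more alike.
double histogramSimilarity(const cv::Mat& reference, const cv::Mat& candidate)
{
    int histSize = 256;                  // one bin per gray level
    float range[] = { 0, 256 };
    const float* ranges[] = { range };

    cv::Mat histRef, histCand;
    cv::calcHist(&reference, 1, 0, cv::Mat(), histRef, 1, &histSize, ranges);
    cv::calcHist(&candidate, 1, 0, cv::Mat(), histCand, 1, &histSize, ranges);

    // Normalize so that image size does not influence the comparison
    cv::normalize(histRef,  histRef,  0, 1, cv::NORM_MINMAX);
    cv::normalize(histCand, histCand, 0, 1, cv::NORM_MINMAX);

    return cv::compareHist(histRef, histCand, cv::HISTCMP_CORREL);
}

To pick the winning picture, you would call something like this for each of the 10 candidates and keep the one with the highest score (or the lowest, if you use a distance measure such as Chi-Square or Bhattacharyya).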
I have extracted some feature points of an image using the following code
vector<Point2f> cornersFrame1;
goodFeaturesToTrack( frame1, cornersFrame1, maxCorners, qualityLevel, minDistance, Mat(), blockSize, useHarrisDetector, k );
After that I want to read the values present at these feature points, so I am using the following code:
for (int i = 0; i < cornersFrame1.size(); i++)
{
    float frame1 = calculatedU.at<float>( cornersFrame1[i].x, cornersFrame1[i].y );
}
Then I get a segmentation fault.
But if I use the following line in the for loop, then it works:
float frame1 = calculatedU.at<float>( cornersFrame1[i].y, cornersFrame1[i].x );
I am confused because I thought that "Point2f" stores pixel information as (row, col). Doesn't it?
No, it does not. All point types in OpenCV are just ordinary points as you would think of them: (x,y). For image coordinates this means that 'x' is the column and 'y' is the row. On the other hand, at<> expects its input as (row, column). This is why you had to provide (y,x) instead of (x,y).
Just to prevent future confusion, one of the ways of using at<> is this one:
float frame1 = calculatedU.at<float>( cornersFrame1[i] );
This way you don't need to think about whether you should provide (x,y) or (y,x).
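Applied to the loop from the question, that would look roughly like this (just a sketch; it assumes calculatedU is a single-channel CV_32F Mat that covers all corner positions):

float sum = 0.f;
for (size_t i = 0; i < cornersFrame1.size(); i++)
{
    // at<float>(Point) indexes as (x, y); the explicit overload is at<float>(row, col), i.e. (y, x)
    sum += calculatedU.at<float>(cornersFrame1[i]);
}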
What I am trying to do is implement a Skin Probability Maps algorithm for skin detection in OpenCV.
I'm stuck at the point where I should compare the SkinHistValue / NonSkinHistValue probability of each pixel with a Theta threshold, according to http://www.cse.unsw.edu.au/~icml2002/workshops/MLCV02/MLCV02-Morales.pdf and this tutorial http://www.morethantechnical.com/2013/03/05/skin-detection-with-probability-maps-and-elliptical-boundaries-opencv-wcode/
My problem lies in calculating the coordinates for the histogram value:
From the tutorial:
calcHist(&nRGB_frame,1,channels,mask,skin_Histogram,2,histSize,ranges,uniform,accumulate);
calcHist(&nRGB_frame,1,channels,~mask,non_skin_Histogram,2,histSize,ranges,uniform,accumulate);
This calculates the histograms. Then I normalize them.
And after that:
for (int i = 0; i < nrgb.rows; i++) {
    int gbin = cvRound((nrgb(i)[1] - 0) / range_dist[0] * hist_bins[0]);
    int rbin = cvRound((nrgb(i)[2] - low_range[1]) / range_dist[1] * hist_bins[1]);
    float skin_hist_val = skin_Histogram.at<float>(gbin, rbin);
}
where nrgb is my image and I'm trying to get skin_hist_val for it. But gbin and rbin are probably calculated wrong, and it throws an exception (am I running outside the array?) when it comes to
skin_Histogram.at<float>(gbin,rbin);
I have absolutely no idea how to calculate it correctly. Any help?
Using the new API for OpenCV 2.3, I am having trouble assigning values to a Mat array (or, say, image) inside a loop. Here is the code snippet I am using:
int paddedHeight = 256 + 2*padSize;
int paddedWidth  = 256 + 2*padSize;
int n = 266; // padded height or width

cv::Mat fx = cv::Mat(paddedHeight, paddedWidth, CV_64FC1);
cv::Mat fy = cv::Mat(paddedHeight, paddedWidth, CV_64FC1);

float value = -n/2.0f;
for (int i = 0; i < n; i++)
{
    for (int j = 0; j < n; j++)
        fx.at<cv::Vec2d>(i, j) = value++;
    value = -n/2.0f;
}

meshElement = -n/2.0f;
for (int i = 0; i < n; i++)
{
    for (int j = 0; j < n; j++)
        fy.at<cv::Vec2d>(i, j) = value;
    value++;
}
Now, in the first loop, as soon as j = 133 I get an exception which seems to be related to the depth of the image. I can't figure out what I am doing wrong here.
Please advise! Thanks!
You are accessing the data as 2-component double vectors (using .at<cv::Vec2d>()), but you created the matrices to contain only 1-component doubles (using CV_64FC1). Either create the matrices to contain two components per element (with CV_64FC2) or, what seems more appropriate for your code, access the values as plain doubles, using .at<double>(). This explodes exactly at j=133 because that is half the width of your image: a matrix that actually holds 1-component elements, when treated as containing 2-component vectors, is only half as wide.
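For the first variant (keeping the two single-channel matrices), a minimal sketch of the corrected loops could look like this; it keeps the variable names from the question, uses separate counters per axis, and assumes the intent is that fx varies along the columns and fy along the rows:

double yValue = -n / 2.0;
for (int i = 0; i < n; i++)
{
    double xValue = -n / 2.0;
    for (int j = 0; j < n; j++)
    {
        fx.at<double>(i, j) = xValue++;  // plain double access matches CV_64FC1
        fy.at<double>(i, j) = yValue;
    }
    ++yValue;
}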
Or maybe you can merge these two matrices into one, containing two components per element, but this depends on the way you are going to use these matrices in the future. In this case you can also merge the two loops together and really set a 2-component vector:
cv::Mat f = cv::Mat(paddedHeight, paddedWidth, CV_64FC2);
float yValue = -n/2.0f;
for (int i = 0; i < n; i++)
{
    float xValue = -n/2.0f;
    for (int j = 0; j < n; j++)
    {
        f.at<cv::Vec2d>(i, j)[0] = xValue++;
        f.at<cv::Vec2d>(i, j)[1] = yValue;
    }
    ++yValue;
}
This might produce a better memory access pattern if you always need both values, the one from fx and the one from fy, for the same element.
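And if you later need the two planes separately again, the merged matrix can be split back (a short sketch, assuming f was filled as above):

std::vector<cv::Mat> planes;
cv::split(f, planes);  // planes[0] holds the fx values, planes[1] the fy values, both CV_64FC1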
I'm trying to get information from an image using the function cvGet2D in OpenCV.
I created an array of 10 IplImage pointers:
IplImage *imageArray[10];
and I'm saving 10 images from my webcam:
imageArray[numPicture] = cvQueryFrame(capture);
when I call the function:
info = cvGet2D(imageArray[0], 250, 100);
where info is:
CvScalar info;
I got the error:
OpenCV Error: Bad argument (unrecognized or unsupported array type) in cvPtr2D, file /build/buildd/opencv-2.1.0/src/cxcore/cxarray.cpp, line 1824
terminate called after throwing an instance of 'cv::Exception'
what(): /build/buildd/opencv-2.1.0/src/cxcore/cxarray.cpp:1824: error: (-5) unrecognized or unsupported array type in function cvPtr2D
If I use the function cvLoadImage to initialize an IplImage pointer and then I pass it to the cvGet2D function, the code works properly:
IplImage* imagen = cvLoadImage("test0.jpg");
info = cvGet2D(imagen, 250, 100);
However, I want to use the information already stored in my array.
Do you know how I can solve this?
Even though it's a very late response, I guess someone might still be searching for a solution with cvGet2D. Here it is.
For cvGet2D, we need to pass the arguments in the order of Y first and then X.
Example:
CvScalar s = cvGet2D(img, Y, X);
It's not mentioned anywhere in the documentation; you only find it inside core.h / core_c.h. Go to the declaration of cvGet2D(), and above the function prototypes there are a few comments that explain this.
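Applied to the code from the question, reading the pixel at column x and row y of the first stored frame would look like this (a sketch, keeping the question's variable names):

// cvGet2D takes the row index (Y) first, then the column index (X)
CvScalar info = cvGet2D(imageArray[0], y, x);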
Yeah, the message is correct.
If you want to store a pixel value, you need to do something like this:
int value = 0;
value = ((uchar *)(img->imageData + i*img->widthStep))[j*img->nChannels +0];
cout << "pixel value for Blue Channel and (i,j) coordinates: " << value << endl;
Summarizing: to plot or store data you must create an integer value (a pixel value varies between 0 and 255). But if you only want to test a pixel value (like in an if clause or something similar), you can access it directly without using an integer variable.
I think that's a little bit weird when you start, but after you work with it two or three times you will manage without difficulty.
Sorry, but cvGet2D is not the best way to obtain a pixel value. I know it's the shortest and clearest way, because in only one line of code, knowing the coordinates, you obtain the pixel value.
I suggest this option instead. When you see this code you will think it is complicated, but it is more efficient.
#include <stdio.h>
#include <opencv/cv.h>
#include <opencv/highgui.h>

int main()
{
    // Acquire the image (I'm reading it from a file);
    IplImage* img = cvLoadImage("image.bmp", 1);

    int i, j, k;
    // Variables to store image properties
    int height, width, step, channels;
    uchar *data;
    // Variables to store the number of white pixels and a flag
    int WhiteCount, bWhite;

    // Acquire image info
    height   = img->height;
    width    = img->width;
    step     = img->widthStep;
    channels = img->nChannels;
    data     = (uchar *)img->imageData;

    // Begin
    WhiteCount = 0;
    for (i = 0; i < height; i++)
    {
        for (j = 0; j < width; j++)
        {   // Go through each channel of the image (R, G, and B) to see if it's equal to 255
            bWhite = 0;
            for (k = 0; k < channels; k++)
            {   // This checks if the pixel's kth channel is 255 - it can be faster.
                if (data[i*step + j*channels + k] == 255) bWhite = 1;
                else
                {
                    bWhite = 0;
                    break;
                }
            }
            if (bWhite == 1) WhiteCount++;
        }
    }

    printf("Percentage: %f%%", 100.0*WhiteCount/(height*width));

    return 0;
}
This code counts white pixels and gives you the percentage of white pixels in the image.