OpenCV pixels have different coordinates than shown pixels

EDIT: solved, i had to access the pixel by using
(int)bw.at<uchar>(r,c)
For a project at school I have to detect a ball and calculate its position. I do this with color segmentation: with inRange I check for the color of the ball and get a binary image as the result. At least I assume it's binary, because the picture is black and white when I display it.
I now try to get the position of the ball by simply taking the average of the x and the y coordinates of all the detected pixels. The strange thing is that the y values are correct but the x values are completely wrong.
Here is my code:
int k = 0;
int x = 0;
int y = 0;
ofstream myfile;
myfile.open("example.txt");
for(int c = 0; c < bw.cols; c++){
    for(int r = 0; r < bw.rows; r++){
        if(bw.at<int>(r,c) != 0){   // see EDIT above: bw is 8-bit, so this should be bw.at<uchar>(r,c)
            x += c;
            y += r;
            cout << "x: " << c << " y: " << r << endl;
            k++;
            myfile << 1;
        }else{
            myfile << 0;
        }
    }
    myfile << endl;
}
myfile.close();
bal.set_pos(x/k, y/k);
I print each pixel's x and y coordinates. The y coordinates are right, but the x coordinates fall into 4 distinct groups: the first group has values around 88, the second around 248, the third around 408 and the fourth around 569. They should all be between 350 and 360, however.
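For the record, the reason at<int> misbehaves: inRange produces a CV_8UC1 mask (one byte per pixel), so bw.at<int>(r,c) reads the four bytes starting at byte offset 4*c of row r rather than the single pixel at column c. Assuming the frame is 640 pixels wide (a guess based on the spacing of the groups), a ball near column 355 then gets hit at c ≈ 88, 248, 408 and 568, i.e. exactly the four groups above. A minimal sketch of the corrected averaging loop, assuming the mask comes from inRange on an HSV frame (the file name and the color bounds below are placeholders):
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    cv::Mat frame = cv::imread("ball.png");              // placeholder input image
    cv::Mat hsv, bw;
    cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
    cv::inRange(hsv, cv::Scalar(20, 100, 100),           // placeholder lower bound
                     cv::Scalar(30, 255, 255), bw);      // placeholder upper bound
    // bw is CV_8UC1: one byte per pixel, so read it with at<uchar>
    long sumX = 0, sumY = 0, count = 0;
    for (int r = 0; r < bw.rows; r++) {
        for (int c = 0; c < bw.cols; c++) {
            if (bw.at<uchar>(r, c) != 0) {
                sumX += c;
                sumY += r;
                count++;
            }
        }
    }
    if (count > 0)
        std::cout << "ball centre: " << sumX / count << ", " << sumY / count << std::endl;
    return 0;
}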

Related

Count length of black pixels on image

I am using OpenCV to manipulate some images.
Suppose that image.png is a black/white image (each pixel is either B or W).
For example, if we print the colors of the 3rd row, it could be:
WWWWWBBBWWWWWWBBBBBBWWWWWBBWWWW
I'd like to save info about each sequence of black pixels; that is, for each row i I'd like to be able to compute:
the number of black sequences in row i (3 in the example above)
the x-coordinates of the end pixels of each black sequence in row i (6,8 and 15,20 and 26,27 in the example above)
the length of each black sequence in row i (l1=3, l2=6, l3=2 in the example above; this is easy assuming the item above is done)
I'm using a for loop and testing whether the color is black. When it is black, I save the x coordinate and start another loop inside it, running from that coordinate to the end of the line and testing whether the color is white. When it finds a white pixel, it stops and saves the previous coordinate.
This only works for computing the length of the first sequence of black pixels; I don't know how to move on to the next one (I don't even know how many there are).
Here is the main part of the code (with some trash code):
for(int y = 0; y < img.rows; y++) //img.rows
{
    for(int x = 0; x < img.cols; x++)
    {
        Vec3b color = img.at<Vec3b>(Point(x,y));
        printf("(%d,%d,%d)\n", color[0], color[1], color[2]);
        if(color[0] == 0 && color[1] == 0 && color[2] == 0)
        {
            cor[0] = 'B';
            ymax = y;
            if (ymin == -1) { ymin = y; }
            int xmin = x;
            int diam_esq = img.cols/2 - xmin;
            double dist_esq = sqrt( (x-img.cols/2)*(x-img.cols/2) + (y-img.rows/2)*(y-img.rows/2) );
            for(int z = x; z < img.cols; z++)
            {
                Vec3b colorz = img.at<Vec3b>(Point(z,y));
                if(colorz[0] == 255 && colorz[1] == 255 && colorz[2] == 255)
                {
                    int xmax = z - 1;
                    int diam_dir = xmax - img.cols/2;
                    double dist_dir = sqrt( (z-1-img.cols/2)*(z-1-img.cols/2) + (y-img.rows/2)*(y-img.rows/2) );
                    int diam = xmax - xmin;
                    //printf("y=%*d, xmin=%*d, xmax=%*d, esq=%*d, dir=%*d, diam=%*d\n",5,y,5,xmin,5,xmax,5,diam_esq,5,diam_dir,5,diam);
                    printf("%*d%*d%*d%*d%*d%*d%*f%*f\n",5,y,5,xmin,5,xmax,5,diam_esq,5,diam_dir,5,diam,13,dist_esq,13,dist_dir);
                    break;
                }
            }
            break;
        }
    }
    //break; // only y=0
}
Here is some code that does what you want. It only prints out the results; it doesn't save them anywhere, but I assume you'll know how to do that.
To find the black sequences in each row there is no need for a nested loop per sequence: just keep track of whether you are currently inside a black sequence and, if so, where it began. I also use Mat::ptr for a more efficient row-by-row traversal of the image.
for(int i = 0; i < image.rows; i++) {
    Vec3b* ptr = image.ptr<Vec3b>(i);
    bool blackSequence = false;
    int blackSequenceStart = -1;
    int numberOfBlackSequences = 0;
    for(int j = 0; j < image.cols; j++) {
        Vec3b color = ptr[j];
        if(color[0] == 0 && !blackSequence) { // this assumes that all pixels are either pure white or pure black
            blackSequence = true;
            blackSequenceStart = j;
            numberOfBlackSequences++;
        }
        if(color[0] == 255 && blackSequence) {
            blackSequence = false;
            cout << "Row " << i << ": Sequence " << numberOfBlackSequences << " starts at " << blackSequenceStart
                 << " and finishes at " << j - 1 << ". Total length: " << j - blackSequenceStart << endl;
        }
        if(j == image.cols - 1 && blackSequence) { // a sequence that runs to the end of the row
            blackSequence = false;
            cout << "Row " << i << ": Sequence " << numberOfBlackSequences << " starts at " << blackSequenceStart
                 << " and finishes at " << j << ". Total length: " << j - blackSequenceStart + 1 << endl;
        }
    }
}

OpenCV - get coordinates of top of object in contour

Given a contour such as the one seen below, is there a way to get the X,Y coordinates of the top point in the contour? I'm using Python, but other language examples are fine.
Since every pixel needs to be checked, I'm afraid you will have to iterate linewise over the image and see which is the first white pixel.
You can iterate over the image until you encounter a pixel that isn't black.
I will write an example in C++.
cv::Mat image; // your binary image with type CV_8UC1 (8-bit, 1-channel image)
int topRow(-1), topCol(-1);
for(int i = 0; i < image.rows; i++) {
    uchar* ptr = image.ptr<uchar>(i);
    for(int j = 0; j < image.cols; j++) {
        if(ptr[j] != 0) {
            topRow = i;
            topCol = j;
            std::cout << "Top point: " << i << ", " << j << std::endl;
            break;
        }
    }
    if(topRow != -1)
        break;
}
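Another option, if a reasonably recent OpenCV is available, is to let cv::findNonZero collect the coordinates of all white pixels and then take the one with the smallest y. A minimal sketch, assuming a CV_8UC1 binary image (the file name is a placeholder):
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    cv::Mat mask = cv::imread("mask.png", cv::IMREAD_GRAYSCALE);  // placeholder binary image
    std::vector<cv::Point> points;
    cv::findNonZero(mask, points);                                // coordinates of every non-zero pixel
    if (!points.empty()) {
        cv::Point top = *std::min_element(points.begin(), points.end(),
                                          [](const cv::Point& a, const cv::Point& b) { return a.y < b.y; });
        std::cout << "Top point: " << top.x << ", " << top.y << std::endl;
    }
    return 0;
}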

How to put B, G and R component values straight into a pixel of cv::Mat? [duplicate]

I have searched internet and stackoverflow thoroughly, but I haven't found answer to my question:
How can I get/set (both) the RGB value of a certain pixel (given by its x,y coordinates) in OpenCV? What's important: I'm writing in C++, and the image is stored in a cv::Mat variable. I know there is an IplImage() operator, but IplImage is not very comfortable to use; as far as I know it comes from the C API.
Yes, I'm aware that there was already this Pixel access in OpenCV 2.2 thread, but it was only about black and white bitmaps.
EDIT:
Thank you very much for all your answers. I see there are many ways to get/set the RGB value of a pixel. I got one more idea from a close friend (thanks, Benny!). It's very simple and effective. I think it's a matter of taste which one you choose.
Mat image;
(...)
Point3_<uchar>* p = image.ptr<Point3_<uchar> >(y,x);
And then you can read/write RGB values with:
p->x //B
p->y //G
p->z //R
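A minimal self-contained sketch of that idea, assuming an 8-bit BGR image (the file name and coordinates are placeholders):
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    cv::Mat image = cv::imread("input.png");   // placeholder; imread loads 8-bit BGR by default
    int x = 10, y = 20;                        // placeholder column and row
    cv::Point3_<uchar>* p = image.ptr<cv::Point3_<uchar> >(y, x);
    std::cout << "B=" << (int)p->x << " G=" << (int)p->y << " R=" << (int)p->z << std::endl;
    p->x = 255;  // write the blue channel
    p->y = 0;    // write the green channel
    p->z = 0;    // write the red channel
    return 0;
}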
Try the following:
cv::Mat image = ...do some stuff...;
image.at<cv::Vec3b>(y,x) gives you the pixel as a vector of type cv::Vec3b (note that OpenCV usually stores the channels as BGR, not RGB)
image.at<cv::Vec3b>(y,x)[0] = newval[0];
image.at<cv::Vec3b>(y,x)[1] = newval[1];
image.at<cv::Vec3b>(y,x)[2] = newval[2];
The low-level way would be to access the matrix data directly. In an RGB image (which I believe OpenCV typically stores as BGR), and assuming your cv::Mat variable is called frame, you could get the blue value at location (x, y) (from the top left) this way:
frame.data[frame.channels()*(frame.cols*y + x)];
Likewise, to get B, G, and R:
uchar b = frame.data[frame.channels()*(frame.cols*y + x) + 0];
uchar g = frame.data[frame.channels()*(frame.cols*y + x) + 1];
uchar r = frame.data[frame.channels()*(frame.cols*y + x) + 2];
Note that this code assumes the stride is equal to the width of the image.
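If the rows might be padded (step larger than cols*channels), the same arithmetic works with Mat::step instead of assuming a tightly packed layout; a minimal sketch for an 8-bit BGR frame:
#include <opencv2/opencv.hpp>

// Read the B, G, R values at (x, y) without assuming the rows are tightly packed.
static cv::Vec3b pixelAt(const cv::Mat& frame, int x, int y) {
    const uchar* row = frame.data + y * frame.step;   // step = bytes per row, padding included
    const uchar* px  = row + frame.channels() * x;
    return cv::Vec3b(px[0], px[1], px[2]);            // B, G, R
}
frame.ptr<uchar>(y) returns the same row pointer and is usually the tidier way to get it.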
A piece of code is easier for people who have this problem, so I am sharing mine; you can use it directly. Please note that OpenCV stores pixels as BGR.
cv::Mat vImage_(vHeight_, vWidth_, CV_32FC3);  // allocate the destination image first
if(src_)
{
    cv::Vec3f vec_;
    for(int i = 0; i < vHeight_; i++)
        for(int j = 0; j < vWidth_; j++)
        {
            vec_ = cv::Vec3f((*src_)[0]/255.0, (*src_)[1]/255.0, (*src_)[2]/255.0); // please note that OpenCV stores pixels as BGR
            vImage_.at<cv::Vec3f>(vHeight_-1-i, j) = vec_;  // flip vertically while copying
            ++src_;
        }
}
if(!vImage_.data) // check for invalid input
    printf("failed to read image by OpenCV.");
else
{
    cv::namedWindow(windowName_, CV_WINDOW_AUTOSIZE);
    cv::imshow(windowName_, vImage_); // show the image
}
The current version of OpenCV allows the cv::Mat::at function to handle 3 dimensions, so for a 3-dimensional Mat object m, m.at<uchar>(0,0,0) should work.
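A minimal sketch of that, with arbitrary placeholder dimensions:
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    int sizes[] = {4, 5, 3};                      // placeholder dimensions
    cv::Mat m(3, sizes, CV_8UC1, cv::Scalar(0));  // 3-dimensional, 8-bit, zero-initialised
    m.at<uchar>(0, 0, 0) = 42;
    std::cout << (int)m.at<uchar>(0, 0, 0) << std::endl;  // prints 42
    return 0;
}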
uchar* value = img2.data; // pointer to the first byte of the pixel data; the whole image is one B,G,R,B,G,R,... array (assuming img2 is continuous)
int r = 2;
// Walk the raw data and set every pixel to (B,G,R) = (255,0,0), i.e. solid blue.
for (size_t i = 0; i < img2.cols * (img2.rows * img2.channels()); i++)
{
    if (r > 2) r = 0;
    if (r == 0) value[i] = 0;
    if (r == 1) value[i] = 0;
    if (r == 2) value[i] = 255;
    r++;
}
#include <boost/math/constants/constants.hpp>  // for pi

const double pi = boost::math::constants::pi<double>();

// Sets the green channel to 255 for every pixel lying inside the given rotated ellipse.
cv::Mat distance2ellipse(cv::Mat image, cv::RotatedRect ellipse){
    float distance = 2.0f;
    float angle = ellipse.angle;
    cv::Point ellipse_center = ellipse.center;
    float major_axis = ellipse.size.width/2;
    float minor_axis = ellipse.size.height/2;
    for(int x = 0; x < image.cols; x++)
    {
        for(int y = 0; y < image.rows; y++)
        {
            // rotate the pixel into the ellipse's coordinate frame
            auto u =  cos(angle*pi/180)*(x-ellipse_center.x) + sin(angle*pi/180)*(y-ellipse_center.y);
            auto v = -sin(angle*pi/180)*(x-ellipse_center.x) + cos(angle*pi/180)*(y-ellipse_center.y);
            // normalised ellipse equation: <= 1 means the point lies inside
            distance = (u/major_axis)*(u/major_axis) + (v/minor_axis)*(v/minor_axis);
            if(distance <= 1)
            {
                image.at<cv::Vec3b>(y,x)[1] = 255;  // assumes a 3-channel (BGR) image
            }
        }
    }
    return image;
}
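A possible usage sketch, assuming a BGR image and an ellipse obtained elsewhere (for example from cv::fitEllipse); the values below are placeholders:
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat img = cv::imread("input.png");             // placeholder BGR image
    cv::RotatedRect ell(cv::Point2f(100, 80),          // placeholder centre,
                        cv::Size2f(60, 40), 30.0f);    // size and angle in degrees
    cv::Mat marked = distance2ellipse(img, ell);       // function defined above
    cv::imshow("inside ellipse", marked);
    cv::waitKey(0);
    return 0;
}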

cv::Mat matrix, how to reduce digits to the right of the decimal point in cv::Mat?

I have an app that prints a 3x3 cv::Mat on the iPhone screen. I need to reduce the number of decimals, as the screen is not that big, see:
[1.004596557012473, -0.003116992336797859, 5.936915104939593;
-0.007241746117066327, 0.9973985665720294, -0.2118670500989478;
1.477734234970711e-05, -1.03363734495053e-05, 1.000089074805124]
so I would like to reduce it to 4 or 6 decimal places. Any ideas?
Cheers
Direct OpenCV version: pick the base formatter you want (see http://docs.opencv.org/3.1.0/d3/da1/classcv_1_1Formatter.html), adjust its precision with set64fPrecision (or set32fPrecision), and then print with cout:
cv::Ptr<cv::Formatter> formatMat = cv::Formatter::get(cv::Formatter::FMT_DEFAULT);
formatMat->set64fPrecision(3);
formatMat->set32fPrecision(3);
std::cout << "matrix:" << std::endl << formatMat->format(sim) << std::endl;
If you were using printf
cv::Mat data(3, 3, CV_64FC1);
for (int y = 0; y < data.rows; ++y) {
    for (int x = 0; x < data.cols; ++x) {
        printf("%.6f ", data.at<double>(y, x));
    }
}
If you were using std::cout
cv::Mat data(3, 3, CV_64FC1);
std::cout.setf(std::ios::fixed, std::ios::floatfield);
std::cout.precision(6);
for (int y = 0; y < data.rows; ++y) {
    for (int x = 0; x < data.cols; ++x) {
        std::cout << data.at<double>(y, x) << " ";
    }
}

HSV values using openCV or javaCV

I want to track a color within an image. I use the following code (javaCV):
//Load initial image.
iplRGB = cvLoadImage(imageFile, CV_LOAD_IMAGE_UNCHANGED);
//Prepare for HSV
iplHSV = cvCreateImage(iplRGB.cvSize(), iplRGB.depth(), iplRGB.nChannels());
//Transform RGB to HSV
cvCvtColor(iplRGB, iplHSV, CV_BGR2HSV);
//Define a region of interest.
//minRow = 0; maxRow = iplHSV.height();
//minCol = 0; maxCol = iplHSV.width();
minRow = 197; minCol = 0; maxRow = 210; maxCol = 70;
//Print each HSV for each pixel of the region.
for (int y = minRow; y < maxRow; y++){
    for (int x = minCol; x < maxCol; x++) {
        CvScalar pixelHsv = cvGet2D(iplHSV, y, x);
        double h = pixelHsv.val(0);
        double s = pixelHsv.val(1);
        double v = pixelHsv.val(2);
        String line = y + "," + x + "," + h + "," + s + "," + v;
        System.out.println(line);
    }
}
I can easily find the minimum and maximum for HUE and SAT from the output. Let's call them minHue, minSat, maxHue and maxSat (not fancy, hey!). Then I execute this code:
iplMask = cvCreateImage(iplHSV.cvSize(), iplHSV.depth(), 1);
CvScalar min = cvScalar(minHue, minSat, 0, 0);
CvScalar max = cvScalar(maxHue, maxSat, 255 ,0);
cvInRangeS(iplHSV, min, max, iplMask);
When I show the iplMask, shouldn't I see the region of interest entirely white? I don't: I see the contour in white, but the inside of the rectangle is black. I must be messing something up, but I don't understand what.
I know that Hue is in [0..179] in OpenCV and Sat and Val are in [0..255], but since I use the values displayed by OpenCV I would think I don't have to rescale...
Anyway, I am lost. Can somebody explain? Thanks.
