I am trying to get the centers of circles using the Hough Circle algorithm from
https://github.com/Itseez/opencv/blob/master/samples/cpp/houghcircles.cpp
but I need more accurate coordinates.
When I print those coordinates like this
for( size_t i = 0; i < circles.size(); i++ )
{
Vec3i c = circles[i];
cout<<c[0]<<" "<<c[1]<<endl;
}
it prints just the integer part.
Is there any possibility to get the center more precisely (4 decimals or more)?
You are explicitly converting the coordinates to integers by assigning them to an integer vector (Vec3i). HoughCircles fills the vector with floating-point circles (Vec3f), so if you keep that type and print the values, you will get them exactly as OpenCV computed them:
cout << circles[i][0] << " " << circles[i][1] << endl;
However, these results might not be as accurate as you desire. In that case, you are out of luck with your current approach as OpenCV does not provide more accurate results.
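For reference, a minimal corrected version of your loop (assuming circles was filled by HoughCircles, i.e. it is a vector<Vec3f>):
for (size_t i = 0; i < circles.size(); i++)
{
    // Vec3f keeps the floating-point center (x, y) and radius exactly as HoughCircles returned them
    Vec3f c = circles[i];
    cout << c[0] << " " << c[1] << endl;
}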
I need to write my own implementation of computing the fundamental matrix between two images, based on corresponding image coordinates and without using OpenCV.
Is it possible to describe this algorithm in its simplest form, in accordance with the following function? A simple and straightforward formula would be ideal.
FMatrixEightPoint()
Input arguments:
points1(x,y) - pixel coordinates in the first image, corresponding to points2 in the second image
points2(x,y) - pixel coordinates in the second image, corresponding to points1 in the first image
Output:
F - the fundamental matrix between the first image and the second image
Yes, it is possible to describe the algorithm in the mentioned form.
If you use OpenCV, you can just call findFundamentalMat, which also provides the 8-point method for computing the fundamental matrix.
The following example (in C++) is taken from the OpenCV documentation, but adapted to use the 8-point algorithm instead of RANSAC for computing the fundamental matrix:
// Example. Estimation of fundamental matrix using the 8-point algorithm
int point_count = 8; // must be >= 8
vector<Point2f> points1(point_count);
vector<Point2f> points2(point_count);
// initialize the points here ...
for( int i = 0; i < point_count; i++ )
{
points1[i] = ...;
points2[i] = ...;
}
Mat fundamental_matrix =
findFundamentalMat(points1, points2, CV_FM_8POINT);
If you want to write your own function, it would look like this (pseudocode, not valid code):
Matrix findFundamentalMat(Array points1, Array points2)
{
Matrix fundamentalMatrix;
// compute fundamental matrix based on input points1 and points2 or call OpenCV's findFundamentalMat
return fundamentalMatrix;
}
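If it helps, here is a minimal C++ sketch of the unnormalized 8-point algorithm itself, using only OpenCV's matrix types and SVD. It is a sketch, not a complete implementation: a robust version would additionally normalize the point coordinates (Hartley normalization) before building the linear system.
#include <opencv2/opencv.hpp>
#include <vector>

cv::Mat findFundamentalMat8Point(const std::vector<cv::Point2f>& points1,
                                 const std::vector<cv::Point2f>& points2)
{
    CV_Assert(points1.size() >= 8 && points1.size() == points2.size());

    // Build the linear system A * f = 0, one row per correspondence,
    // from the epipolar constraint x2^T * F * x1 = 0.
    cv::Mat A((int)points1.size(), 9, CV_64F);
    for (size_t i = 0; i < points1.size(); i++) {
        double x1 = points1[i].x, y1 = points1[i].y;
        double x2 = points2[i].x, y2 = points2[i].y;
        double row[9] = { x2*x1, x2*y1, x2, y2*x1, y2*y1, y2, x1, y1, 1.0 };
        for (int j = 0; j < 9; j++) A.at<double>((int)i, j) = row[j];
    }

    // The least-squares solution f is the right singular vector of A
    // belonging to the smallest singular value.
    cv::Mat w, u, vt;
    cv::SVD::compute(A, w, u, vt, cv::SVD::FULL_UV);
    cv::Mat F = vt.row(8).reshape(0, 3);

    // Enforce the rank-2 constraint by zeroing the smallest singular value of F.
    cv::SVD::compute(F, w, u, vt);
    w.at<double>(2) = 0.0;
    return u * cv::Mat::diag(w) * vt;
}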
There are images with perspectively distorted barcodes in them.
They are located and decoded using ZBar.
Now I need not only the rough location, but also the four real corner points of the barcode that define the enclosing 4-point polygon.
I have tried different approaches, but have not yet gotten the desired result.
One of them was the following (a rough code sketch comes after the list):
convert image to grayscale
threshold image
erode image
floodFill beginning with a pixel known to be part of barcode
obtain the contour around the floodFill result
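A rough sketch of that pipeline (the seed point and the threshold/erosion parameters here are placeholders, not my actual values):
#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::Point> barcodeContour(const cv::Mat& bgr, cv::Point seedInsideBarcode)
{
    cv::Mat gray, bin;
    cv::cvtColor(bgr, gray, cv::COLOR_BGR2GRAY);                                // grayscale
    cv::threshold(gray, bin, 0, 255, cv::THRESH_BINARY_INV | cv::THRESH_OTSU);  // threshold (bars become white)
    cv::erode(bin, bin, cv::Mat(), cv::Point(-1, -1), 2);                       // erode

    cv::Mat filled = bin.clone();
    cv::floodFill(filled, seedInsideBarcode, cv::Scalar(128));                  // floodFill from a known barcode pixel

    cv::Mat region = (filled == 128);                                           // keep only the filled region
    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(region, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE); // contour around the floodFill result
    return contours.empty() ? std::vector<cv::Point>() : contours[0];
}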
But around this contour I would now need to find the minimal best-fitting 4-point polygon, which seems not to be that easy.
Do you have ideas for better approaches?
You could use the following code and try to reduce your contour to a 4-point polygon via approxPolyDP:
// 'angle' below is the helper from OpenCV's squares.cpp sample (also used in the
// linked tutorial): it returns the cosine of the angle between the vectors
// pt0->pt1 and pt0->pt2.
vector<Point> approx;
vector<vector<Point> > squares;
for (size_t i = 0; i < contours.size(); i++)
{
approxPolyDP(Mat(contours[i]), approx,
arcLength(Mat(contours[i]), true)*0.02, true);
if (approx.size() == 4 &&
fabs(contourArea(Mat(approx))) > 1000 &&
isContourConvex(Mat(approx)))
{
double maxCosine = 0;
for( int j = 2; j < 5; j++ )
{
double cosine = fabs(angle(approx[j%4], approx[j-2], approx[j-1]));
maxCosine = MAX(maxCosine, cosine);
}
if( maxCosine < 0.3 )
squares.push_back(approx);
}
}
http://opencv-code.com/tutorials/detecting-simple-shapes-in-an-image/
You can also try the following methods; maybe they will produce good enough results for you:
minAreaRect: http://docs.opencv.org/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?highlight=minarearect#minarearect
or convexHull: http://docs.opencv.org/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?highlight=convexhull#convexhull
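A minimal sketch of what using those two functions on your floodFill contour could look like (the variable and function names are placeholders):
#include <opencv2/opencv.hpp>
#include <vector>

void encloseContour(const std::vector<cv::Point>& contour)
{
    // Minimum-area rotated rectangle around the contour; its 4 corners form a 4-point polygon.
    cv::RotatedRect box = cv::minAreaRect(contour);
    cv::Point2f corners[4];
    box.points(corners);

    // Convex hull of the contour (may contain more than 4 points, but removes concavities).
    std::vector<cv::Point> hull;
    cv::convexHull(contour, hull);
}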
OK, I found a solution that works well enough for my use case.
First a scanline is generated from the ZBar result.
Now the first and the last black pixels are found in a version of the image resulting from cv::adaptiveThreshold with a large enough blockSize.
From there on the first and the last bar are segmented using cv::findContours.
Now, for both end bars, the two contour points with the greatest distance to each other are searched for.
They finally define the enclosing 4-point-polygon.
This is not exactly what I asked for in my question, but the additional size due to the elongated guard patterns does not matter in my case.
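For the last step, a minimal sketch of finding the two contour points of an end bar that are farthest apart (assuming 'bar' is one contour returned by cv::findContours):
#include <opencv2/opencv.hpp>
#include <vector>
#include <utility>

std::pair<cv::Point, cv::Point> farthestPointPair(const std::vector<cv::Point>& bar)
{
    std::pair<cv::Point, cv::Point> best;
    double bestDist2 = -1.0;
    // Brute-force over all point pairs; end-bar contours are small enough for this.
    for (size_t i = 0; i < bar.size(); i++) {
        for (size_t j = i + 1; j < bar.size(); j++) {
            double dx = bar[i].x - bar[j].x;
            double dy = bar[i].y - bar[j].y;
            double d2 = dx * dx + dy * dy;   // squared distance is enough for comparison
            if (d2 > bestDist2) {
                bestDist2 = d2;
                best = std::make_pair(bar[i], bar[j]);
            }
        }
    }
    return best;
}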
I am currently using a Project Tango tablet for robotic obstacle avoidance. I want to create a matrix of z-values as they would appear on the Tango screen, so that I can use OpenCV to process the matrix. When I say z-values, I mean the distance each point is from the Tango. However, I don't know how to extract the z-values from the TangoXyzIjData and organize the values into a matrix. This is the code I have so far:
public void action(TangoPoseData poseData, TangoXyzIjData depthData) {
byte[] buffer = new byte[depthData.xyzCount * 3 * 4];
FileInputStream fileStream = new FileInputStream(
depthData.xyzParcelFileDescriptor.getFileDescriptor());
try {
fileStream.read(buffer, depthData.xyzParcelFileDescriptorOffset, buffer.length);
fileStream.close();
} catch (IOException e) {
e.printStackTrace();
}
Mat m = new Mat(depthData.ijRows, depthData.ijCols, CvType.CV_8UC1);
m.put(0, 0, buffer);
}
Does anyone know how to do this? I would really appreciate help.
The short answer is it can't be done, at least not simply. The XYZij struct in the Tango API does not work completely yet. There is no "ij" data. Your retrieval of buffer will work as you have it coded. The contents are a set of X, Y, Z values for measured depth points, roughly 10000+ each callback. Each X, Y, and Z value is of type float, so not CV_8UC1. The problem is that the points are not ordered in any way, so they do not correspond to an "image" or xy raster. They are a random list of depth points. There are ways to get them into some xy order, but it is not straightforward. I have done both of these:
render them to an image, with the depth encoded as color, and pull out the image as pixels
use the model/view/perspective from OpenGL and multiply out the locations of each point and then figure out their screen space location (like OpenGL would during rendering). Sort the points by their xy screen space. Instead of the calculated screen-space depth just keep the Z value from the original buffer.
or
wait until (if) the XYZij struct is fixed so that it returns ij values.
I too wish to use Tango for object avoidance for robotics. I've had some success by simplifying the use case to be only interested in the distance of any object located at the center view of the Tango device.
In Java:
private Double centerCoordinateMax = 0.020;
private TangoXyzIjData xyzIjData;
final FloatBuffer xyz = xyzIjData.xyz;
double cumulativeZ = 0.0;
int numberOfPoints = 0;
for (int i = 0; i < xyzIjData.xyzCount; i += 3) {
float x = xyz.get(i);
float y = xyz.get(i + 1);
if (Math.abs(x) < centerCoordinateMax &&
Math.abs(y) < centerCoordinateMax) {
float z = xyz.get(i + 2);
cumulativeZ += z;
numberOfPoints++;
}
}
Double distanceInMeters;
if (numberOfPoints > 0) {
distanceInMeters = cumulativeZ / numberOfPoints;
} else {
distanceInMeters = null;
}
Put simply, this code takes the average distance over a small square centered at the origin of the x and y axes.
centerCoordinateMax = 0.020 was determined to work based on observation and testing. The square typically contains 50 points in ideal conditions and fewer when held close to the floor.
I've tested this using version 2 of my tango-caminada application, and the depth measuring seems quite accurate. Standing 1/2 meter from a doorway, I slid towards the open door and the distance changed from 0.5 meters to 2.5 meters, which is the wall at the end of the hallway.
Simulating a robot being navigated, I moved the device towards a trash can in its path until there was 0.5 meters of separation, then rotated left until the distance was more than 0.5 meters, and proceeded forward. An oversimplified simulation, but the basis for object avoidance using Tango depth perception.
You can do this by using the camera intrinsics to convert XY coordinates to normalized values -- see this post - Google Tango: Aligning Depth and Color Frames - it's talking about texture coordinates, but it's exactly the same problem.
Once normalized, move to screen space (1280x720), and then the Z coordinate can be used to generate a pixel value for OpenCV to chew on. You'll need to decide on your own how to color the pixels that don't correspond to depth points, and advisedly before you use the depth information to further colorize pixels.
The main thing to remember is that the raw coordinates returned already use the basis vectors you want, i.e. you do not want to apply the pose attitude or location.
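A minimal C++/OpenCV sketch of that projection idea, assuming the XYZ floats have already been copied into a point list and that fx, fy, cx, cy are the (hypothetical) camera intrinsics you looked up; pixels without a depth sample are simply left at 0 here:
#include <opencv2/opencv.hpp>
#include <vector>

cv::Mat depthImageFromPoints(const std::vector<cv::Point3f>& points,
                             int width, int height,
                             float fx, float fy, float cx, float cy)
{
    cv::Mat depth = cv::Mat::zeros(height, width, CV_32FC1);
    for (const cv::Point3f& p : points) {
        if (p.z <= 0.0f) continue;                // skip invalid points
        int u = cvRound(fx * p.x / p.z + cx);     // pinhole projection to pixel column
        int v = cvRound(fy * p.y / p.z + cy);     // pinhole projection to pixel row
        if (u >= 0 && u < width && v >= 0 && v < height)
            depth.at<float>(v, u) = p.z;          // keep the raw Z value, not the screen-space depth
    }
    return depth;
}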
I have extracted some feature points of an image using the following code
vector<Point2f> cornersFrame1;
goodFeaturesToTrack( frame1, cornersFrame1, maxCorners, qualityLevel, minDistance, Mat(), blockSize, useHarrisDetector, k );
After that I want to read the values present at these feature points, so I am using the following code:
for(int i=0; i<cornersFrame1.size(); i++)
{
float frame1 = calculatedU.at<float>( cornersFrame1[i].x, cornersFrame1[i].y );
}
Then I get a segmentation fault.
But if I use the following code in the for loop, then it works:
float frame1 = calculatedU.at<float>( cornersFrame1[i].y, cornersFrame1[i].x );
I am confused because I think that Point2f stores pixel information as (row, col). Doesn't it?
No, it does not. All point types in OpenCV are just ordinary points as you would think of them: (x, y). When it comes to coordinates in an image, this means that x is the column and y is the row. On the other hand, at<> requires (row, column) as input. This is why you had to provide (y, x) instead of (x, y).
Just to prevent future confusion, one of the ways of using at<> is this one:
float frame1 = calculatedU.at<float>( cornersFrame1[i] );
This way you don't need to think whether you should provide (x,y) or (y,x).
What I am trying to do is implement a Skin Probability Maps algorithm for skin detection in OpenCV.
I'm stuck at the point where I should compare the SkinHistValue / NonSkinHistValue probability of each pixel with the Theta threshold, according to http://www.cse.unsw.edu.au/~icml2002/workshops/MLCV02/MLCV02-Morales.pdf and this tutorial http://www.morethantechnical.com/2013/03/05/skin-detection-with-probability-maps-and-elliptical-boundaries-opencv-wcode/
My problem lies in calculating the coordinates for the histogram value:
From the tutorial:
calcHist(&nRGB_frame,1,channels,mask,skin_Histogram,2,histSize,ranges,uniform,accumulate);
calcHist(&nRGB_frame,1,channels,~mask,non_skin_Histogram,2,histSize,ranges,uniform,accumulate);
This calculates the histograms. Then I normalize them.
And after that:
for (int i=0; i<nrgb.rows; i++) {
int gbin = cvRound((nrgb(i)[1] - 0)/range_dist[0] * hist_bins[0]);
int rbin = cvRound((nrgb(i)[2] - low_range[1])/range_dist[1] * hist_bins[1]);
float skin_hist_val = skin_Histogram.at<float>(gbin,rbin);
}
Here nrgb is my image, and I'm trying to get skin_hist_val for it. But gbin and rbin are probably calculated incorrectly, and an exception is thrown (am I running outside the array?) when it comes to
skin_Histogram.at<float>(gbin,rbin);
I have no idea at all how to calculate it correctly. Any help?