Hough circle OpenCV: coordinates of circles' centers - OpenCV

I am using this page to identify circles and their centers in my images. I know that the top-left corner is the (0,0) point. But then I noticed that the x and y coordinates returned for my circles' centers are all positive. Why is that? Shouldn't the y coordinate be negative? I am talking about the circles[i][1] values from the original code.

No, it is correct.
In computer vision, the y-axis is usually inverted: it points downward, so the origin (0,0) is the top-left corner of the image and y grows toward the bottom. Both coordinates of any point inside the image are therefore positive.
OpenCV follows this coordinate system:
Image from here.
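For illustration, here is a minimal Python sketch (the filename and the Hough parameters are placeholders, not tuned values) showing that HoughCircles returns centers in exactly these image coordinates, so both values are positive for any circle inside the image:

```python
import cv2
import numpy as np

# Load a grayscale image; "coins.png" is just a placeholder filename.
img = cv2.imread("coins.png", cv2.IMREAD_GRAYSCALE)
img = cv2.medianBlur(img, 5)

# Detect circles; the parameters below are illustrative only.
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                           param1=100, param2=30, minRadius=5, maxRadius=100)

if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        # (x, y) is measured from the top-left corner: x to the right,
        # y downward, so both values are positive inside the image.
        print(f"center=({x}, {y}), radius={r}")
```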

Related

OpenCV Stereo Photogrammetry: why is my Z axis not in line with the principal point?

I understand OpenCV's coordinate system to be as in this diagram.
The left camera of a calibrated stereo pair is located at the origin facing the Z direction.
I have a pair of 2464x2056 pixel cameras that I have calibrated (with a stereo rms of around 0.35), computed the disparity on a pair of images and reprojected this to get the 3D pointcloud. However, I've noticed that the Z axis is not in line with the optical centre of the camera.
This does kind of mess with some of the pointcloud manipulation I'm hoping to do. Is this expected, or does it indicate that something has gone wrong along the way?
Below is the pointcloud I've generated, plus the axes: the red, green and blue lines indicate the x, y and z axes respectively, coming out from the origin.
As you can see, the Z axis intercepts the pointcloud between the head and the post; this corresponds to a pixel coordinate of approximately x = 637, y = 1028 when I fix the principal point during calibration to cx = 1232, cy = 1028. When I remove the CV_FIX_PRINCIPAL_POINT flag, the principal point is calculated as approximately cx = 1310, cy = 1074, and the Z axis intercepts at around x = 310, y = 1050.
Compared to the rectified image here, where the midpoint x = 1232, y = 1028 is marked by a yellow cross and the centre of the image is over the mannequin's head, the intersection of the Z axis is significantly off from where I would expect it.
Does anyone have any idea as to why this could be occurring? Any help would be greatly appreciated.
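For reference, the relation between the principal point and the Z axis can be checked numerically: with the 4x4 reprojection matrix Q returned by cv2.stereoRectify, reprojecting a pixel (u, v) with some disparity d gives a 3D point, and the pixel at the rectified principal point should reproject to X = Y = 0, i.e. onto the Z axis. A minimal sketch, with placeholder values standing in for the real calibration output:

```python
import numpy as np

# Placeholder Q matrix in the shape cv2.stereoRectify produces; substitute
# the real one. Entries: -cx, -cy, focal length, and ~1/|Tx| (baseline term,
# whose sign depends on convention).
Q = np.array([[1.0, 0.0, 0.0, -1232.0],
              [0.0, 1.0, 0.0, -1028.0],
              [0.0, 0.0, 0.0,  2400.0],
              [0.0, 0.0, 1.0 / 120.0, 0.0]])

def reproject(u, v, disparity, Q):
    """Reproject one pixel + disparity to 3D, like cv2.reprojectImageTo3D."""
    p = Q @ np.array([u, v, disparity, 1.0])
    return p[:3] / p[3]

# A pixel at the rectified principal point should land on the Z axis (X = Y = 0).
print(reproject(1232, 1028, disparity=50.0, Q=Q))
```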

Camera pose and reflections using OpenCV's solvePnP

I'm trying to use the function solvePnP to estimate the relative position of a camera. My question is this: when choosing world coordinates, do I need to be careful to choose them so that there can be no reflections when transforming them to camera coordinates? Or will OpenCV correct that for me?
Details: I'm filming a tennis court and was originally setting the world coordinate origin to be the centre of the court, with the x-axis pointing parallel to the net towards the left, the y-axis pointing forwards along the court, and the z-axis pointing upwards. If I've understood correctly, solvePnP will transform these coordinates to a system with origin at some point behind the top-left corner of the image, with the x-axis pointing downwards in the image, the y-axis pointing to the right, and the z-axis pointing forwards into the scene. However, this transformation would definitely involve a reflection. Must I swap the x and y axes of my world coordinates to avoid this, or is it fine to leave them as they are? (Also, let me know if I'm making a big mistake and solvePnP actually puts the origin at a point behind the centre of the image rather than behind the top-left corner...)
Assuming that you have a camera calibration matrix (and that the calibration was done assuming a right-handed coordinate system all along), and correct correspondences between the tennis court features in the image and the CAD features:
You need to select the reference frame in the tennis court such that it is a right-handed coordinate system. Then your solution from solvePnP provides the pose and position of the tennis court reference frame with respect to the camera coordinate system (which is by default a right-handed coordinate system).
Hope it helps.
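A minimal sketch of what this looks like in practice, assuming four known court points expressed in a right-handed world frame (the court coordinates below use standard doubles-court dimensions, but the pixel locations and intrinsics are made-up placeholders):

```python
import cv2
import numpy as np

# Right-handed world frame on the court plane (meters), z pointing up.
object_points = np.array([[-5.485, -11.885, 0.0],   # court corners
                          [ 5.485, -11.885, 0.0],
                          [ 5.485,  11.885, 0.0],
                          [-5.485,  11.885, 0.0]], dtype=np.float64)

# Corresponding image points (placeholders).
image_points = np.array([[210.0, 620.0],
                         [1100.0, 640.0],
                         [900.0, 240.0],
                         [330.0, 230.0]], dtype=np.float64)

camera_matrix = np.array([[1200.0, 0.0, 640.0],
                          [0.0, 1200.0, 360.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)  # assume an already undistorted image

ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)

# rvec/tvec express the world (court) frame in the camera frame; both frames
# are right-handed, so R is a proper rotation (det(R) == +1), not a reflection.
R, _ = cv2.Rodrigues(rvec)
print("det(R) =", np.linalg.det(R))                      # ~ +1.0
print("camera position in world frame:", (-R.T @ tvec).ravel())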

How to rectify a detected ellipse

I am trying to find circles in images and warp them back to a canonical view (i.e. as if looking into the center). However, circles in general project to ellipses under perspective transformations. So I am first detecting ellipses, roughly doing the following (in OpenCV):
1. Find contours in the image
2. Estimate the area of each contour
3. Fit a bounding box to the contour and estimate the ellipse area as width/2 * height/2 * PI
4. Check whether the contour area and the estimated ellipse area differ by less than a threshold
Assuming I have found an ellipse by this method, how can I rectify it back to a circle such that I "undo" the perspective transform (although not the in-plane rotation, as this cannot be recovered, I guess). For example, if it were a rectangle I would just compute the homography from the 4 corners of an upright rectangle to the detected projected one.
I have no idea how to do this with an ellipse, any help is much appreciated.
Thanks
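For reference, the detection steps described in the question might look roughly like this in OpenCV's Python bindings (filename, area and threshold values are placeholders):

```python
import cv2
import numpy as np

img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)   # placeholder filename
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# OpenCV 4.x: findContours returns (contours, hierarchy).
contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

candidates = []
for cnt in contours:
    contour_area = cv2.contourArea(cnt)
    # Rotated bounding box of the contour: ((cx, cy), (w, h), angle).
    (cx, cy), (w, h), angle = cv2.minAreaRect(cnt)
    ellipse_area = np.pi * (w / 2.0) * (h / 2.0)
    # Keep contours whose area is close to that of the inscribed ellipse.
    if (contour_area > 100 and ellipse_area > 0
            and abs(contour_area - ellipse_area) / ellipse_area < 0.1):
        candidates.append(((cx, cy), (w, h), angle))
```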
A circle is indeed transformed into an ellipse by a perspective transformation, however its axes are not the same as the axes of the initial circle, as shown in this illustration:
(source: brian-curtis.com)
You can refer to this link for a detailed demonstration. As a consequence, the bounding rectangle of the ellipse is not the image of the initial circle's bounding square under the perspective transformation.
EDIT:
This means that the center and the axes of the ellipse you observe are not the images, by the perspective mapping, of the center and axes of the original circle. I tried to make a clearer illustration:
On this image, I drew in green the axes and center of the original circle, after perspective transformation, and the axes and center of the ellipse in red. On this specific example, the vertical axis is not deformed by the perspective mapping, but it would be deformed in the general case. Hence, deforming a circle by a perspective transformation gives an ellipse, but the axes and center that you see are not the axes and center of the original circle.
As a consequence, you cannot simply use the top, bottom, left and right points on the ellipse (the red points, which can easily be detected from the ellipse) to map these onto the top, bottom, left and right points of the circle because they do not correspond under the perspective mapping (the green points do, but they cannot be detected easily from the ellipse).
In the end, I don't think that it is at all possible to estimate the perspective mapping from a single detected ellipse.
This looks like an indeterminate problem.
The projection of a rectangle supplies 8 equations in 8 unknowns (the homography coefficients).
With an ellipse, you can only retrieve the center coordinates (2 DOF), the axis lengths (2 DOF) and the axis orientation (1 DOF), i.e. 5 constraints, so the homography is underdetermined.
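Concretely, cv2.fitEllipse returns just five numbers (the center coordinates, the two axis lengths and the rotation angle), whereas a homography has eight independent parameters (nine entries, defined up to scale), so a single detected conic leaves three degrees of freedom of the perspective mapping unconstrained.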

OpenCV: measuring distance between two balls in millimeters - how to improve accuracy

I also posted this topic in the Q&A forum at opencv.org, but I don't know how many experts from here read that forum, so forgive me for also trying it here.
I'm currently learning OpenCV and my current task is to measure the distance between two balls which are lying on a plate. My next step is to compare several cameras and resolutions to get a feeling for how important resolution, noise, distortion etc. are and how heavily these parameters affect the accuracy. If the community is interested, I'm happy to share the results when they are ready! The camera is placed above the plate, using a wide-angle lens. The width and height of the plate (1500 x 700 mm) and the radius of the balls (40 mm) are known.
My steps so far:
camera calibration
undistorting the image (the distortion is high due to the wide-angle lens)
findHomography: I use the corner points of the plate as input (4 points in pixels in the undistorted image) and the corner points in millimeters (starting with 0,0 in the lower left corner, up to 1500,700 in the upper right corner)
using HoughCircles to find the balls in the undistorted image
applying perspectiveTransform on the circle center points => circle center points now exist in millimeters
calculating the distance of the two center points: d = sqrt((x1-x2)^2+(y1-y2)^2)
The results: an error of around 4 mm at a distance of 300 mm, and an error of around 25 mm at a distance of 1000 mm. But if I measure a rectangle which is printed on the plate, the error is smaller than 0.2 mm, so I guess the calibration and undistortion are working well.
I thought about this and figured out three possible reasons:
findHomography was applied to points lying directly on the plate, whereas the center points of the balls should be measured at the equatorial height, i.e. in a plane raised by the ball radius => how can I change the result of findHomography to account for this, i.e. to "move" the plane? The radius in mm is known.
the error increases with increasing distance of the ball from the optical center, because the camera will not see the ball from the top, so the center point in the 2D projection of the image is not the same as in the 3D world - it will be projected further towards the borders of the image. => are there any geometrical operations which I can apply to the found center to correct the value?
during undistortion there's probably a loss of information, because I produce a new undistorted image and go back to pixel accuracy although I have many floating-point values in the distortion coefficients. Should I search for the balls in the distorted image and transform only the center points with the distortion coefficients? But I don't know what the code for this task would be.
I hope someone can help me to improve this and I hope this topic is interesting for other OpenCV-starters.
Thanks and best regards!
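To make the discussion concrete, here is a rough sketch of the pipeline described above (the intrinsics, plate corner pixels and Hough parameters are placeholders, not the poster's actual values):

```python
import cv2
import numpy as np

# camera_matrix / dist_coeffs would come from a prior cv2.calibrateCamera run;
# the numbers here are placeholders.
camera_matrix = np.array([[1400.0, 0.0, 960.0],
                          [0.0, 1400.0, 540.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.array([-0.30, 0.10, 0.0, 0.0, 0.0])

img = cv2.imread("plate.png")                          # placeholder filename
undist = cv2.undistort(img, camera_matrix, dist_coeffs)

# Plate corners in the undistorted image (pixel placeholders) and in mm.
corners_px = np.array([[212.0, 958.0], [1706.0, 951.0],
                       [1712.0, 254.0], [208.0, 247.0]], dtype=np.float32)
corners_mm = np.array([[0.0, 0.0], [1500.0, 0.0],
                       [1500.0, 700.0], [0.0, 700.0]], dtype=np.float32)
H, _ = cv2.findHomography(corners_px, corners_mm)

# Find the two balls in the undistorted image.
gray = cv2.cvtColor(undist, cv2.COLOR_BGR2GRAY)
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                           param1=100, param2=30, minRadius=20, maxRadius=80)
centers_px = circles[0][:2, :2].reshape(-1, 1, 2).astype(np.float32)

# Map the centers to plate coordinates in mm and measure the distance.
centers_mm = cv2.perspectiveTransform(centers_px, H).reshape(-1, 2)
print(f"distance = {np.linalg.norm(centers_mm[0] - centers_mm[1]):.1f} mm")
```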
Here are some thoughts to help you along... By no means "the answer", though.
First a simple one. If you have calibrated your image in mm at a particular plane that is distance D away, then points that are r closer will appear larger than they are. To get from measured coordinates to actual coordinates, you use
Actual = measured * (D-r)/D
So since the centers of the spheres are radius r above the plane, the above formula should answer part 1 of your question.
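For illustration (the camera height is not given in the question): if the camera were D = 2000 mm above the plate and the balls have r = 40 mm, a distance of 1000 mm measured in the plate plane would correspond to 1000 * (2000 - 40) / 2000 = 980 mm, i.e. a 20 mm correction, which is the same order of magnitude as the 25 mm error reported at that distance.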
Regarding the second question: if you think about it, the center of the sphere that you see should be in the right place "in the plane of the center of the sphere", even though you look at it from an angle. Draw yourself a picture to convince yourself this is so.
Third question: if you find the coordinates of the spheres in the distorted image, you should be able to map just those points into the corrected image (undistortPoints does this for sparse points) and then apply your homography with perspectiveTransform. This may improve accuracy a little bit - but I am surprised at the size of the errors you see. How large is a single pixel at the largest distance (1000 mm)?
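A minimal sketch of that per-point approach (all values are placeholders; in practice you would reuse the intrinsics from your calibration and the homography from your findHomography step):

```python
import cv2
import numpy as np

# Placeholder intrinsics and homography; substitute the real ones.
camera_matrix = np.array([[1400.0, 0.0, 960.0],
                          [0.0, 1400.0, 540.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.array([-0.30, 0.10, 0.0, 0.0, 0.0])
H = np.eye(3)  # stand-in for the plate homography

# Ball centers detected in the *distorted* image (placeholder pixels).
centers_raw = np.array([[[640.0, 512.0]], [[1301.0, 488.0]]], dtype=np.float32)

# Undistort only these points; P=camera_matrix keeps the result in pixel units.
centers_undist = cv2.undistortPoints(centers_raw, camera_matrix, dist_coeffs,
                                     P=camera_matrix)

# Then map them to plate coordinates in mm with the homography, as before.
centers_mm = cv2.perspectiveTransform(centers_undist, H).reshape(-1, 2)
print(np.linalg.norm(centers_mm[0] - centers_mm[1]), "mm")
```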
EDIT
You asked about elliptical projections etc. Basically, if you think of the optical center of the camera as a light source, and look at the shadow of the ball onto the plane as your "2D image", you can draw a picture of the rays that just hit the sides of the ball, and determine the different angles:
It is easy to see that P (the midpoint of A and B) is not the same as C (the projection of the center of the sphere). A bit more trig will show you that the error C - (A+B)/2 increases with the lateral offset x of the ball from the optical axis and decreases with D. If you know A and B, you can calculate the correct position of C (given D) from:
C = D * tan( (atan(B/D) + atan(A/D)) / 2 )
The error becomes larger as D is smaller and/or x is larger. Note D is the perpendicular (shortest) distance from the lens to the object plane.
This only works if the camera is acting like a "true lens" - in other words, there is no pincushion distortion, and a rectangle in the image plane maps into a rectangle on the sensor. The above combined with your own idea to fit in the uncorrected ('pixel') space, then transform the centers found with perspectiveTransform, ought to get you all the way there.
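A small numeric sketch of that correction (all distances are made-up placeholders, in mm, measured in the plane of the sphere centers):

```python
from math import atan, tan

D = 2000.0            # perpendicular lens-to-plane distance (placeholder)
A, B = 930.0, 1010.0  # where the two tangent rays hit the plane (placeholders)

naive = (A + B) / 2.0                                 # midpoint of the silhouette
corrected = D * tan((atan(B / D) + atan(A / D)) / 2.0)
print(naive, corrected)  # the corrected center lies slightly closer to the axis
```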
See what you can do with that!

Why does DirectX use a flipped Y axis?

I am saving my driven X/Y coordinates and then using a function that converts the coordinates to meters and adds 1280 to each point (so it will fit nicely into a 2560x2560 image), and then drawing a polygon between the 'points', resulting in some sort of racing line. But once I have generated the polygon and saved it as an image, it is vertically flipped somehow. Flipping the image vertically makes it match the track bitmaps perfectly. I was told this is because DirectX internally has the Y axis flipped. Why does DirectX use a flipped Y axis?
Well, the question is, does DirectX have a flipped Y-axis or does the image?
DirectX uses a 3D/4D coordinate system where the X-axis points to the right and the Y-axis points upwards when no transformation is applied. This is because the screen (where the Y-axis points downwards) is only the last stage that has to process the image; every step before that uses the coordinate system with the upward Y-axis. Since Direct3D is designed for 3D worlds, a coordinate system that is aligned like the world, and like most coordinate systems in maths, is much more convenient for the programmer and designer. Imagine you were creating a 3D model: wouldn't it be kind of weird to design it so that the Y-axis points downwards?
When you have no transformation at all that would introduce perspective and so on, you have the same coordinate system. Ignoring the Z-axis, the top-left corner is (-1, 1) and the bottom-right corner is (1, -1). This matches the coordinate system used in e.g. maths. In the end, this coordinate system is transformed by the viewport, which results in the top-left corner becoming (0, 0) and the bottom-right corner becoming (ResolutionX, ResolutionY).
So all in all, the reason why the Y-axis points upwards is that Direct3D's main purpose is to describe worlds in a convenient way independently of the screen's physical attributes.
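A small sketch of that final viewport step (pure arithmetic, no DirectX calls), showing where the flip happens when going from the y-up normalized device coordinates to y-down pixel coordinates:

```python
def ndc_to_screen(x_ndc, y_ndc, width, height):
    """Map normalized device coordinates (y up, range [-1, 1]) to pixels (y down)."""
    x_px = (x_ndc + 1.0) / 2.0 * width
    y_px = (1.0 - y_ndc) / 2.0 * height   # the y-axis is flipped here
    return x_px, y_px

# Top-left NDC corner (-1, 1) -> pixel (0, 0); bottom-right (1, -1) -> (width, height).
print(ndc_to_screen(-1.0, 1.0, 2560, 2560))   # (0.0, 0.0)
print(ndc_to_screen(1.0, -1.0, 2560, 2560))   # (2560.0, 2560.0)
```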
