I have to detect a pattern of 6 circles using OpenCV. I have detected the circles and their centroids by using thresholding and the contour functions in OpenCV.
Now I have to define the relation between these circles in a way that is invariant to scale and rotation. With this I would be able to detect the pattern in various views. I will use this pattern to determine the object pose.
How can I achieve scale/rotation invariance? Do you have any reference I could read about it?
To make your pattern invariant to rotation & scale, you have to normalize the direction and the scale when detecting your pattern. Here is a simple algorithm to achieve this:
detect the centers and circle sizes (you say you have already achieved this - good!)
compute the average center using a simple mean, and express all the centers relative to this mean
find the farthest center using a simple norm (Euclidean is good enough)
scale the center positions and the circle sizes so that this maximum distance is 1.0
rotate the centers so that the coordinates of the farthest one are (1.0, 0)
you're done. You are now the proud owner of a scale/rotation invariant pattern detector!! Congratulations!
Now you can find patterns, transform them as suggested, and compare center positions & circle sizes.
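A minimal C++ sketch of that normalization, assuming the centers and circle sizes are already extracted (the struct and function names below are just illustrative):

#include <opencv2/core.hpp>
#include <cmath>
#include <vector>

struct CirclePattern {
    std::vector<cv::Point2f> centers;   // normalized centers
    std::vector<float> sizes;           // normalized circle sizes
};

CirclePattern normalizePattern(std::vector<cv::Point2f> centers, std::vector<float> sizes)
{
    // 1. Compute the average center and express all centers relative to it.
    cv::Point2f mean(0.f, 0.f);
    for (const auto& c : centers) mean += c;
    mean *= 1.f / static_cast<float>(centers.size());
    for (auto& c : centers) c -= mean;

    // 2. Find the farthest center (Euclidean norm) and scale so its distance becomes 1.0.
    size_t farthest = 0;
    float maxDist = 0.f;
    for (size_t i = 0; i < centers.size(); ++i) {
        float d = std::hypot(centers[i].x, centers[i].y);
        if (d > maxDist) { maxDist = d; farthest = i; }
    }
    for (auto& c : centers) c *= 1.f / maxDist;
    for (auto& s : sizes)   s /= maxDist;

    // 3. Rotate so that the farthest center lands on (1.0, 0).
    float theta = std::atan2(centers[farthest].y, centers[farthest].x);
    float cs = std::cos(-theta), sn = std::sin(-theta);
    for (auto& c : centers)
        c = cv::Point2f(cs * c.x - sn * c.y, sn * c.x + cs * c.y);

    return { centers, sizes };
}

Two normalized patterns can then be compared directly on center positions and circle sizes, for example with a small tolerance per coordinate.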
It is not entirely clear to me if you need to find the rotation, or merely get rid of it, or detect if the circles actually form the pattern you linked. Either way, the answer is much the same.
I would start by finding the two circles that have only one close neighbour. For each circle centroid, calculate the distance to the closest two neighbours. If the distances differ by more than, say, 10%, the centroid belongs to an "end" circle (one of the top ones in your link).
Now that you have found the two end circles, rotate the pattern so that their centroids lie on a horizontal line. If the other centroids are now above them, rotate another 180 degrees so that the pattern ends up in the orientation you want.
Now you can calculate the scaling from the average inter-centroid distance.
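A hedged sketch of the end-circle test (the function name is illustrative); it marks a centroid as an "end" circle when its two nearest-neighbour distances differ by more than roughly 10%:

#include <opencv2/core.hpp>
#include <algorithm>
#include <cmath>
#include <vector>

std::vector<int> findEndCircles(const std::vector<cv::Point2f>& centroids)
{
    std::vector<int> ends;
    for (size_t i = 0; i < centroids.size(); ++i) {
        std::vector<float> dists;
        for (size_t j = 0; j < centroids.size(); ++j)
            if (j != i)
                dists.push_back(std::hypot(centroids[i].x - centroids[j].x,
                                           centroids[i].y - centroids[j].y));
        std::sort(dists.begin(), dists.end());
        // "End" circle: its two closest neighbours are at clearly different distances.
        if (dists.size() >= 2 && dists[1] > 1.10f * dists[0])
            ends.push_back(static_cast<int>(i));
    }
    return ends;   // expected to hold exactly two indices for this pattern
}

The rotation that makes the two end centroids a and b horizontal is then simply -atan2(b.y - a.y, b.x - a.x).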
Hope that helps.
Your question sounds exactly like what the SURF algorithm does. It finds points of interest and describes them in a way that is invariant to rotation and scale, and it can find the same object in other pictures.
Just search for OpenCV and SURF.
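A rough sketch of SURF detection and matching; note that SURF lives in the opencv_contrib xfeatures2d module and may require a build with non-free algorithms enabled (the image file names are placeholders):

#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/xfeatures2d.hpp>
#include <vector>

int main()
{
    cv::Mat pattern = cv::imread("pattern.png", cv::IMREAD_GRAYSCALE);
    cv::Mat scene   = cv::imread("scene.png",   cv::IMREAD_GRAYSCALE);

    // Detect keypoints and compute rotation/scale-invariant descriptors.
    cv::Ptr<cv::xfeatures2d::SURF> surf = cv::xfeatures2d::SURF::create(400.0);  // Hessian threshold
    std::vector<cv::KeyPoint> kpPattern, kpScene;
    cv::Mat descPattern, descScene;
    surf->detectAndCompute(pattern, cv::noArray(), kpPattern, descPattern);
    surf->detectAndCompute(scene,   cv::noArray(), kpScene,   descScene);

    // SURF descriptors are floating point, so match with L2 distance.
    cv::BFMatcher matcher(cv::NORM_L2);
    std::vector<cv::DMatch> matches;
    matcher.match(descPattern, descScene, matches);
    return 0;
}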
For a project, I need to store circles detected on some photos. The problem is that some of these photos are taken from an angle, meaning the circles are ellipses. Is it possible to somehow turn the ellipses into circles?
I thought of rectifying the ellipse, then transforming the rectangle to a square. But this seems to be an underdetermined problem: there are too many possible variations of my approach, and the results differ for each one.
To find a perspective transform, you need 4 pairs of corresponding coordinates: points in the distorted picture and their ideal positions after the perspective is corrected.
In that case you can calculate the perspective transform matrix with the getPerspectiveTransform function and apply it to correct the whole picture. Example:
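A minimal sketch of that call sequence together with warpPerspective; the point coordinates and output size below are placeholders:

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

cv::Mat correctPerspective(const cv::Mat& distorted)
{
    // 4 corresponding pairs: where the points are in the distorted image,
    // and where they should end up after correction.
    std::vector<cv::Point2f> src = { {62, 71}, {418, 59}, {442, 443}, {29, 438} };
    std::vector<cv::Point2f> dst = { {0, 0}, {400, 0}, {400, 400}, {0, 400} };

    cv::Mat H = cv::getPerspectiveTransform(src, dst);   // 3x3 perspective matrix
    cv::Mat corrected;
    cv::warpPerspective(distorted, corrected, H, cv::Size(400, 400));
    return corrected;
}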
Given a photo containing a circle, for example this photo of a fountain:
is it possible to define the 3D position and rotation of the fountain in relation to the camera?
I realise we have to define the scale, so let's say the fountain is 2m wide (the diameter of the circle formed by the inner rim of the fountain is 2m).
So assuming the circle is a perfect circle, and defining the diameter as 2m, is it possible to determine how the circle and the camera relate spatially? I don't know the camera matrix or anything; the only information I have is the picture.
I specifically want to determine the 3D coordinates of a given pixel on the rim of the fountain.
What would be the math and/or OpenCV code to do this?
A circle under perspective is an ellipse. So basically you need an ellipse detector.
This algorithm should work:
Detect all ellipses in the given image.
Filter out the ellipses that you think were not circles originally. (This is not possible using just one camera, so you have to rely on prior knowledge, e.g. knowing that you are taking a photo of a circle.)
Hmm, I stopped typing here, grabbed paper & pen, and started figuring out how to estimate the homography, and it is not that easy! You should treat the circle as a special case of an ellipse and then try to construct a linear system of equations. However, a quick search turned up:
https://www.researchgate.net/publication/265212988_Homography_estimation_using_one_ellipse_correspondence_and_minimal_additional_information
http://www.macs.hw.ac.uk/bmvc2006/papers/306.pdf
Seems like a very interesting topic; I am going to spend some time on it later!
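For step 1, a rough sketch of ellipse detection with findContours and fitEllipse, assuming a binarized input image (the function name and the filtering threshold are illustrative):

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <cmath>
#include <vector>

std::vector<cv::RotatedRect> detectEllipses(const cv::Mat& binary)
{
    std::vector<std::vector<cv::Point>> contours;
    cv::Mat work = binary.clone();                       // findContours may modify its input
    cv::findContours(work, contours, cv::RETR_LIST, cv::CHAIN_APPROX_NONE);

    std::vector<cv::RotatedRect> ellipses;
    for (const auto& c : contours) {
        if (c.size() < 5) continue;                      // fitEllipse needs at least 5 points
        cv::RotatedRect e = cv::fitEllipse(c);
        // Keep contours whose area roughly matches the fitted ellipse area.
        double ellipseArea = CV_PI * (e.size.width / 2.0) * (e.size.height / 2.0);
        if (std::abs(ellipseArea - cv::contourArea(c)) < 0.2 * ellipseArea)
            ellipses.push_back(e);
    }
    return ellipses;
}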
OpenCV has capabilities to compensate for distortion in patterns, such as this board, for example:
Every example I have ever seen for this process does it with grids or squares. I would like to know if something similar exists for a single circle. My practical case is that I detect an ellipse, and I need to calculate the angle between the plane of this ellipse and the projection plane where the ellipse would be projected as a circle. I managed to achieve that in my own code, but I would like to know if there is something built into the library for that purpose.
Use the ellipse axes to your advantage
I don't know of any "circular projection" as you call it, but I think you can rephrase your problem so that you already have the solution.
Images make any answer SO cool.
Forget the ellipse, take the axes
A circle can be thought of as two unit vectors defining a plane.
The axes of the ellipse you estimate are the projections of that unit reference frame.
Projecting back and forth is then just a matter of applying the transformation described by the estimated axis vectors.
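As a hedged sketch of that idea: under an orthographic (weak-perspective) approximation, which is an assumption not stated in the question, the tilt of the circle's plane relative to the image plane follows directly from the ratio of the fitted ellipse axes:

#include <opencv2/core.hpp>
#include <algorithm>
#include <cmath>

double tiltAngleDegrees(const cv::RotatedRect& ellipse)
{
    double minorAxis = std::min(ellipse.size.width, ellipse.size.height);
    double majorAxis = std::max(ellipse.size.width, ellipse.size.height);
    // For a circle seen under orthographic projection, cos(tilt) ~ minor / major.
    return std::acos(minorAxis / majorAxis) * 180.0 / CV_PI;
}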
Given two rectangles, we know the positions of the four corners, the widths, the heights, and the angles.
How to compute the overlapping ratio of these two rectangles?
Can you please help me out?
A convenient way is by the Sutherland-Hodgman polygon clipping algorithm. It works by clipping one of the polygons with the four supporting lines (half-planes) of the other. In the end you get the intersection polygon (at worst an octagon) and find its area by the polygon area formula.
You'll make clipping easier by counter-rotating the polygons around the origin so that one of them becomes axis parallel. This won't change the area.
Note that this approach generalizes easily to two general convex polygons, taking O(N·M) operations. G.T. Toussaint, using the Rotating Caliper principle, reduced the workload to O(N+M), and B. Chazelle & D. P. Dobkin showed that a nonempty intersection can be detected in O(log(N+M)) operations. This shows that there is probably a little room for improvement for the S-H clipping approach, even though N=M=4 is a tiny problem.
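A compact sketch of the Sutherland-Hodgman approach for two convex quads (e.g. the corner lists of the two rectangles), assuming both polygons are given counter-clockwise in the coordinate convention used; reverse the vertex order otherwise:

#include <opencv2/core.hpp>
#include <cmath>
#include <vector>

// Area by the shoelace (polygon area) formula.
static double polygonArea(const std::vector<cv::Point2f>& poly)
{
    double a = 0.0;
    for (size_t i = 0; i < poly.size(); ++i) {
        const cv::Point2f& p = poly[i];
        const cv::Point2f& q = poly[(i + 1) % poly.size()];
        a += p.x * q.y - q.x * p.y;
    }
    return std::abs(a) / 2.0;
}

// Clip 'subject' against the half-plane to the left of the directed edge a->b.
static std::vector<cv::Point2f> clipAgainstEdge(const std::vector<cv::Point2f>& subject,
                                                cv::Point2f a, cv::Point2f b)
{
    auto cross = [](cv::Point2f u, cv::Point2f v) { return u.x * v.y - u.y * v.x; };
    auto inside = [&](cv::Point2f p) { return cross(b - a, p - a) >= 0.f; };
    auto intersect = [&](cv::Point2f p, cv::Point2f q) {
        float t = cross(b - a, p - a) / cross(b - a, p - q);
        return p + t * (q - p);
    };

    std::vector<cv::Point2f> out;
    for (size_t i = 0; i < subject.size(); ++i) {
        cv::Point2f prev = subject[(i + subject.size() - 1) % subject.size()];
        cv::Point2f cur  = subject[i];
        bool prevIn = inside(prev), curIn = inside(cur);
        if (curIn) {
            if (!prevIn) out.push_back(intersect(prev, cur));
            out.push_back(cur);
        } else if (prevIn) {
            out.push_back(intersect(prev, cur));
        }
    }
    return out;
}

// Intersection area: clip one polygon by each edge of the other, then measure.
double intersectionArea(std::vector<cv::Point2f> subject, const std::vector<cv::Point2f>& clip)
{
    for (size_t i = 0; i < clip.size() && !subject.empty(); ++i)
        subject = clipAgainstEdge(subject, clip[i], clip[(i + 1) % clip.size()]);
    return subject.size() >= 3 ? polygonArea(subject) : 0.0;
}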
Use the rotatedRectangleIntersection function to get the intersection contour, use the contourArea function to get its area, and compute the ratios.
https://docs.opencv.org/3.0-beta/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html#rotatedrectangleintersection
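A minimal sketch of that approach; a convexHull pass is added so the intersection vertices are in contour order before contourArea:

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

double overlapRatio(const cv::RotatedRect& a, const cv::RotatedRect& b)
{
    std::vector<cv::Point2f> inter;
    int result = cv::rotatedRectangleIntersection(a, b, inter);
    if (result == cv::INTERSECT_NONE || inter.size() < 3)
        return 0.0;

    std::vector<cv::Point2f> hull;
    cv::convexHull(inter, hull);          // the intersection of two rectangles is convex
    double interArea = cv::contourArea(hull);
    double unionArea = a.size.area() + b.size.area() - interArea;
    return interArea / unionArea;         // intersection over union; divide by a.size.area() for a one-sided ratio
}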
Let's say you have rectangles A and B as axis-aligned cv::Rect; then you can use the operation:
intersection_area = (A & B).area();
From this area you can calculate the respective ratio towards one of the rectangles. Note that the & operator only applies to axis-aligned cv::Rect; for rotated rectangles use rotatedRectangleIntersection as described above. There are more involved, more general ways to do this as well.
What is the distance transform? What is the theory behind it? If I have two similar images but in different positions, how does the distance transform help in overlapping them? The results the distance transform function produces look as if they are divided in the middle - is that to find the center of one image so that the other is overlapped just half way? I have looked into the OpenCV documentation but it's still not clear.
Look at the picture below (you may want to increase your monitor brightness to see it better). The picture shows the distance from the red contour depicted with pixel intensities, so in the middle of the image, where the distance is maximum, the intensities are highest. This is a manifestation of the distance transform. Here is an immediate application: the green shape is a so-called active contour, or snake, which moves according to the gradient of the distance from the contour (while also following some other constraints) and curls around the red outline. Thus one application of the distance transform is shape processing.
Another application is text recognition - one of the powerful cues for text is a stable stroke width. The distance transform run on segmented text can confirm this. A corresponding method is called the stroke width transform (SWT).
As for aligning two rotated shapes, I am not sure how you can use the DT. You can find the center of a shape to rotate it about, but you can rotate it about any other point as well; the difference is just a translation, which is irrelevant if you run matchTemplate to match them in the correct orientation.
Perhaps if you upload your images it will be clearer what to do. In general you can match them as a whole, or by features (which is more robust to various deformations or perspective distortions), or even using outlines/silhouettes if there are only a few features. Finally, you can figure out the orientation of your object (if it has a dominant orientation) by running PCA or fitting an ellipse (as a rotated rectangle):
cv::RotatedRect rect = cv::fitEllipse(points2D);  // points2D: the 2D points (contour) of the object
float angle_to_rotate = rect.angle;               // orientation of the fitted ellipse, in degrees
The distance transform is an operation that works on a single binary image and fundamentally measures the distance from every empty point (zero pixel) to the nearest boundary point (non-zero pixel).
An example is provided here and here.
The measurement can be based on various definitions, calculated discretely or precisely: e.g. Euclidean, Manhattan, or Chessboard. Indeed, the parameters in the OpenCV implementation allow some of these, and control their accuracy via the mask size.
The function can return the output measurement image (floating point) - as well as a labelled connected components image (a Voronoi diagram). There is an example of it in operation here.
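A minimal sketch of calling it in OpenCV; the file name is a placeholder, and the input is inverted because OpenCV measures, for each non-zero pixel, the distance to the nearest zero pixel:

#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>

int main()
{
    cv::Mat shape = cv::imread("mask.png", cv::IMREAD_GRAYSCALE);  // white shape on black
    cv::Mat inverted;
    cv::bitwise_not(shape, inverted);     // make the shape pixels zero so distances are measured to them

    // Plain distance map (32-bit float), exact Euclidean metric.
    cv::Mat dist;
    cv::distanceTransform(inverted, dist, cv::DIST_L2, cv::DIST_MASK_PRECISE);

    // Variant that also returns the label (Voronoi) image; the precise mask is not
    // supported here, so a 5x5 mask is used instead.
    cv::Mat dist2, labels;
    cv::distanceTransform(inverted, dist2, labels, cv::DIST_L2, cv::DIST_MASK_5,
                          cv::DIST_LABEL_CCOMP);

    cv::normalize(dist, dist, 0, 1.0, cv::NORM_MINMAX);  // scale to [0, 1] for display
    return 0;
}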
I see from another question you asked recently that you are looking to register two images together. I don't think the distance transform is really what you are looking for here. If you are looking to align a set of points, I would instead suggest you look at techniques like Procrustes analysis, Iterative Closest Point, or RANSAC.