OpenCV Radon Checkerboard - opencv

Are there any OpenCV camera calibration methods where the target can be larger than the current camera's FoV?
In particular, I use cv2.findChessboardCornersSB with the Radon checkerboard (the OpenCV calibration pattern). My understanding is that it uses the three marker dots in the center of the board to determine the board's position, so only the center has to be in the image, not all the squares. That way the pattern can extend past the edge of the image, which should give a better distortion model at the boundaries.
Please tell me if I am right, or if this cannot work at all: I do get the checkerboard corners when the full calibration target is in the image, but no corners are detected when only the three marker dots are in the image.
Edit:
I call the corner-detection function as follows:
import cv2

img = cv2.imread(path)  # path to the calibration image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ret, corners = cv2.findChessboardCornersSB(gray, (3, 3), cv2.CALIB_CB_LARGER + cv2.CALIB_CB_MARKER)
Edit2:
Full pattern - works.
Not full pattern - doesn't work
Thanks

Try adding CALIB_CB_LARGER and CALIB_CB_MARKER to flags when calling findChessboardCornersSB().
According to the doc, CALIB_CB_MARKER means that the pattern must have a marker, and makes calibration more accurate. CALIB_CB_LARGER means the detected pattern is allowed to be larger than patternSize.
Then, you can set the parameter patternSize to the minimum visible size, like 5x6, even if the true size is 10x10 or larger. The pattern will be detected if the marker and 5x6 corners are visible, while corners exceeding the 5x6 zone will also be detected.
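A minimal sketch of that idea in Python (the file name is only illustrative, and the true board is assumed to be larger than 3x3):

import cv2

gray = cv2.imread("board.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file name

# patternSize is the minimum visible grid; CALIB_CB_LARGER lets the detector
# grow beyond it, CALIB_CB_MARKER requires the three-dot Radon marker.
found, corners = cv2.findChessboardCornersSB(
    gray, (3, 3), flags=cv2.CALIB_CB_MARKER | cv2.CALIB_CB_LARGER)

if found:
    print(corners.shape[0], "corners detected")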
As an alternative solution, you can have a look at ChArUco Boards.
Edit: I tested with your image, with flags set to CALIB_CB_MARKER|CALIB_CB_LARGER, patternSize set to 3x3, and the colors inverted; 78 corners are detected.
It seems that the color of the marker is the problem.
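If the marker comes out with the wrong polarity, a quick check is to invert the grayscale image before detection (same hypothetical setup as the sketch above):

gray_inv = cv2.bitwise_not(gray)  # swap black and white so the marker polarity matches
found, corners = cv2.findChessboardCornersSB(
    gray_inv, (3, 3), flags=cv2.CALIB_CB_MARKER | cv2.CALIB_CB_LARGER)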

Related

Rectangle detection in noisy contours

I'm trying to build an algorithm that calculates the dimensions of slabs (in pixel units as of now). I tried masking, but there is no single HSV color range that works for all the test cases, as the slabs are of varying colors. I tried Otsu thresholding as well, but it didn't work very well either...
Now I'm trying my hand with canny edge detection. The original image, and the image after canny-edge look like this:
I used dilation to make the central region a uniform white region, and then used contour detection. I identified the contour having the maximum area as the contour of interest. The resulting contours are a bit noisy, because the canny edge detection also included some background stuff that was irrelevant:
I used cv2.boundingRect() to estimate the height and width of the rectangle, but it keeps returning the height and width of the entire image. I presume this is because it works by calculating (max(x)-min(x),max(y)-min(y)) for each (x,y) in the contour, and in my case the resulting contour has some pixels touching the edges of the image, and so this calculation simply results in (image width, image height).
I am trying to get better images to work with, but assuming all images are like this, i.e. have noisy contours, what would be an alternative approach to detect the dimensions of the white rectangular region obtained after dilating?
To get the corner points of the rectangle, use this:
p = cv2.arcLength(cnt, True)  # cnt is the rectangle contour
appr = cv2.approxPolyDP(cnt, 0.01 * p, True)  # appr contains the 4 corner points
# draw the rect
cv2.drawContours(img, [appr], 0, (0, 255, 0), 2)
The appr variable contains the corner points of the rectangle. You still need to do some more cleaning to get better results, but cv2.boundingRect() is not a good solution for your case.
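Put together, a rough end-to-end sketch of this approach (the file name and the Canny/dilation parameters are placeholders you would tune for your images):

import cv2
import numpy as np

img = cv2.imread("slab.png")  # hypothetical file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

edges = cv2.Canny(gray, 50, 150)  # edge map (thresholds are placeholders)
edges = cv2.dilate(edges, np.ones((5, 5), np.uint8), iterations=2)

# OpenCV 4.x returns (contours, hierarchy); pick the contour with the largest area
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnt = max(contours, key=cv2.contourArea)

p = cv2.arcLength(cnt, True)
appr = cv2.approxPolyDP(cnt, 0.01 * p, True)

# side lengths (in pixels) between consecutive approximated corners
pts = appr.reshape(-1, 2).astype(float)
sides = [np.linalg.norm(pts[i] - pts[(i + 1) % len(pts)]) for i in range(len(pts))]
print("approx corner count:", len(pts), "side lengths:", sides)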

How does meanshift tracking work? (using histograms)

I know that the mean-shift algorithm calculates the mean of a pixel density and checks whether the center of the ROI coincides with this point. If not, it moves the ROI center to the mean and checks again, like in this picture:
For a density of points it is clear how to find the mean. But it surely can't just take the mean of a histogram and use that as the new position. How can this algorithm work using a color histogram?
The feature space in your image is 2D.
Say you have an intensity image (so it's 1D) then you would just have a line (e.g. from 0 to 255) on which the points are located. The circles shown above would just be line segments on that [0,255] line. Depending on their means, these line segments would then shift, just like the circles do in 2D.
You talked about color histograms, so I assume you are talking about RGB.
In that case your feature space is 3D, so you have a sphere instead of a line segment or circle. Your axes are R,G,B and pixels from your image are points in that 3D feature space. You then still look where the mean of a sphere is, to then shift the center towards that mean.
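In OpenCV, the classic mean-shift tracking recipe makes this concrete: the color histogram of the initial ROI is back-projected onto each new frame, turning the frame into a probability (density) image, and cv2.meanShift then climbs that density. A minimal sketch, assuming a video file and a hand-picked initial window (both placeholders):

import cv2
import numpy as np

cap = cv2.VideoCapture("video.mp4")  # hypothetical video file
ok, frame = cap.read()

x, y, w, h = 200, 150, 80, 80  # hand-picked initial ROI (placeholder values)
roi = frame[y:y + h, x:x + w]
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
roi_hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])  # hue histogram of the ROI
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
track_window = (x, y, w, h)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # back-projection: each pixel gets the histogram value of its hue,
    # i.e. how likely it is to belong to the tracked object
    dst = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    # mean shift moves the window towards the mean of this probability image
    ret, track_window = cv2.meanShift(dst, track_window, term_crit)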

How do I change the default checkerboard blocksize in OpenCV

I'm currently experimenting with OpenCV's calibration toolbox and I'm using a default checkerboard pattern to calibrate a camera. I want to use larger checkerboard blocks so that I can stand farther away from the camera without affecting OpenCV's ability to detect the corners.
As I understand it, OpenCV is pre-programmed with default block-size values. My question is: is there a way to change this default block-size value in the code? And where would I change this? TIA
OpenCV does not make any assumption on the physical size or the pattern size of your pattern.
That is, you can have any pattern with R rows and C columns.
It also doesn't matter if each block is 1 cm or 1 m.
The only thing you give to the calibrateCamera function is the objectPoints and imagePoints.
The array dimensions (sizes) of these parameters correspond to the number of corners of your pattern.
objectPoints should contain 3D coordinates (well, planar coordinates in your case, by setting Z=0) of your checkerboard corners. These corners should be scaled to the physical size of your checkerboard. That is, if a corner has row-column index (3,1) and each block side is 3 cm, then the 3D coordinate would be (0.09, 0.03, 0.00).
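As a sketch of how that scaling is usually done (the 9x6 inner-corner count and 3 cm square size are just example values):

import numpy as np

cols, rows = 9, 6  # inner-corner counts of the example pattern
square = 0.03      # block side in metres (3 cm)

objp = np.zeros((rows * cols, 3), np.float32)
objp[:, :2] = np.mgrid[0:cols, 0:rows].T.reshape(-1, 2) * square
# objp now holds (X, Y, 0) in metres for every corner; pass one copy of it per
# calibration image to cv2.calibrateCamera together with the detected imagePoints.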

Understanding Distance Transform in OpenCV

What is the distance transform? What is the theory behind it? If I have 2 similar images but in different positions, how does the distance transform help in overlapping them? The results that the distance transform function produces look as if they are divided in the middle: is it meant to find the center of one image so that the other is overlapped just halfway? I have looked into the documentation of OpenCV but it's still not clear.
Look at the picture below (you may want to increase your monitor brightness to see it better). The picture shows the distance from the red contour depicted with pixel intensities; in the middle of the image, where the distance is maximum, the intensities are highest. This is a manifestation of the distance transform. Here is an immediate application: the green shape is a so-called active contour, or snake, which moves according to the gradient of distances from the contour (while also following some other constraints) and curls around the red outline. Thus one application of the distance transform is shape processing.
Another application is text recognition: one of the powerful cues for text is a stable stroke width. The distance transform run on segmented text can confirm this. A corresponding method is called the stroke width transform (SWT).
As for aligning two rotated shapes, I am not sure how you can use the DT. You can find the center of a shape to rotate it about, but you can rotate it about any other point as well. The difference will just be a translation, which is irrelevant if you run matchTemplate to match them in the correct orientation.
Perhaps if you upload your images it will be clearer what to do. In general you can match them as a whole, or by features (which is more robust to various deformations or perspective distortions), or even using outlines/silhouettes if there are only a few features. Finally, you can figure out the orientation of your object (if it has a dominant orientation) by running PCA or fitting an ellipse (as a rotated rectangle).
cv::RotatedRect rect = cv::fitEllipse(points2D);
float angle_to_rotate = rect.angle;
The distance transform is an operation that works on a single binary image and fundamentally seeks to measure the distance from every empty point (zero pixel) to the nearest boundary point (non-zero pixel).
An example is provided here and here.
The measurement can be based on various definitions, calculated discretely or precisely: e.g. Euclidean, Manhattan, or Chessboard. Indeed, the parameters in the OpenCV implementation allow some of these, and control their accuracy via the mask size.
The function can return the output measurement image (floating point) - as well as a labelled connected components image (a Voronoi diagram). There is an example of it in operation here.
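A minimal sketch of the OpenCV calls (the input just needs to be an 8-bit binary image; the file name is only illustrative):

import cv2

binary = cv2.imread("shape_mask.png", cv2.IMREAD_GRAYSCALE)  # hypothetical binary mask

# for each pixel of the binary input, the Euclidean distance to the nearest
# zero pixel, computed with a 5x5 mask; the result is a float32 image
dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)

# variant that also returns the label (Voronoi) image
dist2, labels = cv2.distanceTransformWithLabels(binary, cv2.DIST_L2, 5)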
I see from another question you asked recently that you are looking to register two images together. I don't think the distance transform is really what you are looking for here. If you are looking to align a set of points, I would instead suggest you look at techniques like Procrustes, Iterative Closest Point, or RANSAC.

rotated crop in opencv

I am trying to crop a picture right along the contour. The object is detected using SURF features, and then I want to crop the image exactly as detected.
When I crop, some outside boundaries of other objects are included. I want to crop along the green line below. OpenCV has RotatedRect, but I am unsure if it is suitable for cropping.
Is there a way to crop exactly along the green line?
I assume you got your example from http://docs.opencv.org/doc/tutorials/features2d/feature_homography/feature_homography.html. What you can do is find the minimum axis-aligned bounding box around the green bounding box, crop it from the image, use the inverted homography matrix (H.inv()) to transform that sub-image into a new image (call cv::warpPerspective), and then crop your green bounding box (it should be axis-aligned in your new image).
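A rough Python sketch of that idea, assuming scene, H, and (w, h) come from the tutorial code (H mapping the template frame into the scene, and (w, h) being the pixel size of the template):

import cv2
import numpy as np

# scene: image containing the detected object (assumed from the tutorial)
# H: 3x3 homography template -> scene, (w, h): template width/height
rectified = cv2.warpPerspective(scene, np.linalg.inv(H), (w, h))
# rectified now contains only the region inside the green quadrilateral,
# warped back into the template's axis-aligned frame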
You can get the equations of the lines from the end points of each side. Use these equations to check whether any given pixel lies within the green box or not, i.e. whether it lies between the left and right lines and between the top and bottom lines. Run this over the entire image and reset anything that doesn't lie within the box to black.
Not sure about in-built functionality to do this, but this simple methodology is guaranteed to work. For higher accuracy, you may want to consider sub-pixel checks.
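Instead of deriving the line equations by hand, one OpenCV way to do the same check (a sketch, assuming the four green corners are available as box_pts) is to rasterise the quadrilateral into a mask and black out everything outside it:

import cv2
import numpy as np

# box_pts: 4x2 int32 array with the corners of the green box (assumed known)
mask = np.zeros(img.shape[:2], np.uint8)
cv2.fillConvexPoly(mask, box_pts, 255)          # white inside the quadrilateral
cropped = cv2.bitwise_and(img, img, mask=mask)  # everything outside the box becomes black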
