opencv transform tilt view to plan view

Here is the picture I took with my USB camera. The camera is at an angle to the horizontal; the target is at the bottom, with parallel and orthogonal lines delimiting rectangles. The Post-it is a control marker for the center rectangle.
I then apply several processing steps in order to correct the 'tilt' of the view and to extract the lines.
Here is the line extraction without any transform:
{"type":"toGray"} => mat.cvtColor( cv4.COLOR_BGR2GRAY);
{"type":"toBlur","size":10} => mat.gaussianBlur( new cv4.Size( size, size),0);
{"type":"toCanny","low":50,"high":150} => mat.canny( low_threshold, high_threshold);
{"type":"getLines","rho":1,"theta":0.017453292222222222,"threshold":15,"min_line_length":50,"max_line_gap":20 }] => let lines = mat.houghLinesP( rho, theta, threshold, min_line_length, max_line_gap);
The result is:
Now I want to correct the tilt of the view, using the 'warpAffine' function, before extracting the lines.
I select four points of the center rectangle in order to build two three-point arrays (src, dst):
matTransf = cv4.getAffineTransform( srcPoints, dstPoints);
resultMat = mat.warpAffine( matTransf, new cv4.Size( mat.cols, mat.rows));
The result is the following:
Where is the mistake?
I have also tried:
// four points at each corner of the rectangle, srcPoints for the picture, and dstPoints for the theoretical shape
// With getPerspectiveTransform
matTransf = cv4.getPerspectiveTransform( srcPoints, dstPoints);
resultMat = mat.warpPerspective( matTransf, new cv4.Size( mat.cols, mat.rows));
// With findHomography
let result = cv4.findHomography( srcPoints, dstPoints);
matTransf = result.homography;
resultMat = mat.warpPerspective( matTransf, new cv4.Size( mat.cols, mat.rows));
The result is:
Best regards.

The transformation is not an affine one; it is a perspective transformation, described by a homography. Select four corners of a physical rectangle in the image, map them to the corners of a rectangle with the same aspect ratio as the physical one, estimate the homography from those correspondences (findHomography), and finally warp (warpPerspective).
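For reference, here is a minimal sketch of that recipe in Python with cv2 (the question uses a Node binding, but the calls map directly); the file name, the four source points and the target rectangle size are made-up placeholders to be replaced with your own measurements:

import cv2
import numpy as np

img = cv2.imread("board.jpg")  # placeholder input image

# Four corners of a physical rectangle clicked in the image,
# in the order top-left, top-right, bottom-right, bottom-left (placeholder values).
src_points = np.float32([[412, 316], [843, 331], [870, 688], [379, 665]])

# Map them to a rectangle with the same aspect ratio as the physical one,
# e.g. a 400 x 300 mm rectangle rendered at 1 px per mm.
w, h = 400, 300
dst_points = np.float32([[0, 0], [w, 0], [w, h], [0, h]])

# Estimate the homography from the four correspondences and warp.
H, _ = cv2.findHomography(src_points, dst_points)
plan_view = cv2.warpPerspective(img, H, (w, h))

cv2.imwrite("plan_view.jpg", plan_view)

With exactly four point pairs, getPerspectiveTransform gives essentially the same matrix as findHomography; findHomography becomes interesting when you have more than four correspondences and want a least-squares or RANSAC fit.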

Related

SCNBox – Map a texture onto five of six sides

I'm trying to create something like a canvas in SceneKit using an SCNBox, with a UIImage "wrapped" around from one surface onto the four adjacent ones.
The only way I can currently think to do this would be to chop up the UIImage into five separate images and put those onto the sides as materials, but I'm sure there must be an easier way.
Can anyone steer me in the right direction here? The box will have a separate texture/material on the side opposite the "front".
The easiest way would probably be to create a custom geometry with matching texture coordinates using +geometryWithSources:elements:
You can use the contentsTransform property of SCNMaterialProperty to adjust the texture coordinates from your image onto the SCNBox.
Some explanation with a simplified example:
Let's suppose that you are using a cube and you have a texture like this
By dividing it into rectangles, you will have
You want to skip rectangles 1, 3, 7, 9 and cover your cube with this texture.
To do this, just normalize the size of a side of your SCNBox to the 0-1 range and use it to set the scale and translation in the contentsTransform matrix.
My example uses a cube with equal sides, so each side covers one third of the whole texture. To take rectangle 5 from the texture:
let normalizedWidth: Float = 1.0 / 3.0
let normalizedHeight: Float = 1.0 / 3.0
let xOffset: Float = 1 // skip the column containing rectangles 1, 4, 7
let yOffset: Float = 1 // skip the row containing rectangles 1, 2, 3
let sideMaterial = SCNMaterial()
sideMaterial.diffuse.contents = textureImage
let scaleMatrix = SCNMatrix4MakeScale(normalizedWidth, normalizedHeight, 0.0)
sideMaterial.diffuse.contentsTransform = SCNMatrix4Translate(scaleMatrix,
    normalizedWidth * xOffset, normalizedHeight * yOffset, 0.0)
You can fill five sides with the configured materials and the last one (on the back) with just a color, then assign them to the materials property of your SCNBox.
As a result you will have

how to mark and plot boundaries using a threshold on an image using skimage

I have an image of these coins
I have tried the thresholding algorithms from skimage.filters on the grayscale version of this. I want to know how to find the outlines of these coins and plot contours over the coins in the original image.
I have used
from skimage import io, color, filters

img = io.imread('coins.jpg')
img = color.rgb2gray(img)
fig, ax = filters.try_all_threshold(img, verbose=False, figsize=(8, 15))
_ = ax[0].set_title('Grayscale version of Original')
There are quite a few different ways to find the outer boundaries of the coins, but which method to pick depends on what you want to achieve with the result.
@Belal Homaidan has already pointed out one such solution. You can also use a Canny edge detector and then apply a circular Hough transform. Here's an example:
Circular and Elliptical Hough Transforms
EDIT:
The most relevant part for your question is probably the following:
# Draw them
fig, ax = plt.subplots(ncols=1, nrows=1, figsize=(10, 4))
image = color.gray2rgb(image)
for center_y, center_x, radius in zip(cy, cx, radii):
    circy, circx = circle_perimeter(center_y, center_x, radius,
                                    shape=image.shape)
    image[circy, circx] = (220, 20, 20)

ax.imshow(image, cmap=plt.cm.gray)
plt.show()
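For completeness, a rough, self-contained sketch of that pipeline (Canny edges plus a circular Hough transform), in the spirit of the linked skimage example, is shown below; the file name, Canny sigma, radius range and number of peaks are assumptions to tune for your coins image.

import numpy as np
import matplotlib.pyplot as plt
from skimage import io, color, img_as_ubyte
from skimage.feature import canny
from skimage.transform import hough_circle, hough_circle_peaks
from skimage.draw import circle_perimeter

# Load, convert to grayscale and detect edges (file name and sigma are placeholders).
gray = color.rgb2gray(io.imread('coins.jpg'))
edges = canny(gray, sigma=3)

# Search for circles over a range of plausible coin radii (tune for your image).
hough_radii = np.arange(20, 60, 2)
hough_res = hough_circle(edges, hough_radii)

# Keep the most prominent circles.
accums, cx, cy, radii = hough_circle_peaks(hough_res, hough_radii, total_num_peaks=10)

# Draw the detected circle perimeters on an RGB copy of the image.
fig, ax = plt.subplots(ncols=1, nrows=1, figsize=(10, 4))
image = color.gray2rgb(img_as_ubyte(gray))
for center_y, center_x, radius in zip(cy, cx, radii):
    circy, circx = circle_perimeter(center_y, center_x, radius, shape=image.shape)
    image[circy, circx] = (220, 20, 20)
ax.imshow(image)
plt.show()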

OpenCV: How to replace a bigger rect with a smaller rect

I am doing an operation where I need to replace a bigger rectangle with a smaller rectangle.
Most answers suggested using smallerRectMat.copyTo(biggerRectMat), but it didn't give me the required output. The submat is changed but the original image stays as it is.
And when I inspect the submats, both end up the same size as the smaller rectangle.
Mat rectNose = testBuffer.submat(rectA.y,rectA.y+rectA.height,rectA.x,rectC.x+rectC.width);
Rect biggerRect = getHeadContour(testBuffer);
Mat rectHead = testBuffer.submat(biggerRect.y+1,biggerRect.y+biggerRect.height,biggerRect.x+1,biggerRect.x+biggerRect.width);
rectNose.copyTo(rectHead);
Imgcodecs.imwrite("/Users/test.jpg",rectHead);
Imgcodecs.imwrite("/Users/test1.jpg",rectNose);
Imgcodecs.imwrite("/Users/test1.jpg",testBuffer);
Basically I want to copy the rectangle near the nose region into the rectangle with the blue boundary at the forehead.
You can try ROI (Region of Interest) scaling:
# take the small ROI, scale it up to the bigger rectangle's size, then write it back
smallRect = img[rectA.y:rectA.y+rectA.height, rectA.x:rectC.x+rectC.width]
upscaledRegion = cv2.resize(smallRect, (biggerRect.width, biggerRect.height), interpolation=cv2.INTER_AREA)
img[biggerRect.y:biggerRect.y+biggerRect.height, biggerRect.x:biggerRect.x+biggerRect.width] = upscaledRegion
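Below is a more self-contained sketch of the same idea, with plain (x, y, w, h) tuples standing in for the Rect objects above (the file name and all coordinates are made up for illustration). The key point is that assigning into a slice of the image array writes through to the original image, whereas copyTo into a submat of a different size reallocates the destination Mat and leaves the original untouched.

import cv2

img = cv2.imread("face.jpg")  # placeholder image

# (x, y, w, h) of the small source rectangle (nose) and the bigger target
# rectangle (forehead); the numbers are hypothetical.
sx, sy, sw, sh = 140, 220, 80, 40
bx, by, bw, bh = 100, 40, 160, 90

# Cut out the small region and scale it up to the size of the big region.
patch = img[sy:sy + sh, sx:sx + sw]
patch = cv2.resize(patch, (bw, bh), interpolation=cv2.INTER_LINEAR)

# Writing into a slice of img modifies the original image in place.
img[by:by + bh, bx:bx + bw] = patch

cv2.imwrite("result.jpg", img)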

Emgu CV: HoughCircles detects many nonexistent circles, and takes a long time

original picture
result picture
Here is my code:
private void button3_Click(object sender, EventArgs e)
{
    string strFileName = string.Empty;
    OpenFileDialog ofd = new OpenFileDialog();
    if (ofd.ShowDialog() == DialogResult.OK)
    {
        Image<Bgr, byte> img1 = new Image<Bgr, byte>(ofd.FileName);
        pictureBox1.Image = img1.ToBitmap();
        Image<Gray, Byte> gray1 = img1.Convert<Gray, Byte>().PyrUp().PyrDown();
        CircleF[] circles = gray1.HoughCircles(
            new Gray(150),   // cannyThreshold
            new Gray(100),   // accumulatorThreshold
            2,               // dp
            10,              // minDist
            0,               // minRadius
            0)[0];           // maxRadius
        Image<Bgr, byte> imageCircles = img1.CopyBlank();
        foreach (CircleF circle in circles)
        {
            imageCircles.Draw(circle, new Bgr(Color.Yellow), 5);
        }
        pictureBox4.Image = imageCircles.ToBitmap();
    }
}
Are my parameters set correctly? Is there something I'm not understanding correctly?
Thank you!
EmguCV wraps OpenCV, so you can get more information about how to use the Emgu methods by looking at the OpenCV documentation. There the Hough circle transform is explained (the parameter count or order may vary, but in most cases the parameter names match). At explanation point 4 (Proceed to apply Hough Circle Transform) all the parameters are explained:
dp = 1: The inverse ratio of resolution
min_dist = src_gray.rows/8: Minimum distance between detected centers
param_1 = 200: Upper threshold for the internal Canny edge detector
param_2 = 100*: Threshold for center detection.
min_radius = 0: Minimum radius to be detected. If unknown, put zero as default.
max_radius = 0: Maximum radius to be detected. If unknown, put zero as default
According to the Emgu documentation these values are used:
cannyThreshold
Type: TColor
The higher threshold of the two passed to Canny edge detector (the lower one will be twice smaller).
accumulatorThreshold
Type: TColor
Accumulator threshold at the center detection stage. The smaller it is, the more false circles may be detected. Circles, corresponding to the larger accumulator values, will be returned first
dp
Type: System.Double
Resolution of the accumulator used to detect centers of the circles. For example, if it is 1, the accumulator will have the same resolution as the input image, if it is 2 - accumulator will have twice smaller width and height, etc
minDist
Type: System.Double
Minimum distance between centers of the detected circles. If the parameter is too small, multiple neighbor circles may be falsely detected in addition to a true one. If it is too large, some circles may be missed
minRadius (Optional)
Type: System.Int32
Minimal radius of the circles to search for
maxRadius (Optional)
Type: System.Int32
Maximal radius of the circles to search for
So try to use a higher accumulatorThreshold (like 150 or 180) because your circles are pretty obvious. On large images a higher dp could help too, decreasing your waiting time.
Your minDist is 10 (i.e. 10 pixels between detected centers); use a much higher number, depending on your image size. The OpenCV example uses an eighth of the image height.
A minRadius of 0 should be ok since your image shows only large (yellow) circles.
A maxRadius should depend on your image size too - this would remove those really large circles in the lower part of the image.
Just try 5% of your image height as minRadius and about 90% as maxRadius - or simply use sliders and an apply button to experiment yourself.
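To make that advice concrete, here is roughly the same tuning expressed with cv2.HoughCircles in Python (Emgu's HoughCircles wraps the same OpenCV function); the file name, thresholds and radius fractions are starting points to experiment with, not definitive values.

import cv2
import numpy as np

img = cv2.imread("circles.png")  # placeholder image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)  # smoothing suppresses spurious edges and false circles

rows = gray.shape[0]
circles = cv2.HoughCircles(
    gray,
    cv2.HOUGH_GRADIENT,
    dp=1,                       # accumulator at full image resolution
    minDist=rows // 8,          # minimum distance between detected centers
    param1=150,                 # upper Canny threshold
    param2=150,                 # accumulator threshold: higher means fewer false circles
    minRadius=int(rows * 0.05),
    maxRadius=int(rows * 0.9))

if circles is not None:
    for x, y, r in circles[0]:
        cv2.circle(img, (int(x), int(y)), int(r), (0, 255, 255), 5)
cv2.imwrite("circles_found.png", img)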

aruco::detectMarkers is not finding true edges of markers

I'm using ArUco markers to correct perspective and calculate sizes in an image. In this image I know the exact distance between the outer edges of the markers and am using that to calculate the sizes of the black rectangles.
My problem is that aruco::detectMarkers doesn't always identify the true edges of the markers (as shown in the detail image). When I correct the perspective based on the corners of the markers, it causes distortion that affects the size calculations of the objects in the image.
Is there a way to improve the edge detection accuracy of aruco::detectMarkers?
Here's a scaled-down photo of the entire board:
Here's the detail of the lower-left marker showing the inaccuracy of the edge detection:
Here's the detail of the upper-right marker showing an accurate edge detection of the same marker ID:
It's hard to see in this shrunken image but the upper-left marker is accurate and the lower-right marker is inaccurate.
My function that calls detectMarkers:
bool findMarkers(const Mat image, Point2d outerMarkerCoordinates[], Point2d innerMarkerCoordinates[], Size2d *boardSize) {
    Ptr<aruco::Dictionary> theDictionary = aruco::getPredefinedDictionary(aruco::DICT_4X4_1000);
    vector<vector<Point2f> > markers;
    vector<int> ids;
    aruco::detectMarkers(image, theDictionary, markers, ids);
    aruco::drawDetectedMarkers(image, markers, ids);
    return true; // There's actually more code here that makes sure there are four markers.
}
Examination of the optional detectorParameters argument to detectMarkers showed a parameter called doCornerRefinement. Its description is "do subpixel refinement or not". Since the error I'm seeing is larger than a pixel, I didn't think this was applicable to my situation. I gave it a try anyway and experimented with the cornerRefinementWinSize value and found that it did indeed solve my problem. Now I'm thinking that "pixel" in the ArUco sense is the size of one of the squares within the marker, not an image pixel.
The modified call to detectMarkers:
bool findMarkers(const Mat image, Point2d outerMarkerCoordinates[], Point2d innerMarkerCoordinates[], Size2d *boardSize) {
    Ptr<aruco::Dictionary> theDictionary = aruco::getPredefinedDictionary(aruco::DICT_4X4_1000);
    vector<vector<Point2f> > markers;
    vector<int> ids;
    Ptr<aruco::DetectorParameters> detectorParameters = new aruco::DetectorParameters;
    detectorParameters->doCornerRefinement = true;
    detectorParameters->cornerRefinementWinSize = 11;
    aruco::detectMarkers(image, theDictionary, markers, ids, detectorParameters);
    aruco::drawDetectedMarkers(image, markers, ids);
    return true; // There's actually more code here that makes sure there are four markers.
}
Success!
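For anyone hitting the same issue from Python, a rough equivalent is sketched below. The field names depend on the OpenCV version: newer builds replace the boolean doCornerRefinement with a cornerRefinementMethod enum, and the aruco module was reorganised again in OpenCV 4.7, so treat this as a sketch rather than a drop-in snippet.

import cv2
import cv2.aruco as aruco

img = cv2.imread("board.jpg")  # placeholder image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

dictionary = aruco.getPredefinedDictionary(aruco.DICT_4X4_1000)

parameters = aruco.DetectorParameters_create()
# Enable subpixel corner refinement with an enlarged search window,
# the equivalent of doCornerRefinement / cornerRefinementWinSize above.
parameters.cornerRefinementMethod = aruco.CORNER_REFINE_SUBPIX
parameters.cornerRefinementWinSize = 11

corners, ids, rejected = aruco.detectMarkers(gray, dictionary, parameters=parameters)
aruco.drawDetectedMarkers(img, corners, ids)
cv2.imwrite("detected.jpg", img)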
