Decrease noise while detecting lines from webcam - opencv

I'm trying to implement an online document scanner that automatically detects the document's edges and takes a photo when the area of the detected rectangle is big enough. I implemented the following pipeline in OpenCV.js:
// Grayscale image
var imgray = new cv.Mat();
cv.cvtColor(srcMat, imgray, cv.COLOR_RGBA2GRAY, 0);
// Blurring
let blurred = new cv.Mat();
let ksize = new cv.Size(7, 7);
cv.GaussianBlur(imgray, blurred, ksize, 0, 0, cv.BORDER_DEFAULT);
// Canny
var canny = new cv.Mat();
let low_threshold = 50;
let high_threshold = 150;
cv.Canny(blurred, canny, low_threshold, high_threshold, 3, false);
// Hough
let rho = 1;                 // distance resolution in pixels of the Hough grid
let theta = Math.PI / 180;   // angular resolution in radians of the Hough grid
let threshold = 2;           // minimum number of votes (intersections in Hough grid cell)
let min_line_length = 100;   // minimum number of pixels making up a line
let max_line_gap = 10;       // maximum gap in pixels between connectable line segments
let lines = new cv.Mat();
// Run Hough on edge detected image
// Output "lines" is an array containing endpoints of detected line segments
cv.HoughLinesP(canny, lines, rho, theta, threshold, min_line_length, max_line_gap);
// draw lines
for (let i = 0; i < lines.rows; ++i) {
    let startPoint = new cv.Point(lines.data32S[i * 4], lines.data32S[i * 4 + 1]);
    let endPoint = new cv.Point(lines.data32S[i * 4 + 2], lines.data32S[i * 4 + 3]);
    cv.line(canny, startPoint, endPoint, new cv.Scalar(255, 255, 255, 0), 5);
}
document.getElementById("lines").textContent=lines.rows;
imgray.delete();
blurred.delete();
lines.delete();
return canny;
The result is what you see in the video footage:
The problem is that while the Canny-processed edges are quite "stable", the lines detected by HoughLinesP continuously change position and are not easy to track. Where am I going wrong? Can you suggest some enhancements to this pipeline?

Related

python segment an image of text line by line [duplicate]

I am trying to find a way to split, line by line, the text in a scanned document that has been adaptively thresholded. Right now I store the pixel values of the document as unsigned ints from 0 to 255, take the average of the pixels in each row, and split the rows into ranges based on whether the average pixel value is larger than 250; I then take the median of each range of rows for which this holds. However, this method sometimes fails, as there can be black splotches on the image.
Is there a more noise-resistant way to do this task?
EDIT: Here is some code. "warped" is the name of the original image, "cuts" is where I want to split the image.
import numpy as np
from skimage.filters import threshold_adaptive  # older scikit-image API

warped = threshold_adaptive(warped, 250, offset=10)
warped = warped.astype("uint8") * 255

# get areas where we can split image on whitespace to make OCR more accurate
color_level = np.array([np.sum(line) / len(line) for line in warped])

cuts = []
i = 0
while i < len(color_level):
    if color_level[i] > 250:
        begin = i
        while i < len(color_level) and color_level[i] > 250:
            i += 1
        cuts.append((i + begin) / 2)  # middle of the whitespace region
    else:
        i += 1
EDIT 2: Sample image added
From your input image, you need to make the text white and the background black:
You need then to compute the rotation angle of your bill. A simple approach is to find the minAreaRect of all white points (findNonZero), and you get:
Then you can rotate your bill, so that text is horizontal:
Now you can compute the horizontal projection (reduce), taking the average value in each row. Apply a threshold th to the projection to account for some noise in the image (here I used 0, i.e. no noise). In the resulting histogram, rows containing only background become white bins (value > 0) and text rows become black bins (value 0). Then take the average bin coordinate of each continuous run of white bins; these are the y coordinates of your lines:
Here is the code. It's in C++, but since most of the work uses OpenCV functions, it should be easy to convert to Python. At least, you can use it as a reference:
#include <opencv2/opencv.hpp>
using namespace cv;
using namespace std;

int main()
{
    // Read image
    Mat3b img = imread("path_to_image");

    // Binarize image. Text is white, background is black
    Mat1b bin;
    cvtColor(img, bin, COLOR_BGR2GRAY);
    bin = bin < 200;

    // Find all white pixels
    vector<Point> pts;
    findNonZero(bin, pts);

    // Get rotated rect of white pixels
    RotatedRect box = minAreaRect(pts);
    if (box.size.width > box.size.height)
    {
        swap(box.size.width, box.size.height);
        box.angle += 90.f;
    }

    Point2f vertices[4];
    box.points(vertices);
    for (int i = 0; i < 4; ++i)
    {
        line(img, vertices[i], vertices[(i + 1) % 4], Scalar(0, 255, 0));
    }

    // Rotate the image according to the found angle
    Mat1b rotated;
    Mat M = getRotationMatrix2D(box.center, box.angle, 1.0);
    warpAffine(bin, rotated, M, bin.size());

    // Compute horizontal projections
    Mat1f horProj;
    reduce(rotated, horProj, 1, CV_REDUCE_AVG);

    // Remove noise in histogram. White bins identify space lines, black bins identify text lines
    float th = 0;
    Mat1b hist = horProj <= th;

    // Get mean coordinate of each group of white bins
    vector<int> ycoords;
    int y = 0;
    int count = 0;
    bool isSpace = false;
    for (int i = 0; i < rotated.rows; ++i)
    {
        if (!isSpace)
        {
            if (hist(i))
            {
                isSpace = true;
                count = 1;
                y = i;
            }
        }
        else
        {
            if (!hist(i))
            {
                isSpace = false;
                ycoords.push_back(y / count);
            }
            else
            {
                y += i;
                count++;
            }
        }
    }

    // Draw lines as final result
    Mat3b result;
    cvtColor(rotated, result, COLOR_GRAY2BGR);
    for (int i = 0; i < ycoords.size(); ++i)
    {
        line(result, Point(0, ycoords[i]), Point(result.cols, ycoords[i]), Scalar(0, 255, 0));
    }

    return 0;
}
Basic steps, as in #Miki's answer:
read the source
threshold
find the minAreaRect
warp by the rotation matrix
find and draw the upper and lower bounds
The code in Python:
#!/usr/bin/python3
# 2018.01.16 01:11:49 CST
# 2018.01.16 01:55:01 CST
import cv2
import numpy as np

## (1) read
img = cv2.imread("img02.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

## (2) threshold
th, threshed = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY_INV|cv2.THRESH_OTSU)

## (3) minAreaRect on the nonzeros
pts = cv2.findNonZero(threshed)
ret = cv2.minAreaRect(pts)

(cx, cy), (w, h), ang = ret
if w > h:
    w, h = h, w
    ang += 90

## (4) Find rotation matrix, do rotation
M = cv2.getRotationMatrix2D((cx, cy), ang, 1.0)
rotated = cv2.warpAffine(threshed, M, (img.shape[1], img.shape[0]))

## (5) find and draw the upper and lower boundary of each line
hist = cv2.reduce(rotated, 1, cv2.REDUCE_AVG).reshape(-1)

th = 2
H, W = img.shape[:2]
uppers = [y for y in range(H-1) if hist[y] <= th and hist[y+1] > th]
lowers = [y for y in range(H-1) if hist[y] > th and hist[y+1] <= th]

rotated = cv2.cvtColor(rotated, cv2.COLOR_GRAY2BGR)
for y in uppers:
    cv2.line(rotated, (0, y), (W, y), (255, 0, 0), 1)

for y in lowers:
    cv2.line(rotated, (0, y), (W, y), (0, 255, 0), 1)

cv2.imwrite("result.png", rotated)
Finally, the result:

How can I filter out points of an edge-detected circle that are extremely noisy?

I am working on detecting the center and radius of a circular aperture that is illuminated by a laser beam. The algorithm will be fed images from a system that I have no physical control over (i.e. dimming the source or adjusting the laser position.) I need to do this with C++, and have chosen to use openCV.
In some regions the edge of the aperture is well defined, but in others it is very noisy. I currently am trying to isolate the "good" points to do a RANSAC fit, but I have taken other steps along the way. Below are two original images for reference:
I first began by trying to do a Hough fit. I performed a median blur to remove the salt and pepper noise, then a Gaussian blur, and then fed the image to the HoughCircle function in openCV, with sliders controlling the Hough parameters 1 and 2 defined here. The results were disastrous:
I then decided to try to process the image some more before sending it to the HoughCircle. I started with the original image, median blurred, Gaussian blurred, thresholded, dilated, did a Canny edge detection, and then fed the Canny image to the function.
I was eventually able to get a reasonable estimate of my circle, but it was about the 15th circle to show up when manually decreasing the Hough parameters. I manually drew the purple outline, with the green circles representing Hough outputs that were near my manual estimate. The below images are:
Canny output without dilation
Canny output with dilation
Hough output of the dilated Canny image drawn on the original image.
As you can see, the number of invalid circles vastly outnumbers the correct circle, and I'm not quite sure how to isolate the good circles, given that the Hough transform returns so many invalid circles even with stricter parameters.
I currently have some code I implemented that works OK for all of the test images I was given, but the code is a convoluted mess with many tunable parameters and seems very fragile. The driving logic behind what I did came from noticing that the regions of the aperture edge that were well illuminated by the laser stayed relatively constant across several threshold levels (image shown below).
I did edge detection at two threshold levels and stored points that overlapped in both images. Currently there is also some inaccuracy with the result because the aperture edge does still shift slightly with the different threshold levels. I can post the very long code for this if necessary, but the pseudo-code behind it is:
1. Perform a median blur, followed by a Gaussian blur. Kernels are 9x9.
2. Threshold the image until 35% of the image is white. (~intensities > 30)
3. Take the Canny edges of this thresholded image and store (Canny1)
4. Take the original image, perform the same median and Gaussian blurs, but threshold with a 50% larger value, giving a smaller spot (~intensities > 45)
5. Perform the "Closing" morphology operation to further erode the spot and remove any smaller contours.
6. Perform another Canny to get the edges, and store this image (Canny2)
7. Blur both the Canny images with a 7x7 Gaussian blur.
8. Take the regions where the two Canny images overlap and say that these points are likely to be good points.
9. Do a RANSAC circle fit with these points.
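For reference, a condensed Python sketch of those steps (my real implementation is in C++, and the thresholds below are stand-ins rather than my exact values):
import cv2
import numpy as np
from skimage import measure

img = cv2.imread("aperture.pgm", cv2.IMREAD_GRAYSCALE)   # placeholder file name
blur = cv2.GaussianBlur(cv2.medianBlur(img, 9), (9, 9), 0)

# two binary versions: the second threshold roughly 50% higher than the first
_, t1 = cv2.threshold(blur, 30, 255, cv2.THRESH_BINARY)
_, t2 = cv2.threshold(blur, 45, 255, cv2.THRESH_BINARY)
t2 = cv2.morphologyEx(t2, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))

# Canny edges of both, blurred so nearby edges can overlap
canny1 = cv2.GaussianBlur(cv2.Canny(t1, 100, 200), (7, 7), 0)
canny2 = cv2.GaussianBlur(cv2.Canny(t2, 100, 200), (7, 7), 0)

# keep only edge points present in both versions, then RANSAC-fit a circle
good = cv2.bitwise_and(canny1, canny2)
coords = np.column_stack(np.nonzero(good))
model, inliers = measure.ransac(coords, measure.CircleModel,
                                min_samples=3, residual_threshold=2,
                                max_trials=1000)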
I've noticed that there are regions of the edge detected circle that are pretty distinguishable by the human eye as being part of the best circle. Is there a way to isolate these regions for a RANSAC fit?
Code for Hough:
#include <opencv2/opencv.hpp>
#include <string>
using namespace cv;
using namespace std;

int houghParam1 = 100;
int houghParam2 = 100;
int dp = 10; //divided by 10 later
int x = 616;
int y = 444;
int radius = 398;
int iterations = 0;

int main()
{
    namedWindow("Circled Orig");
    namedWindow("Processed", 1);
    namedWindow("Circles");
    namedWindow("Parameters");
    namedWindow("Canny");
    createTrackbar("Param1", "Parameters", &houghParam1, 200);
    createTrackbar("Param2", "Parameters", &houghParam2, 200);
    createTrackbar("dp", "Parameters", &dp, 20);
    createTrackbar("x", "Parameters", &x, 1200);
    createTrackbar("y", "Parameters", &y, 1200);
    createTrackbar("radius", "Parameters", &radius, 900);
    createTrackbar("dilate #", "Parameters", &iterations, 20);

    std::string directory = "Secret";
    std::string suffix = ".pgm";
    Mat processedImage;
    Mat origImg;
    Mat testImage;

    for (int fileCounter = 2; fileCounter < 3; fileCounter++) //1, 12
    {
        std::string numString = std::to_string(static_cast<long long>(fileCounter));
        std::string imageFile = directory + numString + suffix;
        testImage = imread(imageFile);
        Mat bwImage;
        cvtColor(testImage, bwImage, CV_BGR2GRAY);
        GaussianBlur(bwImage, processedImage, Size(9, 9), 9);
        threshold(processedImage, processedImage, 25, 255, THRESH_BINARY); //THRESH_OTSU
        int numberContours = -1;
        int iterations = 1;
        imshow("Processed", processedImage);
    }

    vector<Vec3f> circles;
    Mat element = getStructuringElement(MORPH_ELLIPSE, Size(5, 5));
    float dp2 = dp;
    while (true)
    {
        float dp2 = dp;
        Mat circleImage = processedImage.clone();
        origImg = testImage.clone();
        if (iterations > 0) dilate(circleImage, circleImage, element, Point(-1, -1), iterations);
        Mat cannyImage;
        Canny(circleImage, cannyImage, 100, 20);
        imshow("Canny", cannyImage);
        HoughCircles(circleImage, circles, HOUGH_GRADIENT, dp2 / 10, 5, houghParam1, houghParam2, 300, 5000);
        cvtColor(circleImage, circleImage, CV_GRAY2BGR);
        for (size_t i = 0; i < circles.size(); i++)
        {
            Scalar color = Scalar(0, 0, 255);
            Point center2(cvRound(circles[i][0]), cvRound(circles[i][1]));
            int radius2 = cvRound(circles[i][2]);
            if (abs(center2.x - x) < 10 && abs(center2.y - y) < 10 && abs(radius - radius2) < 20) color = Scalar(0, 255, 0);
            circle(circleImage, center2, 3, color, -1, 8, 0);
            circle(circleImage, center2, radius2, color, 3, 8, 0);
            circle(origImg, center2, 3, color, -1, 8, 0);
            circle(origImg, center2, radius2, color, 3, 8, 0);
        }

        //Manual circles
        circle(circleImage, Point(x, y), 3, Scalar(128, 0, 128), -1, 8, 0);
        circle(circleImage, Point(x, y), radius, Scalar(128, 0, 128), 3, 8, 0);
        circle(origImg, Point(x, y), 3, Scalar(128, 0, 128), -1, 8, 0);
        circle(origImg, Point(x, y), radius, Scalar(128, 0, 128), 3, 8, 0);
        imshow("Circles", circleImage);
        imshow("Circled Orig", origImg);
        int x = waitKey(50);
    }

    Mat drawnImage;
    cvtColor(processedImage, drawnImage, CV_GRAY2BGR);
    return 1;
}
Thanks #jalconvolvon - this is an interesting problem. Here's my result:
What I find important again and again is dynamic parameter adjustment when prototyping, so I include the function I used to tune the Canny detection. The code also uses this answer for the RANSAC part.
import cv2
import numpy as np
import auxcv as aux  # author's helper module (not used in this snippet)
from skimage import measure, draw

def empty_function(*arg):
    pass

# tune canny edge detection. accept with pressing "C"
def CannyTrackbar(img, win_name):
    trackbar_name = win_name + "Trackbar"

    cv2.namedWindow(win_name)
    cv2.resizeWindow(win_name, 500, 100)
    cv2.createTrackbar("canny_th1", win_name, 0, 255, empty_function)
    cv2.createTrackbar("canny_th2", win_name, 0, 255, empty_function)
    cv2.createTrackbar("blur_size", win_name, 0, 255, empty_function)
    cv2.createTrackbar("blur_amp", win_name, 0, 255, empty_function)

    while True:
        trackbar_pos1 = cv2.getTrackbarPos("canny_th1", win_name)
        trackbar_pos2 = cv2.getTrackbarPos("canny_th2", win_name)
        trackbar_pos3 = cv2.getTrackbarPos("blur_size", win_name)
        trackbar_pos4 = cv2.getTrackbarPos("blur_amp", win_name)
        img_blurred = cv2.GaussianBlur(img.copy(), (trackbar_pos3 * 2 + 1, trackbar_pos3 * 2 + 1), trackbar_pos4)
        canny = cv2.Canny(img_blurred, trackbar_pos1, trackbar_pos2)
        cv2.imshow(win_name, canny)

        key = cv2.waitKey(1) & 0xFF
        if key == ord("c"):
            break

    cv2.destroyAllWindows()
    return canny

img = cv2.imread("sphere.jpg")

#resize for convenience
img = cv2.resize(img, None, fx = 0.2, fy = 0.2)

#closing
kernel = np.ones((11, 11), np.uint8)
img = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)

#sharpening
kernel = np.array([[-1,-1,-1], [-1,9,-1], [-1,-1,-1]])
img = cv2.filter2D(img, -1, kernel)

#test if you use different scale img than 0.2 of the original that I used
#remember that the actual kernel size for GaussianBlur is trackbar_pos3*2+1
#you want to get as full circle as possible here
#canny = CannyTrackbar(img, "canny_trakbar")

#additional blurring to reduce the offset toward brighter region
img_blurred = cv2.GaussianBlur(img.copy(), (8*2+1, 8*2+1), 1)

#detect edge. important: make sure this works well with CannyTrackbar()
canny = cv2.Canny(img_blurred, 160, 78)

coords = np.column_stack(np.nonzero(canny))

model, inliers = measure.ransac(coords, measure.CircleModel,
                                min_samples=3, residual_threshold=1,
                                max_trials=1000)

rr, cc = draw.circle_perimeter(int(model.params[0]),
                               int(model.params[1]),
                               int(model.params[2]),
                               shape=img.shape)

img[rr, cc] = 1

import matplotlib.pyplot as plt
plt.imshow(img, cmap='gray')
plt.scatter(model.params[1], model.params[0], s=50, c='red')
plt.axis('off')
plt.savefig('sphere_center.png', bbox_inches='tight')
plt.show()
Now I'd probably try to calculate where pixels are statistically brighter and where they are dimmer, to adjust the laser position (if I understand correctly what you're trying to do).
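For instance (just a sketch of that idea, not tested against your data), you could split the fitted circle's interior into angular sectors and compare their mean intensities; the function name and sector count here are arbitrary:
import numpy as np

def sector_brightness(gray, cy, cx, r, n_sectors=8):
    """Mean intensity of the circle interior, split into angular sectors."""
    yy, xx = np.mgrid[0:gray.shape[0], 0:gray.shape[1]]
    inside = (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2
    angles = np.arctan2(yy - cy, xx - cx)                                  # -pi..pi
    bins = ((angles + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
    return [gray[inside & (bins == s)].mean() for s in range(n_sectors)]

# e.g. with the RANSAC fit above, where model.params is (row, col, radius):
# means = sector_brightness(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), *model.params)
# brightest, dimmest = int(np.argmax(means)), int(np.argmin(means))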
If the RANSAC fit is still not enough, I'd try tuning Canny to detect only a clean arc on top of the circle (where it's well outlined) and then try using the following dependencies (I suspect this should be possible):

OpenCV: warpPerspective on whole image

I'm detecting markers on images captured by my iPad. Because I want to calculate the translations and rotations between them, I want to change the perspective of these images so that it looks like I'm capturing them from directly above the markers.
Right now I'm using
points2D.push_back(cv::Point2f(0, 0));
points2D.push_back(cv::Point2f(50, 0));
points2D.push_back(cv::Point2f(50, 50));
points2D.push_back(cv::Point2f(0, 50));
Mat M = cv::getPerspectiveTransform(points2D, imagePoints);
cv::warpPerspective(*_image, *_undistortedImage, M, cv::Size(_image->cols, _image->rows));
Which gives me these results (look at the bottom-right corner for the result of warpPerspective):
As you can probably see, the result image contains the recognized marker in the top-left corner of the result image. My problem is that I want to capture the whole image (without cropping) so I can detect other markers on that image later.
How can I do that? Maybe I should use the rotation/translation vectors from the solvePnP function?
EDIT:
Unfortunately, changing the size of the warped image doesn't help much, because the image is still translated so that the top-left corner of the marker ends up in the top-left corner of the image.
For example, when I doubled the size using:
cv::warpPerspective(*_image, *_undistortedImage, M, cv::Size(2*_image->cols, 2*_image->rows));
I received these images:
Your code doesn't seem to be complete, so it is difficult to say what the problem is.
In any case, the warped image might have completely different dimensions compared to the input image, so you will have to adjust the size parameter you are using for warpPerspective.
For example, try to double the size:
cv::warpPerspective(*_image, *_undistortedImage, M, 2*cv::Size(_image->cols, _image->rows));
Edit:
To make sure the whole image is preserved, all corners of your original image must be warped to lie inside the resulting image. So simply calculate the warped destination for each of the corner points and adjust the destination points accordingly.
To make it clearer, some sample code:
// calculate transformation
cv::Matx33f M = cv::getPerspectiveTransform(points2D, imagePoints);
// calculate warped position of all corners
cv::Point3f a = M.inv() * cv::Point3f(0, 0, 1);
a = a * (1.0/a.z);
cv::Point3f b = M.inv() * cv::Point3f(0, _image->rows, 1);
b = b * (1.0/b.z);
cv::Point3f c = M.inv() * cv::Point3f(_image->cols, _image->rows, 1);
c = c * (1.0/c.z);
cv::Point3f d = M.inv() * cv::Point3f(_image->cols, 0, 1);
d = d * (1.0/d.z);
// to make sure all corners are in the image, every position must be > (0, 0)
float x = ceil(abs(min(min(a.x, b.x), min(c.x, d.x))));
float y = ceil(abs(min(min(a.y, b.y), min(c.y, d.y))));
// and also < (width, height)
float width = ceil(abs(max(max(a.x, b.x), max(c.x, d.x)))) + x;
float height = ceil(abs(max(max(a.y, b.y), max(c.y, d.y)))) + y;
// adjust target points accordingly
for (int i=0; i<4; i++) {
    points2D[i] += cv::Point2f(x,y);
}
// recalculate transformation
M = cv::getPerspectiveTransform(points2D, imagePoints);
// get result
cv::Mat result;
cv::warpPerspective(*_image, result, M, cv::Size(width, height), cv::WARP_INVERSE_MAP);
I implemented littleimp's answer in Python in case anyone needs it. It should be noted that this will not work properly if the vanishing points of the polygon fall within the image.
import cv2
import numpy as np
from PIL import Image, ImageDraw
import math

def get_transformed_image(src, dst, img):
    # calculate the transformation
    mat = cv2.getPerspectiveTransform(src.astype("float32"), dst.astype("float32"))

    # new source: image corners
    corners = np.array([
        [0, img.size[0]],
        [0, 0],
        [img.size[1], 0],
        [img.size[1], img.size[0]]
    ])

    # Transform the corners of the image
    corners_tranformed = cv2.perspectiveTransform(
        np.array([corners.astype("float32")]), mat)

    # These transformed corners seem completely wrong/inverted x-axis
    print(corners_tranformed)

    x_mn = math.ceil(min(corners_tranformed[0].T[0]))
    y_mn = math.ceil(min(corners_tranformed[0].T[1]))
    x_mx = math.ceil(max(corners_tranformed[0].T[0]))
    y_mx = math.ceil(max(corners_tranformed[0].T[1]))

    width = x_mx - x_mn
    height = y_mx - y_mn

    analogy = height / 1000
    n_height = height / analogy
    n_width = width / analogy

    dst2 = corners_tranformed
    dst2 -= np.array([x_mn, y_mn])
    dst2 = dst2 / analogy

    mat2 = cv2.getPerspectiveTransform(corners.astype("float32"),
                                       dst2.astype("float32"))

    img_warp = Image.fromarray((
        cv2.warpPerspective(np.array(img),
                            mat2,
                            (int(n_width),
                             int(n_height)))))
    return img_warp

# image coordinates
src = np.array([[ 789.72, 1187.35],
                [ 789.72,  752.75],
                [1277.35,  730.66],
                [1277.35, 1200.65]])

# known coordinates
dst = np.array([[0, 1000],
                [0, 0],
                [1092, 0],
                [1092, 1000]])

# Create the image (canvas size assumed; any size covering the src points works)
img_width, img_height = 1500, 1300
image = Image.new('RGB', (img_width, img_height))
image.paste((200, 200, 200), [0, 0, image.size[0], image.size[1]])
draw = ImageDraw.Draw(image)
draw.line(((src[0][0], src[0][1]), (src[1][0], src[1][1]),
           (src[2][0], src[2][1]), (src[3][0], src[3][1]),
           (src[0][0], src[0][1])), width=4, fill="blue")
#image.show()

warped = get_transformed_image(src, dst, image)
warped.show()
There are two things you need to do:
Increase the size of the output of cv2.warpPerspective
Translate the warped source image so that its center matches the center of the cv2.warpPerspective output image
Here is how the code will look:
# center of source image
si_c = [x//2 for x in image.shape] + [1]
# find where the center of the source image will be after warping, without compensating for any offset
wsi_c = np.dot(H, si_c)
wsi_c = [x/wsi_c[2] for x in wsi_c]
# warping output image size
stitched_frame_size = tuple(2*x for x in image.shape)
# center of warping output image
wf_c = image.shape
# calculate offset for translation of warped image
x_offset = wf_c[0] - wsi_c[0]
y_offset = wf_c[1] - wsi_c[1]
# translation matrix
T = np.array([[1, 0, x_offset], [0, 1, y_offset], [0, 0, 1]])
# translate the homography matrix
translated_H = np.dot(T, H)
# warp
stitched = cv2.warpPerspective(image, translated_H, stitched_frame_size)

OpenCV calculate distance (stereo vision)

For my project I am using parts of the following code: link.
To track objects of a specific color I implemented this method:
My question is: How can I calculate the distance to the tracked colored objects?
Thank you in advance!
*The application calls the method for the left and right frame. This is not efficient...
**I need to calculate detectedObject.Zcor
DetectedObject Detect(IplImage *frame)
{
    //Track object (left frame and right frame)
    //Calculate average position
    //Show X,Y,Z coordinate and detected color

    color_image = frame;
    imgThreshold = cvCreateImage(cvSize(color_image->width, color_image->height), IPL_DEPTH_8U, 1);
    cvInitFont(&font, CV_FONT_HERSHEY_PLAIN, 1, 1, 0, 1.4f, CV_AA);

    imgdraw = cvCreateImage(cvGetSize(color_image), 8, 3);
    cvSetZero(imgdraw);
    cvFlip(color_image, color_image, 1);
    cvSmooth(color_image, color_image, CV_GAUSSIAN, 3, 0);
    threshold = getThreshold(color_image);
    cvErode(threshold, threshold, NULL, 3);
    cvDilate(threshold, threshold, NULL, 10);
    imgThreshold = cvCloneImage(threshold);

    storage = cvCreateMemStorage(0);
    contours = cvCreateSeq(0, sizeof(CvSeq), sizeof(CvPoint), storage);
    cvFindContours(threshold, storage, &contours, sizeof(CvContour), CV_RETR_CCOMP, CV_CHAIN_APPROX_NONE, cvPoint(0,0));

    final = cvCreateImage(cvGetSize(color_image), 8, 3);

    for (; contours != 0; contours = contours->h_next)
    {
        CvRect rect = cvBoundingRect(contours, 0);
        cvRectangle(color_image,
                    cvPoint(rect.x, rect.y),
                    cvPoint(rect.x + rect.width, rect.y + rect.height),
                    cvScalar(0, 0, 255, 0),
                    2, 8, 0);

        string s = to_string(rect.x) + "," + to_string(rect.y);
        char const* pchar = s.c_str();
        cvPutText(frame, pchar, cvPoint(rect.x, rect.y), &font, cvScalar(0, 0, 255, 0));

        detectedObject.Xcor = rect.x;
        detectedObject.Ycor = rect.y;
    }

    cvShowImage("Threshold", imgThreshold);
    cvAdd(final, imgdraw, final);
    detectedObject.Zcor = 0;
    return detectedObject;
}
For depth estimation you will need a calibrated stereo pair (known camera matrices for both the left and the right cameras). Then, using the camera matrices and corresponding points/contours in the stereo pair, you can compute depth.
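A minimal sketch of the usual route once the pair is calibrated and rectified (file names, focal length and baseline below are placeholders, and it uses the cv2 Python API rather than the old C API from the question): compute a disparity map, then convert disparity to depth with Z = f*B/d.
import cv2
import numpy as np

fx = 700.0   # focal length in pixels, from calibration (placeholder)
B  = 0.12    # baseline between the two cameras in meters (placeholder)

left  = cv2.imread("left.png",  cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# block-matching disparity; the result is fixed-point, scaled by 16
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disp = stereo.compute(left, right).astype(np.float32) / 16.0

# depth at the tracked object's position, e.g. (Xcor, Ycor) from Detect()
x, y = 320, 240
d = disp[y, x]
Z = fx * B / d if d > 0 else float("inf")   # meters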

How to find corners on an image using OpenCV

I'm trying to find the corners in an image; I don't need the contours, only the 4 corners. I will change the perspective using those 4 corners.
I'm using OpenCV, but I need to know the steps to find the corners and which functions to use.
My images will be like this (without the red points; I will paint the points afterwards):
EDITED:
After the suggested steps, I wrote the code. (Note: I'm not using pure OpenCV, I'm using JavaCV, but the logic is the same.)
// Load two images and allocate other structures (I'm using another image)
IplImage colored = cvLoadImage(
        "res/scanteste.jpg",
        CV_LOAD_IMAGE_UNCHANGED);

IplImage gray = cvCreateImage(cvGetSize(colored), IPL_DEPTH_8U, 1);
IplImage smooth = cvCreateImage(cvGetSize(colored), IPL_DEPTH_8U, 1);

//Step 1 - Convert from RGB to grayscale (cvCvtColor)
cvCvtColor(colored, gray, CV_RGB2GRAY);

//2 - Smooth (cvSmooth)
cvSmooth(gray, smooth, CV_BLUR, 9, 9, 2, 2);

//3 - cvThreshold - What values?
cvThreshold(gray, gray, 155, 255, CV_THRESH_BINARY);

//4 - Detect edges (cvCanny) - What values?
int N = 7;
int aperature_size = N;
double lowThresh = 20;
double highThresh = 40;
cvCanny(gray, gray, lowThresh*N*N, highThresh*N*N, aperature_size);

//5 - Find contours (cvFindContours)
int total = 0;
CvSeq contour2 = new CvSeq(null);
CvMemStorage storage2 = cvCreateMemStorage(0);
CvMemStorage storageHull = cvCreateMemStorage(0);
total = cvFindContours(gray, storage2, contour2, Loader.sizeof(CvContour.class), CV_RETR_CCOMP, CV_CHAIN_APPROX_NONE);
if (total > 1) {
    while (contour2 != null && !contour2.isNull()) {
        if (contour2.elem_size() > 0) {
            //6 - Approximate contours with linear features (cvApproxPoly)
            CvSeq points = cvApproxPoly(contour2, Loader.sizeof(CvContour.class), storage2, CV_POLY_APPROX_DP, cvContourPerimeter(contour2)*0.005, 0);
            cvDrawContours(gray, points, CvScalar.BLUE, CvScalar.BLUE, -1, 1, CV_AA);
        }
        contour2 = contour2.h_next();
    }
}
So I want to find the corners, but I don't know how to use corner functions like cvCornerHarris and others.
First, check out /samples/c/squares.c in your OpenCV distribution. This example provides a square detector, and it should be a pretty good start on how to detect corner-like features. Then, take a look at OpenCV's feature-oriented functions like cvCornerHarris() and cvGoodFeaturesToTrack().
The above methods can return many corner-like features - most will not be the "true corners" you are looking for. In my application, I had to detect squares that had been rotated or skewed (due to perspective). My detection pipeline consisted of:
Convert from RGB to grayscale (cvCvtColor)
Smooth (cvSmooth)
Threshold (cvThreshold)
Detect edges (cvCanny)
Find contours (cvFindContours)
Approximate contours with linear features (cvApproxPoly)
Find "rectangles" which were structures that: had polygonalized contours possessing 4 points, were of sufficient area, had adjacent edges were ~90 degrees, had distance between "opposite" vertices was of sufficient size, etc.
Step 7 was necessary because a slightly noisy image can yield many structures that appear rectangular after polygonalization. In my application, I also had to deal with square-like structures that appeared within, or overlapped the desired square. I found the contour's area property and center of gravity to be helpful in discerning the proper rectangle.
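A compact sketch of that pipeline with the modern cv2 API (the blur size, thresholds, and area cutoff below are illustrative, and the ~90-degree and vertex-distance checks from step 7 are left out for brevity):
import cv2
import numpy as np

img = cv2.imread("page.jpg")                      # placeholder file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)      # 1. grayscale
blur = cv2.GaussianBlur(gray, (5, 5), 0)          # 2. smooth
_, thr = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # 3. threshold
edges = cv2.Canny(thr, 50, 150)                   # 4. edges

# 5. contours (handle both the 2- and 3-value return signatures)
cnts = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]

rectangles = []
for c in cnts:
    # 6. polygonal approximation
    approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
    # 7. keep 4-point, convex, sufficiently large polygons as rectangle candidates
    if len(approx) == 4 and cv2.isContourConvex(approx) and cv2.contourArea(approx) > 1000:
        rectangles.append(approx.reshape(4, 2))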
At first glance, to a human eye there are 4 corners. But in computer vision, a corner is considered to be a point that has a large gradient change in intensity across its neighborhood. The neighborhood can be a 4-pixel neighborhood or an 8-pixel neighborhood.
The equation provided to find the gradient of intensity considers a 4-pixel neighborhood (SEE DOCUMENTATION).
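For reference, the response the Harris detector assigns to each pixel (this is the standard formula from the documentation, with Ix and Iy the image derivatives, w the window weights, and k the free parameter passed to cornerHarris) is:
M = \sum_{x,y} w(x,y) \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix},
\qquad R = \det(M) - k\,(\operatorname{trace} M)^2
Pixels where R is large are treated as corners.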
Here is my approach for the image in question. I have the code in python as well:
import os
import cv2
import numpy as np

path = r'C:\Users\selwyn77\Desktop\Stack\corner'
filename = 'env.jpg'
img = cv2.imread(os.path.join(path, filename))
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)    #--- convert to grayscale
It is a good choice to always blur the image to remove small gradient changes and preserve the more intense ones. I opted for the bilateral filter which, unlike the Gaussian filter, doesn't blur all the pixels in the neighborhood; it rather blurs pixels whose intensity is similar to that of the central pixel. In short, it preserves edges/corners of high gradient change but blurs regions that have minimal gradient changes.
bi = cv2.bilateralFilter(gray, 5, 75, 75)
cv2.imshow('bi',bi)
To a human it is not so much of a difference compared to the original image. But it does matter. Now finding possible corners:
dst = cv2.cornerHarris(bi, 2, 3, 0.04)
dst is an array (with the same 2D shape as the image) of corner scores obtained from the final equation mentioned HERE.
Now a threshold has to be applied to select those corners beyond a certain value. I will use the one in the documentation:
#--- create a black image to see where those corners occur ---
mask = np.zeros_like(gray)
#--- applying a threshold and turning those pixels above the threshold to white ---
mask[dst>0.01*dst.max()] = 255
cv2.imshow('mask', mask)
The white pixels are regions of possible corners. You can find many corners neighboring each other.
To draw the selected corners on the image:
img[dst > 0.01 * dst.max()] = [0, 0, 255] #--- [0, 0, 255] --> Red ---
cv2.imshow('dst', img)
(Red colored pixels are the corners, not so visible)
In order to get an array of all pixels with corners:
coor = np.argwhere(mask)
UPDATE
Variable coor is an array of arrays. Converting it to list of lists
coor_list = [l.tolist() for l in list(coor)]
Converting the above to list of tuples
coor_tuples = [tuple(l) for l in coor_list]
I have an easy and rather naive way to find the 4 corners. I simply calculated the distance of each corner to every other corner. I preserved those corners whose distance exceeded a certain threshold.
Here is the code:
import math

thresh = 50

def distance(pt1, pt2):
    (x1, y1), (x2, y2) = pt1, pt2
    dist = math.sqrt( (x2 - x1)**2 + (y2 - y1)**2 )
    return dist

coor_tuples_copy = coor_tuples

i = 1
for pt1 in coor_tuples:
    print(' I :', i)
    for pt2 in coor_tuples[i::1]:
        print(pt1, pt2)
        print('Distance :', distance(pt1, pt2))
        if(distance(pt1, pt2) < thresh):
            coor_tuples_copy.remove(pt2)
    i += 1
Prior to running the snippet above coor_tuples had all corner points:
[(4, 42),
(4, 43),
(5, 43),
(5, 44),
(6, 44),
(7, 219),
(133, 36),
(133, 37),
(133, 38),
(134, 37),
(135, 224),
(135, 225),
(136, 225),
(136, 226),
(137, 225),
(137, 226),
(137, 227),
(138, 226)]
After running the snippet I was left with 4 corners:
[(4, 42), (7, 219), (133, 36), (135, 224)]
UPDATE 2
Now all you have to do is just mark these 4 points on a copy of the original image.
img2 = img.copy()
for pt in coor_tuples:
    cv2.circle(img2, tuple(reversed(pt)), 3, (0, 0, 255), -1)
cv2.imshow('Image with 4 corners', img2)
Here's an implementation using cv2.goodFeaturesToTrack() to detect corners. The approach is
Convert image to grayscale
Perform canny edge detection
Detect corners
Optionally perform 4-point perspective transform to get top-down view of image
Using this starting image,
After converting to grayscale, we perform canny edge detection
Now that we have a decent binary image, we can use cv2.goodFeaturesToTrack()
corners = cv2.goodFeaturesToTrack(canny, 4, 0.5, 50)
For the parameters, we give it the canny image, set the maximum number of corners to 4 (maxCorners), use a minimum accepted quality of 0.5 (qualityLevel), and set the minimum possible Euclidean distance between the returned corners to 50 (minDistance). Here's the result
Now that we have identified the corners, we can perform a 4-point perspective transform to obtain a top-down view of the object. We first order the points clockwise then draw the result onto a mask.
Note: We could have just found contours on the Canny image instead of doing this step to create the mask, but pretend we only had the 4 corner points to work with
Next we find contours on this mask and filter using cv2.arcLength() and cv2.approxPolyDP(). The idea is that if the contour has 4 points, then it must be our object. Once we have this contour, we perform a perspective transform
Finally we rotate the image depending on the desired orientation. Here's the result
Code for only detecting corners
import cv2

image = cv2.imread('1.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
canny = cv2.Canny(gray, 120, 255, 1)

corners = cv2.goodFeaturesToTrack(canny, 4, 0.5, 50)

for corner in corners:
    x, y = corner.ravel()
    cv2.circle(image, (int(x), int(y)), 5, (36, 255, 12), -1)

cv2.imshow('canny', canny)
cv2.imshow('image', image)
cv2.waitKey()
Code for detecting corners and performing perspective transform
import cv2
import numpy as np

def rotate_image(image, angle):
    # Grab the dimensions of the image and then determine the center
    (h, w) = image.shape[:2]
    (cX, cY) = (w / 2, h / 2)

    # grab the rotation matrix (applying the negative of the
    # angle to rotate clockwise), then grab the sine and cosine
    # (i.e., the rotation components of the matrix)
    M = cv2.getRotationMatrix2D((cX, cY), -angle, 1.0)
    cos = np.abs(M[0, 0])
    sin = np.abs(M[0, 1])

    # Compute the new bounding dimensions of the image
    nW = int((h * sin) + (w * cos))
    nH = int((h * cos) + (w * sin))

    # Adjust the rotation matrix to take into account translation
    M[0, 2] += (nW / 2) - cX
    M[1, 2] += (nH / 2) - cY

    # Perform the actual rotation and return the image
    return cv2.warpAffine(image, M, (nW, nH))

def order_points_clockwise(pts):
    # sort the points based on their x-coordinates
    xSorted = pts[np.argsort(pts[:, 0]), :]

    # grab the left-most and right-most points from the sorted
    # x-coordinate points
    leftMost = xSorted[:2, :]
    rightMost = xSorted[2:, :]

    # now, sort the left-most coordinates according to their
    # y-coordinates so we can grab the top-left and bottom-left
    # points, respectively
    leftMost = leftMost[np.argsort(leftMost[:, 1]), :]
    (tl, bl) = leftMost

    # now, sort the right-most coordinates according to their
    # y-coordinates so we can grab the top-right and bottom-right
    # points, respectively
    rightMost = rightMost[np.argsort(rightMost[:, 1]), :]
    (tr, br) = rightMost

    # return the coordinates in top-left, top-right,
    # bottom-right, and bottom-left order
    return np.array([tl, tr, br, bl], dtype="int32")

def perspective_transform(image, corners):
    def order_corner_points(corners):
        # Separate corners into individual points
        # Index 0 - top-right
        #       1 - top-left
        #       2 - bottom-left
        #       3 - bottom-right
        corners = [(corner[0][0], corner[0][1]) for corner in corners]
        top_r, top_l, bottom_l, bottom_r = corners[0], corners[1], corners[2], corners[3]
        return (top_l, top_r, bottom_r, bottom_l)

    # Order points in clockwise order
    ordered_corners = order_corner_points(corners)
    top_l, top_r, bottom_r, bottom_l = ordered_corners

    # Determine width of new image which is the max distance between
    # (bottom right and bottom left) or (top right and top left) x-coordinates
    width_A = np.sqrt(((bottom_r[0] - bottom_l[0]) ** 2) + ((bottom_r[1] - bottom_l[1]) ** 2))
    width_B = np.sqrt(((top_r[0] - top_l[0]) ** 2) + ((top_r[1] - top_l[1]) ** 2))
    width = max(int(width_A), int(width_B))

    # Determine height of new image which is the max distance between
    # (top right and bottom right) or (top left and bottom left) y-coordinates
    height_A = np.sqrt(((top_r[0] - bottom_r[0]) ** 2) + ((top_r[1] - bottom_r[1]) ** 2))
    height_B = np.sqrt(((top_l[0] - bottom_l[0]) ** 2) + ((top_l[1] - bottom_l[1]) ** 2))
    height = max(int(height_A), int(height_B))

    # Construct new points to obtain top-down view of image in
    # top_r, top_l, bottom_l, bottom_r order
    dimensions = np.array([[0, 0], [width - 1, 0], [width - 1, height - 1],
                           [0, height - 1]], dtype="float32")

    # Convert to Numpy format
    ordered_corners = np.array(ordered_corners, dtype="float32")

    # Find perspective transform matrix
    matrix = cv2.getPerspectiveTransform(ordered_corners, dimensions)

    # Return the transformed image
    return cv2.warpPerspective(image, matrix, (width, height))

image = cv2.imread('1.png')
original = image.copy()
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
canny = cv2.Canny(gray, 120, 255, 1)

corners = cv2.goodFeaturesToTrack(canny, 4, 0.5, 50)
c_list = []
for corner in corners:
    x, y = corner.ravel()
    c_list.append([int(x), int(y)])
    cv2.circle(image, (int(x), int(y)), 5, (36, 255, 12), -1)

corner_points = np.array([c_list[0], c_list[1], c_list[2], c_list[3]])
ordered_corner_points = order_points_clockwise(corner_points)
mask = np.zeros(image.shape, dtype=np.uint8)
cv2.fillPoly(mask, [ordered_corner_points], (255,255,255))
mask = cv2.cvtColor(mask, cv2.COLOR_BGR2GRAY)

cnts = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]

for c in cnts:
    peri = cv2.arcLength(c, True)
    approx = cv2.approxPolyDP(c, 0.015 * peri, True)
    if len(approx) == 4:
        transformed = perspective_transform(original, approx)
        result = rotate_image(transformed, -90)

cv2.imshow('canny', canny)
cv2.imshow('image', image)
cv2.imshow('mask', mask)
cv2.imshow('transformed', transformed)
cv2.imshow('result', result)
cv2.waitKey()
Find contours with the RETR_EXTERNAL option (gray -> Gaussian filter -> Canny edge -> find contours).
Find the largest contour -> this will be the edge of the rectangle.
Find the corners with a little calculation.
Mat m; // image file
vector<vector<Point>> contours_;
vector<Vec4i> hierachy_;
findContours(m, contours_, hierachy_, RETR_EXTERNAL, CHAIN_APPROX_NONE);

// the largest contour is the outline of the rectangle
auto it = max_element(contours_.begin(), contours_.end(),
                      [](const vector<Point> &a, const vector<Point> &b) {
                          return a.size() < b.size(); });

// extremes of x+y and x-y over the contour points give the corners
Point2f xy[4] = {{9000, 9000}, {0, 1000}, {1000, 0}, {0, 0}};
for (auto &[x, y] : *it) {
    if (x + y < xy[0].x + xy[0].y) xy[0] = Point2f(x, y);
    if (x - y > xy[1].x - xy[1].y) xy[1] = Point2f(x, y);
    if (y - x > xy[2].y - xy[2].x) xy[2] = Point2f(x, y);
    if (x + y > xy[3].x + xy[3].y) xy[3] = Point2f(x, y);
}
The array xy will then hold the four corners.
I was able to extract four corners this way.
Apply Hough lines to the Canny image - you will get a list of points.
Apply a convex hull to this set of points.
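To sketch that suggestion (using HoughLinesP for concreteness; the thresholds and file name below are arbitrary): the segment endpoints form a point set, and their convex hull, optionally simplified with approxPolyDP, outlines the rectangle.
import cv2
import numpy as np

img = cv2.imread("1.png")                         # placeholder file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
canny = cv2.Canny(gray, 120, 255)

# Hough gives line segments; collect their endpoints into one point set
lines = cv2.HoughLinesP(canny, 1, np.pi / 180, threshold=50,
                        minLineLength=50, maxLineGap=10)
pts = lines.reshape(-1, 2)                        # every row is an (x, y) endpoint

# the convex hull of those endpoints outlines the sheet;
# simplifying the hull leaves roughly the 4 corner points
hull = cv2.convexHull(pts)
corners = cv2.approxPolyDP(hull, 0.02 * cv2.arcLength(hull, True), True)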
