Finding corners of a grid and doing homography mapping - opencv

Hi guys, I want to find the corners of this calibration card, to enable scaling and geometric calibration. The image above is the grid I am referring to.
Shown is the full image, and I want the corners detected for the black-and-white grid.
However, when I try to run
gray = cv2.cvtColor(image_cal, cv2.COLOR_BGR2GRAY) #image_cal is the image to be calibrated
cv2_imshow(gray)
retval, corners = cv2.findChessboardCorners(gray, (3, 4))
retval comes back False, meaning no chessboard was detected.
I have tried different pictures but it seems they all cannot be detected.
Then I turned to Harris corner detection:
gray = np.float32(gray)
# bi = cv2.bilateralFilter(gray, 5, 75, 75)
# blurred = cv2.filter2D(gray,-1,kernel)
dst = cv2.cornerHarris(gray,2,3,0.04)
dst = cv2.dilate(dst, None)
image_cal[dst>0.01*dst.max()]=[0,0,255]
cv2_imshow(image_cal)
This gives me many corners, but I cannot accurately narrow them down to just the black-and-white grid corners.
Also, there is no guarantee the next image fed in will have the black-and-white grid in the same position, so I cannot use location boundaries to limit the search.
Eventually I want the coordinates of the corners and their corresponding mapped coordinates (the target coordinates should be properly spaced according to the grid, e.g. adjacent vertical or horizontal corners 1 cm apart, without distortion), to feed into OpenCV's findHomography function.
Appreciate any help!

Related

Rectangle detection in noisy contours

I'm trying to build an algorithm that calculates the dimensions of slabs (in pixel units as of now). I tried masking, but there is no one HSV color range that will work for all the test cases, as the slabs are of varying colors. I tried Otsu thresholding as well but it didn't work quite well...
Now I'm trying my hand with canny edge detection. The original image, and the image after canny-edge look like this:
I used dilation to make the central region a uniform white region, and then used contour detection. I identified the contour having the maximum area as the contour of interest. The resulting contours are a bit noisy, because the canny edge detection also included some background stuff that was irrelevant:
I used cv2.boundingRect() to estimate the height and width of the rectangle, but it keeps returning the height and width of the entire image. I presume this is because it works by calculating (max(x)-min(x),max(y)-min(y)) for each (x,y) in the contour, and in my case the resulting contour has some pixels touching the edges of the image, and so this calculation simply results in (image width, image height).
I am trying to get better images to work with, but assuming all images are like this only, i.e. have noisy contours, what can be an alternate approach to detect the dimensions of the white rectangular region obtained after dilating?
To get the right points of the rectangle use this:
p = cv2.arcLength(cnt, True) # cnt is the rectangle's contour
appr = cv2.approxPolyDP(cnt, 0.01 * p, True) # appr contains the 4 points
# draw the rect
cv2.drawContours(img, [appr], 0, (0, 255, 0), 2)
The appr variable contains the corner points of the rectangle. You still need to do some more cleaning to get better results, but cv2.boundingRect() is not a good solution for your case.

How can I extract the screen of the mobile from an image where there are other rectangular objects also?

I want to extract the screen of the mobile device from an image where the mobile is not the largest rectangle. The mobile is placed on a table, or the mobile is visible inside a laptop screen. So I am not able to use a largest-contour detection algorithm.
If you can help please let me know.
Thanks in advance.
Here I am adding a sample picture:
Sample Image
There are different approaches that you can take:
Probably the most promising method will be to train a deep-learning model with your custom data. Take a look at this article.
You can add some other filters before searching for rectangles. For example, if your phone screen is turned off, you can use an HSV color filter for black objects. I would do something like this:
blur = cv2.blur(img,(5,5))
hsv = cv2.cvtColor(blur, cv2.COLOR_BGR2HSV)
# Play with these values. They are the HSV lower and upper bounds:
lower_black = np.array([0, 5, 50], np.uint8)
upper_black = np.array([179, 50, 255], np.uint8)
mask = cv2.inRange(hsv, lower_black, upper_black)
# mask = cv2.Canny(mask, 60, 120) - optional
img_res = cv2.bitwise_and(img, img, mask=mask)
(np refers to numpy).
Now try to perform contour detection on img_res. Notice that HSV lower and upper bounds values should be fine-tuned to give you the best results.
If the contour detection doesn't work well on the filtered image, try to apply Canny edge detection on mask, as commented in the code.

openCv Find coordinates of edges/contours

Let's say I have the following image where there is a folder image with a white label on it.
What I want is to detect the coordinates of end points of the folder and the white paper on it (both rectangles).
Using the coordinates, I want to know the exact place of the paper on the folder.
GIVEN :
The inner white paper rectangle is always going to be of the fixed size, so may be we can use this knowledge somewhere?
I am new to opencv and trying to find some guidance around how should I approach this problem?
Problem Statement : We cannot rely on color based solution since this is just an example and color of both the folder as well as the rectangular paper can change.
There can be other noisy papers too but one thing is given, The overall folder and the big rectangular paper would always be the biggest two rectangles at any given time.
I have tried opencv canny for edge detection and it looks like this image.
Now how can I find the coordinates of outer rectangle and inner rectangle.
For this image, there are three dominant colors: (1) the yellow background, (2) the blue folder, (3) the white paper. Using the color information may help; I analyzed it in RGB and HSV like this:
As you can see (the second row, the third cell), the regions can be easily separated in H (HSV) if you find the folder mask first.
My steps:
(1) find the folder region mask in HSV using inRange(hsv, (80, 10, 20), (150, 255, 255))
(2) find contours on the mask and filter them by width and height
Here is the result:
Related:
Choosing the correct upper and lower HSV boundaries for color detection with `cv::inRange` (OpenCV)
How to define a threshold value to detect only green colour objects in an image :Opencv
You can opt for [Adaptive Threshold](https://docs.opencv.org/3.4/d7/d4d/tutorial_py_thresholding.html):
Obtain the hue channel of the image.
Perform adaptive threshold with a certain block size. I used a block size of 15 at half the size of the image.
This is invariant to color as you expected. Now you can go ahead and extract what you need!!
This solution helps to identify the white paper region of the image.
This is the full code for the solution:
import cv2
import numpy as np

image = cv2.imread('stack2.jpg', -1)
paper = cv2.resize(image, (500, 500))
ret, thresh_gray = cv2.threshold(cv2.cvtColor(paper, cv2.COLOR_BGR2GRAY),
                                 200, 255, cv2.THRESH_BINARY)
image, contours, hier = cv2.findContours(thresh_gray, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
for c in contours:
    area = cv2.contourArea(c)
    rect = cv2.minAreaRect(c)
    box = cv2.boxPoints(rect)
    # convert all coordinate floating point values to int
    box = np.int0(box)
    # draw a green rectangle
    if area > 500:
        cv2.drawContours(paper, [box], 0, (0, 255, 0), 1)
        print([box])
cv2.imshow('paper', paper)
cv2.imwrite('paper.jpg', paper)
cv2.waitKey(0)
First, using a manual threshold (200), you can detect the paper in the image.
ret, thresh_gray = cv2.threshold(cv2.cvtColor(paper, cv2.COLOR_BGR2GRAY), 200, 255, cv2.THRESH_BINARY)
After that you should find contours and get the minAreaRect(). Then you should get the coordinates of that rectangle (box) and draw it.
rect = cv2.minAreaRect(c)
box = cv2.boxPoints(rect)
box = np.int0(box)
cv2.drawContours(paper, [box], 0, (0, 255, 0),1)
To avoid small white regions of the image, you can compute area = cv2.contourArea(c) and only call drawContours() when area > 500.
final output:
Console output gives coordinates for the white paper.
console output:
[array([[438, 267],
[199, 256],
[209, 60],
[447, 71]], dtype=int64)]

detecting outer circle using opencv HoughCircles

I am trying to detect two concentric circles using opencv in Android. The big outer circle is red, the smaller inner circle is blue. The idea is to detect the big circle while the distance is long and detect the inner circle as the distance becomes short.
Sample picture
I am using simple code:
Mat matRed = new Mat();
Core.inRange(matHsv, getScalar(hue - HUE_D, saturation - SAT_D, brightness - BRIGHT_D), getScalar(hue + HUE_D, saturation + SAT_D, brightness + BRIGHT_D), matRed);
//here we have black-white image
Imgproc.GaussianBlur(matRed, matRed, new Size(0, 0), 6, 6);
Mat matCircles = new Mat();
Imgproc.HoughCircles(matRed, matCircles, CV_HOUGH_GRADIENT, 1, matRed.rows()/8, 100, param2, 0, 0);
After calling inRange we have white ring on black background. HoughCircles function detects only inner black circle.
How can I make it to detect outer white circle instead?
Without seeing a sample image (or being quite sure what you mean by 'detect big circle while distance is long and detect inner circle as the distance becomes short'), this is somewhat of a guess, but I'd suggest using Canny edge detect to get the boundaries of your circles and then using contours to extract the edges. You can use the contour hierarchy to determine which is inside which if you need to extract one or the other.
Additionally, given the circles are different colours, you might want to look at using inRange to segment based on colour; for example, this post from PyImageSearch contains a Python application which does colour-based tracking.

Object detection by color in openCV

I have a simple colorful image taken by camera, and I need to detect some 'Red' circles inside of it very accurately. The circles have different radii and should be distinguishable. There are some black circles in the photo also.
Here is the procedure I followed:
1 - Convert from RGB to HSV.
2 - Determine the "red" upper and lower bounds:
lower_red = np.array([100, 50, 50])
upper_red = np.array([179, 255, 255])
3 - Create a mask.
4 - Apply cv2.GaussianBlur to smooth the mask and reduce noise.
5 - Detect the remaining circles by running cv2.HoughCircles on the mask with different radii. (I have a radius range.)
Problem: When I create the mask, its quality is not good enough, so circles are detected with the wrong radii.
Attachments include main photo, mask, and detected circles.
Can anybody help me set all pixels to black apart from the red pixels? In other words, create a high-quality mask.
