I am trying to detect two concentric circles using OpenCV in Android. The big outer circle is red and the smaller inner circle is blue. The idea is to detect the big circle while the distance is long and the inner circle as the distance becomes short.
Sample picture
I am using simple code:
Mat matRed = new Mat();
Core.inRange(matHsv, getScalar(hue - HUE_D, saturation - SAT_D, brightness - BRIGHT_D), getScalar(hue + HUE_D, saturation + SAT_D, brightness + BRIGHT_D), matRed);
//here we have black-white image
Imgproc.GaussianBlur(matRed, matRed, new Size(0, 0), 6, 6);
Mat matCircles = new Mat();
Imgproc.HoughCircles(matRed, matCircles, CV_HOUGH_GRADIENT, 1, matRed.rows()/8, 100, param2, 0, 0);
After calling inRange we have a white ring on a black background. The HoughCircles function detects only the inner black circle.
How can I make it detect the outer white circle instead?
Without seeing a sample image (or being quite sure what you mean by 'detect big circle while distance is long and detect inner circle as the distance becomes short'), this is somewhat of a guess, but I'd suggest using Canny edge detect to get the boundaries of your circles and then using contours to extract the edges. You can use the contour hierarchy to determine which is inside which if you need to extract one or the other.
Additionally, given the circles are different colours, you might want to look at using inRange to segment based on colour; for example, this post from PyImageSearch contains a Python application which does colour-based tracking.
Related
I'm trying to build an algorithm that calculates the dimensions of slabs (in pixel units as of now). I tried masking, but there is no single HSV color range that will work for all the test cases, as the slabs are of varying colors. I tried Otsu thresholding as well, but it didn't work very well either...
Now I'm trying my hand with canny edge detection. The original image, and the image after canny-edge look like this:
I used dilation to make the central region a uniform white region, and then used contour detection. I identified the contour having the maximum area as the contour of interest. The resulting contours are a bit noisy, because the canny edge detection also included some background stuff that was irrelevant:
I used cv2.boundingRect() to estimate the height and width of the rectangle, but it keeps returning the height and width of the entire image. I presume this is because it works by calculating (max(x)-min(x),max(y)-min(y)) for each (x,y) in the contour, and in my case the resulting contour has some pixels touching the edges of the image, and so this calculation simply results in (image width, image height).
I am trying to get better images to work with, but assuming all images are like this only, i.e. have noisy contours, what can be an alternate approach to detect the dimensions of the white rectangular region obtained after dilating?
To get the right points of the rectangle use this:
p = cv2.arcLength(cnt, True)  # cnt is the rectangle's contour
appr = cv2.approxPolyDP(cnt, 0.01 * p, True)  # appr contains the 4 corner points
# draw the rect
cv2.drawContours(img, [appr], 0, (0, 255, 0), 2)
The appr variable contains the corner points of the rectangle. You will still need to do some more cleaning to get better results, but cv2.boundingRect() is not a good solution for your case.
Hi guys, I want to find the corners of this calibration card, to enable scaling and geometric calibration. The image above is the grid I am referring to.
Shown is the full image, and I want the corners detected for the black and white grid.
However, when I try to run
gray = cv2.cvtColor(image_cal, cv2.COLOR_BGR2GRAY) #image_cal is the image to be calibrated
cv2_imshow(gray)
retval, corners = cv2.findChessboardCorners(gray, (3, 4))
The retval returns false, meaning no chessboard is detected.
I have tried different pictures but it seems they all cannot be detected.
Then I turned to Harris corner detection:
gray = np.float32(gray)
# bi = cv2.bilateralFilter(gray, 5, 75, 75)
# blurred = cv2.filter2D(gray,-1,kernel)
dst = cv2.cornerHarris(gray,2,3,0.04)
dst = cv2.dilate(dst, None)
image_cal[dst>0.01*dst.max()]=[0,0,255]
cv2_imshow(image_cal)
This gives me many corners, but I cannot narrow them down to only the black and white grid corners.
Also, there is no guarantee the next image to be fed will still have the black and white grid in the same position so I cannot use some location boundaries to limit the search.
Eventually I would want to know the coordinates of the corners and their corresponding mapped coordinates (such that the target coordinates are properly spaced in distance according to the grid e.g. adjacent vertical or horizontal corners are 1cm apart, without distortion), and feed into a findHomography function of opencv.
Appreciate any help!
I am developing an Android application for analyzing chess games based on a series of photos. To process the images, I am using OpenCV. My question is: how can I detect that there is a player's hand in a picture? I would like to filter out those photos and analyze only the ones showing just the chessboard.
So far I managed to get the Canny edges, so from an image like this
original image
I am able to get this Canny output.
But I have no idea what I can do next...
The code I used to get Canny:
Mat gray, blur, cannyed;
cvtColor(img, gray, CV_BGR2GRAY);
GaussianBlur(gray, blur, Size(7, 7), 0, 0);
Canny(blur, cannyed, 50, 100, 3);
I would highly appreciate any ideas and advice on what to do next and which OpenCV functions I can use.
You have a very nice spectrum in the chess board. A hand in it messes up the frequencies built up by the regular transitions between the black and white squares. Try moving a bigger window (say, 4.5 x 4.5 squares in size) around and see what happens to the frequencies.
Another approach if you have the sequence of pictures taken as a movie is to analyse the motions. Take the difference of consecutive frames (low pass filter them a bit first) to detect motions. Filter the motions in time (over several frames). Then threshold these motions to get a binary image. Erode the binary shapes to filter out small moving objects (noise, chess figure) be able to detect if any larger moving shape is on the board (e.g. a hand).
Here is what I tried: after Canny edge detection, morphological operations to extract the horizontal and vertical lines.
Mat horizontal = cannyed.clone();
// Specify size on horizontal axis
int horizontalsize = horizontal.cols / 60;
// Create structure element for extracting horizontal lines through morphology operations
Mat horizontalStructure = getStructuringElement(MORPH_RECT, Size(horizontalsize,1));
erode(horizontal, horizontal, horizontalStructure, Point(-1, -1),2);
dilate(horizontal, horizontal, horizontalStructure, Point(-1, -1),1);
imshow("horizontal",horizontal);
Mat vertical = cannyed.clone();
// Specify size on vertical axis
int verticalsize = vertical.cols / 60;
// Create structure element for extracting vertical lines through morphology operations
Mat verticalStructure = getStructuringElement(MORPH_RECT, Size(1,verticalsize));
erode(vertical, vertical, verticalStructure, Point(-1, -1));
dilate(vertical, vertical, verticalStructure, Point(-1, -1),2);
imshow("vertical",vertical);
The results are:
Horizontal Lines in the chess board
As you can see in the figure, the lines are regularly spaced. In the area where the hand is present, the spacing between lines is larger.
If contour detection is run in that location, the hand (or any other object) over the chess board can be detected.
This works for any object placed over the chess board.
Thank you all very much for your suggestions.
So I solved the problem mostly using Gowthaman's method. First I use his code to generate vertical and horizontal lines. Then I combine them like this:
Mat combined = vertical + horizontal;
So I get something like this when there is no hand
or like this when there is a hand.
Next I count white pixels using the code:
int GetPixelCount(Mat image, uchar color)
{
    int result = 0;
    for (int i = 0; i < image.rows; i++)
    {
        for (int j = 0; j < image.cols; j++)
        {
            if (image.at<uchar>(Point(j, i)) == color)
                result++;
        }
    }
    return result;
}
I do that for every photo in the series. The first photo is always without a hand, so I use it as a template. If the current photo has fewer than 98% of the template's white pixels, I deduce there is a hand (or something else) in it.
Most likely this is not an optimal method and has lots of weaknesses, but it is very simple and works for me just fine :)
I can detect rectangles that are separate from each other. However, I am having problems with rectangles in contact such as below:
Two rectangles in contact
I should detect 2 rectangles in the image. I am using findContours as expected and I have tried various modes: CV_RETR_TREE, CV_RETR_LIST. I always get only the single outermost contour, as shown below:
Outermost contour detected
I have tried with or without canny edge detection. What I do is below:
cv::Mat element = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3,3));
cv::erode(__mat,__mat, element);
cv::dilate(__mat,__mat, element);
// Find contours
std::vector<std::vector<cv::Point> > contours;
cv::Mat coloredMat;
cv::cvtColor(__mat, coloredMat, cv::COLOR_GRAY2BGR);
int thresh = 100;
cv::Mat canny_output;
cv::Canny( __mat, canny_output, thresh, thresh*2, 3 );
cv::findContours(canny_output, contours, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE);
How can I detect both two rectangles separately?
If you already know the dimensions of the rectangle, you can use generalizedHoughTransform
If the dimensions of the rectangles are not known, you can use distanceTransform. The local maxima will give you the center location as well as the distance from the center to the nearest edge (which will be equal to half the short side of your rect). Further processing with corner detection / watershed and you should be able to find the orientation and dimensions (though this method may fail if the two rectangles overlap each other by a lot)
Simple corner detection and brute-force search (just try out all possible rectangle combinations given the corner points and see which one best matches the image; note that a rectangle can be defined given only 3 points) might also work.
I have a simple colorful image taken by a camera, and I need to detect some 'red' circles inside of it very accurately. The circles have different radii and should be distinguishable. There are also some black circles in the photo.
Here is the procedure I followed:
1 - Convert from RGB to HSV
2 - Determining "red" upper and lower band:
lower_red = np.array([100, 50, 50])
upper_red = np.array([179, 255, 255])
3 - Create a mask.
4 - Applying cv2.GaussianBlur to smooth the mask and reduce noise.
5 - Detecting the remaining circles by running cv2.HoughCircles on the mask with different radii (I have a radius range).
Problem: when I create the mask, its quality is not good enough, so the circles are detected with the wrong radii.
Attachments include main photo, mask, and detected circles.
Can anybody help me set all pixels to black apart from the red pixels? In other words, how do I create a high-quality mask?