Detecting a hand above a chessboard using OpenCV

I am developing an Android application for analyzing chess games based on a series of photos. To process the images I am using OpenCV. My question is: how can I detect that a player's hand is in a picture? I would like to filter out those photos and analyze only the ones showing just the chessboard.
So far I have managed to compute Canny edges: from the original image (omitted) I am able to get the Canny output (image omitted). But I have no idea what I can do next...
The code I used to get Canny:
Mat gray, blur, cannyed;
cvtColor(img, gray, CV_BGR2GRAY);           // convert to grayscale
GaussianBlur(gray, blur, Size(7, 7), 0, 0); // smooth before edge detection
Canny(blur, cannyed, 50, 100, 3);           // low/high hysteresis thresholds, aperture 3
I would highly appreciate any ideas and advice on what to do next and which OpenCV functions I can use.

You have a very nice spectrum in the chessboard. A hand on it disturbs the frequencies built up by the regular transitions between the black and white squares. Try moving a larger window (say, the size of 4.5 x 4.5 squares) around the image and see what happens to the frequencies.
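A minimal sketch of inspecting such a window's spectrum with cv::dft (the window extraction and the decision rule are left out; patch is assumed to be a grayscale ROI of roughly 4.5 x 4.5 squares):
// Sketch: DFT magnitude of a board-sized grayscale patch.
// A clean patch of alternating squares concentrates spectral energy at the
// square-transition frequency; a hand spreads and flattens the spectrum.
Mat patch;  // grayscale window over the board (assumed given)
Mat patchF;
patch.convertTo(patchF, CV_32F);
Mat planes[] = { patchF, Mat::zeros(patchF.size(), CV_32F) };
Mat complexImg;
merge(planes, 2, complexImg);
dft(complexImg, complexImg);
split(complexImg, planes);
Mat mag;
magnitude(planes[0], planes[1], mag); // compare this magnitude between windows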
Another approach, if you have the sequence of pictures taken as a movie, is to analyse the motion. Take the difference of consecutive frames (low-pass filter them a bit first) to detect motion. Filter the motion in time (over several frames), then threshold it to get a binary image. Erode the binary shapes to filter out small moving objects (noise, chess pieces) and to detect whether any larger moving shape (e.g. a hand) is over the board.
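A minimal sketch of the frame-difference part, assuming two consecutive grayscale frames (the threshold values are placeholders):
// Sketch: binary motion mask from two consecutive grayscale frames
Mat d, motionMask;
GaussianBlur(prevFrame, prevFrame, Size(5, 5), 0);       // low-pass filter first
GaussianBlur(currFrame, currFrame, Size(5, 5), 0);
absdiff(currFrame, prevFrame, d);                        // pixel-wise difference
threshold(d, motionMask, 25, 255, THRESH_BINARY);        // binary motion image
erode(motionMask, motionMask, Mat(), Point(-1, -1), 3);  // drop small movers (noise, pieces)
bool largeObjectMoving = countNonZero(motionMask) > 500; // placeholder area threshold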

Here, after Canny edge detection, I tried extracting the horizontal and vertical lines with morphological operations.
Mat horizontal = cannyed.clone();
// Specify size on horizontal axis
int horizontalsize = horizontal.cols / 60;
// Create structure element for extracting horizontal lines through morphology operations
Mat horizontalStructure = getStructuringElement(MORPH_RECT, Size(horizontalsize, 1));
erode(horizontal, horizontal, horizontalStructure, Point(-1, -1), 2);
dilate(horizontal, horizontal, horizontalStructure, Point(-1, -1), 1);
imshow("horizontal", horizontal);

Mat vertical = cannyed.clone();
// Specify size on vertical axis
int verticalsize = vertical.rows / 60;
// Create structure element for extracting vertical lines through morphology operations
Mat verticalStructure = getStructuringElement(MORPH_RECT, Size(1, verticalsize));
erode(vertical, vertical, verticalStructure, Point(-1, -1));
dilate(vertical, vertical, verticalStructure, Point(-1, -1), 2);
imshow("vertical", vertical);
The results look like this (image omitted: horizontal lines in the chessboard).
From the figure you can see that the lines are regularly spaced; in the area where the hand is present, the spacing between the lines is larger. If contour detection is done at that location, the hand (or any object) over the chessboard can be detected. This approach works for any object placed over the board.

Thank you all very much for your suggestions.
So I solved the problem mostly using Gowthaman's method. First I use his code to generate vertical and horizontal lines. Then I combine them like this:
Mat combined = vertical + horizontal;
So I get something like this when there is no hand (image omitted), or like this when there is a hand (image omitted).
Next I count white pixels using the code:
int GetPixelCount(Mat image, uchar color)
{
    int result = 0;
    for (int i = 0; i < image.rows; i++)
    {
        for (int j = 0; j < image.cols; j++)
        {
            if (image.at<uchar>(Point(j, i)) == color)
                result++;
        }
    }
    return result;
}
I do that for every photo in the series. The first photo is always without a hand, so I use it as a template. If the current photo has fewer than 98% of the template's white pixels, I deduce there is a hand (or something else) in it.
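For reference, the counting and the comparison could also be done with cv::countNonZero instead of the loop; a sketch (the image names are illustrative):
// Sketch: countNonZero(image == 255) counts the white pixels directly
int templateWhite = countNonZero(templateImg == 255); // first photo, no hand
int currentWhite  = countNonZero(currentImg == 255);
bool handPresent  = currentWhite < 0.98 * templateWhite; // the 98% heuristic above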
Most likely this is not an optimal method and has lots of weaknesses, but it is very simple and works for me just fine :)

Related

OpenCV: How to ignore background pixels from colour clusters

I am trying to find the dominant colors in dresses.
1) First step is to remove the background. I did this using the solution mentioned here. It works perfectly and makes the background black.
2) Now with the result of the first step I am trying to find dominant colors using the solution mentioned here. But I am getting black (the background) as one of the dominant colours.
How can I ignore the background pixels in step 2?
Depending on the case, you could find the bounding rectangle of the region that you're interested in. If the number of color pixels is much higher than the number of black pixels inside that bounding rectangle, black shouldn't be detected as the dominant color.
Call findContours(binaryMask) on the binary image of your mask. Make sure you found just the contour you were looking for; if not, filter them to get the best one for the application. Then call boundingRect(cnt) on the contour, crop the image using that rectangle, and run your function. If that's insufficient, try minAreaRect(cnt), but the cropping is a bit trickier: see this answer.
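A sketch of that pipeline, assuming binaryMask is the single-channel mask and src the colour image (names are illustrative):
// Sketch: crop the source to the bounding box of the largest mask contour
vector<vector<Point>> contours;
findContours(binaryMask.clone(), contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
if (!contours.empty()) {
    int best = 0;
    for (int i = 1; i < (int)contours.size(); i++)  // keep the largest contour
        if (contourArea(contours[i]) > contourArea(contours[best])) best = i;
    Rect box = boundingRect(contours[best]);
    Mat cropped = src(box);  // run the dominant-colour step on this crop
}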
If that doesn't work, I'd probably go for the "dumb" solution: change the color of the mask to a color that will almost certainly not appear on a dress and then, knowing its exact RGB values, filter it out from the results.
Next time please remember to provide an image of your case, so the answers may be more accurate.
One easy way to do it would be to simply discard black as a dominant colour. Grab one more cluster than you really want, ignore black. If black may genuinely be the dominant colour, repeat the operation with a different background colour and discard that; compare results. This would be slow, but simple to do.
Alternatively, you could only sample from pixels in your foreground. From your foreground extraction method, you should have a binary black and white foreground/background mask. If you only sample from white areas of the mask, then only these colours should be taken into consideration.
I have a rough C++ implementation of this, but it's almost certainly not the most efficient possible. Maybe it's a start you could work from?
Mat src;  // Your source image
Mat mask; // Your black & white foreground/background image

// Set up samples with only foreground pixels.
// Allocating one row per foreground pixel (instead of one per image pixel)
// keeps zero-filled background rows out of the clustering entirely.
int sampleCount = countNonZero(mask);
Mat samples(sampleCount, 3, CV_32F);
int idx = 0;
for (int y = 0; y < src.rows; y++) {
    for (int x = 0; x < src.cols; x++) {
        if (mask.at<uchar>(y, x) == 255) {
            for (int z = 0; z < 3; z++) {
                samples.at<float>(idx, z) = src.at<Vec3b>(y, x)[z];
            }
            idx++;
        }
    }
}

int clusterNo = 3;
int attempts = 5;
Mat labels;
Mat centers;
kmeans(samples, clusterNo, labels,
       TermCriteria(TermCriteria::COUNT + TermCriteria::EPS, 10, 1.0),
       attempts, KMEANS_RANDOM_CENTERS, centers);
Your dominant colours will be stored in the rows of centers, where you can do what you want with them.
Remove the background. That gives you a binary image with foreground and background pixels. Now do a morphological closing to close up little holes in the foreground and generally clean up the contours. Finally, substitute the pixels back in again to get a colour foreground image.
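A minimal sketch of that sequence, assuming mask is the binary foreground mask (255 = foreground):
// Sketch: clean the mask with a morphological closing, then cut out the foreground
Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(7, 7));
morphologyEx(mask, mask, MORPH_CLOSE, kernel);  // close small holes in the foreground
Mat foreground(src.size(), src.type(), Scalar(0, 0, 0));
src.copyTo(foreground, mask);                   // colour pixels only where the mask is set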

Remove Boxes/rectangles from image

I have the following image (image omitted).
I would like to remove the orange boxes/rectangle around numbers and keep the original image clean without any orange grid/rectangle.
Below is my current code, but it does not remove them.
Mat src = Imgcodecs.imread("enveloppe.jpg", Imgcodecs.CV_LOAD_IMAGE_COLOR);
Mat hsvMat = new Mat();
Imgproc.cvtColor(src, hsvMat, Imgproc.COLOR_BGR2HSV);
// orange hue range (OpenCV hue runs from 0 to 179)
Scalar lowerThreshold = new Scalar(0, 50, 50);
Scalar upperThreshold = new Scalar(25, 255, 255);
Mat mask = new Mat();
Core.inRange(hsvMat, lowerThreshold, upperThreshold, mask);
//src.setTo(new Scalar(255, 255, 255), mask);
What should I do next? How can I remove the orange boxes/rectangles from the original image?
Update:
For information, the mask contains exactly the boxes/rectangles that I want to remove. I don't know how to use this mask to remove the boxes/rectangles from the source (src) image as if they had never been present.
Here is how I solved the problem, in C++ with OpenCV.
Part 1: Find box candidates
First I wanted to isolate the signal that is specific to the red channel. I split the image into its three channels. I then subtracted the blue channel from the red channel, and the green channel from the red channel. After that I subtracted the two subtraction results from one another. The final subtraction result is shown in the image linked below.
using namespace cv;
using namespace std;
Mat src_rgb = imread("image.jpg");
std::vector<Mat> channels;
split(src_rgb, channels);
Mat diff_rb, diff_rg;
subtract(channels[2], channels[0], diff_rb);
subtract(channels[2], channels[1], diff_rg);
Mat diff;
subtract(diff_rb, diff_rg, diff);
My next goal was to divide the parts of the obtained image into separate "groups". To do that, I smoothed the image a little with a Gaussian filter. Then I applied a threshold to obtain a binary image; finally I looked for external contours within that image.
GaussianBlur(diff, diff, cv::Size(11, 11), 2.0, 2.0);
threshold(diff, diff, 5, 255, THRESH_BINARY);
vector<vector<Point>> contours;
findContours(diff, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
Click to see subtraction result, Gaussian blurred image, thresholded image and detected contours.
Part 2: Inspect box candidates
After that, I had to estimate whether the interior of each contour contained a number or something else. I made the assumption that numbers will always be printed with black ink and that they will have sharp edges. Therefore I took the red channel image (where the orange boxes are nearly invisible), applied just a little Gaussian smoothing, and convolved it with a Laplacian operator.
Mat blurred_ch2;
GaussianBlur(channels[2], blurred_ch2, cv::Size(7, 7), 1, 1);
Mat laplace_result;
Laplacian(blurred_ch2, laplace_result, -1, 1);
I then took the resulting image and applied the following procedure to every contour separately. I computed the standard deviation of the pixel values within the contour interior. The standard deviation was high inside the contours that surrounded numbers, and low inside the two contours that surrounded the dog's head and the letters on top of the stamp.
That is why I could apply a standard-deviation threshold: the standard deviation was roughly twice as large for contours containing numbers, so this was an easy way to select only the contours that contained numbers. Then I drew the contour-interior mask and used erosion and subtraction to obtain the "box edge mask".
The final step was fairly easy. I computed an estimate of average pixel value nearby the box on every channel of the image. Then I changed all pixel values under the "box edge mask" to those values on every channel. After I repeated that procedure for every box contour, I merged all three channels into one.
Mat mask(src_rgb.size(), CV_8UC1);
for (int i = 0; i < contours.size(); ++i)
{
    // Fill the interior of the current contour into the mask
    mask.setTo(0);
    drawContours(mask, contours, i, cv::Scalar(200), -1);

    // Keep only contours whose interior shows strong Laplacian variation,
    // i.e. contains sharp-edged printed numbers
    Scalar mean, stdev;
    meanStdDev(laplace_result, mean, stdev, mask);
    if (stdev.val[0] < 10.0) continue;

    // Shrink the interior and subtract to get the "box edge mask" (a ring)
    Mat eroded;
    erode(mask, eroded, cv::Mat(), cv::Point(-1, -1), 6);
    subtract(mask, eroded, mask);

    for (int c = 0; c < src_rgb.channels(); ++c)
    {
        // Estimate the average pixel value along the border of the ring ...
        erode(mask, eroded, cv::Mat());
        subtract(mask, eroded, eroded);
        Scalar mean, stdev;
        meanStdDev(channels[c], mean, stdev, eroded);
        // ... and paint the whole ring with that value
        channels[c].setTo(mean, mask);
    }
}
Mat final_result;
merge(channels, final_result);
imshow("Final Result", final_result);
Click to see red channel of the image, the result of convolution with Laplacian operator, drawn mask of the box edges and the final result.
Please note
This code is far from being optimal, especially the last loop does quite a lot of unnecessary work. But I think that in this case readability is more important (and the author of the question did not request an optimized solution anyway).
Looking towards a more general solution
After I posted the initial reply, the author of the question noted that the digits can be of any color and that their edges are not necessarily sharp. That means that the above procedure can fail for various reasons. I altered the input image so that it contains different kinds of numbers (click to see the image), and you can run my algorithm on this input and analyze what goes wrong.
The way I see it, one of these approaches is needed (or perhaps a mixture of both) to obtain a more "general" solution:
concentrate only on rectangle shape and color (confirm that the box candidate is really an orange box and remove it regardless of what is inside)
concentrate on numbers only (run a proper number detection algorithm inside the interior of every box candidate; if it contains a single number, remove the box)
I will give a trivial example of the first approach. If you can assume that orange box size will always be the same, just check the box size instead of standard deviation of the signal in the last loop of the algorithm:
Rect rect = boundingRect(contours[i]);
float area = rect.area();
if (area < 1000 || area > 1200) continue;
Warning: the actual area of the rectangles is around 600 px², but I took the Gaussian blurring into account, which causes the contours to expand. Please also note that if you use this approach, you no longer need to perform the blurring or Laplace operations on the red channel image.
You can also add other simple constraints to that condition; the ratio between width and height is the first one that comes to my mind. Geometric properties can also be a good option (right angles, straight edges, convexity...).
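For example, an aspect-ratio constraint could sit next to the area test in the same loop (the bounds here are hypothetical):
// Hypothetical extra constraint: the orange boxes are wider than tall
float aspect = (float)rect.width / (float)rect.height;
if (aspect < 1.2f || aspect > 3.0f) continue;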

Automatic color calibration for object tracker

This is my first post, so forgive me if I miss something.
I have been playing around with OpenCV 2 in Visual Studio C++. I have a basic object tracker working: it applies a Gaussian blur, converts to HSV, thresholds with trackbars, then erodes and dilates. Now I want to set up some way of easily calibrating the colour to be thresholded without using the trackbars.
I've tried setting up an area of interest and taking the average BGR or HSV values (I've tried both ways), then using the trackbars to make finer adjustments if needed, but it does not seem to work. Am I on the right track, or is there a better way?
I have basically followed this video to get where I am.
https://www.youtube.com/watch?v=bSeFrPrqZ2A
I am not looking for code to copy and paste; I am just looking for an algorithm or an explanation of a way to do it. Cheers.
EDIT
Sorry, I'll try to clear it up. I have written an object-tracking program for a home robot-vision project, and I just want to make it easier to calibrate which colour is to be thresholded. At the moment I use trackbars to set the min and max HSV values for thresholding, then use erode and dilate to clean up the binary image, before using cv::findContours and cv::moments to find the centroid of the largest contour.
What I have tried is setting up a small 40x40 pixel square in the centre of the screen. When I hold, for example, a green ball in this square and hit the spacebar, I cycle through each pixel in the square and read its separate hue, saturation and value components, then take the mode of each and use those to set the min and max threshold values.
Here is a segment of the code
if(cv::waitKey(20) == 32){ // wait for spacebar
    int count = 0;
    cv::Mat roi_Crop = frame_HSV(roi); // create cropped image from frame_HSV
    for(int i = 0; i < roi_Crop.rows; i++) // cycle through each pixel
    {
        for(int j = 0; j < roi_Crop.cols; j++)
        {
            Hue[count] = roi_Crop.at<cv::Vec3b>(i,j)[0];
            Sat[count] = roi_Crop.at<cv::Vec3b>(i,j)[1];
            Val[count] = roi_Crop.at<cv::Vec3b>(i,j)[2];
            count++;
        }
    }
    HSV_Mode[0] = findMode(Hue);
    HSV_Mode[1] = findMode(Sat);
    HSV_Mode[2] = findMode(Val);
}
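One way to turn those modes into min/max thresholds is a fixed tolerance around each mode; a minimal sketch, assuming HSV_Mode holds the three modes (the tolerances and threshold_img are placeholders to tune):
// Sketch: derive inRange() bounds from the sampled modes
int hTol = 10, sTol = 50, vTol = 50; // hypothetical tolerances
int h = HSV_Mode[0], s = HSV_Mode[1], v = HSV_Mode[2];
cv::Scalar lower(std::max(h - hTol, 0),   std::max(s - sTol, 0),   std::max(v - vTol, 0));
cv::Scalar upper(std::min(h + hTol, 179), std::min(s + sTol, 255), std::min(v + vTol, 255));
cv::Mat threshold_img;
cv::inRange(frame_HSV, lower, upper, threshold_img); // binary mask for the tracker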
I hope this helps.

Determining the average distance of pixels (to the centre of an image) in OpenCV

I'm trying to figure out how to do the following calculation in OpenCV.
Assuming a binary image (black/white):
Average distance of white pixels from the centre of the image. An image with most of its white pixels near the edges will have a high score, whereas an image with most white pixels near the centre will have a low score.
I know how to do this manually with loops, but since I'm working in Java I'd rather offload it to a set of high-performance native OpenCV calls.
Thanks
distanceTransform() is almost what you want. Unfortunately, it only calculates distance to the nearest black pixel, which means the data must be massaged a little bit. The image needs to contain only a single black pixel at the center for distanceTransform() to work properly.
My method is as follows:
Set all black pixels to an intermediate value
Set the center pixel to black
Call distanceTransform() on the modified image
Calculate the mean distance via mean(), using the white pixels in the binary image as a mask
Example code is below. It's in C++, but you should be able to get the idea:
cv::Mat img; // binary image
img.setTo(128, img == 0);
img.at<uchar>(img.rows/2, img.cols/2) = 0; // Set center point to zero
cv::Mat dist;
cv::distanceTransform(img, dist, CV_DIST_L2, 3); // Can be tweaked for desired accuracy
cv::Scalar val = cv::mean(dist, img == 255);
double mean = val[0];
With that said, I recommend you test whether this method is actually any faster than iterating in a loop; it does a fair bit more processing than necessary to accommodate the API calls.
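For comparison, the plain-loop version is short (a sketch):
// Sketch: mean distance of white pixels to the image centre, computed directly
double sum = 0.0;
int n = 0;
cv::Point2d centre(img.cols / 2.0, img.rows / 2.0);
for (int y = 0; y < img.rows; y++)
    for (int x = 0; x < img.cols; x++)
        if (img.at<uchar>(y, x) == 255) {
            sum += std::hypot(x - centre.x, y - centre.y);
            n++;
        }
double meanDist = n > 0 ? sum / n : 0.0;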

Edge Change Ratio

I am working on the Edge Change Ratio algorithm for video shot detection. I have the basic idea of the algorithm and have implemented part of it using OpenCV, including identifying edges with the Canny algorithm.
But I am confused about how to find the edge pixels and the number of entering and exiting edge pixels between two video frames. I am working with OpenCV. Please help me with some code, logic, or OpenCV functions to do this. Thanks.
As far as I have understood your problem: if your gray image is frameg, then the following call produces the image with edges:
Canny(frameg,frameEdge,50,150,3,false);
where frameEdge is the image containing the edges. frameEdge is a binary image with edge pixels white (255) and all other pixels black (0).
std::vector<Point> myedges; // edge pixel locations
for(int r = 0; r < frameEdge.rows; r++)
{
    for(int c = 0; c < frameEdge.cols; c++)
    {
        if( frameEdge.at<uchar>(r, c) == 255 ) // white = edge pixel
        {
            Point edgepixel;
            edgepixel.x = c; edgepixel.y = r;
            myedges.push_back(edgepixel);
        }
    }
}
So you can easily scan the image, find the white pixels and store their locations; that is how you find the edge pixels. The vector<Point> myedges declared above holds the edge pixel locations. Do this for each frame in your video and then make the necessary comparisons. Note: I have used cv::Mat for the images; you can use IplImage as well.
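To get the entering and exiting pixel counts between consecutive frames, a common ECR formulation dilates each edge image and counts the edge pixels of one frame that have no nearby edge in the other. A minimal sketch (the kernel size is a placeholder, and the exact ECR definition varies between papers):
// Sketch: edge change ratio between two binary edge images prevEdges/currEdges
Mat kernel = getStructuringElement(MORPH_RECT, Size(5, 5));
Mat dilPrev, dilCurr;
dilate(prevEdges, dilPrev, kernel);                  // tolerance zone around old edges
dilate(currEdges, dilCurr, kernel);                  // tolerance zone around new edges
int sigmaPrev = countNonZero(prevEdges);             // total edge pixels, frame n-1
int sigmaCurr = countNonZero(currEdges);             // total edge pixels, frame n
int entering  = countNonZero(currEdges & ~dilPrev);  // new edges with no old edge nearby
int exiting   = countNonZero(prevEdges & ~dilCurr);  // old edges with no new edge nearby
double ecr = std::max(entering / (double)std::max(sigmaCurr, 1),
                      exiting  / (double)std::max(sigmaPrev, 1));
A shot boundary is then typically declared where ecr spikes above a threshold.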
