Extracting the pieces and positions from a board game - OpenCV

So I am using OpenCV (in Go, via gocv) to try to extract the pieces from a board game. Originally I approached this with some success by manually finding the HSV values for each player's piece colour and for the board positions. I got this working and ended up with a programmatic representation of every piece and its position on the board. The downside is that it requires quite serious human interaction whenever a different board is used - "finding" all the correct HSV values again. I asked here and got a suggestion to start by ignoring colour: find all the pieces first, then use a clustering algorithm on colour to work out which player each piece belongs to. I might have to do something similar for the positions as well, but that's stage two.
So now I am attempting to just extract all pieces regardless of colour.
I started out trying NewSimpleBlobDetectorWithParams, but made little progress - it seems to struggle a lot with false negatives/positives.
I tried HoughCirclesWithParams, but again this seems very dependent on the parameters and I wasn't making much progress in getting the actual pieces detected. Currently I am using FindContours, and that seems to give me reasonable accuracy. Let's look at the pictures.
The original image looks like this:
I have built a "dashboard" of controls, and the things that seem most "useful" are erosion, dilation and thresholding.
My current setup is a set of trackbars/sliders to adjust the values, and then:
gocv.CvtColor(clone, &clone, gocv.ColorRGBToGray)
erodeKernel := gocv.GetStructuringElement(gocv.MorphRect, image.Pt(trackers.erosionValue, trackers.erosionValue))
gocv.Erode(clone, &clone, erodeKernel)
dilateKernel := gocv.GetStructuringElement(gocv.MorphRect, image.Pt(trackers.dilateValue, trackers.dilateValue))
gocv.Dilate(clone, &clone, dilateKernel)
gocv.Threshold(clone, &clone, float32(trackers.thresTruncValue), 255, gocv.ThresholdTrunc)
gocv.Threshold(clone, &clone, float32(trackers.threshBinaryValue), 255, gocv.ThresholdBinary)
cannies := gocv.NewMat()
gocv.Canny(clone, &cannies, float32(trackers.cannyMin), float32(trackers.cannyMax))
cnts := gocv.FindContours(cannies, gocv.RetrievalTree, gocv.ChainApproxSimple)
followed by
for i := 0; i < cnts.Size(); i++ {
cnt := cnts.At(i)
if len(cnt.ToPoints()) < 5 {
continue
}
rect := gocv.FitEllipse(cnt)
gocv.Circle(&colorImage, image.Pt(rect.Center.X, rect.Center.Y), (rect.Height + rect.Width)/4, cntColor, 3)
if gocv.ContourArea(cnt) < gocv.ArcLength(cnt, false) {
continue
}
gocv.Rectangle(&colorImage, rect.BoundingRect, rectColor, 2)
psVector := gocv.NewPointsVector()
psVector.Append(cnt)
gocv.DrawContours(&clone, psVector, 0, rectColor, 3)
if rect.Center.X == (rect.BoundingRect.Max.X + rect.BoundingRect.Min.X) / 2 && rect.Center.Y == (rect.BoundingRect.Min.Y + rect.BoundingRect.Max.Y) / 2 {
//Does the circle fit inside the square?
if float64(rect.Width * rect.Height) > math.Pi * math.Pow(float64((rect.Height+rect.Width)/4), 2) {
gocv.Circle(&colorImage, image.Pt(rect.Center.X, rect.Center.Y), 2, matchColor, 3)
pieces = append(pieces, image.Pt(rect.Center.X, rect.Center.Y))
}
}
}
The idea is that if the contour has at least 5 points you can fit an ellipse and get its bounding rectangle; then, if the contour is closed, draw a circle at the centre of the contour, and if that circle fits inside the bounding rectangle and they share the same centre, it's probably a playing piece. Note - I came up with this rule from watching where the circles and bounding rectangles fell: when they matched up, it was more often than not a playing piece.
So I am making some nice progress. However, my question is about approaches to dig out the other colour pieces and perhaps more "robustly" dig out the white ones. I feel I don't quite have enough tools at my disposal: if I increase one thing I have to decrease another, and I can't help feeling that finding 30 round chequers on a board should be reasonably robust.
When I adjust the values looking for the maroon pieces I can get a few of them,
but as you can see, the difference when playing with threshold/erosion/dilation is not doing a wonderful job of finding them.
EDIT:
I have added the Hough circle algorithm back in to show that it misses a lot (false negatives) - in this case it found only 1.
gocv.HoughCirclesWithParams(
clone,
&circles,
gocv.HoughGradient,
1, // dp
15, // minDist
75, // param1
20, // param2
20, // minRadius
45, // maxRadius
)
blue := color.RGBA{0, 0, 255, 0}
for i := 0; i < circles.Cols(); i++ {
v := circles.GetVecfAt(0, i)
// if circles are found
if len(v) > 2 {
x := int(v[0])
y := int(v[1])
r := int(v[2])
gocv.Circle(&colorImage, image.Pt(x, y), r, blue, 2)
}
}
Here is the threshold I was using.
So I realise I have said a lot here. I am looking for some help to detect all the playing pieces on the board.
I am doing this in Go with gocv, but I can use Python/convert Python code if anyone has a good reference or something.
The original image without any amendments is here. As I say, my goal is to automatically detect the 30 pieces on the board; then I can use a clustering algorithm to work out which group they are in (I think...). I want to do it with the least possible amount of human interaction dragging sliders, as that is not a fun/nice user experience.
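For stage two, a rough sketch of the clustering idea in Python/OpenCV (an illustration only - the function name, the list of piece centres and the fixed k here are my assumptions): sample the mean colour inside each detected piece, then k-means the samples into one group per player.
import cv2
import numpy as np

def cluster_pieces(image_bgr, centres, radius, k=2):
    # centres: list of (x, y) piece centres from the contour stage; radius: rough piece radius
    samples = []
    for (x, y) in centres:
        mask = np.zeros(image_bgr.shape[:2], dtype=np.uint8)
        cv2.circle(mask, (x, y), radius, 255, -1)            # sample only inside the piece
        samples.append(cv2.mean(image_bgr, mask=mask)[:3])   # mean B, G, R under the mask
    samples = np.float32(samples)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, _ = cv2.kmeans(samples, k, None, criteria, 10, cv2.KMEANS_PP_CENTERS)
    return labels.ravel()                                    # labels[i] = guessed player of piece i
Sampling a small disc rather than a single pixel makes the cluster assignment less sensitive to glare and board texture.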
Thoughts I had
the user could drag bounding boxes around groups, which would make the computer's job easier since it would know it only had to find pieces in there.
the user could select a colour on the image, which would tell the computer roughly which HSV range to look in (see the sketch after this list).
the user could calibrate against a known start position of the pieces so the computer knew where to look.
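For the second idea, a minimal sketch (Python/OpenCV, assuming the user clicks a single pixel; the tolerances are placeholders) of turning a selected colour into an HSV search range:
import cv2
import numpy as np

def hsv_range_from_click(image_bgr, x, y, h_tol=10, s_tol=60, v_tol=60):
    # convert the clicked pixel to HSV and build a loose range around it
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    h, s, v = hsv[y, x].astype(int)
    lower = np.array([max(h - h_tol, 0), max(s - s_tol, 0), max(v - v_tol, 0)], dtype=np.uint8)
    upper = np.array([min(h + h_tol, 179), min(s + s_tol, 255), min(v + v_tol, 255)], dtype=np.uint8)
    # note: hue wrap-around near red is not handled here
    return cv2.inRange(hsv, lower, upper)   # binary mask of "roughly that colour"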

Not exactly an answer to your questions, but this would be so much easier if you used object detection instead, the same way I find different objects in my tutorials. In this case, I would have 2 or possibly 3 classes: light pieces, dark pieces, and possibly another class for the empty spaces.
I usually use OpenCV and Darknet/YOLO to solve these kinds of things. I have many tutorials on my youtube channel. Here is a simple one to detect a few shapes: https://www.youtube.com/watch?v=yOJIRArZeig Here is another that shows OpenCV and Darknet/YOLO used to solve Sudoku: https://www.youtube.com/watch?v=BUG7HlhuArw
Your case would be similar to that last one. You'd get back a vector of objects detected, with the bounding box coordinates of each one within the image or video frame. If interested, this is the tutorial video I recommend to start: https://www.youtube.com/watch?v=pJ2iyf_E9PM

Related

custom image filter

1. Introduction:
I want to develop a special filter method for UIImages - my idea is to change all the colors in a picture to black except for a certain color, which should keep its appearance.
Images are always nice, so look at this image to get an idea of what I'd like to achieve:
2. Explanation:
I'd like to apply a filter (algorithm) that is able to find specific colors in an image. The algorithm must be able to replace all colors that do not match the reference colors with, e.g., black.
I've developed some simple code that is able to replace specific colors (color ranges with a threshold) in any image.
But to be honest, this solution doesn't seem to be a fast and efficient way to do it at all!
func colorFilter(image: UIImage, findcolor: String, threshold: Int) -> UIImage {
let img: CGImage = image.cgImage!
let context = CGContext(data: nil, width: img.width, height: img.height, bitsPerComponent: 8, bytesPerRow: 4 * img.width, space: CGColorSpaceCreateDeviceRGB(), bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)!
context.draw(img, in: CGRect(x: 0, y: 0, width: img.width, height: img.height))
let binaryData = context.data!.assumingMemoryBound(to: UInt8.self),
referenceColor = HEXtoHSL(findcolor) // [h, s, l] integer array
for i in 0..<img.height {
for j in 0..<img.width {
let pixel = 4 * (i * img.width + j)
let pixelColor = RGBtoHSL([Int(binaryData[pixel]), Int(binaryData[pixel+1]), Int(binaryData[pixel+2])]) // [h, s, l] integer array
let distance = calculateHSLDistance(pixelColor, referenceColor) // value between 0 and 100
if (distance > threshold) {
let setValue: UInt8 = 255
binaryData[pixel] = setValue; binaryData[pixel+1] = setValue; binaryData[pixel+2] = setValue; binaryData[pixel+3] = 255
}
}
}
let outputImg = context.makeImage()!
return UIImage(cgImage: outputImg, scale: image.scale, orientation: image.imageOrientation)
}
3. Code Information: The code above works quite well but is absolutely inefficient. Because of all the calculation (especially the color conversion, etc.) this code takes a LONG (too long) time, so have a look at this screenshot:
My question: I'm pretty sure there is a WAY simpler way to filter a specific color (with a given threshold; #c6456f is similar to #C6476f, ...) than looping through EVERY single pixel to compare its color.
So what I was thinking about was something like a filter (a CIFilter-based method) as an alternative to the code above.
Some Notes
Please do not post replies suggesting the OpenCV library; I would like to develop this "algorithm" exclusively in Swift.
The image from which the timing screenshot was taken had a resolution of 500 * 800 px.
That's all.
Did you really read this far? Congratulations! Any help on how to speed up my code would be very much appreciated. (Maybe there's a better way to get the pixel colors than looping through every pixel.) Thanks a million in advance :)
First thing to do - profile (measure time consumption of different parts of your function). It often shows that time is spent in some unexpected place, and always suggests where to direct your optimization effort. It doesn't mean that you have to focus on that most time consuming thing though, but it will show you where the time is spent. Unfortunately I'm not familiar with Swift so cannot recommend any specific tool.
Regarding iterating through all pixels - depends on the image structure and your assumptions about input data. I see two cases when you can avoid this:
When there is some optimized data structure built over your image (e.g. some statistics in its areas). That usually makes sense when you process the same image with same (or similar) algorithm with different parameters. If you process every image only once, likely it will not help you.
When you know that the green pixels always exist in a group, so there cannot be an isolated single pixel. In that case you can skip one or more pixels and when you find a green pixel, analyze its neighbourhood.
I do not code on your platform but...
Well, I assume your masked areas (with the specific color) are contiguous and large enough... that means you have groups of pixels covering big enough areas (not just a few pixels thick). With this assumption you can create a density map for your color. What I mean is: if the minimum detail size of your specific-color regions is 10 pixels, then you can inspect every 8th pixel along each axis, speeding up the initial scan ~64 times, and then run the full scan only on regions containing your color. Here is what you have to do:
determine properties
You need to set the step for each axis (how many pixels you can skip without missing your colored zone). Let's call this dx,dy.
create density map
Simply create a 2D array that holds, for each region, whether its center pixel matches your specific color. So if your image has resolution xs,ys then your map will be:
int mx=xs/dx;
int my=ys/dy;
int map[mx][my],x,y,xx,yy;
for (yy=0,y=dy>>1;y<ys;y+=dy,yy++)
for (xx=0,x=dx>>1;x<xs;x+=dx,xx++)
map[xx][yy]=compare(pixel(x,y) , specific_color)<threshold;
enlarge map set areas
Now you should enlarge the set areas in map[][] into the neighboring cells, because step #2 could miss the edge of your color region.
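This "enlarge" step is just a morphological dilation of the coarse map. For illustration only (in Python/NumPy rather than the C-style pseudocode above; the map size is a made-up example):
import numpy as np
from scipy.ndimage import binary_dilation

mx, my = 64, 48                               # example map size (xs/dx, ys/dy)
dens_map = np.zeros((my, mx), dtype=bool)     # True where the region centre matched the colour
# ...fill dens_map from the coarse scan, then grow every set cell into its 8 neighbours
dens_map = binary_dilation(dens_map, structure=np.ones((3, 3), dtype=bool))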
process all set regions
for (yy=0;yy<my;yy++)
 for (xx=0;xx<mx;xx++)
  if (map[xx][yy])
   for (y=yy*dy;y<(yy+1)*dy;y++)
    for (x=xx*dx;x<(xx+1)*dx;x++)
     if (compare(pixel(x,y) , specific_color)>=threshold) pixel(x,y)=0x00000000;
If you want to speed this up even more, then you need to detect the set map[][] cells that are on an edge (have at least one zero neighbor). You can distinguish the cells like this:
0 - no specific color is present
1 - inside of color area
2 - edge of color area
That can be done simply in O(mx*my). After that you only need to check the color in the edge regions, so:
for (yy=0;yy<my;yy++)
 for (xx=0;xx<mx;xx++)
  if (map[xx][yy]==2)
   {
   for (y=yy*dy;y<(yy+1)*dy;y++)
    for (x=xx*dx;x<(xx+1)*dx;x++)
     if (compare(pixel(x,y) , specific_color)>=threshold) pixel(x,y)=0x00000000;
   }
  else if (map[xx][yy]==0)
   {
   for (y=yy*dy;y<(yy+1)*dy;y++)
    for (x=xx*dx;x<(xx+1)*dx;x++)
     pixel(x,y)=0x00000000;
   }
This should be even faster. In case your image resolution xs,ys is not a multiple of the region size dx,dy, you should handle the outer edge of the image either by zero padding or by special loops for that missing part of the image...
By the way, how long does it take to read and set your whole image?
for (y=0;y<ys;y++)
for (x=0;x<xs;x++)
pixel(x,y)=pixel(x,y)^0x00FFFFFF;
If this alone is slow, then it means your pixel access is too slow and you should use a different API for it. That is a very common mistake on the Windows GDI platform, as people usually use Pixels[][], which is slower than a crawling snail. There are other ways, like bit-locking/blitting, ScanLine, etc., so in that case you need to look for something fast on your platform. If you cannot even speed up this part, then you cannot do anything else... By the way, what hardware does this run on?

Edge detection on pool table

I am currently working on an algorithm to detect the playing area of a pool table. For this purpose, I captured an image, transformed it to grayscale, and used a Sobel operator on it. Now I want to detect the playing area as a box with 4 corners located in the 4 corners of the table.
Detecting the edges of the table is quite straightforward; however, it turns out that detecting the 4 corners is not so easy, as there are pockets in the pool table. Now I just want to fit a line to each of the side edges, and from those lines I can compute the intersections, which are the corners of my table.
I am stuck here, because I could not yet come up with a good solution to find these lines in my image. I can see it very easily when I used the Sobel operator. But what would be a good way of detecting it and computing the position of the corners?
EDIT: I added some sample Images
Basic Image:
Grayscale Image
Sobel Filter (horizontal only)
For a general solution, there will be many sources of noise: problems with cloth around the rails, wood texture (or no texture) on the rails, varying lighting, shadows, stains on the cloth, chalk on the rails, and so on.
When color and lighting aren't dependable, and when you want to find the edges of geometric objects, then it's best to think in terms of edge pixels rather than gray/color pixels.
A while back I was thinking of making a phone-based app to save ball positions for later review, including online, so I've thought a bit about this problem. Although I can provide some guidance for your current question, it occurs to me you'll run into new problems each step of the way, so I'll try to provide a more complete answer.
Convert the image to grayscale. If we can't get an algorithm to work in grayscale, we'll inevitably run into problems with color. (See below)
[TBD] Do some preprocessing to reduce noise.
Find edge points using Sobel or (if you must) Canny.
Run Hough lines detection, but with a few caveats and parameterizations as described below (a rough code sketch follows this list of steps).
Find the lines that describe a keystone-shaped quadrilateral. (This will likely be the inner of two quadrilaterals: one inside the rail on the bed, and the other, slightly larger, at the cloth/wood rail edge at top.)
(Optional) Use the side pockets to help determine the orientation of the quadrilateral.
Use an affine transform to map the perspective-distorted table bed to a rectangle of [thankfully] known relative dimensions. We know the bed sizes in advance, so you can remap the distorted rectangle to a proper rectangle. (We'll ignore some optical effects for now.)
Remap the color image to the perspective-corrected rectangle. You'll probably need to tweak the positions of some balls.
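As a very rough illustration of the edge-detection and Hough steps (Python/OpenCV; the thresholds and line parameters here are placeholders that would need tuning, not values verified on a pool-table image):
import cv2
import numpy as np

img = cv2.imread("table.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (5, 5), 0)              # mild noise reduction
edges = cv2.Canny(gray, 50, 150)                      # or build an edge map from Sobel responses
# probabilistic Hough transform; minLineLength/maxLineGap need tuning per image
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                        minLineLength=100, maxLineGap=20)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 2)
The long, near-parallel segments can then be grouped into the four rail candidates before fitting the quadrilateral.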
General notes:
Filtering by color in the general sense can be difficult. It's tempting to think of the cloth as being simply green, blue, or red (or some other color), but when you look at the actual RGB values and try to separate colors you'll begin to appreciate what a nightmare working in color can be.
Optical distortion might throw off some edges.
The far short rail may be difficult to detect, BUT you can do this: find the inside lines for the two long rails, then search vertically between those two rails for the first strong horizontal-ish edge at the far side of the image. That'll be the far short rail.
Although you probably want to use your phone camera for convenience, using a Kinect camera or similar (preferably smaller) device would make the problem easier. Not only would you have both color data and 3D data, but you would eliminate some problems with lighting since the depth data wouldn't depend on visible lighting.
For your app, consider limiting the search region for rail edges to a perspective-distorted rectangle. The user might be able to adjust the search region. This could greatly simplify the processing, and could help you work around problems if the table isn't lit well (as can be the case).
If color segmentation (as suggested by #Dima) works, get the outline of the blob using contour following. Then simplify the outline to a quadrilateral (or a polygon of few sides) by the Douglas-Peucker algorithm. You should find the four table edges this way.
For more accuracy, you can refine the edge location by local search of transitions across it and perform line fitting. Then intersect the lines to get the corners.
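For reference, once you have two fitted lines (each given by two points), their intersection can be computed with a small helper like this (Python sketch; the two-point line representation is an assumption):
import numpy as np

def intersect(p1, p2, p3, p4):
    # intersection of the infinite line through (p1, p2) with the line through (p3, p4)
    a1, a2, a3, a4 = map(np.asarray, (p1, p2, p3, p4))
    d1, d2 = a2 - a1, a4 - a3
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-9:
        return None                        # lines are (nearly) parallel
    t = ((a3[0] - a1[0]) * d2[1] - (a3[1] - a1[1]) * d2[0]) / denom
    return tuple(a1 + t * d1)
Intersecting each long-rail line with each short-rail line yields the four corners.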
The following answer assumes you have already found the positions of the lines in the image. This can however be done "easily" by directly looking at the pixels and seeing whether they form a "line". Usually it is easier to detect this if the image has been deskewed first as well, i.e. rotated so the rectangle (pool table) looks more like this: [] than like /=/. Then it is just a case of scanning the pixels and, where there are pixels of similar colour alongside each other, assuming a line runs between them.
The code works by looping over the lines found in the image. Whenever the end points of two lines fall within a tolerance of each other in both the x and y coordinates, it is marked as a corner. Once a corner is found, I take the average of the two end points to locate where the corner lies. For example:
A horizontal line ending at 10, 10 and a vertical line starting at 12, 12 will be found to be a corner if there is a tolerance of 2 or more. The corner found will be at: 11, 11
NOTE: This only finds top-left corners, but it can easily be adapted to find all of them. The reason it has been done this way is that, in the application where I use it, it is faster to sort each array into an order where the relevant values are found first; see: Why is processing a sorted array faster than an unsorted array?.
Also note that my code finds only the first corner for each line, which might not be applicable for you; this is mainly for performance reasons. However, the code can easily be adapted to find all the corners from all the lines and then either select the "most likely" corner or average over them all.
Also note my answer is written in C#.
private IEnumerable<Point> FindTopLeftCorners(IEnumerable<Line> horizontalLines, IEnumerable<Line> verticalLines)
{
List<Point> TopLeftCorners = new List<Point>();
Line[] laHorizontalLines = horizontalLines.OrderBy(l => l.StartPoint.X).ThenBy(l => l.StartPoint.Y).ToArray();
Line[] laVerticalLines = verticalLines.OrderBy(l => l.StartPoint.X).ThenBy(l => l.StartPoint.Y).ToArray();
foreach (Line verticalLine in laVerticalLines)
{
foreach (Line horizontalLine in laHorizontalLines)
{
if (verticalLine.StartPoint.X <= (horizontalLine.StartPoint.X + _nCornerTolerance) && verticalLine.StartPoint.X >= (horizontalLine.StartPoint.X - _nCornerTolerance))
{
if (horizontalLine.StartPoint.Y <= (verticalLine.StartPoint.Y + _nCornerTolerance) && horizontalLine.StartPoint.Y >= (verticalLine.StartPoint.Y - _nCornerTolerance))
{
int nX = (verticalLine.StartPoint.X + horizontalLine.StartPoint.X) / 2;
int nY = (verticalLine.StartPoint.Y + horizontalLine.StartPoint.Y) / 2;
TopLeftCorners.Add(new Point(nX, nY));
break;
}
}
}
}
return TopLeftCorners;
}
Where Line is the following class:
public class Line
{
public Point StartPoint { get; private set; }
public Point EndPoint { get; private set; }
public Line(Point startPoint, Point endPoint)
{
this.StartPoint = startPoint;
this.EndPoint = endPoint;
}
}
And _nCornerTolerance is an int of a configurable amount.
A playing area of a pool table typically has a distinctive color, like green or blue. I would try a color-based segmentation approach first. The Color Thresholder app in MATLAB gives you an easy way to try different color spaces and thresholds.
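The OpenCV equivalent would be an inRange mask in HSV followed by keeping the largest blob. A minimal sketch (the green range below is a guess, and the two-value findContours return assumes OpenCV 4):
import cv2
import numpy as np

img = cv2.imread("table.jpg")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
lower, upper = np.array([35, 60, 40]), np.array([85, 255, 255])   # rough "green cloth" range
mask = cv2.inRange(hsv, lower, upper)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    cloth = max(contours, key=cv2.contourArea)                     # assume the cloth is the biggest blob
    quad = cv2.approxPolyDP(cloth, 0.02 * cv2.arcLength(cloth, True), True)
Simplifying the cloth outline with approxPolyDP ties in with the Douglas-Peucker suggestion above.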

Automatic approach for removing a colored object's shadow on a white background?

I am working on some leaf images using OpenCV (Java). The leaves are captured on a white paper and some has shadows like this one:
Of course, it's somewhat of an extreme case (there are milder shadows).
Now, I want to threshold the leaf and also remove the shadow (while reserving the leaf's details).
My current flow is this:
1) Converting to HSV and extracting the Saturation channel:
Imgproc.cvtColor(colorMat, colorMat, Imgproc.COLOR_RGB2HSV);
ArrayList<Mat> channels = new ArrayList<Mat>();
Core.split(colorMat, channels);
satImg = channels.get(1);
2) De-noising (median) and applying adaptiveThreshold:
Imgproc.medianBlur(satImg , satImg , 11);
Imgproc.adaptiveThreshold(satImg , satImg , 255, Imgproc.ADAPTIVE_THRESH_MEAN_C, Imgproc.THRESH_BINARY, 401, -10);
And the result is this:
It looks OK, but the shadow is causing some anomalies along the left boundary. Also, I have this feeling that I am not using the white background to my benefit.
Now, I have 2 questions:
1) How can I improve the result and get rid of the shadow?
2) Can I get good results without working on saturation channel?. The reason I ask is that on most of my images, working on L channel (from HLS) gives way better results (apart from the shadow, of course).
Update: Using the Hue channel makes the thresholding better, but makes the shadow situation worse:
Update2: In some cases, the assumption that the shadow is darker than the leaf doesn't always hold. So, working on intensities won't help. I'm looking more toward a color channels approach.
I don't use OpenCV; instead I tried the MATLAB Image Processing Toolbox to extract the leaf. Hopefully OpenCV has all the equivalent functions for you. Please see my results below. I did all the operations on channel 3 and channel 1 of your original image.
First I took your channel 3 and thresholded it at 100 (top left). Then I removed the regions touching the border and the regions smaller than 100 pixels, and filled in the hole in the leaf; the result is shown at top right.
Next I took your channel 1 and did the same thing as for channel 3; the result is shown at bottom left. Then I found the connected regions (there are only two, as you can see in the bottom-left figure) and removed the one with the smaller area (shown at bottom right).
Suppose the top-right image is I1 and the bottom-right image is I; the leaf is then extracted by computing ~I && I1. The leaf is:
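If you want to reproduce those steps in OpenCV/Python rather than MATLAB, a rough sketch (the threshold of 100 and the 100-pixel minimum area come from the description above; the channel index and threshold polarity may need adjusting for your image):
import cv2
import numpy as np

img = cv2.imread("leaf.jpg")
ch3 = img[:, :, 0]   # MATLAB "channel 3" of an RGB image is blue, i.e. index 0 in OpenCV's BGR order
_, bw = cv2.threshold(ch3, 100, 255, cv2.THRESH_BINARY)
# keep only blobs of at least ~100 px (the border-region removal step is omitted here)
n, labels, stats, _ = cv2.connectedComponentsWithStats(bw)
clean = np.zeros_like(bw)
for i in range(1, n):
    if stats[i, cv2.CC_STAT_AREA] >= 100:
        clean[labels == i] = 255
# fill holes: flood-fill the background from a corner and OR the inverse back in
flood = clean.copy()
h, w = clean.shape
cv2.floodFill(flood, np.zeros((h + 2, w + 2), np.uint8), (0, 0), 255)
filled = clean | cv2.bitwise_not(flood)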
Hope it helps. Thanks
I tried two different things:
1. other thresholding on the saturation channel
2. try to find two contours: shadow and leaf
I use c++ so your code snippets will look a little different.
trying otsu-thresholding instead of adaptive thresholding:
cv::threshold(hsv_imgs,mask,0,255,CV_THRESH_BINARY|CV_THRESH_OTSU);
leading to the following images (just Otsu thresholding on the saturation channel):
The other thing is computing gradient information (I used Sobel, see the OpenCV documentation), thresholding that, and after an opening operator I used findContours, giving something like this - not usable yet (gradient contour approach):
I'm trying to do the same thing with photos of butterflies, but with more uneven and unpredictable backgrounds such as this. Once you've identified a good portion of the background (e.g. via thresholding, or as we do, flood filling from random points), what works well is to use the GrabCut algorithm to get all those bits you might miss on the initial pass. In python, assuming you still want to identify an initial area of background by thresholding on the saturation channel, try something like
import cv2
import numpy as np
img = cv2.imread("leaf.jpg")
sat = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)[:,:,1]
sat = cv2.medianBlur(sat, 11)
thresh = cv2.adaptiveThreshold(sat , 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 401, 10);
cv2.imwrite("thresh.jpg", thresh)
h, w = img.shape[:2]
bgdModel = np.zeros((1,65),np.float64)
fgdModel = np.zeros((1,65),np.float64)
grabcut_mask = thresh/255*3 #background should be 0, probable foreground = 3
cv2.grabCut(img, grabcut_mask,(0,0,w,h),bgdModel,fgdModel,5,cv2.GC_INIT_WITH_MASK)
grabcut_mask = np.where((grabcut_mask ==2)|(grabcut_mask ==0),0,1).astype('uint8')
cv2.imwrite("GrabCut1.jpg", img*grabcut_mask[...,None])
This actually gets rid of the shadows for you in this case, because the edge of the shadow actually has high saturation levels, so is included in the grab cut deletion. (I would post images, but don't have enough reputation)
Usually, however, you can't trust shadows to be included in the background detection. In this case you probably want to compare areas in the image with colour of the now-known background using the chromacity distortion measure proposed by Horprasert et. al. (1999) in "A Statistical Approach for Real-time Robust Background Subtraction and Shadow Detection". This measure takes account of the fact that for desaturated colours, hue is not a relevant measure.
Note that the pdf of the preprint you find online has a mistake (no + signs) in equation 6. You can use the version re-quoted in Rodriguez-Gomez et al (2012), equations 1 & 2. Or you can use my python code below:
def brightness_distortion(I, mu, sigma):
return np.sum(I*mu/sigma**2, axis=-1) / np.sum((mu/sigma)**2, axis=-1)
def chromacity_distortion(I, mu, sigma):
alpha = brightness_distortion(I, mu, sigma)[...,None]
return np.sqrt(np.sum(((I - alpha * mu)/sigma)**2, axis=-1))
You can feed the known background mean & stdev as the last two parameters of the chromacity_distortion function, and the RGB pixel image as the first parameter, which should show you that the shadow is basically the same chromacity as the background, and very different from the leaf. In the code below, I've then thresholded on chromacity, and done another grabcut pass. This works to remove the shadow even if the first grabcut pass doesn't (e.g. if you originally thresholded on hue)
mean, stdev = cv2.meanStdDev(img, mask = 255-thresh)
mean = mean.ravel() #bizarrely, meanStdDev returns an array of size [3,1], not [3], so flatten it
stdev = stdev.ravel()
chrom = chromacity_distortion(img, mean, stdev)
chrom255 = cv2.normalize(chrom, None, alpha=0, beta=255, norm_type=cv2.NORM_MINMAX).astype(np.uint8)[:,:,None]
cv2.imwrite("ChromacityDistortionFromBackground.jpg", chrom255)
thresh2 = cv2.adaptiveThreshold(chrom255 , 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 401, 10);
cv2.imwrite("thresh2.jpg", thresh2)
grabcut_mask[...] = 3
grabcut_mask[thresh==0] = 0 #where thresh == 0, definitely background, set to 0
grabcut_mask[np.logical_and(thresh == 255, thresh2 == 0)] = 2 #could try setting this to 2 or 0
cv2.grabCut(img, grabcut_mask,(0,0,w,h),bgdModel,fgdModel,5,cv2.GC_INIT_WITH_MASK)
grabcut_mask = np.where((grabcut_mask ==2)|(grabcut_mask ==0),0,1).astype('uint8')
cv2.imwrite("final_leaf.jpg", grabcut_mask[...,None]*img)
I'm afraid with the parameters I tried, this still removes the stalk, though. I think that's because GrabCut thinks that it looks a similar colour to the shadows. Let me know if you find a way to keep it.

How can I remove noise from this video sequence?

Hello, I am trying to do some image processing. I use a Microsoft Kinect to detect humans in a room. I get depth data, do some background subtraction work, and end up with a video sequence like this when a person enters the scene and walks around:
http://www.screenr.com/h7f8
I put up a video so that you can see the behaviour of the noise. Different colors represent different levels of depth. White represents empty. As you can see it is pretty noisy, especially the red noise.
I need to get rid of everything except the human as much as possible. When I do erosion/dilation (using a very big window size) I can get rid of a lot of the noise, but I wondered if there are other methods I could use. The red noise in the video, in particular, is hard to remove using erosion/dilation.
Some notes:
1) A better background subtraction could be done if we knew when there are no humans in the scene but the background subtraction we do is fully automatic and it works even when there are humans in the scene and even when the camera is moved etc. so this is the best background subtraction we can get right now.
2) The algorithm will run on an embedded system, in real time. So the more efficient and simple the algorithm, the better. And it doesn't have to be perfect. Complicated signal processing techniques are also welcome, though (maybe we can use them in another project that does not need embedded, real-time processing).
3) I don't need an actual code. Just ideas.
Just my two cents:
If you don't mind using the SDK for that, then you can very easily keep only the person pixels using the PlayerIndexBitmask as Outlaw Lemur shows.
Now, you may not want to be dependent on the drivers for that and may want to do it at the image-processing level. An approach we tried in a project, and which worked pretty well, was contour based: we began with background subtraction, then detected the largest contour in the image assuming it was the person (since the remaining noise usually consisted of very small blobs), and we filled that contour and kept it. You could also use some kind of median filtering as a first pass.
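A minimal sketch of that contour-based cleanup in Python/OpenCV (assuming an 8-bit binary mask from the background subtraction and OpenCV 4's findContours return values):
import cv2
import numpy as np

def keep_largest_blob(mask):
    mask = cv2.medianBlur(mask, 5)                     # optional first-pass median filter
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return np.zeros_like(mask)
    person = max(contours, key=cv2.contourArea)        # assume the largest blob is the person
    out = np.zeros_like(mask)
    cv2.drawContours(out, [person], -1, 255, -1)       # fill the kept contour
    return out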
Of course, this is not perfect nor suitable in every case and probably there are a lot better methods. But I'm just throwing it out there in case it helps you come up with any ideas.
Take a look at EyesWeb.
It is a design platform that supports the Kinect device, and you can apply noise filters to its outputs. It is a very useful and simple tool for designing multimodal systems.
I may be wrong (I'd need the video without processing for that) but I'd tend to say that you are trying to get rid of illumination changes.
This is what makes people detection really difficult in 'real' environments.
You can check out this other SO question for some links.
I used to detect humans in real time in the same configuration as you, but with monocular vision.
In my case, a really good descriptor was LBP (local binary patterns), which is mainly used for texture classification.
This is quite simple to put into practice (there are implementations all over the web).
The LBPs were basically used to define an area of interest where movement is detected, so that I could process only part of the image and get rid of all that noise.
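For illustration only, computing LBP codes is a one-liner if you use scikit-image (the library call here is my assumption; the original work used its own implementation):
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
radius, n_points = 1, 8
lbp = local_binary_pattern(gray, n_points, radius, method="uniform")
# texture descriptor of one block: a normalised histogram of its LBP codes
hist, _ = np.histogram(lbp[0:32, 0:32], bins=np.arange(0, n_points + 3), density=True)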
This paper for example uses LBP for grayscale correction of images.
Hope that brings some new ideas.
This is pretty simple assuming you are using the Kinect SDK. I would follow this video for Depth basics, and do something like this:
private byte[] GenerateColoredBytes(DepthImageFrame depthFrame)
{
//get the raw data from kinect with the depth for every pixel
short[] rawDepthData = new short[depthFrame.PixelDataLength];
depthFrame.CopyPixelDataTo(rawDepthData);
//use depthFrame to create the image to display on-screen
//depthFrame contains color information for all pixels in image
//Height x Width x 4 (Red, Green, Blue, empty byte)
Byte[] pixels = new byte[depthFrame.Height * depthFrame.Width * 4];
//Bgr32 - Blue, Green, Red, empty byte
//Bgra32 - Blue, Green, Red, transparency
//You must set transparency for Bgra as .NET defaults a byte to 0 = fully transparent
//hardcoded locations to Blue, Green, Red (BGR) index positions
const int BlueIndex = 0;
const int GreenIndex = 1;
const int RedIndex = 2;
//loop through all distances
//pick a RGB color based on distance
for (int depthIndex = 0, colorIndex = 0;
depthIndex < rawDepthData.Length && colorIndex < pixels.Length;
depthIndex++, colorIndex += 4)
{
//get the player (requires skeleton tracking enabled for values)
int player = rawDepthData[depthIndex] & DepthImageFrame.PlayerIndexBitmask;
//gets the depth value
int depth = rawDepthData[depthIndex] >> DepthImageFrame.PlayerIndexBitmaskWidth;
//.9M or 2.95'
if (depth <= 900)
{
//we are very close
pixels[colorIndex + BlueIndex] = Colors.White.B;
pixels[colorIndex + GreenIndex] = Colors.White.G;
pixels[colorIndex + RedIndex] = Colors.White.R;
}
// .9M - 2M or 2.95' - 6.56'
else if (depth > 900 && depth < 2000)
{
//we are a bit further away
pixels[colorIndex + BlueIndex] = Colors.White.B;
pixels[colorIndex + GreenIndex] = Colors.White.G;
pixels[colorIndex + RedIndex] = Colors.White.R;
}
// 2M+ or 6.56'+
else if (depth > 2000)
{
//we are the farthest
pixels[colorIndex + BlueIndex] = Colors.White.B;
pixels[colorIndex + GreenIndex] = Colors.White.G;
pixels[colorIndex + RedIndex] = Colors.White.R;
}
////equal coloring for monochromatic histogram
//byte intensity = CalculateIntensityFromDepth(depth);
//pixels[colorIndex + BlueIndex] = intensity;
//pixels[colorIndex + GreenIndex] = intensity;
//pixels[colorIndex + RedIndex] = intensity;
//Color all players "gold"
if (player > 0)
{
pixels[colorIndex + BlueIndex] = Colors.Gold.B;
pixels[colorIndex + GreenIndex] = Colors.Gold.G;
pixels[colorIndex + RedIndex] = Colors.Gold.R;
}
}
return pixels;
}
This turns everything except humans white, and the humans are gold. Hope this helps!
EDIT
I know you didn't necessarily want code, just ideas, so I would say: find an algorithm that gets the depth, and one that finds the humans, and color everything white except the humans. I have provided all of this above in case it helps to see what is going on. I also have an image of the final program.
Note: I added the second depth frame for perspective

How to define the markers for Watershed in OpenCV?

I'm writing for Android with OpenCV. I'm segmenting an image similar to below using marker-controlled watershed, without the user manually marking the image. I'm planning to use the regional maxima as markers.
minMaxLoc() would give me the value, but how can I restrict it to the blobs, which are what I'm interested in? Can I utilize the results from findContours() or cvBlob blobs to restrict the ROI and apply the maxima to each blob?
First of all: the function minMaxLoc finds only the global minimum and global maximum for a given input, so it is mostly useless for determining regional minima and/or regional maxima. But your idea is right, extracting markers based on regional minima/maxima for performing a Watershed Transform based on markers is totally fine. Let me try to clarify what is the Watershed Transform and how you should correctly use the implementation present in OpenCV.
A decent number of papers that deal with watershed describe it similarly to what follows (I might miss some detail; if you are unsure, ask). Consider the surface of some region you know; it contains valleys and peaks (among other details that are irrelevant for us here). Suppose below this surface all you have is water, colored water. Now, make holes in each valley of your surface and then the water starts to fill all the area. At some point, differently colored waters will meet, and when this happens, you construct a dam so that they don't touch each other. In the end you have a collection of dams, which is the watershed separating all the differently colored water.
Now, if you make too many holes in that surface, you end up with too many regions: over-segmentation. If you make too few you get an under-segmentation. So, virtually any paper that suggests using watershed actually presents techniques to avoid these problems for the application the paper is dealing with.
I wrote all this (which is possibly too naïve for anyone that knows what the Watershed Transform is) because it reflects directly on how you should use watershed implementations (which the current accepted answer is doing in a completely wrong manner). Let us start on the OpenCV example now, using the Python bindings.
The image presented in the question is composed of many objects that are mostly too close and in some instances overlapping. The usefulness of watershed here is to separate correctly these objects, not to group them into a single component. So you need at least one marker for each object and good markers for the background. As an example, first binarize the input image by Otsu and perform a morphological opening for removing small objects. The result of this step is shown below in the left image. Now with the binary image consider applying the distance transform to it, result at right.
With the distance transform result, we can consider some threshold such that we consider only the regions most distant to the background (left image below). Doing this, we can obtain a marker for each object by labeling the different regions after the earlier threshold. Now, we can also consider the border of a dilated version of the left image above to compose our marker. The complete marker is shown below at right (some markers are too dark to be seen, but each white region in the left image is represented at the right image).
This marker we have here makes a lot of sense. Each colored water == one marker will start to fill the region, and the watershed transformation will construct dams to impede that the different "colors" merge. If we do the transform, we get the image at left. Considering only the dams by composing them with the original image, we get the result at right.
import sys
import cv2
import numpy
from scipy.ndimage import label
def segment_on_dt(a, img):
border = cv2.dilate(img, None, iterations=5)
border = border - cv2.erode(border, None)
dt = cv2.distanceTransform(img, 2, 3)
dt = ((dt - dt.min()) / (dt.max() - dt.min()) * 255).astype(numpy.uint8)
_, dt = cv2.threshold(dt, 180, 255, cv2.THRESH_BINARY)
lbl, ncc = label(dt)
lbl = lbl * (255 / (ncc + 1))
# Completing the markers now.
lbl[border == 255] = 255
lbl = lbl.astype(numpy.int32)
cv2.watershed(a, lbl)
lbl[lbl == -1] = 0
lbl = lbl.astype(numpy.uint8)
return 255 - lbl
img = cv2.imread(sys.argv[1])
# Pre-processing.
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, img_bin = cv2.threshold(img_gray, 0, 255,
cv2.THRESH_OTSU)
img_bin = cv2.morphologyEx(img_bin, cv2.MORPH_OPEN,
numpy.ones((3, 3), dtype=int))
result = segment_on_dt(img, img_bin)
cv2.imwrite(sys.argv[2], result)
result[result != 255] = 0
result = cv2.dilate(result, None)
img[result == 255] = (0, 0, 255)
cv2.imwrite(sys.argv[3], img)
I would like to explain some simple code showing how to use watershed here. I am using OpenCV-Python, but I hope you won't have any difficulty understanding it.
In this code, I will be using watershed as a tool for foreground-background extraction. (This example is the Python counterpart of the C++ code in the OpenCV cookbook.) This is a simple case for understanding watershed. Apart from that, you can use watershed to count the number of objects in this image. That would be a slightly more advanced version of this code.
1 - First we load our image, convert it to grayscale, and threshold it with a suitable value. I took Otsu's binarization, so it would find the best threshold value.
import cv2
import numpy as np
img = cv2.imread('sofwatershed.jpg')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
ret,thresh = cv2.threshold(gray,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
Below is the result I got:
(even that result is good, because there is great contrast between the foreground and background)
2 - Now we have to create the marker. The marker is an image of the same size as the original, of type 32SC1 (32-bit signed, single channel).
There will be some regions in the original image where you are simply sure that they belong to the foreground: mark such regions with 255 in the marker image. Regions you are sure belong to the background are marked with 128. Regions you are not sure about are marked with 0. That is what we are going to do next.
A - Foreground region: We already have a thresholded image where the pills are white. We erode it a little, so that we are sure the remaining region belongs to the foreground.
fg = cv2.erode(thresh,None,iterations = 2)
fg :
B - Background region: Here we dilate the thresholded image so that the background region is reduced. But we are sure the remaining black region is 100% background. We set it to 128.
bgt = cv2.dilate(thresh,None,iterations = 3)
ret,bg = cv2.threshold(bgt,1,128,1)
Now we get bg as follows :
C - Now we add both fg and bg :
marker = cv2.add(fg,bg)
Below is what we get :
From the above image we can now clearly see that the white region is 100% foreground, the gray region is 100% background, and the black region is where we are not sure.
Then we convert it into 32SC1 :
marker32 = np.int32(marker)
3 - Finally we apply watershed and convert result back into uint8 image:
cv2.watershed(img,marker32)
m = cv2.convertScaleAbs(marker32)
m :
4 - We threshold it properly to get the mask and perform bitwise_and with the input image:
ret,thresh = cv2.threshold(m,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
res = cv2.bitwise_and(img,img,mask = thresh)
res :
Hope it helps!!!
ARK
Foreword
I'm chiming in mostly because I found both the watershed tutorial in the OpenCV documentation (and C++ example) as well as mmgp's answer above to be quite confusing. I revisited a watershed approach multiple times to ultimately give up out of frustration. I finally realized I needed to at least give this approach a try and see it in action. This is what I've come up with after sorting out all of the tutorials I've come across.
Aside from being a computer vision novice, most of my trouble probably had to do with my requirement to use the OpenCVSharp library rather than Python. C# doesn't have baked-in high-power array operators like those found in NumPy (though I realize this has been ported via IronPython), so I struggled quite a bit in both understanding and implementing these operations in C#. Also, for the record, I really despise the nuances of, and inconsistencies in most of these function calls. OpenCVSharp is one of the most fragile libraries I've ever worked with. But hey, it's a port, so what was I expecting? Best of all, though -- it's free.
Without further ado, let's talk about my OpenCVSharp implementation of the watershed, and hopefully clarify some of the stickier points of watershed implementation in general.
Application
First of all, make sure watershed is what you want and understand its use. I am using stained cell plates, like this one:
It took me a good while to figure out I couldn't just make one watershed call to differentiate every cell in the field. On the contrary, I first had to isolate a portion of the field, then call watershed on that small portion. I isolated my region of interest (ROI) via a number of filters, which I will explain briefly here:
Start with source image (left, cropped for demonstration purposes)
Isolate the red channel (left middle)
Apply adaptive threshold (right middle)
Find contours then eliminate those with small areas (right)
Once we have cleaned the contours resulting from the above thresholding operations, it is time to find candidates for watershed. In my case, I simply iterated through all contours greater than a certain area.
Code
Say we've isolated this contour from the above field as our ROI:
Let's take a look at how we'll code up a watershed.
We'll start with a blank mat and draw only the contour defining our ROI:
var isolatedContour = new Mat(source.Size(), MatType.CV_8UC1, new Scalar(0, 0, 0));
Cv2.DrawContours(isolatedContour, new List<List<Point>> { contour }, -1, new Scalar(255, 255, 255), -1);
In order for the watershed call to work, it will need a couple of "hints" about the ROI. If you're a complete beginner like me, I recommend checking out the CMM watershed page for a quick primer. Suffice to say we're going to create hints about the ROI on the left by creating the shape on the right:
To create the white part (or "background") of this "hint" shape, we'll just Dilate the isolated shape like so:
var kernel = Cv2.GetStructuringElement(MorphShapes.Ellipse, new Size(2, 2));
var background = new Mat();
Cv2.Dilate(isolatedContour, background, kernel, iterations: 8);
To create the black part in the middle (or "foreground"), we'll use a distance transform followed by threshold, which takes us from the shape on the left to the shape on the right:
This takes a few steps, and you may need to play around with the lower bound of your threshold to get results that work for you:
var foreground = new Mat(source.Size(), MatType.CV_8UC1);
Cv2.DistanceTransform(isolatedContour, foreground, DistanceTypes.L2, DistanceMaskSize.Mask5);
Cv2.Normalize(foreground, foreground, 0, 1, NormTypes.MinMax); //Remember to normalize!
foreground.ConvertTo(foreground, MatType.CV_8UC1, 255, 0);
Cv2.Threshold(foreground, foreground, 150, 255, ThresholdTypes.Binary);
Then we'll subtract these two mats to get the final result of our "hint" shape:
var unknown = new Mat(); //this variable is also named "border" in some examples
Cv2.Subtract(background, foreground, unknown);
Again, if we Cv2.ImShow unknown, it would look like this:
Nice! This was easy for me to wrap my head around. The next part, however, got me quite puzzled. Let's look at turning our "hint" into something the Watershed function can use. For this we need to use ConnectedComponents, which is basically a big matrix of pixels grouped by the virtue of their index. For example, if we had a mat with the letters "HI", ConnectedComponents might return this matrix:
0 0 0 0 0 0 0 0 0
0 1 0 1 0 2 2 2 0
0 1 0 1 0 0 2 0 0
0 1 1 1 0 0 2 0 0
0 1 0 1 0 0 2 0 0
0 1 0 1 0 2 2 2 0
0 0 0 0 0 0 0 0 0
So, 0 is the background, 1 is the letter "H", and 2 is the letter "I". (If you get to this point and want to visualize your matrix, I recommend checking out this instructive answer.) Now, here's how we'll utilize ConnectedComponents to create the markers (or labels) for watershed:
var labels = new Mat(); //also called "markers" in some examples
Cv2.ConnectedComponents(foreground, labels);
labels = labels + 1;
//this is a much more verbose port of numpy's: labels[unknown==255] = 0
for (int x = 0; x < labels.Width; x++)
{
for (int y = 0; y < labels.Height; y++)
{
//You may be able to just send "int" in rather than "char" here:
var labelPixel = (int)labels.At<char>(y, x); //note: x and y are inexplicably
var borderPixel = (int)unknown.At<char>(y, x); //and infuriatingly reversed
if (borderPixel == 255)
labels.Set(y, x, 0);
}
}
Note that the Watershed function requires the border area to be marked by 0. So, we've set any border pixels to 0 in the label/marker array.
At this point, we should be all set to call Watershed. However, in my particular application, it is useful just to visualize a small portion of the entire source image during this call. This may be optional for you, but I first just mask off a small bit of the source by dilating it:
var mask = new Mat();
Cv2.Dilate(isolatedContour, mask, new Mat(), iterations: 20);
var sourceCrop = new Mat(source.Size(), source.Type(), new Scalar(0, 0, 0));
source.CopyTo(sourceCrop, mask);
And then make the magic call:
Cv2.Watershed(sourceCrop, labels);
Results
The above Watershed call will modify labels in place. You'll have to go back to remembering about the matrix resulting from ConnectedComponents. The difference here is, if watershed found any dams between watersheds, they will be marked as "-1" in that matrix. Like the ConnectedComponents result, different watersheds will be marked in a similar fashion of incrementing numbers. For my purposes, I wanted to store these into separate contours, so I created this loop to split them up:
var watershedContours = new List<Tuple<int, List<Point>>>();
for (int x = 0; x < labels.Width; x++)
{
for (int y = 0; y < labels.Height; y++)
{
var labelPixel = labels.At<Int32>(y, x); //note: x, y switched
var connected = watershedContours.Where(t => t.Item1 == labelPixel).FirstOrDefault();
if (connected == null)
{
connected = new Tuple<int, List<Point>>(labelPixel, new List<Point>());
watershedContours.Add(connected);
}
connected.Item2.Add(new Point(x, y));
if (labelPixel == -1)
sourceCrop.Set(y, x, new Vec3b(0, 255, 255));
}
}
Then, I wanted to print these contours with random colors, so I created the following mat:
var watershed = new Mat(source.Size(), MatType.CV_8UC3, new Scalar(0, 0, 0));
foreach (var component in watershedContours)
{
if (component.Item2.Count < (labels.Width * labels.Height) / 4 && component.Item1 >= 0)
{
var color = GetRandomColor();
foreach (var point in component.Item2)
watershed.Set(point.Y, point.X, color);
}
}
Which yields the following when shown:
If we draw on the source image the dams that were marked by a -1 earlier, we get this:
Edits:
I forgot to note: make sure you're cleaning up your mats after you're done with them. They WILL stay in memory, and OpenCVSharp may present you with some unintelligible error message. I should really be wrapping the mats above in using statements, but mat.Release() is an option as well.
Also, mmgp's answer above includes this line: dt = ((dt - dt.min()) / (dt.max() - dt.min()) * 255).astype(numpy.uint8), which is a histogram stretching step applied to the results of the distance transform. I omitted this step for a number of reasons (mostly because I didn't think the histograms I saw were too narrow to begin with), but your mileage may vary.
