OpenCV cvHoughCircles parameters

I have some doubts about the cvHoughCircles parameters. I have an image containing some circles and I want to count them, but my current call gives me the wrong number of circles.
I don't know how to choose the function's parameters: dp, min_dist, param1, param2, min_radius, max_radius. What values should I use, and how can I work them out for my images?

Choosing the parameters depends on the images you are using. An explanation of the parameters themselves can be found in the reference here
http://opencv.willowgarage.com/documentation/cpp/imgproc_feature_detection.html#cv-houghcircles
Using the function with the following parameters:
HoughCircles(gray, circles, CV_HOUGH_GRADIENT, 2, gray.rows/4, 200, 100, 10, 50);
will make it search for circles with a dp of 2 and a minimum distance between circle centers of 1/4 of the image height. The 200 is the upper threshold for the internal Canny edge detector and the 100 is the accumulator threshold that determines what to accept as a circle. The 10 and 50 are the minimum and maximum radius of the circles to accept.
If you have trouble finding these parameters, try making a test program that attaches these values to sliders so you can see the result on a test image.
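A minimal sketch of such a slider program (the window and variable names are placeholders; cv::HOUGH_GRADIENT is spelled CV_HOUGH_GRADIENT in older OpenCV versions):
#include <opencv2/opencv.hpp>
#include <algorithm>

cv::Mat gray;                    // preprocessed grayscale input
int param1 = 200, param2 = 100;  // slider-controlled values

void redetect(int, void*)
{
    std::vector<cv::Vec3f> circles;
    cv::HoughCircles(gray, circles, cv::HOUGH_GRADIENT, 2, gray.rows / 4,
                     std::max(param1, 1), std::max(param2, 1), 10, 50);
    cv::Mat display;
    cv::cvtColor(gray, display, cv::COLOR_GRAY2BGR);
    for (const cv::Vec3f& c : circles)
        cv::circle(display, cv::Point(cvRound(c[0]), cvRound(c[1])),
                   cvRound(c[2]), cv::Scalar(0, 0, 255), 2);
    cv::imshow("circles", display);
}

int main()
{
    gray = cv::imread("image.jpg", cv::IMREAD_GRAYSCALE);
    cv::namedWindow("circles");
    cv::createTrackbar("param1", "circles", &param1, 300, redetect);
    cv::createTrackbar("param2", "circles", &param2, 300, redetect);
    redetect(0, nullptr);
    cv::waitKey(0);
    return 0;
}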
param2 especially is difficult to determine by measuring. Since you know how many circles are in your image, you can do a parameter crawl in the following way:
for (int i = 1; i < 200; i++) {
    cv::HoughCircles(gray, circles, CV_HOUGH_GRADIENT, 2, gray.rows/4, 200, i, 10, 50);
    std::cout << "HoughCircles with param2=" << i << " gives " << circles.size() << " circles" << std::endl;
}
I don't know exactly how param1 and param2 interact, but you could do the same with a double for loop over both to find the optimum; a sketch follows below. The other values need to be measured from the picture. Instead of taking a screenshot, you can save the image you are actually processing with:
cv::imwrite("image.jpg", gray);
so you can be sure you are measuring the exact picture.
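A sketch of that double crawl (the expected count is a placeholder for the number of circles you know are in your image):
const int expected = 10; // placeholder: the known number of circles
for (int p1 = 50; p1 <= 300; p1 += 10) {
    for (int p2 = 1; p2 <= 200; p2 += 5) {
        std::vector<cv::Vec3f> circles;
        cv::HoughCircles(gray, circles, CV_HOUGH_GRADIENT, 2, gray.rows/4, p1, p2, 10, 50);
        if ((int)circles.size() == expected)
            std::cout << "param1=" << p1 << " param2=" << p2 << " finds all " << expected << " circles" << std::endl;
    }
}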

Related

Extracting the pieces and positions from a board game

So I am using OpenCV (in Go, with gocv) to attempt to extract the pieces from a board game. Originally I approached this problem with some success by manually finding the HSV values for each player's piece colour and the board positions. I got this working, producing a programmatic representation of every piece and its position on the board. The downside is that this requires quite serious human interaction when using a different board: "finding" all the correct HSV values. I asked here and got a suggestion to start by ignoring the colour, find all the pieces, and then use a clustering algorithm on colour to work out which player each piece belongs to. I might have to do something similar for the positions as well, but that's stage two.
So now I am attempting to just extract all pieces regardless of colour.
I started out trying to use NewSimpleBlobDetectorWithParams, however I made little progress; it seems to struggle a lot with false negatives/positives.
I tried HoughCirclesWithParams, but again this seems very dependent on the parameters, and I wasn't making much progress on actually detecting the pieces. Currently I am using FindContours, and that seems to give me reasonable accuracy. Let's look at the pictures.
The original image looks like this:
I have built a "dashboard" of controls, and the things that seem most "useful" are erosion, dilation and thresholding.
My current setup is a load of trackbars/sliders to adjust the values, and then:
gocv.CvtColor(clone, &clone, gocv.ColorRGBToGray)
erodeKernel := gocv.GetStructuringElement(gocv.MorphRect, image.Pt(trackers.erosionValue, trackers.erosionValue))
gocv.Erode(clone, &clone, erodeKernel)
dilateKernel := gocv.GetStructuringElement(gocv.MorphRect, image.Pt(trackers.dilateValue, trackers.dilateValue))
gocv.Dilate(clone, &clone, dilateKernel)
gocv.Threshold(clone, &clone, float32(trackers.thresTruncValue), 255, gocv.ThresholdTrunc)
gocv.Threshold(clone, &clone, float32(trackers.threshBinaryValue), 255, gocv.ThresholdBinary)
cannies := gocv.NewMat()
gocv.Canny(clone, &cannies, float32(trackers.cannyMin), float32(trackers.cannyMax))
cnts := gocv.FindContours(cannies, gocv.RetrievalTree, gocv.ChainApproxSimple)
followed by
for i := 0; i < cnts.Size(); i++ {
    cnt := cnts.At(i)
    if len(cnt.ToPoints()) < 5 {
        continue
    }
    rect := gocv.FitEllipse(cnt)
    gocv.Circle(&colorImage, image.Pt(rect.Center.X, rect.Center.Y), (rect.Height+rect.Width)/4, cntColor, 3)
    if gocv.ContourArea(cnt) < gocv.ArcLength(cnt, false) {
        continue
    }
    gocv.Rectangle(&colorImage, rect.BoundingRect, rectColor, 2)
    psVector := gocv.NewPointsVector()
    psVector.Append(cnt)
    gocv.DrawContours(&clone, psVector, 0, rectColor, 3)
    if rect.Center.X == (rect.BoundingRect.Max.X+rect.BoundingRect.Min.X)/2 && rect.Center.Y == (rect.BoundingRect.Min.Y+rect.BoundingRect.Max.Y)/2 {
        // Does the circle fit inside the square?
        if float64(rect.Width*rect.Height) > math.Pi*math.Pow(float64((rect.Height+rect.Width)/4), 2) {
            gocv.Circle(&colorImage, image.Pt(rect.Center.X, rect.Center.Y), 2, matchColor, 3)
            pieces = append(pieces, image.Pt(rect.Center.X, rect.Center.Y))
        }
    }
}
The idea being that if the contour has at least 5 points (FitEllipse needs 5), you can fit an ellipse and get its bounding rectangle; then, if the contour is closed, draw a circle at the center of the contour, and if it fits inside the bounding rectangle and they share the same center, it's probably a playing piece. Note: I came up with this principle after seeing where the circles and bounding rectangles were lying; when they matched up, it more often than not was a playing piece.
So I am making some nice progress. However, my question is: what approaches would help dig out the other colour pieces, and perhaps more "robustly" dig out the white pieces? I feel I don't quite have enough tools at my disposal, as if I increase one thing I have to decrease another, and finding 30 round chequers on a board feels like it should be reasonably robust.
When I adjust the values looking for the maroon pieces I can get a few of them,
but as you can see, the difference when playing with threshold/erosion/dilation does not do a wonderful job of finding them.
EDIT:
I have added the Hough circle algorithm back in to show that it produces a lot of false negatives; in this case it found only 1.
gocv.HoughCirclesWithParams(
clone,
&circles,
gocv.HoughGradient,
1, // dp
15, // minDist
75, // param1
20, // param2
20, // minRadius
45, // maxRadius
)
blue := color.RGBA{0, 0, 255, 0}
for i := 0; i < circles.Cols(); i++ {
    v := circles.GetVecfAt(0, i)
    // if circles are found
    if len(v) > 2 {
        x := int(v[0])
        y := int(v[1])
        r := int(v[2])
        gocv.Circle(&colorImage, image.Pt(x, y), r, blue, 2)
    }
}
Here is the threshold I was using.
So I realise I have said a lot here. I am looking for help detecting all the playing pieces on the board.
I am doing this in Go with gocv, but I can use Python or convert Python code if anyone has a good reference or something.
The original image without any amendments is here. As I say, my goal is to automatically detect the 30 pieces on the board; then I can use a clustering algorithm to work out which group each is in (I think...). I want to do it with the least amount of human interaction dragging sliders, as that is not a fun/nice user experience...
Thoughts I had:
The user could drag bounding boxes around groups; that would make the computer's job easier, since it would know it has to find pieces in there.
The user could select a colour on the page; that would tell the computer roughly which HSV values it should be looking in.
The user could calibrate against a known start position of the pieces, so the computer knows where to look.
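For the clustering step, this is roughly what I have in mind (a sketch in C++, which I can convert to Go; the function name, the sampling radius, and clustering on mean BGR colour are all just assumptions at this point):
#include <opencv2/opencv.hpp>
#include <vector>

// Assign each detected piece centre to one of two player groups by
// k-means clustering the mean colour sampled around each centre.
std::vector<int> clusterPieces(const cv::Mat& bgr,
                               const std::vector<cv::Point>& centers,
                               int radius)
{
    cv::Mat samples((int)centers.size(), 3, CV_32F);
    for (int i = 0; i < (int)centers.size(); i++) {
        // Mean colour inside a disc around the piece centre.
        cv::Mat mask = cv::Mat::zeros(bgr.size(), CV_8U);
        cv::circle(mask, centers[i], radius, cv::Scalar(255), -1);
        cv::Scalar mean = cv::mean(bgr, mask);
        for (int c = 0; c < 3; c++)
            samples.at<float>(i, c) = (float)mean[c];
    }
    cv::Mat labels, centroids;  // needs at least 2 samples for K=2
    cv::kmeans(samples, 2, labels,
               cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 10, 1.0),
               3, cv::KMEANS_PP_CENTERS, centroids);
    return std::vector<int>(labels.begin<int>(), labels.end<int>());
}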
Not exactly an answer to your question, but this would be much easier if you used object detection instead, the same way I find different objects in my tutorials. In this case I would have 2 or possibly 3 classes: light pieces, dark pieces, and possibly another class for the empty spaces.
I usually use OpenCV and Darknet/YOLO to solve these kinds of problems. I have many tutorials on my YouTube channel. Here is a simple one that detects a few shapes: https://www.youtube.com/watch?v=yOJIRArZeig Here is another that shows OpenCV and Darknet/YOLO used to solve Sudoku: https://www.youtube.com/watch?v=BUG7HlhuArw
Your case would be similar to that last one. You'd get back a vector of detected objects, with the bounding box coordinates of each one within the image or video frame. If you're interested, this is the tutorial video I recommend starting with: https://www.youtube.com/watch?v=pJ2iyf_E9PM

Custom image filter

1. Introduction:
I want to develop a special filter method for UIImages: the idea is to change all the colors in a picture to black except a certain color, which should keep its appearance.
Images are always nice, so look at this image to get what I'd like to achieve:
2. Explanation:
I'd like to apply a filter (algorithm) that is able to find specific colors in an image. The algorithm must replace all colors that do not match the reference color with e.g. black.
I've developed simple code that can replace specific colors (color ranges with a threshold) in any image.
But to be honest, this solution doesn't seem fast or efficient at all:
func colorFilter(image: UIImage, findcolor: String, threshold: Int) -> UIImage {
    let img: CGImage = image.cgImage!
    let context = CGContext(data: nil, width: img.width, height: img.height, bitsPerComponent: 8, bytesPerRow: 4 * img.width, space: CGColorSpaceCreateDeviceRGB(), bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)!
    context.draw(img, in: CGRect(x: 0, y: 0, width: img.width, height: img.height))
    let binaryData = context.data!.assumingMemoryBound(to: UInt8.self),
        referenceColor = HEXtoHSL(findcolor) // [h, s, l] integer array
    for i in 0..<img.height {
        for j in 0..<img.width {
            let pixel = 4 * (i * img.width + j)
            let pixelColor = RGBtoHSL([Int(binaryData[pixel]), Int(binaryData[pixel+1]), Int(binaryData[pixel+2])]) // [h, s, l] integer array
            let distance = calculateHSLDistance(pixelColor, referenceColor) // value between 0 and 100
            if distance > threshold {
                let setValue: UInt8 = 255
                binaryData[pixel] = setValue; binaryData[pixel+1] = setValue; binaryData[pixel+2] = setValue; binaryData[pixel+3] = 255
            }
        }
    }
    let outputImg = context.makeImage()!
    return UIImage(cgImage: outputImg, scale: image.scale, orientation: image.imageOrientation)
}
3. Code Information:
The code above works quite well, but it is absolutely inefficient. Because of all the calculation (especially the color conversions, etc.) this code takes a LONG (too long) time, so have a look at this screenshot:
My question: I'm pretty sure there is a WAY simpler solution for filtering a specific color (with a given threshold; #c6456f is similar to #C6476f, ...) than looping through EVERY single pixel to compare its color.
So what I was thinking about was something like a filter (CIFilter method) as an alternative to the code above.
Some Notes
Please don't post replies suggesting the OpenCV library; I would like to develop this algorithm exclusively in Swift.
The image the timing screenshot was taken from had a resolution of 500 * 800 px.
That's all.
Did you really read this far? Congratulations! Any help on how to speed up my code would be much appreciated. (Maybe there's a better way to get the pixel colors than looping through every pixel.) Thanks a million in advance :)
The first thing to do is profile (measure the time consumption of different parts of your function). It often shows that time is spent in some unexpected place, and it always suggests where to direct your optimization effort. It doesn't mean you have to focus on the most time-consuming part, but it will show you where the time is spent. Unfortunately I'm not familiar with Swift, so I cannot recommend a specific tool.
Regarding iterating through all pixels: whether you can avoid it depends on the image structure and your assumptions about the input data. I see two cases where you can:
When there is some optimized data structure built over your image (e.g. some statistics of its areas). That usually makes sense when you process the same image with the same (or a similar) algorithm with different parameters. If you process every image only once, it likely will not help you.
When you know the pixels of your specific color always exist in a group, so there cannot be an isolated single pixel. In that case you can skip one or more pixels, and when you find a matching pixel, analyze its neighbourhood.
I do not code on your platform but...
Well, I assume your masked areas (with the specific color) are continuous and large enough, meaning you have groups of pixels with big enough areas (not just stuff a few pixels thick). With this assumption you can create a density map for your color. What I mean is: if the minimum detail size of your specific-color areas is 10 pixels, then you can inspect every 8th pixel on each axis, speeding up the initial scan ~64 times, and then use the full scan only for regions containing your color. Here is what you have to do:
determine properties
You need to set the step for each axis (how many pixels you can skip without missing a colored zone). Let's call these dx, dy.
create density map
Simply create a 2D array that holds whether the center pixel of each region matches your specific color. So if your image has resolution xs,ys then your map will be:
int mx=xs/dx;
int my=ys/dy;
int map[mx][my],x,y,xx,yy;
for (yy=0,y=dy>>1;y<ys;y+=dy,yy++)
    for (xx=0,x=dx>>1;x<xs;x+=dx,xx++)
        map[xx][yy]=compare(pixel(x,y),specific_color)<threshold;
enlarge map set areas
Now you should enlarge the set areas in map[][] to the neighboring cells, because step 2 could miss the edge of your color region.
process all set regions
for (yy=0;yy<my;yy++)
    for (xx=0;xx<mx;xx++)
        if (map[xx][yy])
            for (y=yy*dy;y<(yy+1)*dy;y++)
                for (x=xx*dx;x<(xx+1)*dx;x++)
                    if (compare(pixel(x,y),specific_color)>=threshold) pixel(x,y)=0x00000000;
If you want to speed this up even more, you need to detect the set map[][] cells that are on an edge (have at least one zero neighbor). You can distinguish the cells like this:
0 - no specific color is present
1 - inside of color area
2 - edge of color area
That can be done with a simple pass in O(mx*my).
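For example, keeping the same map/mx/my conventions as above (this pass is only described, not shown, so treat it as an illustrative sketch):
// mark set cells that touch at least one unset cell as edge cells (2)
for (yy=0;yy<my;yy++)
    for (xx=0;xx<mx;xx++)
        if (map[xx][yy]==1)
            for (y=-1;y<=1;y++)
                for (x=-1;x<=1;x++)
                    if ((xx+x>=0)&&(xx+x<mx)&&(yy+y>=0)&&(yy+y<my))
                        if (map[xx+x][yy+y]==0) { map[xx][yy]=2; x=2; y=2; }
After that you need to check for color only in the edge regions: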
for (yy=0;yy<my;yy++)
    for (xx=0;xx<mx;xx++)
        if (map[xx][yy]==2)
        {
            for (y=yy*dy;y<(yy+1)*dy;y++)
                for (x=xx*dx;x<(xx+1)*dx;x++)
                    if (compare(pixel(x,y),specific_color)>=threshold) pixel(x,y)=0x00000000;
        }
        else if (map[xx][yy]==0)
        {
            for (y=yy*dy;y<(yy+1)*dy;y++)
                for (x=xx*dx;x<(xx+1)*dx;x++)
                    pixel(x,y)=0x00000000;
        }
This should be even faster. In case your image resolution xs,ys is not a multiple of the cell size dx,dy, you should handle the outer edge of the image either by zero padding or by special loops for the missing part of the image...
By the way, how long does it take just to read and set your whole image?
for (y=0;y<ys;y++)
    for (x=0;x<xs;x++)
        pixel(x,y)=pixel(x,y)^0x00FFFFFF;
If this alone is slow, then your pixel access is too slow and you should use a different API for it. That is a very common mistake on the Windows GDI platform, as people usually use Pixels[][], which is slower than a crawling snail. There are other ways, like bit locking/blitting, ScanLine, etc., so in such a case you need to look for something fast on your platform. If you cannot speed up even this, then you cannot do much else... By the way, what hardware does this run on?

PaintCode - move object on the path

I would like to draw a curved line and attach an object to it. Is it possible to create a fraction (from 0.0 to 1.0) that moves my object along the path? When the fraction is 0 the object is at the beginning, when it is 0.5 it is halfway along, and when it is 1.0 it is at the end. Of course I want a curved path, not a straight line :) Is this possible in PaintCode?
If you need it only as a progress bar, it is possible in PaintCode. The trick is to use a dashed stroke with a very large Gap and then just change the Dash.
Then just attach a Variable and you are done.
Edit: Regarding the discussion under the original post, this solution uses points as the unit, so the dash is distributed equally along the curve no matter how curved the Bézier is.
Because you're going to walk along the curve by linear distance, something Bézier curves are terrible for, you need to build the linear mapping yourself. That's fairly simple though:
When you draw the curve, also build a look-up table that samples the curve once, at say 100 points (t=0, t=0.01, t=0.02, etc.). In pseudocode:
lut = [];
lut[0] = 0;
tlen = curve.length();
for (v=0; v<=100; v++) {
    t = v/100;
    clen = curve.split(0,t).length();
    percent = 100*clen/tlen;
    lut[percent] = t;
}
This may leave gaps in your LUT: you can either fix those in a second pass, or leave them in and do a binary scan of your array for the nearest percentage that does have a value.
Then, when you need to show your progress at some percentage, you just look up the corresponding t value: say you need to show 83%, you look up lut[83] and draw your object at the point that t gives you.
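To make this concrete, a rough C++ sketch for a cubic Bézier (Vec2 and the control-point names are placeholders; it flattens the curve into 100 segments and inverts the cumulative length, which also avoids the LUT gaps mentioned above):
#include <array>
#include <cmath>

struct Vec2 { double x, y; };

// Point on a cubic Bezier at parameter t.
Vec2 cubicAt(Vec2 p0, Vec2 p1, Vec2 p2, Vec2 p3, double t)
{
    double u = 1.0 - t;
    return { u*u*u*p0.x + 3*u*u*t*p1.x + 3*u*t*t*p2.x + t*t*t*p3.x,
             u*u*u*p0.y + 3*u*u*t*p1.y + 3*u*t*t*p2.y + t*t*t*p3.y };
}

// lut[p] holds the t at which p percent of the arc length is reached.
std::array<double, 101> buildArcLengthLut(Vec2 p0, Vec2 p1, Vec2 p2, Vec2 p3)
{
    const int N = 100;
    std::array<double, N + 1> clen{};  // cumulative length at each sample
    Vec2 prev = p0;
    for (int v = 1; v <= N; v++) {
        Vec2 cur = cubicAt(p0, p1, p2, p3, (double)v / N);
        clen[v] = clen[v - 1] + std::hypot(cur.x - prev.x, cur.y - prev.y);
        prev = cur;
    }
    std::array<double, 101> lut{};
    for (int p = 0, v = 0; p <= 100; p++) {
        double target = clen[N] * p / 100.0;    // arc length for this percent
        while (v < N && clen[v] < target) v++;  // first sample at/past target
        lut[p] = (double)v / N;
    }
    return lut;
}
To show 83% progress you would then draw the object at cubicAt(p0, p1, p2, p3, lut[83]).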

How can I use bwlabel or regionprops to extract set of pixels of each label?

I'm following this tutorial
The goal is to be able to spit out either:
a. the center of each labeled object
b. all pixels associated with each labeled object
so that I end up with an array of either 'a.' for each object or 'b.' for each object.
I'm really not sure how to go about this. Are there MATLAB tools to help extract these sets of pixels or centers per label?
Update
I did manage to circle 80% of what I wanted using regionprops, however it doesn't capture the labels precisely; it just sets a circle around them while capturing the background as well. Is that really unavoidable? I'm just not sure how to access the set of pixels for each circled item.
r = regionprops(L, 'All');
imshow(imagergb);
areas = {r.Area};
Bboxes = {r.BoundingBox};
for k = 2:numel(r)
    if areas{k} > 50 && areas{k} < 1100
        rectangle('Position', Bboxes{k}, 'LineWidth', 1, 'EdgeColor', 'b', 'Curvature', [1 1]);
    end
end
So what I'm trying to do is, for example:
I thought it might just be
r = regionprops(L, 'PixelIdxList')
then
element1 = r(1).PixelIdxList
but I couldn't figure out how to get the position of each pixel.
I also tried
Z = bwlabel(L);
but imshow(Z==1) shows all the labels, and imshow(Z==2) shows the background, all the labels, and the background. I couldn't test bwlabeln since I'm not exactly sure what to enter for the r and c arguments.
Using regionprops(L, 'PixelIdxList') is correct. It gives you lists of pixel indices for each label. You can then convert them to [x,y] coordinates using (for the first label, for example)
[y,x] = ind2sub(size(L), r(1).PixelIdxList)
You can get label centers by using regionprops(L, 'Centroid'). This already gives you [x,y] coordinates for each label. Note that these are subpixel coordinates, so you may need to round them if you want to use them as indices.

OpenCV: Generating points from image after thinning

I've run into an issue generating floating point coordinates from an image.
The original problem is as follows:
The input image is handwritten text. From this I want to generate a set of points (just x,y coordinates) that make up the individual characters.
At first I used findContours to generate the points. Since this finds the edges of the characters, the image first needs to be run through a thinning algorithm, because I'm not interested in the shape of the characters, only the lines, or in this case, the points.
Input:
Thinning:
So I run my input through the thinning algorithm and all is fine; the output looks good. Running findContours on this, however, does not work out so well: it skips a lot of stuff and I end up with something unusable.
The second idea was to generate bounding boxes (with findContours), use these bounding boxes to grab the characters from the thinning output, collect all non-white pixel indices as "points", and offset them by the bounding box position. This generates even worse output and seems like a bad method.
Horrible code for this:
Mat temp = new Mat(edges, bb);
byte roi_buff[] = new byte[(int) (temp.total() * temp.channels())];
temp.get(0, 0, roi_buff);
int COLS = temp.cols();
List<Point> preArrayList = new ArrayList<Point>();
for (int i = 0; i < roi_buff.length; i++) {
    if (roi_buff[i] != 0) {
        Point tempP = bb.tl();
        tempP.x += i % COLS;
        tempP.y += i / COLS;
        preArrayList.add(tempP);
    }
}
Are there any alternatives, or am I overlooking something?
UPDATE:
I overlooked the fact that I need the points (pixels) to be ordered. In the method above I simply take a scanline approach to grabbing all the pixels. If you look at the 'o', for example, it would first grab the point on the left-hand side, then the one on the right-hand side. I need them ordered by neighbouring pixels, since I want to draw paths with the points later on (outside of OpenCV).
Is this possible?
You should look into implementing your own connected components labelling. The concept is very simple: you scan the first line and assign unique labels to each horizontally connected strip of pixels. Basically, you check for every pixel whether it is connected to its left neighbour, and assign it either that neighbour's label or a new label. In the second row you do the same, but you also check against the pixels above. Sometimes you need a label merge: two strips that were not connected in the previous row are joined in the current row. The way to deal with this is either to keep a list of label equivalences or to use pointers to labels (so you can easily do a complete label change for an object).
This is basically what findContours does, but if you implement it yourself you have the freedom to go for 8-connectedness and even bridge a single-pixel or two-pixel gap. That way you get "almost-connected components labelling". It looks like you need this for the 'w' in your example picture.
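A rough sketch of that two-pass scheme in C++ with OpenCV types (4-connectedness for brevity, and an illustration rather than findContours' actual internals; extending the neighbour checks to the diagonals, or to pixels two columns away, gives the 8-connected and gap-bridging variants):
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

// Resolve a label to its equivalence-class root (with path halving).
static int findRoot(std::vector<int>& parent, int l)
{
    while (parent[l] != l) l = parent[l] = parent[parent[l]];
    return l;
}

cv::Mat labelImage(const cv::Mat& bin)  // bin: 8-bit, non-zero = foreground
{
    cv::Mat labels = cv::Mat::zeros(bin.size(), CV_32S);
    std::vector<int> parent{0};         // parent[0] is the background label
    int next = 1;
    for (int y = 0; y < bin.rows; y++)
        for (int x = 0; x < bin.cols; x++) {
            if (!bin.at<uchar>(y, x)) continue;
            int left = x > 0 ? labels.at<int>(y, x - 1) : 0;
            int up   = y > 0 ? labels.at<int>(y - 1, x) : 0;
            if (!left && !up) {         // new strip: assign a new label
                parent.push_back(next);
                labels.at<int>(y, x) = next++;
            } else if (left && up && left != up) {
                // two strips join here: record the label equivalence
                int a = findRoot(parent, left), b = findRoot(parent, up);
                parent[std::max(a, b)] = std::min(a, b);
                labels.at<int>(y, x) = std::min(a, b);
            } else {
                labels.at<int>(y, x) = left ? left : up;
            }
        }
    // second pass: replace every label by its equivalence-class root
    for (int y = 0; y < bin.rows; y++)
        for (int x = 0; x < bin.cols; x++)
            labels.at<int>(y, x) = findRoot(parent, labels.at<int>(y, x));
    return labels;
}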
Once you have the image labelled this way, you can push all the pixels of a single label into a vector and order them something like this: find the top-left pixel, push it to a new vector and erase it from the original vector. Then find the pixel in the original vector closest to it, push it to the new vector and erase it from the original. Continue until all pixels have been transferred.
It will not be very fast this way, but it should be a start.
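And a sketch of that ordering step (nearest-neighbour chaining, assuming the pixels of one label have already been collected into a vector):
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

std::vector<cv::Point> orderByNearestNeighbour(std::vector<cv::Point> pts)
{
    std::vector<cv::Point> ordered;
    if (pts.empty()) return ordered;
    // start from the top-left-most pixel
    auto start = std::min_element(pts.begin(), pts.end(),
        [](const cv::Point& a, const cv::Point& b) {
            return a.y != b.y ? a.y < b.y : a.x < b.x;
        });
    ordered.push_back(*start);
    pts.erase(start);
    // repeatedly jump to the closest remaining pixel
    while (!pts.empty()) {
        cv::Point cur = ordered.back();
        auto next = std::min_element(pts.begin(), pts.end(),
            [&](const cv::Point& a, const cv::Point& b) {
                cv::Point da = a - cur, db = b - cur;
                return da.dot(da) < db.dot(db);
            });
        ordered.push_back(*next);
        pts.erase(next);
    }
    return ordered;
}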
