Let's say we have a dark donut on a white background. What is a good way, looking only at any pixel's value (0 = not white, 1 = white) and any neighbouring pixel values, to determine which one of the two white regions found on the image is inside the donut?
(source: 123rf.com)
In computer graphics, this problem has been studied extensively in the context of geometry processing. The goal is to know whether a point is inside or outside a polygon (possibly with holes), and it has been used for color filling, for example.
The most common solution is to cast a ray in a random direction from your current point (for simplicity, you can take a horizontal scan line) and count the number of intersections with the boundary. If this number is even, you are outside; if it is odd, you are inside.
In the context of image processing, finding the boundary can be done with edge-detection techniques (for instance, the Sobel operator). You can then just walk along a single row from your given point to the right (for instance) and count how many edges you find.
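For illustration, here is a minimal sketch of the even-odd (ray-casting) rule for the general polygon case described above; the polygon is assumed to be a plain list of (x, y) vertices and the function name is just for this example:

```python
def point_in_polygon(px, py, polygon):
    """Even-odd rule: cast a horizontal ray to the right of (px, py) and
    count how many polygon edges it crosses. Odd count = inside."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # the edge crosses the ray's height exactly when its endpoints
        # lie on opposite sides of the horizontal line y = py
        if (y1 > py) != (y2 > py):
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > px:
                inside = not inside
    return inside
```

On a raster image, the same idea becomes counting edge transitions along the pixel row to the right of the query pixel.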
WhitAngl's answer is correct, so my answer simply brings into context some problems involved in image processing. If you are already aware of these, sorry for the naïveness.
Given your initial image, simply considering its edges is bound to give incorrect results due to the inherent problems of edge detection. Edges might be broken, they might not be detected, etc. Also, from your own considerations, we cannot simply use 0 = not white, 1 = white. With your original image, this is the result of such a consideration:
If we suppose you have a better binary representation, such as this one:
Then WhitAngl's answer applies perfectly. Also, in this case, the answer can be simplified to: if a black pixel of the exterior edge touches a white pixel, then that white pixel is not an interior one. This gives:
Where every white pixel is interior to your donut.
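One practical way to implement this idea (a sketch only, assuming a reasonably clean binary image and a hypothetical file name) is to label the white regions and discard any label that touches the image border; whatever white remains is enclosed by the donut:

```python
import cv2
import numpy as np

binary = cv2.imread("donut.png", cv2.IMREAD_GRAYSCALE)          # hypothetical input
_, binary = cv2.threshold(binary, 128, 255, cv2.THRESH_BINARY)

# label the connected white regions
num_labels, labels = cv2.connectedComponents((binary == 255).astype(np.uint8))

# any white label present on the image border belongs to the exterior
border = np.concatenate([labels[0, :], labels[-1, :], labels[:, 0], labels[:, -1]])
exterior = set(border[border != 0])

# everything else is a white region fully enclosed by the donut
interior = [l for l in range(1, num_labels) if l not in exterior]
interior_mask = np.isin(labels, interior)
```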
I have the following problem, best explained with this picture:
I have a hole (blue) with an edge (white line).
I now want to fill the hole with the color of the region next to it.
So above the white line it should be yellow and below the white line red.
Is there an algorithm which does something similar that I could adapt?
Possibly even an implementation in openCV?
EDIT
Ok, maybe to specify: the white line comes from an edge detection and is irregular. Also, there are many blue spots like this on a big image, and the color for each spot needs to be computed according to the adjacent colors.
EDIT2: added a better example image containing the whole scene:
To further clarify: only the blue "holes" should be filled, because they are the regions of error we know about. The white object edges are taken from a ground truth for this example, which is more precise than the data we can actually work with. It is possible to get an approximation of that edge, though.
The data is a depth map from a multi-camera scan, by the way. The goal is to fill the error regions caused by occlusion of the objects: if an object can't be seen from two camera views because it is occluded, no depth estimation is possible.
Maybe you want to have a look at OpenCV's function called floodFill.
This function has an input mask parameter that you could use to specify the white line between the two colored regions.
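As a rough sketch (the file names, the seed point, and the fill colour are placeholders you would determine yourself), the mask can carry the white line so the fill cannot leak across it:

```python
import cv2
import numpy as np

img = cv2.imread("depth_map.png")                            # image containing the blue holes
edges = cv2.imread("edge_lines.png", cv2.IMREAD_GRAYSCALE)   # the white line(s)

# floodFill's mask must be 2 px larger than the image; non-zero mask pixels
# act as barriers, so the white line stops the fill from crossing it
mask = np.zeros((img.shape[0] + 2, img.shape[1] + 2), np.uint8)
mask[1:-1, 1:-1] = (edges > 0).astype(np.uint8)

seed = (80, 50)                # a point inside the hole, given as (x, y)
fill_color = (0, 0, 255)       # e.g. the colour sampled next to the hole
cv2.floodFill(img, mask, seed, fill_color)
```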
I kinda found a solution to my problem: OpenCV has an inpaint function.
I'm gonna modify this according to this paper. Should work fine for my case.
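For reference, a minimal inpaint call looks roughly like this (the file name and the way the hole mask is built are assumptions; in practice the mask would come from wherever you detect the error regions):

```python
import cv2
import numpy as np

depth = cv2.imread("depth_map.png")                      # hypothetical input
# mask of the error regions to fill; here we assume the holes are pure blue (BGR)
hole_mask = cv2.inRange(depth, np.array([255, 0, 0]), np.array([255, 0, 0]))

filled = cv2.inpaint(depth, hole_mask, 3, cv2.INPAINT_TELEA)
```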
Using GPUImage, I am able to detect corners of a book/page in an image. But sometimes, it will pass more than 4 points, in which case I will need to process and figure out the best rectangle out of these points. Here's an example:
What's the most efficient way to figure out the best rectangle in this case?
Thanks
If you're using a corner detection algorithm, then you can filter results based on the relative strength of the detected corner. The contrast at the book corners relative to your current background appears to be much stronger than the contrast at the point found in the wood grain. Are there relative magnitudes associated with each point, or do you just get the points? Setting thresholds for edge strengths can mean a lot of fiddling unless the intensities of the foreground and background are relatively constant.
Your sample image could be blurred or morphed. For example, the right morphological "close" on light pixels could eliminate the texture in the wood grain without having an effect on the size and shape of the book. (http://en.wikipedia.org/wiki/Mathematical_morphology)
Another possibility is to shrink the image to a much smaller size and then perform detection on that. Resizing the image will tend to wipe out tiny details such as whatever wood grain pattern is currently being detected.
Picking the right lens and lighting can make the image easier to process. Try to simplify the image as much as possible before processing it. As mentioned above, "dark field" lighting that would illuminate just the book edges would present a much simpler image for processing. Writing down the constraints can make it more obvious which solution will be most robust and simplest to implement. Finding any rectangle anywhere in an image is very difficult; it's much easier to find a light rectangle on a dark background if the rectangle is at least 100 x 100 pixels in size, rotated no more than 15 degrees from square to the image edges, etc.
More involved solutions can be split into two approaches:
Solving the problem given only 4 or more (x, y) points.
Using a different image processing technique altogether for the sample image.
1. Solving the problem given only the points
If you generally only have 5 or 6 points, and if you are confident that 4 of those points will belong to the corners of the rectangle that you want, then you can try this:
1. Find the convex hull of all points. The convex hull is the N-gon that completely encompasses all points. If the points were pegs sticking up, and you stretched a rubber band around them and let it snap into place, the final shape of the rubber band would be the convex hull. Algorithms that find convex hulls typically return a list of points ordered counterclockwise from the bottom leftmost point.
2. Make a copy of your point list and remove points from the copy until only four points remain. These four remaining points will still be ordered counterclockwise.
3. Calculate the angle formed by each set of three successive points: points 1, 2, 3, then 2, 3, 4, then 3, 4, 1, and so on.
4. If an angle is outside a reasonable tolerance--less than 70 degrees or greater than 110 degrees--skip back to step 2 and remove the next point (or set of points).
5. Store the min and max angles for each set of 4 points.
6. Repeat steps 2 - 5, removing a different point (or points) each time.
7. Track the set of points for which the min and max angles are closest to 90 degrees.
http://en.wikipedia.org/wiki/Convex_hull
There are a number of other checks and constraints that could be introduced. For example, if the point-to-point distances for 3 successive points in the convex hull (pts N to N+1, and N+1 to N+2) are close to the expected width and height of the book, then you might mark these as known good points and only test the remaining points to see which is the fourth point.
The technique above can get unwieldy if you get quite a few points, but it may work if two or three of the book corner points are expected to be found on the convex hull.
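As a rough sketch of the steps above (not a drop-in solution; the angle tolerance and the brute-force search over hull subsets are simplifications):

```python
import itertools
import numpy as np
import cv2

def best_quad(points):
    """Pick the 4 convex-hull points whose corner angles are closest to 90 degrees.
    `points` is an (N, 2) array of candidate corner locations."""
    hull = cv2.convexHull(np.float32(points)).reshape(-1, 2)   # ordered around the hull
    best, best_err = None, float("inf")
    for quad in itertools.combinations(range(len(hull)), 4):   # preserves hull order
        pts = hull[list(quad)]
        angles = []
        for i in range(4):
            a, b, c = pts[i - 1], pts[i], pts[(i + 1) % 4]
            v1, v2 = a - b, c - b
            cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
            angles.append(np.degrees(np.arccos(np.clip(cosang, -1, 1))))
        if min(angles) < 70 or max(angles) > 110:   # tolerance from step 4
            continue
        err = max(abs(a - 90) for a in angles)
        if err < best_err:
            best, best_err = pts, err
    return best
```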
For any geometric problem, I always recommend checking out GeometricTools.com, which has a lot of great, optimized source code for all sorts of problems. It's very handy to have the book as well, especially if you can find a cheap copy using AddAll.com.
http://www.geometrictools.com/
2. Other image processing techniques for your sample image
Although I could be wrong, it appears that GPUImage doesn't have many general-purpose image processing algorithms. Some other image processing algorithms could make this problem much simpler to solve.
Though there isn't space to go into it here, one of the keys to successful image processing is appropriate lighting. Make sure your lighting is consistent. A diffuse light that evenly illuminates the book and the background would work well. You can simplify the problem using funkier lighting: if you have four lights (or a special ring light), you can provide horizontal illumination from the top, bottom, left, and right that will cause the edges of the book to appear bright and other surfaces to appear dark.
http://www.benderassoc.com/mic/lighting/nerlite/Darkfield.htm
If you can use some other GPU libraries to do image processing, then one of the following techniques could work nicely:
Connected component labeling (a.k.a. finding blobs). It shouldn't be too hard to use either binary thresholding or a watershed algorithm to separate the white blob that is the book from the rest of the background. Once the blob for the book is identified, finding the corners is easier. (http://en.wikipedia.org/wiki/Connected-component_labeling) In OpenCV you can find the "contours."
Generate a list of edge points, then have four separate line-fitting tools search from top to bottom, right to left, bottom to top, and left to right to find the four strong (and mostly straight) edges associated with the book. In your sample image, though, either the book cover is slightly warped or the camera lens has introduced barrel distortion.
Use a corner detector designed to find light corners on a dark background. If you will always be looking for a white book on a wood grain background, you can create a detector to find white corners on a brown background.
Use a Hough technique to find the four strongest lines in the image. (http://en.wikipedia.org/wiki/Hough_transform)
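For the connected-component/contour option in the list above, a minimal OpenCV sketch (the file name and thresholding choice are assumptions) could look like this:

```python
import cv2

gray = cv2.imread("book.jpg", cv2.IMREAD_GRAYSCALE)        # hypothetical input
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# find the largest white blob and fit a rotated rectangle around it
# (OpenCV 4.x returns (contours, hierarchy); 3.x also returns the image)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
book = max(contours, key=cv2.contourArea)
rect = cv2.minAreaRect(book)        # ((cx, cy), (w, h), angle)
corners = cv2.boxPoints(rect)       # the four corner points of the fitted rectangle
```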
The algorithmic technique that works best will depend on your constraints: are you looking for rectangles only of a certain size? is the contrast between foreground and background consistent? can you introduce lighting to simplify the appearance of the image? and so on.
I have a bunch of "simple" images and I want to compare if they are similar together. I compare them to each other using template matching (cv::matchTemplate) and results are quite good.
Now I want to fine-tune my program and I face a problem. For example, I have two images which look very much alike. The only differences are that one has a thicker line and the digit in front of the item is different. When both images are small, a one-pixel difference in line thickness makes a big difference in the template matching result. When the line thicknesses are the same and the only difference is the front digit, I get a matching result of about 0.98 with CV_TM_CCORR_NORMED on a successful match. When the line thickness is different, the matching result is about 0.95.
I cannot decrease my threshold value below 0.98 because some other similar images have the same line thickness.
Here are example images:
So what options do I have?
I have tried:
dilate the original and template
erode also both
morphologyEx both
calculating keypoints and comparing them
finding corners
But no big success yet. Are those images so simple that detecting "good features" is hard?
Any help is very welcome.
Thank you!
EDIT:
Here are some other example images. The ones my program considers similar are put in the same zip folder.
ZIP
A possible way might be thinning the two images so that every line is one pixel wide, since the differing thickness is causing your main problem with similarity.
The procedure would be to first binarize/threshold the images, then apply a thinning operation on both images, so both then have the same stroke thickness of 1 px. Then use the usual template matching that gave you good results before.
In case you'd like more details on the thinning/skeletonization of binary images here are a few OpenCV implementations posted on various discussion forums and OpenCV groups:
OpenCV code for thinning (Guo and Hall algo, works with CvMat inputs)
The JR Parker implementation using OpenCV
Possibly more efficient code here (uses OpenCV optimized access methods a lot, however most of the page is in Japanese!)
And lastly a brief overview of thinning in case you're interested.
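A minimal sketch of that pipeline in Python (assuming the cv2.ximgproc.thinning function from opencv-contrib; Otsu thresholding and TM_CCORR_NORMED are just example choices):

```python
import cv2

def thin_and_match(image_path, template_path):
    """Binarize both images, thin strokes to 1 px, then template match."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    tpl = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)

    _, img_bw = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    _, tpl_bw = cv2.threshold(tpl, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    img_thin = cv2.ximgproc.thinning(img_bw)   # Zhang-Suen thinning by default
    tpl_thin = cv2.ximgproc.thinning(tpl_bw)

    result = cv2.matchTemplate(img_thin, tpl_thin, cv2.TM_CCORR_NORMED)
    return cv2.minMaxLoc(result)[1]            # best correlation score
```

Matching 1 px strokes directly can be brittle, so a slight dilation of both thinned images before matching may be worth trying.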
You need something more elementary here; there isn't much reason to go for fancy methods. Your figures are already binary, and their shapes are very similar overall.
One initial idea: consider the upper points and bottom points in an image and form an upper hull and a bottom hull (simply a hull, not a convex hull or anything else). A point is an upper point (resp. bottom point) of column i if it is the first non-background point found when scanning column i from the top (bottom) of the image. Also, your image is mostly a single connected component (in some cases there are separated vertical bars, but that is fine), so you can discard small components easily. This step is important for your situation because some of your figures contain noise that is irrelevant to the rest of the image. Considering a connected component with fewer than 100 points as small, these are the hulls you get for the respective images included in the question:
The blue line indicates the upper hull, the green line the bottom hull. If it is not apparent, when we consider the regional maxima and regional minima of these hulls, we obtain the same number in both images. Furthermore, they are all very close except for some displacement along the y axis. If we take the mean x position of the extrema and plot the lines of both images together, we get the following figure. Here, the blue and green lines are for the second image, and the red and cyan lines for the first. Red dots are at the mean x coordinate of the regional minima, and blue dots the same for the regional maxima (these are our points of interest). (The following image has been resized for better visualization.)
As you can see, you get many nearly overlapping points without doing anything. If we do even less, i.e. ignore this overlap, we can classify your images in the trivial way: an image a and another image b belong to the same class if they have the same number of regional maxima in the upper hull, the same number of regional minima in the upper hull, the same number of regional maxima in the bottom hull, and the same number of regional minima in the bottom hull. Doing this for all your images, all of them are grouped correctly except for the following situation:
In this case we have only 3 maxima and 3 minima for the upper hull of the first image, while there are 4 maxima and 4 minima for the second. Below you can see the plots for the hulls and the points of interest obtained:
As you can notice, in the second upper hull there are two extrema very close together. Smoothing this curve eliminates both extrema, making the images match by the trivial method. Also note that if you draw a rectangle around your images, this method will say they are all equal. In that case you will want to compare multiple hulls, discarding the points in the current hull and constructing new ones. Nevertheless, this method is able to group all your images correctly, given they are all very simple and mostly noise-free.
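A small sketch of the hull-signature idea in Python/NumPy (it omits the small-component removal and the smoothing mentioned above, and the function names are just for this example):

```python
import numpy as np

def extrema_counts(curve):
    """Count the local maxima and minima of a 1D curve, ignoring flat runs."""
    d = np.sign(np.diff(curve))
    d = d[d != 0]
    turns = np.diff(d)
    return int(np.sum(turns < 0)), int(np.sum(turns > 0))

def hull_signature(binary):
    """Extrema counts of the upper and bottom hulls of a binary shape
    (non-zero = foreground); columns without foreground pixels are skipped."""
    cols = np.where(binary.any(axis=0))[0]
    upper = np.array([np.argmax(binary[:, c] != 0) for c in cols])
    flipped = binary[::-1, :]
    bottom = np.array([binary.shape[0] - 1 - np.argmax(flipped[:, c] != 0) for c in cols])
    return extrema_counts(upper) + extrema_counts(bottom)

# trivial classification: two images go in the same class when
# hull_signature(a) == hull_signature(b)
```

Since only the counts are compared, it does not matter that "maxima" and "minima" are flipped in image row coordinates (where y grows downward).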
From what I can tell, the difficulty is when the shape is the same and just the size is different. A simple hack approach could be:
- Subtract the images, then erode. If the shapes were the same but one slightly bigger, subtracting will leave only the edges, which will be thin and will vanish with erosion, like noise.
A somewhat more formal approach would be to take the contours and then the approximate polygons and do an invariants comparison (Hu moments, etc.).
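A hedged sketch of the contour/Hu-moment comparison (cv2.matchShapes already computes the Hu-moment distance internally; lower values mean more similar shapes):

```python
import cv2

def shape_distance(binary_a, binary_b):
    """Compare the largest contour of two binary images using Hu-moment invariants."""
    def largest_contour(binary):
        # OpenCV 4.x returns (contours, hierarchy)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        return max(contours, key=cv2.contourArea)

    a, b = largest_contour(binary_a), largest_contour(binary_b)
    return cv2.matchShapes(a, b, cv2.CONTOURS_MATCH_I1, 0.0)
```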
I am currently facing a, in my opinion, rather common problem which should be quite easy to solve, but so far all my approaches have failed, so I am turning to you for help.
I think the problem is explained best with some illustrations. I have some Patterns like these two:
I also have an image like this (it could probably be better, because the photo this one originated from was quite poorly lit):
(Note how the Template was scaled to kinda fit the size of the image)
The ultimate goal is a tool which determines whether the user shows a thumbs-up/thumbs-down gesture, and also some angles in between. So I want to match the patterns against the image and see which one resembles the picture the most (or, to be more precise, the angle the hand is showing). I know the direction in which the thumb points in each pattern, so if I find the pattern that looks identical I also have the angle.
I am working with OpenCV (with Python Bindings) and already tried cvMatchTemplate and MatchShapes but so far its not really working reliably.
I can only guess why MatchTemplate failed, but I think that a smaller pattern with a smaller white area fits fully into the white area of the picture, thus creating the best matching factor, although it's obvious that they don't really look the same.
Are there some methods hidden in OpenCV I haven't found yet, or is there a known algorithm for this kind of problem that I should reimplement?
Happy New Year.
A few simple techniques could work:
After binarization and segmentation, find Feret's diameter of the blob (a.k.a. the farthest distance between points, or the major axis).
Find the convex hull of the point set, flood fill it, and treat it as a connected region. Subtract the original image (with the thumb) from this filled hull. The difference will be the area between the thumb and the fist, and the position of that area relative to the center of mass should give you an indication of rotation.
Use a watershed algorithm on the distances of each point to the blob edge. This can help identify the connected thin region (the thumb).
Fit the largest circle (or largest inscribed polygon) within the blob. Dilate this circle or polygon until some fraction of its edge overlaps the background. Subtract this dilated figure from the original image; only the thumb will remain.
If the size of the hand is consistent (or relatively consistent), then you could also perform N morphological erode operations until the thumb disappears, then N dilate operations to grow the fist back to roughly its original size. Subtract this fist-only blob from the original blob to get the thumb blob. Then use the thumb blob's direction (Feret's diameter) and/or its center of mass relative to the fist blob's center of mass to determine direction.
Techniques to find critical points (regions of strong direction change) are trickier. At the simplest, you might use corner detectors and then check the distance from one corner to another to identify the place where the inner edge of the thumb meets the fist.
For more complex methods, look into papers about shape decomposition by authors such as Kimia, Siddiqi, and Xiaofeng Mi.
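As an illustration of the erode/dilate idea above, here is a rough sketch; the iteration count N and the use of image moments for the centroids are assumptions that would need tuning to the actual hand size:

```python
import cv2
import numpy as np

def thumb_angle(binary, n=15):
    """Remove the thumb with N erosions, grow the fist back with N dilations,
    subtract to isolate the thumb, then estimate the pointing direction from
    the thumb centroid relative to the fist centroid."""
    kernel = np.ones((3, 3), np.uint8)
    fist = cv2.dilate(cv2.erode(binary, kernel, iterations=n), kernel, iterations=n)
    thumb = cv2.subtract(binary, fist)

    m_fist = cv2.moments(fist, binaryImage=True)
    m_thumb = cv2.moments(thumb, binaryImage=True)
    if m_fist["m00"] == 0 or m_thumb["m00"] == 0:
        return None                           # nothing left after the subtraction
    cx_f, cy_f = m_fist["m10"] / m_fist["m00"], m_fist["m01"] / m_fist["m00"]
    cx_t, cy_t = m_thumb["m10"] / m_thumb["m00"], m_thumb["m01"] / m_thumb["m00"]
    # image y axis points down, so flip the vertical difference for a standard angle
    return np.degrees(np.arctan2(cy_f - cy_t, cx_t - cx_f))
```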
MatchTemplate seems like a good fit for the problem you describe. In what way is it failing for you? If you are actually masking the thumbs-up/thumbs-down/thumbs-in-between signs as nicely as you show in your sample image then you have already done the most difficult part.
MatchTemplate does not include rotation and scaling in the search space, so you should generate more templates from your reference image at all rotations you'd like to detect, and you should scale your templates to match the general size of the found thumbs up/thumbs down signs.
[edit]
The result array for MatchTemplate contains a value that specifies how well the template fits the image at that location. If you use CV_TM_SQDIFF then the lowest value in the result array is the location of best fit; if you use CV_TM_CCORR or CV_TM_CCOEFF then it is the highest value. If your scaled and rotated template images all have the same number of white pixels, then you can compare the best-fit values you find for the different template images, and the template image that has the best fit overall is the one you want to select.
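A compact sketch of the rotate-and-match loop (the angle step, the TM_CCORR_NORMED method, and the single-scale assumption are all choices you would adapt):

```python
import cv2

def best_rotation(image, template, angles=range(0, 360, 15)):
    """Run matchTemplate once per rotated template and keep the best score."""
    best = (-1.0, None, None)                     # (score, angle, location)
    h, w = template.shape[:2]
    center = (w / 2, h / 2)
    for angle in angles:
        rot = cv2.getRotationMatrix2D(center, angle, 1.0)
        tpl = cv2.warpAffine(template, rot, (w, h))
        result = cv2.matchTemplate(image, tpl, cv2.TM_CCORR_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(result)
        if max_val > best[0]:
            best = (max_val, angle, max_loc)
    return best
```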
There are tons of rotation/scaling independent detection functions that could conceivably help you, but normalizing your problem to work with MatchTemplate is by far the easiest.
For the more advanced stuff, check out SIFT, Haar feature based classifiers, or one of the others available in OpenCV
I think you can get excellent results if you just compute the two points that have the longest shortest path going through white. The direction in which the thumb is pointing is just the direction of the line that joins the two points.
You can do this easily by sampling points on the white area and using Floyd-Warshall.
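A possible sketch, assuming SciPy is available and using a coarse grid of samples to keep the Floyd-Warshall matrix small (the stride and the 8-neighbour adjacency are simplifications):

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import floyd_warshall

def farthest_white_points(binary, step=8):
    """Sample the white area on a coarse grid, connect 8-neighbouring samples,
    and return the two samples whose shortest path through white is longest."""
    ys, xs = np.where(binary[::step, ::step] > 0)
    pts = list(zip(ys, xs))                          # grid coordinates
    index = {p: i for i, p in enumerate(pts)}
    graph = lil_matrix((len(pts), len(pts)))
    for (y, x), i in index.items():
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                j = index.get((y + dy, x + dx))
                if j is not None and j != i:
                    graph[i, j] = np.hypot(dy, dx)   # step cost between neighbours
    dist = floyd_warshall(graph.tocsr(), directed=False)
    dist[~np.isfinite(dist)] = -1                    # ignore disconnected pairs
    i, j = np.unravel_index(np.argmax(dist), dist.shape)
    # convert back to full-image pixel coordinates
    return (pts[i][0] * step, pts[i][1] * step), (pts[j][0] * step, pts[j][1] * step)
```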
I uploaded an example image for better understanding: http://www.imagebanana.com/view/kaja46ko/test.jpg
In the image you can see some scan lines and a marker (the white rectangle with the circle in it). I want OpenCV to go along a specified area (outlined through the scan lines in the example) that should be around 5x5 pixels. If that area contains a gradient from black to white, I want OpenCV to save the position of that area so that I can work with it later.
The final result would be to differentiate between the marker and the other rectangles separated through black and white lines.
Is something like that possible? I googled a lot but only found edge detectors, and that's not what I want; I really need the detection of the black-to-white gradient only.
Thanks in advance.
It would be a good idea to filter out some of the areas by calculating their histogram.
You can use cvCalcHist for the task, then you can establish some threshold to determine if the black-white pixels percentage corresponds to that of a gradient. This will not solve the task but it will help you in reducing complexity.
Then you can erode the image to merge all the white areas. After applying a threshold, it would be possible to find connected components (using cvFindContours) that separate the image into black zones and white zones. You can then detect gradients by finding 5x5 areas that contain both a piece of a white zone and a piece of a black zone simultaneously.
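A rough sketch of that idea, replacing the per-area histogram with the white-pixel fraction in each 5x5 neighbourhood (the file name and the 0.2/0.8 limits are placeholders):

```python
import cv2
import numpy as np

gray = cv2.imread("scanlines.jpg", cv2.IMREAD_GRAYSCALE)       # hypothetical input
_, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# fraction of white pixels in every 5x5 neighbourhood
white_frac = cv2.boxFilter((bw > 0).astype(np.float32), -1, (5, 5))

# a 5x5 area that is neither mostly black nor mostly white straddles a
# black-to-white transition and is a candidate gradient position
candidates = np.argwhere((white_frac > 0.2) & (white_frac < 0.8))
```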
hope it helps.
Thanks for your answer dnul, but it didn't really help me work this out. I thought about a histogram to approach the problem, but it's not quite what I want.
I solved this problem by creating a 40x40 matrix which holds 5x5 matrices containing the raw pixel data in all 3 channels. I iterated through each 40px area and, inside it, iterated through the border of each 5px area. I checked each pixel and saved the ones which are darker than a certain threshold to a store.
After the iteration I had a rough idea of how many black pixels there are, so I checked each one of them for neighbors with white pixels in all 3 channels. I then marked each of those pixels and saved them to another store.
I then used the RANSAC algorithm to construct lines out of these points. It constructs about 5-20 lines per marker edge. I then looked at the lines which meet each other and saved the positions of those that meet at a right angle.
The 4 points I get from that are the edges of the marker.
If you want to reproduce this you would have to filter the image beforehand and apply a threshold to make it easier to distinguish between black and white pixels.
A sample picture, saved after finding the points and before constructing the lines:
http://www.imagebanana.com/view/i6gfe6qi/9.jpg
What you are describing is edge detection. This is exactly how, say, the Canny edge detector works. It looks for dark pixels near light pixels and, based on a threshold that you pass in (there is also an adaptive Canny, which figures out the threshold for you), sets them to all black or all white (i.e. "marks" them).
See here:
http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/canny_detector/canny_detector.html
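A minimal Canny example, following the tutorial above (the file name and threshold values are placeholders):

```python
import cv2

img = cv2.imread("scanlines.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical input
img = cv2.GaussianBlur(img, (5, 5), 0)                     # reduce noise before edge detection
edges = cv2.Canny(img, 50, 150)                            # low/high hysteresis thresholds

# non-zero pixels in `edges` mark positions where dark meets light
positions = cv2.findNonZero(edges)
```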