Detecting a triangle mesh fence in the foreground of an image - opencv

I'm interested in detecting a triangular mesh fence present in the foreground of a sequence of images. I've included an example image below. Ideally I'd like to output the grid of intersection points; this would then provide me with the distance to and orientation of the mesh (since the dimensions of the mesh are known and fixed).
As in the example image, the mesh can be obscured (by the thick black bars running horizontally and vertically) or can be confused with the background (see the black lines of the structure in the top-left of the image). But the mesh will always completely cover the image, i.e. the edges or the outside of the mesh are never in view.
Any ideas on how one might begin to tackle a vision problem like this?

Find edge pixels
Hough transform to find lines in the image
Use RANSAC to fit a model that describes the homography mapping the lines onto the known triangle grid.
Without more examples, it's hard to tell how difficult it would be to do.
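A minimal sketch of those three steps in Python/OpenCV, assuming a grayscale input (the file name and thresholds below are placeholders); the RANSAC homography step is only indicated, since the matched point sets depend on how the line intersections are paired with the known mesh geometry:

    import cv2
    import numpy as np

    img = cv2.imread("fence.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name

    # 1. Find edge pixels.
    edges = cv2.Canny(img, 50, 150)

    # 2. Hough transform to find candidate lines.
    lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=120)

    # 3. Intersect line pairs to get candidate grid points, then fit a
    #    homography from the known (ideal) mesh coordinates to those image
    #    points with RANSAC. "ideal_pts" and "image_pts" would be matched
    #    Nx2 float arrays built from the intersections.
    # H, inliers = cv2.findHomography(ideal_pts, image_pts, cv2.RANSAC, 3.0)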

Related

Detect semi-transparent rectangular overlays on images

I have images that contain transparent rectangular overlays similar to the following images: Image1 Image2. My goal is to detect if these rectangular boxes exist (location doesn't matter). These boxes will always have edges parallel to the sides of the image.
Assumptions:
The transfer function of how the transparent rectangles are drawn is not known
The sides of the rectangles will also be parallel to the sides of the image
Attempted Solution 1: Color Detection
So far, I've tried color detection via cv2.threshold as well as band-pass filtering with cv2.inRange() in multiple color spaces (HSV, LUV, XYZ, etc.). The issue with color detection is that I also capture too much noise to effectively isolate just the pixels of the transparent area. I tried layering the masks using cv2.bitwise_and but still can't reduce the noise to a negligible level. I tried isolating only large groups of pixels using morphological transformations, but this still fails.
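For reference, a minimal sketch of this kind of cv2.inRange masking in HSV; the file name and the bounds below are placeholders that would need tuning for the actual overlay:

    import cv2
    import numpy as np

    img = cv2.imread("image1.png")  # placeholder file name
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

    lower = np.array([0, 0, 120])     # hypothetical lower HSV bound
    upper = np.array([180, 60, 255])  # hypothetical upper HSV bound
    mask = cv2.inRange(hsv, lower, upper)

    # Morphological opening to drop small noisy blobs before further checks.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)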
Attempted Solution 2: Edge Detection + Edge Validation
My second try at detecting the box involved applying cv2.bilateralFilter and then generating Hough lines via cv2.Canny and cv2.HoughLinesP. Although I detect a significant number of edges belonging to the transparent box, I also get many miscellaneous edges.
To filter out false edges, I take each line segment and check a few sample pixels to its left and right. By applying something similar to what I believe the transfer function is (cv2.addWeighted), I check whether I can reproduce similar values. Unfortunately, this also doesn't work well enough to tell the difference between edges from the transparent box and "real" edges. Result From Edge Detection
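A rough sketch of that pipeline; all numeric parameters are guesses, and the final filter simply uses the stated assumption that the box edges are parallel to the image sides:

    import cv2
    import numpy as np

    img = cv2.imread("image1.png")  # placeholder file name
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    smoothed = cv2.bilateralFilter(gray, d=9, sigmaColor=75, sigmaSpace=75)
    edges = cv2.Canny(smoothed, 50, 150)

    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                               minLineLength=60, maxLineGap=5)

    # Keep only near-axis-aligned segments, since the boxes are known to be
    # parallel to the image sides.
    if segments is not None:
        axis_aligned = [s[0] for s in segments
                        if abs(s[0][0] - s[0][2]) < 3 or abs(s[0][1] - s[0][3]) < 3]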
Any thoughts on how I might detect these boxes would be highly appreciated!

How to manage contour bounding rects in OpenCV

I have been testing background subtraction using a Gaussian state model. I am using OpenCV 2.1.0. I can generate a binary image of the foreground of the scene. Now all I want to do is draw a contour bounding rectangle to highlight the moving object. I have used cvContourBoundingRect to obtain the rectangle covering a contour. The issue I am facing is that, in the case of multiple contours, nearby rectangles sometimes overlap. Can anyone suggest how to prevent the rectangles from overlapping? Ideally, two rectangles should not overlap; instead, a single bigger rectangle covering both should be drawn.
Any suggestion would be greatly appreciated.
There's no ready-made way to do this in OpenCV, but the algorithm is actually very easy:
Cycle through all pairs of rectangles and check whether they overlap. This topic will be useful: Determine if two rectangles overlap each other?
For every overlapping pair of rectangles, create a rectangle that contains both of them. To do this, take the minimum of the two top-left corners and the maximum of the two bottom-right corners; those two points define the enclosing rectangle. I don't think it's a hard task - just simple math.
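A simple Python sketch of that merging loop, assuming rectangles in the (x, y, w, h) form returned by cv2.boundingRect; the helper names are made up for illustration:

    def overlap(a, b):
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

    def union(a, b):
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        x, y = min(ax, bx), min(ay, by)
        return (x, y, max(ax + aw, bx + bw) - x, max(ay + ah, by + bh) - y)

    def merge_rects(rects):
        rects = list(rects)
        merged = True
        while merged:              # repeat until no pair overlaps any more
            merged = False
            out = []
            while rects:
                r = rects.pop()
                for i, o in enumerate(out):
                    if overlap(r, o):
                        out[i] = union(r, o)
                        merged = True
                        break
                else:
                    out.append(r)
            rects = out
        return rects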

Good Method for Processing some Similar Images

I need to process some images in a real-time situation. I am receiving the images from a camera using OpenCV. The language I use is C++. An example of the images is attached. After applying some threshold filters I have an image like this. Of course there may be some pixel noise here and there, but not much.
I need to detect the center and the rotation of the squares, and the center of the white circles. I'm totally clueless about how to do it, as it needs to be really fast. The number of squares can be predefined. Any help would be great, thanks in advance.
Is the following straightforward approach too slow? (A rough sketch follows the list.)
Binarize the image so that the originally green background is black and the rest (black squares and white dots) is white.
Use cv::findContours.
Get the centers.
Binarize the image so that everything except the white dots is black.
Use cv::findContours.
Get the centers.
Assign every dot contour to the square contour for which it is an inlier.
Calculate the squares' rotations from the angle of the line between their centers and the centers of their dots.
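A rough Python sketch of those steps (the file name is a placeholder; the dot-to-square assignment and rotation step are only indicated as a comment):

    import cv2

    # Placeholder: a binary image where squares/dots are white on black.
    binary = cv2.imread("thresholded.png", cv2.IMREAD_GRAYSCALE)

    # OpenCV 4.x return signature.
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)

    centers = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 0:
            centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))

    # The rotation of a square can then be estimated with atan2 from the
    # vector between its center and the center of the dot assigned to it.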

Get rectangle out of array of points

Using GPUImage, I am able to detect corners of a book/page in an image. But sometimes, it will pass more than 4 points, in which case I will need to process and figure out the best rectangle out of these points. Here's an example:
What's the most efficient way to figure out the best rectangle in this case?
Thanks
If you're using a corner detection algorithm, then you can filter results based on the relative strength of the detected corner. The contrast at the book corners relative to your current background appears to be much stronger than the contrast at the point found in the wood grain. Are there relative magnitudes associated with each point, or do you just get the points? Setting thresholds for edge strengths can mean a lot of fiddling unless the intensities of the foreground and background are relatively constant.
Your sample image could be blurred or morphed. For example, the right morphological "close" on light pixels could eliminate the texture in the wood grain without having an effect on the size and shape of the book. (http://en.wikipedia.org/wiki/Mathematical_morphology)
Another possibility is to shrink the image to a much smaller size and then perform detection on that. Resizing the image will tend to wipe out tiny details such as whatever wood grain pattern is currently being detected.
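As a rough illustration of those two preprocessing ideas (a morphological close on light pixels, then downscaling), assuming a grayscale input and placeholder kernel/scale values:

    import cv2

    gray = cv2.imread("book.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder file name

    # Morphological close on light pixels to suppress the wood-grain texture.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    closed = cv2.morphologyEx(gray, cv2.MORPH_CLOSE, kernel)

    # Shrink the image so tiny details are wiped out before detection.
    small = cv2.resize(closed, None, fx=0.25, fy=0.25,
                       interpolation=cv2.INTER_AREA)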
Picking the right lens and lighting can make the image easier to process. Try to simplify the image as much as possible before processing it. As mentioned above, "dark field" lighting that would illuminate just the book edges would present a much simpler image for processing. Writing down the constraints can make it more obvious which solution will be most robust and simplest to implement. Finding any rectangle anywhere in an image is very difficult; it's much easier to find a light rectangle on a dark background if the rectangle is at least 100 x 100 pixels in size, rotated no more than 15 degrees from square to the image edges, etc.
More involved solutions can be split into two approaches:
Solving the problem given only the 4 or more (x, y) points.
Using a different image processing technique altogether for the sample image.
1. Solving the problem given only the points
If you generally only have 5 or 6 points, and if you are confident that 4 of those points will belong to the corners of the rectangles that you want, then you can try this:
Find the convex hull of all points. The convex hull is the N-gon that completely encompasses all points. If the points were pegs sticking up, and if you stretched a rubber band around them and let it snap into place, then the final shape of the rubber band is a convex hull. Algorithms that find convex hulls typically return a list of points ordered counterclockwise from the bottommost, leftmost point.
Make a copy of your point list and remove points from the copy until only four points remain. These four remaining points will still be ordered counterclockwise.
Calculate the angle formed by each set of three successive points: points 1, 2, 3, then 2, 3, 4, then 3, 4, 1, and so on.
If an angle is outside a reasonable tolerance--less than 70 degrees or greater than 110 degrees--skip back to step 2 and remove the next point (or set of points).
Store the min and max angles for each set of 4 points.
Repeat steps 2 - 6, removing a different point (or points) each time.
Track the set of points for which the min and max angles are closest to 90 degrees.
http://en.wikipedia.org/wiki/Convex_hull
There are a number of other checks and constraints that could be introduced. For example, if the point-to-point distances for 3 successive points in the convex hull (pts N to N+1, and N+1 to N+2) are close to the expected width and height of the book, then you might mark these as known good points and only test the remaining points to see which is the fourth point.
The technique above can get unwieldy if you get quite a few points, but it may work if two or three of the book corner points are expected to be found on the convex hull.
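A sketch of that hull-and-angles search for a small set of candidate points; it keeps the 4-point subset of the hull whose corners deviate least from 90 degrees, which is a simplification of the min/max-angle bookkeeping described above:

    import itertools
    import cv2
    import numpy as np

    def corner_angles(quad):
        # Interior angle at each of the 4 vertices, in degrees.
        angles = []
        for i in range(4):
            a, b, c = quad[i - 1], quad[i], quad[(i + 1) % 4]
            v1, v2 = a - b, c - b
            cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
            angles.append(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
        return angles

    def best_quad(points):
        # "points" is an Nx2 float array of detected corner candidates.
        hull = cv2.convexHull(points.astype(np.float32)).reshape(-1, 2)
        best, best_err = None, None
        for idx in itertools.combinations(range(len(hull)), 4):
            quad = hull[list(idx)]          # hull ordering is preserved
            err = max(abs(a - 90.0) for a in corner_angles(quad))
            if best_err is None or err < best_err:
                best, best_err = quad, err
        return best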
For any geometric problem, I always recommend checking out GeometricTools.com, which has a lot of great, optimized source code for all sorts of problems. It's very handy to have the book as well, especially if you can find a cheap copy using AddAll.com.
http://www.geometrictools.com/
2. Other image processing techniques for your sample image
Although I could be wrong, it appears that GPUImage doesn't have many general-purpose image processing algorithms. Some other image processing algorithms could make this problem much simpler to solve.
Though there isn't space to go into it here, one of the keys to successful image processing is appropriate lighting. Make sure your lighting is consistent. A diffuse light that evenly illuminates the book and the background would work well. You can simplify the problem using funkier lighting: if you have four lights (or a special ring light), you can provide horizontal illumination from the top, bottom, left, and right that will cause the edges of the book to appear bright and other surfaces to appear dark.
http://www.benderassoc.com/mic/lighting/nerlite/Darkfield.htm
If you can use some other GPU libraries to do image processing, then one of the following techniques could work nicely:
Connected component labeling (a.k.a. finding blobs). It shouldn't be too hard to use either binary thresholding or a watershed algorithm to separate the white blob that is the book from the rest of the background. Once the blob for the book is identified, finding the corners is easier. (http://en.wikipedia.org/wiki/Connected-component_labeling) In OpenCV you can find the "contours" (a rough sketch follows this list).
Generate a list of edge points, then have four separate line-fitting tools search from top to bottom, right to left, bottom to top, and left to right to find the four strong (and mostly straight) edges associated with the book. In your sample image, though, either the book cover is slightly warped or the camera lens has introduced barrel distortion.
Use a corner detector designed to find light corners on a dark background. If you will always be looking for a white book on a wood grain background, you can create a detector to find white corners on a brown background.
Use a Hough technique to find the four strongest lines in the image. (http://en.wikipedia.org/wiki/Hough_transform)
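As an illustration of the first bullet (blob/contour extraction), a minimal sketch that thresholds the light book against the darker background, keeps the largest contour, and approximates it with four corners; the file name, threshold, and epsilon are placeholders:

    import cv2

    gray = cv2.imread("book.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder file name
    _, mask = cv2.threshold(gray, 180, 255, cv2.THRESH_BINARY)

    # OpenCV 4.x return signature.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    book = max(contours, key=cv2.contourArea)

    # approxPolyDP with a tolerance proportional to the perimeter often
    # reduces the blob outline to the four corner points.
    peri = cv2.arcLength(book, True)
    corners = cv2.approxPolyDP(book, 0.02 * peri, True)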
The algorithmic technique that works best will depend on your constraints: are you looking for rectangles only of a certain size? is the contrast between foreground and background consistent? can you introduce lighting to simplify the appearance of the image? and so on.

OpenCV: Detect a black to white gradient in an area

I uploaded an example image for better understanding: http://www.imagebanana.com/view/kaja46ko/test.jpg
In the image you can see some scanlines and a marker (the white rectangle with the circle in it). I want OpenCV to go along a specified area (in the example outlined through the scanlines) that should be around 5x5 pixels. If that area contains a gradient from black to white, I want OpenCV to save the position of that area, so that I can work with it later.
The final result would be to differentiate between the marker and the other rectangles separated by black and white lines.
Is something like that possible? I googled a lot but only found edge detectors, and that's not what I want; I really need to detect the black-to-white gradient only.
Thanks in advance.
It would be a good idea to filter out some of the areas by calculating their histogram.
You can use cvCalcHist for the task; then you can establish some threshold to determine whether the percentage of black and white pixels corresponds to that of a gradient. This will not solve the task, but it will help you reduce complexity.
Then, you can erode the image to merge all the white areas. After applying a threshold, it would be possible to find connected components (using cvFindContours) that will separate the image into black zones and white zones. You can then detect gradients by finding 5x5 areas that contain both a piece of a white zone and a piece of a black zone simultaneously.
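A rough modern-API sketch of that idea (the suggestion above uses the old cvCalcHist/cvFindContours names); the file name, patch size, thresholds, and fractions below are placeholders:

    import cv2

    gray = cv2.imread("test.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder file name

    def looks_like_gradient(patch, dark_thr=50, bright_thr=200, min_frac=0.2):
        # Histogram the patch and require both very dark and very bright pixels.
        hist = cv2.calcHist([patch], [0], None, [256], [0, 256]).ravel()
        n = patch.size
        return (hist[:dark_thr].sum() / n > min_frac and
                hist[bright_thr:].sum() / n > min_frac)

    candidates = []
    for y in range(0, gray.shape[0] - 5, 5):
        for x in range(0, gray.shape[1] - 5, 5):
            if looks_like_gradient(gray[y:y + 5, x:x + 5]):
                candidates.append((x, y))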
hope it helps.
Thanks for your answer dnul, but it didn't really help me work this out. I thought about using a histogram to approach the problem, but it's not quite what I want.
I solved this problem by creating a 40x40 matrix which holds 5x5 matrices containing the raw pixel data in all 3 channels. I iterated through each 40px area and, inside it, through the border of each 5px area. I checked each pixel and saved the ones which are darker than a certain threshold to a storage.
After the iteration I had a rough idea of how many black pixels there are, so I checked each one of them for white-pixel neighbors in all 3 channels. I then marked each of those pixels and saved them to another storage.
I then used the RANSAC algorithm to construct lines out of these points. It constructs about 5-20 lines per marker edge. I then looked at the lines which meet each other and saved the positions of those that meet at a right angle.
The 4 points I get from that are the corners of the marker.
If you want to reproduce this you would have to filter the image beforehand and apply a threshold to make it easier to distinguish between black and white pixels.
A sample picture, saved after finding the points and before constructing the lines:
http://www.imagebanana.com/view/i6gfe6qi/9.jpg
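For anyone trying to reproduce the "dark pixel with a bright neighbour" test described above, a very rough single-channel sketch; the thresholds are guesses and the block-wise iteration from the description is omitted:

    import cv2
    import numpy as np

    gray = cv2.imread("test.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder file name
    dark = gray < 60      # hypothetical "black" threshold
    bright = gray > 190   # hypothetical "white" threshold

    # Dilating the bright mask by one pixel turns "has a bright neighbour"
    # into a simple logical AND with the dark mask.
    kernel = np.ones((3, 3), np.uint8)
    bright_near = cv2.dilate(bright.astype(np.uint8) * 255, kernel) > 0
    edge_points = np.argwhere(dark & bright_near)   # (row, col) candidates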
What you are describing is edge detection. This is exactly how, say, the Canny edge detector works. It looks for dark pixels near light pixels and, based on a threshold that you pass in (there is also an adaptive Canny, which figures out the threshold for you), sets them to all black or all white (i.e. "marks" them).
See here:
http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/canny_detector/canny_detector.html
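A minimal call along the lines of that tutorial; the file name and the two thresholds are placeholders:

    import cv2

    gray = cv2.imread("test.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder file name
    edges = cv2.Canny(gray, threshold1=50, threshold2=150)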
