I have an image with only black and white pixels. The image contains edges (the black pixels) that are one pixel wide (each black pixel has exactly one or two black neighbouring pixels). Now I want to group the edges into different shape classes (e.g. line, triangle, ellipse). Problem: the edges are not perfect lines, triangles or ellipses.
I think I can partially solve the problem by logical reasoning, but I also have more complex geometries where this will be more difficult.
Does anyone know how to solve this kind of problem? Or can anyone give me some ideas?
A general way to find the shape of the edges would be to compute the convex hull of the points. After that you can try to discard hull sides that are smaller than a certain threshold.
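For illustration, here is a minimal sketch of that idea using OpenCV's C++ API; collecting the edge pixels into a point vector and the 5-pixel threshold are just assumptions to make it concrete:

#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// Compute the convex hull of the edge pixels and drop the vertices whose
// outgoing hull side is shorter than minSideLength, which discards the tiny
// sides introduced by noisy, imperfect edges.
std::vector<cv::Point> hullWithoutShortSides(const std::vector<cv::Point>& points,
                                             double minSideLength = 5.0)
{
    std::vector<cv::Point> hull;
    cv::convexHull(points, hull);

    std::vector<cv::Point> filtered;
    for (size_t i = 0; i < hull.size(); ++i)
    {
        const cv::Point& a = hull[i];
        const cv::Point& b = hull[(i + 1) % hull.size()];
        double sideLength = std::hypot(double(a.x - b.x), double(a.y - b.y));
        if (sideLength >= minSideLength)
            filtered.push_back(a);
    }
    return filtered;
}

The number of surviving sides then hints at the shape class (roughly 2 for a line, 3 for a triangle, many for an ellipse).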
I have a contour in OpenCV with a convexity defect (the one in red) and I want to cut that contour into two parts along a horizontal line through that point. Is there any way to do it, so that I just get the contour marked in yellow?
Image describing the problem
That's an interesting question. There are a couple of solutions, depending on how the concavity points are distributed in your image.
1) If such points do not occur at the bottom of the contour (as in your simple example), here is some pseudo-code (a rough code sketch follows after case 2 below).
Find convex hull C of the image I.
Subtract I from C; that will give you the concavity areas (like the black triangle between the two white triangles in your example).
The point with the minimum y value in that area gives you the horizontal line to cut.
2) If such points can occur anywhere, you need a more intelligent algorithm whose cut lines are not constrained to be horizontal (because the minimum-y point of that difference will be the minimum y of the image). You can find the "inner-most" corner points and connect them to each other. You can recursively cut the remainder in the y-, x+, y+, x- directions. It really depends on the specs of your input.
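To make case 1 concrete, here is a rough sketch of that pseudo-code in OpenCV's C++ API. It assumes the contour has already been filled into a binary image called shape, and it follows the answer's "minimum y" convention; depending on your coordinate convention you may want the deepest point of the concavity instead:

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

// Returns the row of the horizontal cut line for a white shape on black background.
int findHorizontalCutRow(const cv::Mat& shape)   // shape: CV_8UC1
{
    // External contour of the white shape (assumes a single shape in the image).
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(shape.clone(), contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    // Draw the filled convex hull of the shape: this is the image "C".
    std::vector<cv::Point> hull;
    cv::convexHull(contours[0], hull);
    cv::Mat hullImg = cv::Mat::zeros(shape.size(), CV_8UC1);
    cv::fillConvexPoly(hullImg, hull, cv::Scalar(255));

    // Subtract I from C: what remains are the concavity areas.
    cv::Mat concavities = hullImg & ~shape;

    // The minimum-y pixel of the concavity area gives the horizontal cut line.
    std::vector<cv::Point> pts;
    cv::findNonZero(concavities, pts);
    int minY = shape.rows;
    for (const cv::Point& p : pts)
        minY = std::min(minY, p.y);
    return minY;
}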
I'm trying to align two rectangles using the perspectiveTransform. In the image below there are the two orange rectangles (I know their dimensions and locations) and I want to warp the perspective so that they are of approximately the same size and in line (the green ones in the image). A perspectiveTransform that e.g. makes the small one equal in size with the big one doesn't really do the trick, as the size of the big one changes too. Any help much appreciated!
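One common way to approach this kind of alignment is to pick four point correspondences (for example, known rectangle corners and the positions they should land on) and let OpenCV solve for the homography. The coordinates and file names in this sketch are made-up placeholders, not the actual data:

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat src = cv::imread("scene.png");   // hypothetical input image

    // Where the rectangle corners currently are in the image...
    std::vector<cv::Point2f> observed = {
        {120.f, 310.f}, {260.f, 305.f},   // two corners of the small rectangle
        {400.f,  80.f}, {760.f,  70.f}    // two corners of the big rectangle
    };
    // ...and where they should end up (same size, aligned).
    std::vector<cv::Point2f> desired = {
        {120.f, 300.f}, {300.f, 300.f},
        {420.f, 300.f}, {600.f, 300.f}
    };

    // Exactly four correspondences -> getPerspectiveTransform; with more (or noisy)
    // points, cv::findHomography would be the robust alternative.
    cv::Mat H = cv::getPerspectiveTransform(observed, desired);

    cv::Mat warped;
    cv::warpPerspective(src, warped, H, src.size());
    cv::imwrite("aligned.png", warped);
    return 0;
}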
I want to fit an image of a clown like face into a contour of another face (a person).
I am detecting the person's face and getting an elliptical-like contour.
I can figure out the center, radius, highest, lowest, left-most and right-most points.
How do I fit the clown face (a square image, which I can make elliptical by cutting the face out of the empty background of a PNG and then detecting the contour) into the person's face?
Or, at the very least, how do I fit a polygon into another polygon?
I can fit a rectangular image into a rectangular contour with ease, but faces aren't that shape.
Python is preferable, but C++ is also manageable. Thank you.
Edit: Visual representation as requested:
I have
and I want to make it like this:
but I want the clown face to stretch over the guy's face and fit within the blue contour.
I think the keyword you are looking for is Active Appearance Models. First, you need to fit a model to the first face (such as this one), which lies inside the contour. Then, you should fit the same model to the clown face. After that, since you have fitted the same model to both faces, you can stretch it as you need.
I haven't used AAM myself and I'm not an expert on it, so my explanation might not be enough or might not be exactly correct, but I'm sure it will give you some insight.
A simple and good answer to this question is to find the extreme top, bottom, left, and right points of your contour (the head), then resize your mask to match that aspect ratio and place it so that it covers the four points.
Because human heads are elliptical, you can use fitEllipse() to give you those four points. This will automagically fix any problems with the person tilting their head, because regardless of the angle you will know which point is the top, bottom, left, and right.
The relevant code for finding the ellipse is:
vector<Point> contour;
// Do whatever you are doing to populate this vector
RotatedRect ellipse = fitEllipse(Mat(contour));
There is also an example as well as documentation for RotatedRect.
// Resize your mask with these sizes for optimum fit
ellipse.size.width
ellipse.size.height
You can rotate your image like this.
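A rough sketch of the resize-and-rotate step, assuming mask is the clown image and ellipse is the RotatedRect returned by fitEllipse():

#include <opencv2/opencv.hpp>

// Scale the mask to the ellipse size and rotate it by the ellipse angle so that
// a tilted head is still covered correctly; paste the result at ellipse.center.
cv::Mat fitMaskToEllipse(const cv::Mat& mask, const cv::RotatedRect& ellipse)
{
    cv::Mat resized;
    cv::resize(mask, resized, cv::Size(cvRound(ellipse.size.width),
                                       cvRound(ellipse.size.height)));

    cv::Point2f center(resized.cols / 2.f, resized.rows / 2.f);
    cv::Mat R = cv::getRotationMatrix2D(center, ellipse.angle, 1.0);

    cv::Mat rotated;
    cv::warpAffine(resized, rotated, R, resized.size());
    return rotated;
}

Note that warpAffine with the original size can clip the corners after rotation; if that matters, enlarge the destination size accordingly.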
UPDATE:
You may also want to find the contour's extreme points, so that you know how much you need to scale your image to ensure that all of the face is covered.
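One possible way to get those extreme points from the contour (illustrative only, not the only option):

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

// Find the leftmost, rightmost, topmost and bottommost points of a contour.
void extremePoints(const std::vector<cv::Point>& contour,
                   cv::Point& leftmost, cv::Point& rightmost,
                   cv::Point& topmost, cv::Point& bottommost)
{
    auto byX = [](const cv::Point& a, const cv::Point& b) { return a.x < b.x; };
    auto byY = [](const cv::Point& a, const cv::Point& b) { return a.y < b.y; };

    leftmost   = *std::min_element(contour.begin(), contour.end(), byX);
    rightmost  = *std::max_element(contour.begin(), contour.end(), byX);
    topmost    = *std::min_element(contour.begin(), contour.end(), byY);
    bottommost = *std::max_element(contour.begin(), contour.end(), byY);
}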
I want to find the shapes that look most like rectangles. The first image is the original image, containing shapes that could be rectangles but are not quite. The green rectangles in the second image are what I want. Is there a way to do this with OpenCV? I've tried Hough lines, but the results are not good.
The source image:
And what I want is to find the shapes among these that look most like rectangles, like the rectangles marked in green.
What I want:
A very simple approach is, once you have a rectangular bounding box around your shape, to count the percentage of pixels inside the box that are white.
The higher the percentage of white pixels, the closer the shape is to a rectangle.
To get the bounding boxes you should take a look at either findContours from OpenCV or some blob-extraction algorithm; you will find plenty of questions regarding those.
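A small sketch of the percentage test, assuming binary is the thresholded image (shapes in white) and contour is one of the contours returned by findContours:

#include <opencv2/opencv.hpp>
#include <vector>

// Fraction of white pixels inside the shape's bounding box:
// values close to 1.0 mean "rectangle-like".
double rectangularity(const cv::Mat& binary, const std::vector<cv::Point>& contour)
{
    cv::Rect box = cv::boundingRect(contour);
    double white = cv::countNonZero(binary(box));
    return white / static_cast<double>(box.area());
}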
Edit:
Maybe you should first get the minimum bounding rectangles of the shapes and then apply this kind of heuristic:
Shrink the rectangle's dimensions until the white-pixel percentage inside it reaches some threshold you define (e.g. 90% white pixels inside the rectangle); a sketch of this follows below.
To get the Minimum bounding rectangle (the smallest rectangle which contains the whole shape), you might check this tutorial:
http://docs.opencv.org/doc/tutorials/imgproc/shapedescriptors/bounding_rects_circles/bounding_rects_circles.html
One thing that might also help is taking the difference in size between the minimum bounding rectangle and the maximum inner rectangle (the biggest rectangle you can fit inside the white shape). The smaller the difference between those rectangles' properties (width, height, area, center coordinates), the closer the shape is to a rectangle.
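A rough sketch of the shrinking heuristic described above; the 90% threshold and the one-pixel-per-side step are just example choices:

#include <opencv2/opencv.hpp>
#include <vector>

// Shrink the bounding box symmetrically until the white-pixel ratio inside it
// reaches the threshold (or the box becomes degenerate).
cv::Rect shrinkToWhite(const cv::Mat& binary, cv::Rect box, double threshold = 0.9)
{
    while (box.width > 2 && box.height > 2)
    {
        double ratio = cv::countNonZero(binary(box)) / static_cast<double>(box.area());
        if (ratio >= threshold)
            break;
        box.x += 1; box.y += 1;
        box.width -= 2; box.height -= 2;
    }
    return box;
}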
After some color detection, binary thresholding, a call to cvFindContours(), and drawing the contours and the detected blue rectangle on the image, I have:
My problem is to do some simple collision avoidance (the blue rectangle in the center cannot hit the red "walls"). For my purposes it would be helpful to have the red wall contours approximated with rectangles. However, using a simple cvBoundingRect and drawing red rectangles around the white contours, I get:
The edges are a little cropped off, but you may get the idea of what we would expect when using a bounding rectangle for the contours: the entire contour is used to approximate the bounding rectangle, hence the large overlapping rectangles. What I would like is for the wall contours to be divided into multiple bounding rectangles, such as the left wall approximated as one rectangle, the right wall as another, the forward wall as another, and so on, as in my illustrative rendition below:
Any help in doing so would be greatly appreciated.
Detecting lines (typically with Hough or RANSAC), together with the other information you have about the problem, should be enough, maybe even overkill. For instance, starting with the image below on the left, we get the image below on the right.
But if you have the image above on the left (which you should already have), the problem is already solved: just draw both the internal and external contours of the walls and you are set.
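If you do want to go the line-detection route, a sketch of the HoughLinesP step is below; the Canny and Hough parameters are placeholders to tune for the actual image:

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat walls = cv::imread("walls.png", cv::IMREAD_GRAYSCALE);  // hypothetical file

    cv::Mat edges;
    cv::Canny(walls, edges, 50, 150);

    // Probabilistic Hough transform: returns line segments as (x1, y1, x2, y2).
    std::vector<cv::Vec4i> lines;
    cv::HoughLinesP(edges, lines, 1, CV_PI / 180, 80, 30, 10);

    cv::Mat vis;
    cv::cvtColor(walls, vis, cv::COLOR_GRAY2BGR);
    for (const cv::Vec4i& l : lines)
        cv::line(vis, cv::Point(l[0], l[1]), cv::Point(l[2], l[3]),
                 cv::Scalar(0, 0, 255), 2);

    cv::imwrite("walls_lines.png", vis);
    return 0;
}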